
Information revelation in the Diamond-Dybvig
banking model
Ed Nosal and Neil Wallace
December 21, 2009
Abstract
Three recent papers, Green and Lin, Peck and Shell, and Andolfatto et al., study optima in almost identical versions of the Diamond-Dybvig model. They differ about what agents know when they make early withdrawals. We view all three as special cases of a framework in which the planner chooses how much to reveal. It is shown that (i) the Peck-Shell conclusion, that the best weakly implementable outcome can be subject to a bank run, is robust to a planner choice about what to reveal; (ii) the solution to the strong implementability problem can be something other than reveal-nothing and reveal-everything.

1 Introduction

The Diamond-Dybvig model [2] treats the structure of a demand deposit contract as a solution to a mechanism-design problem— as an optimum chosen by
a benevolent planner. The model is a two-date, real (nonmonetary) economy in
which ex ante identical consumers are the only agents. There are two frictions
in the model. First, consumers face uncertainty about their realized types—
impatient or patient— and their realized types are private information. Second,
there is sequential service— early withdrawal requests have to be dealt with as
they arrive.
Prior to learning their types and the order in which they will arrive to
make early withdrawals, consumers exchange their endowment of the resource
for a deposit contract. The model's banking sector can be viewed as extending callable loans in the form of the resource to the model's business sector, which invests the resource in the commonly available and productive constant-returns-to-scale intertemporal technology. The early withdrawals of deposits are necessarily matched by calls on loans and real investment liquidation. Two nice features of the model are that deposit contracts have the demand feature— meaning that the consumers decide when to withdraw— and display a low rate of return relative to that of the technology.
The new question studied here is how much of the history of early withdrawal
requests, a history which the planner necessarily knows, should be revealed as
that history is unfolding. In the context of a fractional reserve banking system,
the analogous question is how much of the time path of system-wide reserve
losses should be revealed (in real time) as that path evolves.
We were led to that question by the seemingly disparate results in three
recent papers: Green and Lin [4] (GL), Peck and Shell [7] (PS), and Andolfatto
et al [1] (ANW). All three explore optima in almost identical versions of the
model. The common ingredients are a …nite number of agents, independentacross-agents determination of preference types (and, hence, aggregate risk),
and an exogenous and random order that determines the sequence in which
early withdrawal requests are made. The versions di¤er about what agents know
when they turn up in order to make early withdrawals. In GL, each knows his
place in the ordering; in ANW, each knows that and the announcements of those
who preceded him; in PS, each knows nothing. Here, we view these versions as
special cases of a more general framework.
In the more general framework and as in PS, each agent starts out knowing
nothing. However, the planner— who, as noted above, deals with the agents in order and who, therefore, necessarily knows each agent's place in the ordering and the previous announcements— can choose how much to reveal. From that
perspective, GL study optima under the restriction that the planner reveals
place in the ordering, PS study optima under the restriction that the planner
reveals nothing, and ANW study optima under the restriction that the planner
reveals everything.
Two kinds of implementability constraints are common to all versions of
the model. One is physical feasibility implied by the intertemporal technology
and sequential service. The other is incentive constraints that arise from the
private information. The incentive constraints depend on what the planner
reveals and on whether the planner is trying to solve a weak implementability
problem or a strong implementability problem. (Recall that an allocation is
weakly implementable if it is the outcome of some equilibrium; it is strongly
implementable if it is the outcome of every equilibrium.)
In terms of our framework, the results in the above papers can be summarized as follows. For environments in which the incentive constraints implied by planner revelation of everything are nonbinding, GL show that if the planner reveals either place in the ordering or everything, then the first best (best subject only to physical feasibility) is strongly implementable. ANW show that if the planner reveals everything, then weak implementability implies strong implementability— whether or not incentive constraints bind. These strong implementability results imply that there are no bank runs. PS show that there
exist settings such that if the planner reveals nothing, then the best weakly
implementable allocation is not strongly implementable. In particular, there is
another equilibrium that is a bank run and gives a worse outcome.
Thus, from our perspective, ANW fail to consider the possibility that the
planner could achieve a better outcome by withholding information, while PS
fail to consider the possibility that the planner could achieve an equally good and
unique outcome by revealing some information. These are the issues addressed
here.1
First, we set out the model and a nesting result: the less the planner reveals,
the larger, in the weak sense, is the set of weakly implementable allocations.
Then we study two examples, one of which is the pertinent PS example. The
examples demonstrate two things. First, the PS conclusion survives in our more
general framework. That is, the PS result is robust to permitting the planner a
choice about what to reveal. Second, the solution to the strong implementability
problem can be something other than reveal nothing and reveal everything.

2 Environment

There are N ex ante identical agents, two dates, 1 and 2, and there is one good per date. The economy is endowed with an amount $Y > 0$ of date-1 good and has a constant-returns-to-scale technology with gross rate of return $R > 1$. That is, if $C_i$ denotes total date-i consumption, then $C_2 \leq R(Y - C_1)$.
The sequence of events and actions is as follows. Let $\mathcal{N} = \{1, 2, \ldots, N\}$ be the set of possible places in the ordering and let $T = \{\text{impatient}, \text{patient}\} \equiv \{i, p\}$ be the set of possible preference types. First nature selects a queue, denoted $t^N = (t_1, t_2, \ldots, t_N) \in T^N$, where $t_k \in T$ is the type of the k-th agent in the ordering. Nature's draw is from a probability distribution $g$ over $T^N$. (In the examples, all $2^N$ queues are equally probable.) If an agent observed the realized queue, which is not the case, then the agent would know his own place in the ordering and preference type, his own $(k, t) \in \mathcal{N} \times T$, and would know the preference types of others by place in the ordering. The realized queue is
1 The issue of how much to reveal does not arise in GL. Their result is general in one sense: because the first best is achieved, it follows that revealing the information is undominated. However, their result does not apply to the large class of environments with binding incentive constraints. See Lin [6] for the sense in which the general case has binding incentive constraints.


observed by no one, not even the planner, but each agent privately observes his type in the set T.2 Each agent maximizes expected utility. An agent of type $t \in T$ has utility function $u(\cdot, \cdot; t)$, where the first argument is date-1 consumption and the second is date-2 consumption. For a given $t$, $u$ is increasing and concave. Then agents meet the planner in the order determined by the queue.3 In a meeting, the planner knows the vector of announced types of the agents with earlier places in the ordering (and the agent's place in the ordering). The planner announces part of what he knows to the agent. Then the agent announces an element of T. The outcome of the meeting is the agent's date-1 consumption. After all the date-1 meetings occur, the planner simultaneously assigns date-2 consumption to each agent.4
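The queue-drawing step is easy to make concrete. A minimal sketch for $N = 3$ with the uniform distribution over queues used in the examples (the variable names are ours):

```python
from itertools import product

N = 3
T = ('i', 'p')  # impatient, patient

# All 2^N queues; in the examples they are equally probable.
queues = list(product(T, repeat=N))
g = {q: 1.0 / len(queues) for q in queues}

def marginal_patient(k):
    # Probability that the k-th agent in the ordering is patient.
    return sum(prob for q, prob in g.items() if q[k - 1] == 'p')
```

With independent and symmetric types, `marginal_patient(k)` is 1/2 for every place k in the ordering.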

3 Weak and strong implementability

Formally, the planner's information revelation policy is a vector of partitions $P = (P_0, P_1, \ldots, P_{N-1})$, where the announcement to the k-th agent in line is the element (set) in the partition $P_{k-1}$ that contains $t^{k-1}$, the realized history of prior announcements.5 Here, $P_{k-1}$ is a partition of $H = T^0 \cup T^1 \cup \cdots \cup T^{N-1}$, the set of all possible histories of announced preference types, where $T^0 = \{\text{null history}\}$ (because there is no history of announcements that precedes the first agent in line) and $T^{k-1} = \{i, p\}^{k-1}$ for $k \geq 2$. In order to describe the particular partitions used in GL, PS, and ANW, we need to distinguish between a set $A$ and the set that enumerates the elements of $A$, which we denote $[A]$; i.e., if $A = \{a_1, a_2, a_3\}$, then $[A] = \{\{a_1\}, \{a_2\}, \{a_3\}\}$. The least coarse partition is $P_{k-1} = [T^{k-1}] \cup \{H \setminus T^{k-1}\}$. This is the ANW specification and implies that the planner reveals everything to the k-th agent in line. The most coarse partition is $P_{k-1} = \{H\}$. This is the PS partition and implies that the k-th agent is told nothing. The partition that corresponds to the GL specification is $P_{k-1} = \{T^{k-1}, H \setminus T^{k-1}\}$, which implies that the planner announces $k$ to
2 Thus, queue here is used not in the sense of a line of people, each of whom is in touch with those nearby. Instead, our queue is like the order in which people arrive at a drive-up window.
3 We are studying a model in which all types meet the planner at date 1. This is the version that ANW study and that PS study in one of their appendices. We prefer this version because in a version with many types, as described in Lin [6], almost everyone would have to meet the planner.
4 Notice that we give the planner control of all the resources and that the only decision that an agent makes is an announcement of type in the set T. This suffices for our purposes. There are generalizations of the model in which each agent starts out owning $Y/N$ and can, before he learns his type in T, defect to autarky. That option is not relevant for what we do.
5 Thus, we do not permit the planner to randomize announcements.


the k-th agent in line.6
A mechanism is $(P, c)$, where $c = (c^1, c^2, \ldots, c^N)$, $c^k = (c_1^k, c_2^k)$, $c_1^k : T^k \to \mathbb{R}_+$ is date-1 consumption of the k-th agent in the ordering, and $c_2^k : T^N \to \mathbb{R}_+$ is date-2 consumption of that agent. The domain of $c$ is agent announcements. We say that $c$ is feasible if for all $t^N \in T^N$,
$$R\Big(Y - \sum_{k=1}^{N} c_1^k\Big) \geq \sum_{k=1}^{N} c_2^k.$$
Let $C$ denote the set of all feasible $c$.
For a given $(P, c)$, a strategy for the k-th agent in the ordering is $\sigma_k : P_{k-1} \times T \to T$, where the first argument is what the planner announces and the second is the true type of the agent. We let $\sigma^k = (\sigma_1, \sigma_2, \ldots, \sigma_k)$.
Given $(P, c)$ and $\sigma^N$, we let $g_k(t)$ be the conditional distribution over $T^N$ of the k-th agent in the ordering who is type $t$. Here an element of $T^N$ is a queue as defined above, and $g_k(t)$ is derived from $g$ (the ex ante distribution over queues), $(P, c)$, and $\sigma^N$ via Bayes' rule. Now we can define equilibrium.
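Feasibility can be checked queue by queue. A minimal sketch, assuming consumption schedules are passed as lists of Python functions (all names here are illustrative, not from the paper):

```python
from itertools import product

def is_feasible(c1, c2, N, Y, R):
    """Check R(Y - sum_k c1^k) >= sum_k c2^k for every queue t^N in T^N.

    c1[k](t) is date-1 consumption of the (k+1)-th agent given the
    announcements t^1, ..., t^(k+1); c2[k](t) is his date-2 consumption
    given the full queue of announcements.
    """
    for q in product(('i', 'p'), repeat=N):
        date1 = sum(c1[k](q[:k + 1]) for k in range(N))
        date2 = sum(c2[k](q) for k in range(N))
        if R * (Y - date1) < date2 - 1e-12:  # small tolerance for rounding
            return False
    return True

# Illustrative schedules: an autarky-style allocation for N = 2, Y = 6,
# R = 1.05 (each agent is assigned Y/N; impatient announcers consume it at
# date 1, patient announcers consume (Y/N)R at date 2).
aut_c1 = [lambda t: 3.0 if t[-1] == 'i' else 0.0 for _ in range(2)]
aut_c2 = [(lambda k: lambda q: 3.0 * 1.05 if q[k] == 'p' else 0.0)(k)
          for k in range(2)]
```

For these schedules, `is_feasible(aut_c1, aut_c2, 2, 6.0, 1.05)` holds: date-1 withdrawals never exceed Y, and the remainder grows at rate R to cover date-2 claims.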
Definition 1 Given $(P, c)$ with $c \in C$, the strategy $\sigma^N$ and the associated beliefs given by $g_k(t)$ are a perfect Bayesian equilibrium if for each $(k, h^{k-1}, t) \in (\mathcal{N} \times H^{k-1} \times T)$ and each $\tilde{t} \in T$,
$$E_{g_k(t)} u[c_1^k(\sigma^{k-1}, \sigma_k), c_2^k(\sigma^{k-1}, \sigma_k, \sigma_{k+1}^N); t] \geq E_{g_k(t)} u[c_1^k(\sigma^{k-1}, \tilde{t}), c_2^k(\sigma^{k-1}, \tilde{t}, \sigma_{k+1}^N); t], \quad (1)$$
where $E_{g_k(t)}$ denotes expectation with respect to the distribution $g_k(t)$.
The next definition relies on the revelation principle.
Definition 2 Given $P$, $c \in C$ is weakly implementable (by truth-telling) if for each $(k, h^{k-1}, t) \in (\mathcal{N} \times H^{k-1} \times T)$ and each $\tilde{t} \in T$,
$$E_{g_k(t)} u[c_1^k(t^{k-1}, t), c_2^k(t^{k-1}, t, t_{k+1}^N); t] \geq E_{g_k(t)} u[c_1^k(t^{k-1}, \tilde{t}), c_2^k(t^{k-1}, \tilde{t}, t_{k+1}^N); t]. \quad (2)$$

In (2), $t_{k+1}^N$ is a random variable because it is the part of the queue that has not been revealed when the planner encounters the k-th person. In addition, part of $(t^{k-1}, k)$ may be random. Its distribution depends on how much the planner reveals.
For our purposes, the following restrictive notion of strong implementability suffices.
6 In appendix 1, these partitions are set out explicitly for the case N = 3.


Definition 3 Given $P$, $c \in C$ is strongly implementable if truth-telling is the unique equilibrium for $(P, c)$.
Motivated in part by the comparisons between PS, GL, and ANW, we compare different $P$'s according to coarseness. The following nesting result is an immediate consequence of the law of iterated expectations.
Claim 1 Let $P'$ be coarser than $P''$. If $c \in C$ satisfies definition 2 for $P = P''$, then $c$ satisfies definition 2 for $P = P'$.
The planner's objective in this model is ex ante expected utility; namely,
$$W(\sigma^N, P, c) = E_g \sum_{n=1}^{N} u[c_1^n(\sigma^{n-1}, \sigma_n), c_2^n(\sigma^{n-1}, \sigma_n, \sigma_{n+1}^N); t_n]. \quad (3)$$
The planner's weak implementability problem is to choose $(P, c)$ to maximize $W$ subject to $c \in C$ and satisfaction of definition 2. The planner's strong implementability problem adds satisfaction of definition 3 as an additional constraint.7

4 Two examples

For each of two examples, we compute the best weakly implementable allocation and the best strongly implementable allocation. One example is the PS appendix B example and the other is a slight variant of it. The PS example has $N = 2$, $Y = 6$, $R = 1.05$, equally likely queues, and $u(c_1, c_2; \text{impatient}) =$
7 ANW mistakenly claim that weak implementability implies strong implementability if the planner reveals only place in the ordering. Their mistake can be explained using the above formulation.
Suppose that $N = 3$ and that $\theta$ is the (independently and identically distributed) probability that a person is patient. Let $c$ satisfy definition 2 when agents learn only their place in the ordering. In a truth-telling equilibrium, the second agent knows that with probability $\theta$ the first agent announced patient and with probability $1 - \theta$ announced impatient. Our nesting implies that (2) is a weighted average of two "underlying" incentive conditions: one associated with the first agent announcing patient and the other associated with him announcing impatient, with weights $\theta$ and $1 - \theta$, respectively.
Consider now a candidate (run) equilibrium where the first two agents announce impatient independent of type and the last agent announces truthfully— the only possibility for a run equilibrium with $N = 3$ when the ordering is revealed by the planner. Will the second agent defect? If patient, then that agent will announce patient only if the underlying incentive condition associated with the first agent announcing impatient is slack, which, of course, is not implied by (2). Finally, inequality (2) says nothing about what the first agent does if he believes that the second will always announce impatient.
More generally, if there are $N$ agents and each learns only his place in the ordering, then the incentive constraint (for truthful revelation) for the n-th agent is an average of $2^{n-1}$ underlying incentive conditions, one condition for each possible history of the previous $n - 1$ agent announcements. Again, satisfaction of the average does not imply satisfaction of the $2^{n-1}$ separate constraints. In particular, the incentive condition for agent $n$ associated with the previous $n - 1$ agents announcing impatient may not be satisfied. If not, then a bank run is possible.
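The averaging argument in this footnote can be illustrated with two invented numbers (the payoff gaps below are hypothetical, chosen only to make the point):

```python
theta = 0.5  # probability that an agent is patient

# Hypothetical truth-minus-deviation payoff gaps for the second agent, one
# for each possible (unobserved) first-agent announcement.
gap_after_patient = 0.30    # underlying IC if the first agent announced patient
gap_after_impatient = -0.10 # underlying IC if the first agent announced impatient

# Inequality (2) under reveal-place only requires the weighted average of the
# two underlying conditions to be nonnegative...
average_gap = theta * gap_after_patient + (1 - theta) * gap_after_impatient
assert average_gap >= 0
# ...but the underlying condition after an impatient announcement fails, so
# "everyone announces impatient" can be self-fulfilling: a run.
assert gap_after_impatient < 0
```

Satisfaction of the average, as the footnote says, does not imply satisfaction of each underlying condition separately.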


$A v(c_1)$ and $u(c_1, c_2; \text{patient}) = v(c_1 + c_2)$, with $v(x) = -x^{-1}$ and $A = 10$. The alternative is identical except that $v(x) = \ln x$. We denote by $w_k$ date-1 consumption for the first person to announce impatient whose position in the ordering is $k \in \{1, 2\}$. Given $(w_1, w_2)$, the other components of the allocation are determined residually from the resource constraint.
Table 1 reports $(w_1, w_2)$ and ex ante welfare of the best weakly and strongly implementable allocations under the different information revelation possibilities. (The full-information benchmark, in which the planner can observe the agent's type, is reported in the first row, and other expected utilities are expressed relative to the normalized expected utility for it.) With $N = 2$, there are only three such schemes, which correspond to those studied by PS, GL, and ANW. Moreover, with $N = 2$, weak implementability implies strong implementability under reveal-place-in-the-ordering and reveal-everything.8 Hence, there is only one allocation described for each of those schemes. Also, under reveal-nothing, there is one condition for strong implementability; namely, that a patient type reports truthfully even if he thinks that the other agent, if patient, will announce impatient.9
Table 1. Optimal allocations: $(w_1, w_2)$ and ex ante welfare

  Information              The PS example,          The alternative,
  assumption               v(x) = -x^{-1}           v(x) = ln x
  ---------------------------------------------------------------
  No private               (3.4483, 4.5850)         (3.8710, 5.4545)
  information              1.0000                   1.0000
  Reveal nothing           (3.0940, 3.2011)         (3.0952, 3.1981)
  (weak solution)          0.9447                   0.9121
  Reveal nothing           (3.1500, 3.1500)         (3.3075, 3.0000)
  (strong solution)        0.9444                   0.9106
  Reveal place in          (2.9758, 3.3144)         (2.9842, 3.3075)
  the ordering             0.9435                   0.9117
  Reveal everything        (2.9925, 3.1500)         (2.9962, 3.1500)
                           0.9346                   0.9044
  Autarky                  (3.0000, 3.0000)         (3.0000, 3.0000)
                           0.9247                   0.8966
Under reveal-nothing, the weak and strong solutions differ for the PS example, as they report. They also differ for the alternative example. However, from
8 To understand uniqueness when $N = 2$ and only place in the ordering is revealed, suppose allocation $(w_1, w_2)$ is weakly implementable. If the candidate equilibrium has the first agent announcing "impatient" independent of type, then a patient second agent will announce "patient" because $R(Y - w_1) > Y - w_1$. Then, weak implementability implies that a patient first agent will defect from proposed equilibrium play and will announce truthfully. As explained in the last footnote, this result does not hold for $N > 2$.
9 The planner's maximization problem for each of the information assumptions reported in Table 1 is explicitly described in appendix 2.


the perspective of our model, a comparison between rows 2 and 3 is not enough. What would suffice is to confirm that the best weakly implementable allocation under the other information schemes gives lower ex ante welfare. And given nesting, it is enough to show that the solution to the weak implementability problem under reveal-place-in-the-ordering is worse than under reveal-nothing. As shown in the fourth row of the table, both examples accomplish what PS set out to show— even from the broader perspective taken here. That is, even when we search over alternative information-revelation schemes by the planner, it remains true that there are settings in which the best weakly implementable allocation is not strongly implementable.10
The alternative example shows that the solution to the strong implementability problem can be something other than reveal-nothing or reveal-everything.
Hence, it shows that solving the strong implementability problem can involve a
nontrivial choice about what the planner should reveal.

5 Concluding remarks

The model above contains two extreme assumptions about the queue, assumptions that are best discussed against the background of the following related specification. Suppose that each agent gets a random draw, $(t, \tau)$, where, as above, $t$ is the preference type and where $\tau \in [0, 1]$ is the time (of day) at which the agent will encounter the planner. One extreme assumption is that each agent does not observe his $\tau$. (As PS say, the agent does not have a clock.) If agents privately observe $\tau$, then they have different priors about ordering in the queue even if the planner reveals nothing. A second extreme assumption is that agents cannot take costly actions to influence the time at which they meet the planner. If they could take such actions, then we suspect that optima would display less dependence on ordering in the queue— dependence that we tend not to see.

6 Appendix 1: The partitions when N = 3

As in the text, let $H = T^0 \cup T^1 \cup T^2$, where $T^0 = \{\text{null history}\}$ and $T^{k-1} = \{i, p\}^{k-1}$ for $k \geq 2$. For $N = 3$, $H = T^0 \cup \{i, p, (i,i), (i,p), (p,i), (p,p)\}$. Here are the partitions for the three information schemes in ANW, GL, and PS.
10 In both examples, at least some incentive constraints bind. The role of bindingness is highlighted in Ennis and Keister [3], in which incentives do not bind. In their model, the first-best allocation is weakly implementable under both reveal-nothing and reveal-place-in-the-ordering, but is strongly implementable under the latter and not under the former.


For ANW (reveal everything), they are
$$P_1 = \{T^0, \{i, p, (i,i), (i,p), (p,i), (p,p)\}\},$$
$$P_2 = \{\{i\}, \{p\}, T^0 \cup \{(i,i), (i,p), (p,i), (p,p)\}\},$$
$$P_3 = \{\{(i,i)\}, \{(i,p)\}, \{(p,i)\}, \{(p,p)\}, T^0 \cup \{i, p\}\}.$$
For GL (reveal place in line), they are
$$P_1 = \{T^0, \{i, p, (i,i), (i,p), (p,i), (p,p)\}\},$$
$$P_2 = \{\{i, p\}, T^0 \cup \{(i,i), (i,p), (p,i), (p,p)\}\},$$
$$P_3 = \{\{(i,i), (i,p), (p,i), (p,p)\}, T^0 \cup \{i, p\}\}.$$
For PS (reveal nothing), they are
$$P_1 = P_2 = P_3 = \{H\}.$$
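These partitions can be constructed and checked mechanically. A sketch in Python; the list-of-blocks representation and the names `Tk` and `rest` are ours:

```python
# Histories are tuples of announcements; () is the null history.
H = [()] + [(a,) for a in 'ip'] + [(a, b) for a in 'ip' for b in 'ip']

# T^{k-1}: the histories that the k-th agent's predecessors can generate.
Tk = {1: [()],
      2: [(a,) for a in 'ip'],
      3: [(a, b) for a in 'ip' for b in 'ip']}
rest = {k: [h for h in H if h not in Tk[k]] for k in Tk}

ANW = {k: [[h] for h in Tk[k]] + [rest[k]] for k in Tk}  # reveal everything
GL = {k: [Tk[k], rest[k]] for k in Tk}                   # reveal place in line
PS = {k: [H] for k in Tk}                                # reveal nothing

def is_partition(blocks, universe):
    # Blocks must cover the universe exactly once.
    flat = [h for b in blocks for h in b]
    return sorted(flat) == sorted(universe)

def coarser(P, Q):
    # P is coarser than Q if every block of Q sits inside a block of P.
    return all(any(set(b) <= set(B) for B in P) for b in Q)
```

For each agent's partition, `coarser(PS[k], GL[k])` and `coarser(GL[k], ANW[k])` both hold, which is the ordering behind the nesting result of Claim 1.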

7 Appendix 2: Maximization problems for the examples

For the examples, the planner's objective function is
$$U(w_1, w_2) = (1-\theta)^2 \left[ A v(w_1) + A v(Y - w_1) \right] + (1-\theta)\theta \left[ A v(w_1) + v((Y - w_1)R) \right]$$
$$+ \theta(1-\theta) \left[ A v(w_2) + v((Y - w_2)R) \right] + \theta^2 \, 2 v(YR/2),$$
where $\theta$ is the (independently and identically distributed) probability that a person is patient, which is 0.5 in the examples.
For the kind of preferences in the examples, which are the same as those originally studied, truth-telling is potentially binding only for patient people. Therefore, the planner's reveal-nothing, weak implementability (definition 2) problem is to maximize $U$ subject to
$$(1-\theta)(0.5)\left[ v((Y - w_1)R) + v((Y - w_2)R) \right] + \theta v(YR/2) \geq 0.5\left\{ v(w_1) + (1-\theta)v(Y - w_1) + \theta v(w_2) \right\}. \quad (4)$$
When nothing is revealed, the patient person believes that the other person is impatient with probability $1-\theta$ and patient with probability $\theta$, and that with probability 0.5 he is either first or second in the ordering.
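For the PS example this problem is small enough to solve by brute-force grid search. A numerical sketch, assuming $v(x) = -1/x$, $A = 10$, $\theta = 0.5$, $Y = 6$, $R = 1.05$ from the text; the grid and its resolution are ours:

```python
from itertools import product

Y, R, A, theta = 6.0, 1.05, 10.0, 0.5
v = lambda x: -1.0 / x  # the PS-example utility kernel

def U(w1, w2):
    # Ex ante welfare: the four equally likely type profiles (theta = 0.5).
    return ((1 - theta)**2 * (A * v(w1) + A * v(Y - w1))
            + (1 - theta) * theta * (A * v(w1) + v((Y - w1) * R))
            + theta * (1 - theta) * (A * v(w2) + v((Y - w2) * R))
            + theta**2 * 2 * v(Y * R / 2))

def satisfies_4(w1, w2):
    # Constraint (4): a patient agent, told nothing, prefers truth-telling.
    lhs = ((1 - theta) * 0.5 * (v((Y - w1) * R) + v((Y - w2) * R))
           + theta * v(Y * R / 2))
    rhs = 0.5 * (v(w1) + (1 - theta) * v(Y - w1) + theta * v(w2))
    return lhs >= rhs

grid = [2.9 + 0.002 * j for j in range(251)]  # w in [2.9, 3.4]
best = max((U(w1, w2), w1, w2)
           for w1, w2 in product(grid, grid) if satisfies_4(w1, w2))
```

On this grid the constrained maximizer lands near the (3.0940, 3.2011) reported in Table 1, with constraint (4) approximately binding there.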


The planner's reveal-nothing, strong implementability problem is to maximize $U$ subject to (4) and
$$v((Y - w_1)R) + v((Y - w_2)R) \geq v(w_1) + v(Y - w_1). \quad (5)$$
Constraint (5) says that a patient person prefers to announce truthfully when everyone else reports that they are impatient, given that it is equally likely that the person is first or second in the ordering.
The planner's reveal-place-in-the-ordering problem is to maximize $U$ subject to
$$(1-\theta)v((Y - w_2)R) + \theta v(YR/2) \geq v(w_1) \quad (6)$$
and
$$(1-\theta)v((Y - w_1)R) + \theta v(YR/2) \geq (1-\theta)v(Y - w_1) + \theta v(w_2). \quad (7)$$
Constraint (6) applies to a patient agent who is first in the ordering, while constraint (7) applies to one who is second in the ordering.
Finally, the planner's reveal-everything maximization problem is to maximize $U$ subject to (6) and
$$\frac{YR}{2} \geq w_2. \quad (8)$$
Constraint (6) applies to a patient agent who is first in the ordering, and constraint (8) applies to a patient agent who is second in the ordering and follows a person who announced patient. (The constraint for a patient agent who is second and who follows a person who announced impatient is $(Y - w_1)R \geq Y - w_1$, which is always satisfied.)
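The reported PS-example allocations can be checked against constraints (6)-(8) directly. A sketch, again assuming $v(x) = -1/x$ and $\theta = 0.5$; the tolerance is ours, to absorb the four-decimal rounding in Table 1:

```python
Y, R, theta = 6.0, 1.05, 0.5
v = lambda x: -1.0 / x  # the PS-example utility kernel

def c6(w1, w2):  # slack in (6): patient agent who is first in the ordering
    return (1 - theta) * v((Y - w2) * R) + theta * v(Y * R / 2) - v(w1)

def c7(w1, w2):  # slack in (7): patient agent who is second in the ordering
    return ((1 - theta) * v((Y - w1) * R) + theta * v(Y * R / 2)
            - (1 - theta) * v(Y - w1) - theta * v(w2))

def c8(w2):      # slack in (8): patient second agent after a patient first
    return Y * R / 2 - w2

tol = 1e-3
# Table 1, reveal place in the ordering (PS example): (6) and (7) hold,
# and both are close to binding at the reported optimum.
assert c6(2.9758, 3.3144) >= -tol and c7(2.9758, 3.3144) >= -tol
# Table 1, reveal everything (PS example): (6) holds with slack and (8) binds.
assert c6(2.9925, 3.1500) >= -tol and abs(c8(3.1500)) <= tol
```

That the reported optima sit on (or just inside) the relevant constraint boundaries is consistent with the incentive constraints binding in both examples, as footnote 10 notes.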

References
[1] D. Andolfatto, E. Nosal, N. Wallace, The role of independence in the Green-Lin Diamond-Dybvig model, J. of Econ. Theory, 137 (2007) 709-715.
[2] D. Diamond, P. Dybvig, Bank runs, deposit insurance, and liquidity, J. of Polit. Econ., 91 (1983) 401-419.
[3] H. Ennis, T. Keister, Run equilibria in the Green-Lin model of financial intermediation, J. of Econ. Theory, 144 (2009) 1996-2020.
[4] E. Green, P. Lin, Implementing efficient mechanisms in a model of financial intermediation, J. of Econ. Theory, 109 (2003) 1-23.
[5] E. Green, P. Lin, Diamond and Dybvig's classic theory of financial intermediation: What's missing?, F. R. B. Minneapolis Q. R., (Winter 2000) 3-13.
[6] P. Lin, Equivalence between the Diamond-Dybvig banking model and the optimal income taxation model, Economics Letters, 79 (2) (2003) 193-198.
[7] J. Peck, K. Shell, Equilibrium bank runs, J. of Polit. Econ., 111 (2003) 103-123.


