Three Keys to the City:
Resources, Agglomeration Economies, and Sorting*
BY GERALD A. CARLINO

Metropolitan areas in the U.S. contain almost
80 percent of the nation’s population and
nearly 85 percent of its jobs. This high
degree of spatial concentration of people
and jobs leads to congestion costs and higher housing
costs. To offset these costs, workers must receive higher
wages, and higher wages increase firms’ costs. So why
do firms continue to produce in cities where the cost of
doing business is so high? Economists offer three main
explanations. First, cities developed and grew because
of some natural advantage, such as a port. Second, as
cities grew, the resulting concentration of people and
jobs led to efficiency gains and cost savings for firms,
creating agglomeration economies. Finally, the presence
of a talented and flexible labor force made it feasible for
entrepreneurs to start new businesses. This third reason
for the growth of cities is called sorting. In this article,
Jerry Carlino looks at recent developments in measuring
each of the sources of city productivity and discusses the
policy implications of this research.


Jerry Carlino is a senior economic advisor and economist in the Research Department of the Philadelphia Fed. This article is available free of charge at www.philadelphiafed.org/research-and-data/publications/.

Although metropolitan areas account for only 16 percent of the total land area in the United States, they contain almost 80 percent of the nation’s population and nearly 85 percent of its jobs. This high degree of spatial concentration of people and jobs leads to congestion costs, such as increased traffic and pollution, and higher housing costs. To offset these congestion costs, workers must receive higher wages, and higher wages increase firms’ costs.
*The views expressed here are those of the
author and do not necessarily represent
the views of the Federal Reserve Bank of
Philadelphia or the Federal Reserve System.

So why do firms continue to produce in cities where the cost of doing
business is so high? Economists offer
three main explanations.1 The first explanation is that cities developed and
grew because of some valuable natural
advantage, such as a source of raw
materials or a port that allowed businesses to save on transportation costs.
For example, because of its access to a
deep harbor and because of its central
location, Philadelphia was the largest
and most important trading and merchant center in North America during
the nation’s colonial period.
But, as Satyajit Chatterjee points
out in an earlier Business Review
article, a natural advantage, such as
a harbor, was not the main reason for
Philadelphia’s subsequent growth into
the fourth largest metropolitan area in
the country. As colonial Philadelphia
grew, the resulting concentration of
people and jobs led to efficiency gains
and cost savings for firms, efficiency
and savings that arose from being close
to suppliers, workers, customers, and
even competitors. This second reason
for cost savings in cities is referred to
as agglomeration economies. Finally,
as Joseph Gyourko points out, the early
growth of Philadelphia was aided by its
large and relatively highly skilled labor
force. The presence of a talented and
flexible labor force made it feasible for
entrepreneurs to start new businesses in Philadelphia. This third reason for the growth of cities is called sorting: A disproportionate share of highly skilled (more productive) workers choose to live in large cities, making big cities more productive than small ones. Other things equal, firms will have little incentive to move if congestion costs are balanced by the benefits of a natural advantage, agglomeration economies, and sorting.

1 The terms city, metropolitan area, and their adjectives are being used to designate a metropolitan statistical area (MSA). In general, MSAs are statistical constructs used to represent integrated labor market areas. They typically are geographic areas combining a large population nucleus with adjacent communities that have a high degree of economic integration with the nucleus.
At one time, economists tended to lump together the advantages of sorting and the advantages associated with urban agglomeration economies into a single measure. However, more recently, economists have examined how important each of the three reasons is in accounting for city productivity. Knowledge about the relative importance of each of the reasons is important to policymakers, too. If agglomeration economies kick in once a city reaches a critical size, urban planners might want to pursue policies that help a city reach that size. There is also mounting evidence that agglomeration economies depend on a city’s ability to attract and retain high-skill workers. Edward Glaeser and Matthew Resseger find that agglomeration economies are much stronger in cities where workers are relatively highly skilled. Given the evidence that a high concentration of skilled workers enhances city productivity, policymakers may want to consider policies that attract and retain highly skilled people.

In this article I will look at recent developments in measuring each of the sources of city productivity and discuss the policy implications of this research.

SPATIAL CONCENTRATION OF PEOPLE AND JOBS IN CITIES: ROLE OF NATURAL ADVANTAGE, AGGLOMERATION ECONOMIES, AND SORTING

A location may attract households and firms because of
the presence of valuable natural
resources, such as petroleum, coal,
lumber, or minerals, and proximity to
a navigable river or a port. Although
the availability of resources and other
natural advantages varies from place to
place, a diversity of resources cannot
be the main reason for the existence
of cities. According to Edward Glaeser
and Janet Kohlhase, “The cost of
moving a ton by rail has declined in
real terms by more than 90 percent
since the late 19th century and the
rise in trucking has been even more
dramatic.” As a result, firms have
become increasingly “footloose”
with respect to a location’s natural
advantages, since easy access to rivers,
other water systems, and raw materials
has become less valuable over time.
In studying the spatial concentration
in manufacturing in 1987, Glenn
Ellison and Edward Glaeser found
that only about 20 percent of the spatial concentration of manufacturing
plants can be accounted for by a
location’s natural advantages. Given
that employment in manufacturing is
continually being replaced with jobs in
the service sector, the role of natural
advantages in accounting for the
geographic concentration of industries
will continue to be less important than
it was even as recently as 50 years ago.
Some economists believe that
an increase in the capital stock of
the public sector leads to increases in
private-sector output and productivity
because public infrastructure is an
essential input into the production of
private output.2 For example, driver
productivity increases when a good
highway system allows truck drivers
to avoid circuitous back roads and
congestion and to bring supplies to
a firm and goods to market more
quickly. Similarly, well-maintained roads reduce wear and tear on commercial vehicles, lowering private-sector maintenance and replacement costs for these vehicles. Similar arguments
can be made for the public provision of
police and fire protection, water supply
facilities, airports, and mass transit.
An increase in the public capital
stock, like an increase in any factor of
production, increases private-sector
output.
Historically, economists have
focused on agglomeration economies
to explain the high concentration
of people and jobs found in cities,
of which there are two broad types: business agglomeration economies and consumer agglomeration economies. Business agglomeration economies can increase the productivity of firms and their workers. More recently, economists have underscored the importance of consumer agglomeration economies, which improve the quality of leisure activities, as a source of the continuing growth of cities. The bulk of the empirical evidence on agglomeration economies has focused on business agglomeration economies (hereafter referred to simply as agglomeration economies unless otherwise noted), so we will start there.

2 See the article by Randall Eberts and Daniel McMillen for a review of the early empirical evidence on public infrastructure. This evidence indicated a strong response of private-sector output to increases in the capital stock of the public sector. More recent studies have not found such a strong link between the capital stock of the public sector and productivity. For example, looking at the role that public infrastructure plays in a state’s economic growth, Andrew Haughwout finds that increases in a state’s public capital stock did not dramatically raise a state’s economic growth.

If agglomeration economies are
important, they will make workers in
large cities more productive compared
with workers in small cities and
rural areas. Since workers are paid
according to their productivity, wages
and the demand for labor reflect
the advantages of agglomeration
economies. Thus, early studies looked
at the impact of agglomeration
economies on average wages (wages
averaged across all workers in a city).
Since agglomeration economies are
not directly observable, many studies
have used some measure of urban size,
such as the size of a city’s population
or its population density (the city’s
population relative to its land area), as
a proxy for agglomeration economies.
The idea is that the benefits of
agglomeration economies increase with
a city’s population size or its population
density.
Studies from the 1970s and
1980s found that a doubling in
city population size could lead to a
substantial 8 to 10 percent increase
in manufacturing productivity.3 More
recent evidence indicates that the findings from these early studies most likely overstate the actual productivity gains associated with urban size. The contribution of population size to urban productivity may be overstated if the other factors thought to influence urban productivity are not taken into consideration. An important problem with these studies is that they did not control for one aspect of city population: the very real possibility that the more productive places will tend to draw people. Are cities large because they are more productive or more productive because of their size? In a 2010 article, Pierre-Philippe Combes and his co-authors refer to this issue of reverse causation as the endogenous quantity of labor. This issue was first raised by Ronald Moomaw in his critique of the early literature and first dealt with in a study by Antonio Ciccone and Robert Hall. Ciccone and Hall proposed using population from the distant past (in their case for 1850) instead of using current population to control for reverse causation. The idea is that the population from 1850 is likely to be correlated with the population size of today but not with productivity today. We will have more to say about this source of reverse causation later.

Another concern is that more highly skilled workers may sort themselves into cities because large cities offer greater opportunities for consumption. Rising real incomes mean that quality-of-life issues have become more and more important as determinants of where people choose to live. For example, growth in real income increases the demand for a greater variety of goods and services (more theaters, varied restaurant cuisine, and professional sports teams). This implies that large cities with more choices will attract high-income households that put a high value on variety. Members of these high-income households also tend to be highly skilled individuals. The concern is that highly skilled workers tend to earn higher wages, and this could account for some of the positive correlation found between city population size and average wages in cities. In their 2010 article, Pierre-Philippe Combes and his co-authors refer to this sorting of relatively high-skill (highly productive) workers in large cities as the endogenous quality of labor.

In sum, there can be two important sources of overestimation of agglomeration economies: More productive places may attract more people, and more productive people may sort themselves into large cities. That is, large cities may draw people, especially highly skilled ones, leading to a potential overestimation of city size’s effect on city productivity. It is important for any study of urban agglomeration economies to control for both of these sources of upward bias.4

3 See the article by Randall Eberts and Daniel McMillen for a review of the early empirical evidence on agglomeration economies.

4 See the article by Pierre-Philippe Combes, Gilles Duranton, and Laurent Gobillon for a discussion of a variety of solutions to address the overestimation of agglomeration economies.


WHAT’S THE EVIDENCE?
One of the facts that support
the existence of urban agglomeration
economies is the positive association
between average wages in a city and
a city’s population size. The idea is
that if workers are paid according
to their productivity (that is, there
is perfect competition in local labor
markets), wages and the demand
for labor reflect the advantages of
agglomeration economies. The figure
shows that there is indeed a positive
correlation between average annual
wages (total annual wages relative
to the total number of workers) and
population in a sample consisting
of over 300 metropolitan statistical
areas (MSAs) in 2005. Population
size alone explains about 16 percent
of the variation in average wages
across MSAs. The positive correlation
depicted graphically in the figure is
shown numerically in column 1 of the
table, which shows that a doubling
of MSA population size is associated
with a 6.1 percent increase in average
wages.5 As we will see, this estimate
falls to 3.8 percent once we control
for both sources of upward bias.6

FIGURE
Wages Increase with City Size
[Scatter plot of the log of average nominal compensation against the log of population (2005) across MSAs, with fitted values showing a positive relationship. Average wages are equal to average worker compensation.]

5 Average wages could be higher in large cities if large cities tend to have a mix of industries that would pay higher wages even if they were located in medium-size and small cities. If so, estimates of agglomeration economies will be overstated if we do not control for differences in industry mix across cities. All regressions reported in the table control for the 1970 employment shares in each of nine broad industries. We used 1970 industry employment shares to mitigate any feedback from average wages in 2005 on current industry employment shares. The industries consist of agriculture; mining; construction; manufacturing; wholesale trade; retail trade; finance, insurance, and real estate; services; and government (transportation is the excluded sector). All of the regressions include controls to indicate an MSA’s region. The regions are New England; Mideast; Great Lakes; Plains; Southeast; Southwest; and Rocky Mountain (the Far West is the excluded region).

6 See Table A in the appendix for a summary of the regression underlying the discussion in the text.

7 The reason for using 1920 population is that a city’s population today tends to be highly positively correlated with its population from long ago, but the forces giving rise to a city’s productivity today are quite different from those of the distant past. For example, in 1920, high productivity in manufacturing would have resulted in the growth of a city and a high level of population. It’s highly likely that the level of population in 2005 will be highly correlated with the level of population from 85 years earlier, but it’s unlikely that the drivers of productivity in manufacturing matter very much for the services-oriented cities of today.

As I have already indicated, estimates of agglomeration economies will be overstated if people move to high-productivity MSAs (the reverse causation issue). Column 2 of the table shows the results when we use the 1920 level of an MSA’s population to identify the effect of population (our proxy for agglomeration economies) on a city’s average wages.7 After controlling for reverse causation, the estimate for the effect of a doubling of city population size on average wages
falls from 6.1 percent to 3.9 percent.
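To make the mechanics concrete, here is a minimal sketch of this kind of wage regression in Python. The file and column names are hypothetical stand-ins, and the actual regressions also include the industry-mix and region controls described in footnote 5; this illustrates only the logic, not the article's own code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical MSA-level file and column names, used only for illustration.
msa = pd.read_csv("msa_data.csv")  # avg_wage_2005, pop_2005, pop_1920, ...

y = np.log(msa["avg_wage_2005"])

# Column 1: log average wages on log 2005 population.
fit_2005 = sm.OLS(y, sm.add_constant(np.log(msa["pop_2005"]))).fit(cov_type="HC1")

# Column 2: deeply lagged (1920) population to mitigate reverse causation.
fit_1920 = sm.OLS(y, sm.add_constant(np.log(msa["pop_1920"]))).fit(cov_type="HC1")

# An elasticity b on log population implies that doubling population raises
# average wages by roughly 100 * b * ln(2) percent (6.1 and 3.9 percent
# in the table's first two columns).
for fit in (fit_2005, fit_1920):
    print(f"doubling effect: {100 * fit.params.iloc[1] * np.log(2):.1f} percent")
```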
What would happen to our
estimate of the city size wage premium
after we control for the share of an
MSA’s population with a college
degree? There is a strong positive
correlation between the share of the
adult population with a college degree
and city size.8 In fact, if a city were to
double its share of the adult population
(persons 25 years old and over) with a
college degree, its average wages would
increase almost 63 percent. While
it is highly unlikely that most cities
would be able to double their college
share, a 10 percent increase would
still bring nice returns in terms of
average wages. For example, in 2000,
almost 28 percent of the Philadelphia
metropolitan area’s population had a
college degree. If Philadelphia’s college share increased 5 percent, to just over 29 percent, we estimate that average wages in Philadelphia would increase 3.2 percent. Put differently, relatively small changes in an area’s college share can lead to relatively large changes in its average wage.

8 The simple correlation between the college share and the log of population is 0.71.

TABLE
Effect on Average Nominal Wages Resulting from a Doubling of an MSA’s Population Size†

                                                        (1)    (2)    (3)    (4)
Population, 2005††                                      6.1
Population, 1920††                                             3.9    3.8    3.6
Controls for the share of 1920 population
  with a college degree                                 No     No     Yes    Yes
Controls for natural advantage and infrastructure†††    No     No     No     Yes
No. of MSAs                                             313    309    309    254

† Results reported after controlling for the 1970 employment shares in each of nine broad industries and for the MSA’s region. See the appendix for details.
†† Indicates variable is in logs.
††† A city’s distance to commercially navigable rivers in 1890 is used to control for a city’s natural advantage. The square miles of interstate highway system planned for in 1947 for a city is used to control for infrastructure in that city.
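As a rough consistency check on the Philadelphia example above (my own back-of-the-envelope reading, not a calculation spelled out in the article), pro-rating the 62.8 percent doubling effect from Table A linearly over a 5 percent proportional increase in the college share comes out close to the figure quoted in the text:

$$0.628 \times 0.05 \approx 0.031,$$

that is, an increase of roughly 3.1 percent, in line with the 3.2 percent reported for Philadelphia.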
This positive correlation between
a city’s average wages and its college
share could lead to an overestimation
of the city size wage premium if high-ability and highly productive people
sort themselves into large cities (the
issue of endogenous quality of the
population). Including the college
share in the analysis is one way to
control for the sorting in an MSA’s
population. Column 3 of the table
shows that the estimates of the city
size wage premium are only slightly
affected after controlling for an area’s
college shares, falling to 3.8 percent
from 3.9. Thus, at least for average city
wages, it is more important to control
for reverse causation (the migration
of workers into cities) than it is to
account for sorting (the self-selection
of highly skilled workers into large
cities).
As discussed earlier, some
economists believe that an increase in
the capital stock of the public sector
leads to increases in private-sector
output and productivity because
public infrastructure is an essential
input into the production of private
output. In addition, some natural
advantages (such as access to a
port, rivers, or lakes) that gave rise
to large cities in the past may still
influence productivity (and wages)
today. Column 4 of the table shows
that the estimate of the city size wage
premium falls only slightly (from 3.8
to 3.6) after we control for both an
MSA’s urban infrastructure and its
natural advantages.9 This finding is
consistent with those reported by
Andrew Haughwout: Increases in a
state’s public capital stock did not

dramatically raise state economic
growth.
What does our estimate of an
urban wage premium of 3.8 percent
mean for wages in dollar terms?
A typical city in our sample had a
population of about 680,000 (about
the size of Springfield, Massachusetts)
in 2005 and an average annual wage
of almost $34,700 in 2005. A doubling
in the size of a typical city to a city
consisting of almost 1.4 million people
(about the size of the Nashville,
Tennessee, or the Austin, Texas MSA)
would result in an increase in average
annual wages of about $1,320. If the
Philadelphia MSA grew to the size of
the New York City MSA, the average
wage in the Philadelphia MSA is
estimated to increase by about $2,500.
If the Allentown MSA grew to the
size of the New York City MSA, the
average wage in Allentown would
increase by just under $5,500. While
it’s unlikely that either Philadelphia
or Allentown will ever reach the
population size of New York City, these
examples demonstrate that the urban
wage premium can be substantial.

9 Recall that estimates of the city size wage premium could be overstated if we fail to control for urban infrastructure. Following the seminal work of Nathaniel Baum-Snow, we take the miles of highways planned for an MSA in the 1947 national interstate highway plan. These planned highway miles are divided by the square miles of an MSA’s land area to arrive at the proxy variable used for MSA infrastructure. We used 1947 planned miles of highways, since it’s likely that miles of highways today are highly correlated with planned miles, while productivity today is not likely to have caused the planned miles in 1947. We thank Matthew Turner for providing the data for the planned highway miles; see the article by Gilles Duranton and Turner for details. We use an MSA’s distance to commercially navigated waterways in 1890 as our proxy for an MSA’s natural advantages. We thank Jordan Rappaport for providing these data; see the article by Rappaport and Jeffrey Sachs for details.
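To make the arithmetic behind these dollar figures explicit: a doubling of population raises average wages by the estimated 3.8 percent, so for the typical city

$$\Delta W \approx 0.038 \times \$34{,}700 \approx \$1{,}320.$$

More generally, for growth from population $P_0$ to $P_1$, the implied change is roughly $0.038 \times \log_2(P_1/P_0) \times W_0$; the Philadelphia and Allentown figures apply the same logic with those cities' 2005 populations and average wages (numbers not reproduced here).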
While firms care about what they
must pay workers in nominal dollars,
workers care about the purchasing
power of the wages they receive.
Although money wages are higher
in New York City than in either
Philadelphia or Allentown, the cost
of living is much higher in New York
City, too. (See Adjusting Wages for City
Cost of Living Differentials.)
Moving from Aggregate Data to
Micro Data. In attempting to measure
agglomeration economies, we dealt
with the sorting issue by controlling
for worker characteristics using what we could observe in the aggregate data,
namely, the share of a city’s adult
population with a college degree. But
there are plenty of other observable
and unobserved worker characteristics
that need to be considered in
attempting to get the most accurate
estimate of agglomeration economies.
Some of these characteristics, such
as a worker’s years of experience and
his occupation, can be observed.
Yet a number of unobserved worker
characteristics, such as motivation,
dedication, and innate abilities, may
also influence a worker’s wages.10 The
role of agglomeration economies in
urban productivity may be overstated
if the more experienced workers or
those with the most innate ability
tend to sort themselves into large
cities. Recently, economists have been
using large data sets containing highly
detailed information on individual
workers (micro data) rather than aggregate data (summed across all workers in an area) in an attempt to account for the role that observed and unobserved worker traits play in productivity. For example, Edward Glaeser and David Maré report that workers in large U.S. cities have wages that are 33 percent higher than those of workers outside of cities. But they find that the urban wage premium shrinks dramatically once they control for individual worker characteristics.

In an important 2010 study, Pierre-Philippe Combes and his co-authors use French micro data to gather evidence on the relationship between urban density and the urban wage premium.11 They find that a doubling of urban density is associated with an overall urban wage premium of about 5 percent. When they control for just reverse causation, the urban wage premium falls to 4 percent. If, instead, they control only for sorting, the urban wage premium shrinks from 5 percent to 3.3 percent. That is, sorting matters in that it accounts for about one-third of the overall wage premium. The premium shrinks to 2.7 percent after controlling for both sorting and reverse causation in regard to labor.12 In comparison, using data for U.S. cities, we found a somewhat larger urban premium of 3.8 percent when looking at population size (the table shown earlier, or Table A in the appendix) or a premium of 3.3 percent when looking at population density (Table B in the appendix). The smaller premiums found in the study using French data may be largely due to better controls on worker characteristics afforded by the use of worker-level data.13

10 Recent work on skills in cities by Marigee Bacolod, Bernardo Blum, and William Strange, among others, acknowledges that skills are multifaceted and, therefore, may not be adequately summarized by using a measure of education, such as a city’s college share.

11 Some economists use population size as a proxy for agglomeration economies, while other economists use population density (population of an MSA divided by the MSA’s land area) as a proxy for agglomeration. As the appendix to this article shows, the findings for aggregate average wages are quite similar whether we use population size or population density.

12 Similar to studies finding an urban wage premium in the neighborhood of 2 percent using French micro data, a study by Giordano Mion and Paolo Naticchioni, using micro data from Italy, finds that a doubling of density increases wages by 1 to 2 percent.

13 Using panel data for 22 U.S. cities for the period 1985-2006, Morris Davis, Jonas Fisher, and Toni Whited find an urban wage premium of 2 percent. They also find that this urban wage premium raises national long-run consumption growth by 10 percent. Also using data for the U.S., Baum-Snow and Pavan find that agglomeration economies and sorting each account for about one-half of the urban wage premium.

Adjusting Wages for City Cost of Living Differentials

In the text, we looked at the effect agglomeration economies have on average nominal wages because this is the wage that firms care about. Since firms must compete in national and international markets, an area’s nominal wage is important for firms’ cost of doing business and may influence their decisions about where to locate a plant. From the viewpoint of workers, the possible advantages of working in an area with high nominal wages partly depend on how expensive it is to live there. Other things equal, workers should be indifferent between an area where wages and prices are at the national average and one where both the cost of living and wages are the same percentage above average. In this case, real wages are equal in both areas. Thus, workers will choose a location in response to real wage differentials.

In the figure, we plot the cost of living in an MSA against the MSA’s population size. As the figure shows, the cost of living is positively associated with city size.* Since the cost of living tends to rise with city size, the gap in net city-size wage premiums (an area’s wage premium due to agglomeration economies adjusted for its cost of living) across cities will not be as large as the gross wage premium. Other things equal, we would expect workers to migrate from areas with low real wages to areas with high real wages, a process that would eventually lead to real wages that are largely equalized across cities. In reality, real wages may not be equalized if workers trade off real wages for amenities, accepting lower real wages in high-amenity places and demanding higher real wages in low-amenity locations.

* Data for the cost of living by MSA are for 2005 and were obtained from the American Chamber of Commerce Research Association (ACCRA). The data show a moderate positive correlation of 0.2884 between the log of the cost of living and the log of MSA population size. The correlation between cost of living and city size falls to 0.2248 once we exclude the four outlier MSAs (Bridgeport-Stamford-Norwalk, CT; Honolulu, HI; San Diego-Carlsbad-San Marcos, CA; and San Jose-Sunnyvale-Santa Clara, CA) shown in the upper-center portion of the figure.

FIGURE
Cost of Living Increases with City Size
[Scatter plot of the log of the cost of living in cities in 2005 against the log of population (2005), with fitted values showing a positive relationship.]
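In symbols (notation introduced here for illustration, not taken from the article), with $w$ an area's nominal wage and $p$ its cost-of-living index, the indifference condition described in the box is that real wages be equal across areas $A$ and $B$:

$$\frac{w_A}{p_A} = \frac{w_B}{p_B},$$

so raising both wages and prices in one area by the same percentage leaves workers' location choices unaffected.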

Loosely applying the 2.7 percent urban wage premium to the aggregate data indicates that the premium between the Philadelphia MSA and the New York City MSA falls from $2,500 to about $1,800 in nominal terms. Thus, the most comprehensive studies — those using micro data — find that the urban wage premium exists but is much smaller than previously thought. The findings in the 2010 article by Combes and co-authors suggest that an important share of the measured agglomeration economies is, in fact, attributable to the sorting of highly skilled workers into denser locations.14

14 Another potential way in which agglomeration economies could be overstated is if only the strongest (most productive) firms survive in large cities. That is, the existence of a large number of firms in large cities gives rise to greater competition among firms and may lead to an exodus of less productive firms. This “selection” of the most productive firms in large cities could result in an overestimation of agglomeration economies if researchers fail to account for this potential source of bias. A 2009 study by Pierre-Philippe Combes and co-authors, using French establishment-level data, finds that this selection bias does not appear to be important in estimating agglomeration economies.
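The $1,800 figure is, to a close approximation, the earlier $2,500 estimate rescaled by the ratio of the micro-data premium to the aggregate one:

$$\$2{,}500 \times \frac{2.7}{3.8} \approx \$1{,}800.$$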
SKILLS AND CITIES
So far, we have summarized
studies showing that productivity
increases along with the population
size or density of an area. We have
seen that agglomeration economies are
part of the story in any explanation
of greater city productivity. We have
also seen that there is a strong positive
correlation between productivity
in cities and the tendency for more
skilled workers to locate in large
cities. Economists cite several reasons
why skilled workers matter so much
for urban productivity. The high
concentration of people in cities
facilitates the exchange of knowledge
among people. These exchanges,
called knowledge spillovers, are
likely to be enhanced in cities with
highly skilled workers, who are better
able to articulate and communicate
ideas and may be better at adapting
to new technologies. In a study of
local innovative activity (measured
by an MSA’s patents per capita) that
I co-authored with Robert Hunt,
we found that a skilled work force
(measured by the percent of the adult
population with a college degree) was
by far the most powerful determinant
of innovative activity, even after
controlling for other R&D inputs and
other city characteristics. Specifically,

we found that a 10 percent increase in the college share is associated with an almost 9 percent increase in patents per capita.

A city may be highly innovative,
but it may have trouble surviving if
the benefits of this innovation largely
accrue to other regions. As technology
changes, cities need to adapt by
reinventing themselves. Having a
highly skilled labor force may be a
crucial ingredient in the reinvention
process. Edward Glaeser and Albert
Saiz point out that skilled workers
may adjust more rapidly to negative
economic shocks and educated workers
may find it much easier to adapt
their activities to changing economic
incentives presented by emerging
technologies. In fact, Glaeser and Saiz
argue that generating new technologies
locally is not as important as having
the ability to adapt to them. In a 2009
study, Jeffrey Lin provides evidence
that the spatial concentration of
skilled workers increases the rate of
adaptation to new technologies.
In another study, Joseph Gyourko
points out how Philadelphia has
successfully reinvented itself several
times. Until the mid-19th century, Philadelphia was the largest and most important trading and merchant center in North America. However, in the early 19th century, New York overtook Philadelphia as the leading center, but Philadelphia successfully reinvented itself and became a major center of highly skilled manufacturing activity. Up until the mid-19th century, Philadelphia was also able to benefit from its central location among North American cities. But the rise
of rail transportation in the mid-19th
century threatened Philadelphia’s
survival by drastically reducing the
cost of shipping goods and the price of
traded goods, allowing other cities to
compete with Philadelphia.
However, Philadelphia figured
out how to turn this potential liability
into an asset and reinvented itself
by exploiting the city’s proximity
to the coal fields of northeastern
Pennsylvania. The rise in coal as
an energy source not only increased
the volume of shipping through
Philadelphia (as witnessed by the
development of the Philadelphia
and Reading Railroad), but it also
facilitated the transition to steam-powered machinery, a move that
reinforced the city’s position as an
important manufacturing center.
The reinvention of Pittsburgh is
a more contemporary example. As
President Obama noted on September
8, 2009, Pittsburgh has “transformed
itself from the city of steel to a
center for high-tech innovation —
including green technology, education
and training, and research and
development.” Pittsburgh was chosen
to host the G-20 Summit in 2009 both
in recognition of and to highlight this
transformation.
The evidence suggests that a city’s prosperity and growth depend crucially on its ability to attract and retain highly skilled workers. Recently,
economists have started to more
closely examine the role of consumer
agglomeration economies in the
growth and development of cities.
Jesse Shapiro has shown that the
amenities that cities offer are especially
attractive to high-skill workers, who,
as we have already discussed, can
stimulate employment and population
growth.
In a study I conducted with Albert
Saiz, we used the number of leisure
tourist visits to cities as a proxy for
the amenities offered in these cities.
The idea is that leisure visitors are
attracted by an area’s special traits,
such as proximity to the ocean, scenic
views, historic districts, architectural
beauty, and cultural and recreational
opportunities. But these are some of
the very characteristics that attract
households to cities when they choose
these places as their permanent homes.
We found that, during the 1990s, a city with twice the level of leisure visits of an otherwise comparable city would have had a decadal population growth rate 2.2 percentage points higher and decadal job growth 2.6 percentage points higher. While
more evidence is needed, my research
with Saiz suggests that consumer
agglomeration economies can be a
future source of growth for cities.
CONCLUSION
Progress has been made in
obtaining better estimates of both
business and consumer agglomeration
economies. Currently, the best
evidence suggests that a doubling of city size increases productivity by about 3 to 4 percent. Still,
the limitations of the data preclude us
from speculating on the exact channels
that explain business agglomeration
economies. For example, we do
not know the extent to which
agglomeration economies arise from
the sharing of specialized inputs by
many firms in a common city.
Another possibility is that cities
facilitate learning, since the exchange
of ideas among individuals is enhanced
in dense locations. Yet another
possibility is that cities allow for better
matches among workers and firms
and better matching improves overall
city productivity. Recent studies have
identified the importance of some
of these mechanisms. For example,
in a Business Review article, Jeffrey
Lin describes his paper with Hoyt
Bleakley in which they evaluate one
potential mechanism: better matching
between job seekers and firms in dense MSAs. Still, no study that I’m aware
of considers the relative importance
of the various mechanisms. It is
difficult to formulate specific policy
recommendations without precise
estimates of the relative importance
of these various channels for
agglomeration economies.
It is natural for local policymakers
to think about the benefits of
agglomeration economies for their own
cities. But if city A increases its
population size at the expense of other
cities, any gains from agglomeration
economies in city A might be offset by
reductions in agglomeration economies
in other cities. This suggests that
agglomeration economies can have
different policy implications for
national as opposed to local
policymakers. As Edward Glaeser
points out, “The existence of
agglomeration economies does not
itself give guidance about optimal
regional policy.” It is difficult to
formulate a national regional policy
based on estimates of how
agglomeration economies affect cities
on average. Policymakers would need
good estimates of how agglomeration
economies affect different cities.
Precise estimates of agglomeration
economies for specific cities are an
important next step for future research
and for policy design. BR


REFERENCES
Bacolod, Marigee, Bernardo S. Blum, and
William C. Strange. “Elements of Skill:
Traits, Intelligences, Education, and Agglomeration,” Journal of Regional Science,
50 (2010), pp. 245-80.
Baum-Snow, Nathaniel. “Did Highways
Cause Suburbanization?” Quarterly Journal
of Economics, 122 (2007), pp. 775-805.
Baum-Snow, Nathaniel, and Ronni Pavan.
“Understanding the City Size Wage Gap,”
unpublished manuscript, Brown University
(February 2010).
Bleakley, Hoyt, and Jeffrey Lin. “Thick-Market Effects and Churning in the Labor
Market: Evidence from U.S. Cities,” Federal Reserve Bank of Philadelphia Working
Paper 07-23 (2007).
Carlino, Gerald A., and Robert Hunt.
“What Explains the Quantity and Quality
of Local Inventive Activity?” in G. Burtless
and J. Pack, eds., Brookings-Wharton Papers
on Urban Affairs. Washington: Brookings
Institution Press (2009), pp. 65-109.
Carlino, Gerald A., and Albert Saiz. “City
Beautiful,” Federal Reserve Bank of Philadelphia Working Paper 08-22 (September
2008).
Chatterjee, Satyajit. “Agglomeration Economies: The Spark That Ignites a City?”
Federal Reserve Bank of Philadelphia Business Review (Fourth Quarter 2003).
Ciccone, Antonio, and Robert E. Hall.
“Productivity and the Density of Economic
Activity,” American Economic Review, 86
(1996), pp. 54-70.
Combes, Pierre-Philippe, Gilles Duranton,
and Laurent Gobillon. “The Identification of
Agglomeration Economies,” unpublished
manuscript (April 2010).
Combes, Pierre-Philippe, Gilles Duranton,
Laurent Gobillon, and Sebastien Roux.
“Estimating Agglomeration Economies
with History, Geology, and Worker Effects,” in Edward L. Glaeser, ed., Agglomeration Economies. Chicago: University of
Chicago Press, 2010.

Combes, Pierre-Philippe, Gilles Duranton,
Laurent Gobillon, and Sebastien Roux.
“The Productivity Advantages of Large
Cities: Distinguishing Agglomeration from
Firm Selection,” unpublished manuscript
(February 2009).
Davis, Morris, Jonas Fisher, and Toni
Whited. “Macroeconomic Implications of
Agglomeration,” unpublished manuscript,
University of Wisconsin (December 2009).
Duranton, Gilles, and Matthew A. Turner.
“Urban Growth and Transportation,”
unpublished manuscript, University of
Toronto (November 2008).
Eberts, Randall W., and Daniel P. McMillen. “Agglomeration Economies and Urban
Public Infrastructure,” in P. Cheshire and
E. Mills, eds., Handbook of Regional and
Urban Economics, Volume 3: Applied Urban
Economics. New York: Elsevier Sciences,
1999.
Ellison, Glenn, and Edward L. Glaeser.
“Geographic Concentration of Industry:
Does Natural Advantage Explain Agglomeration?” American Economic Review, Papers and Proceedings, 89 (1999), pp. 311-16.
Glaeser, Edward L. “Introduction,” in
Edward L. Glaeser, ed., Agglomeration
Economies. Chicago: University of Chicago
Press, 2010.
Glaeser, Edward L., and Matthew Resseger.
“The Complementarity Between Cities
and Skills,” Journal of Regional Science, 50
(2010), pp. 221-224.
Glaeser, Edward L., and Janet E. Kohlhase.
“Cities, Regions and the Decline of Transport Costs,” Papers of the Regional Science
Association, 83 (2004), pp. 197-228.
Glaeser, Edward L., and Albert Saiz. “The
Rise of the Skilled City,” in W. Gale and
J. Pack, eds., Brookings-Wharton Papers
on Urban Affairs. Washington: Brookings
Institution Press (2004), pp. 47-94.

Glaeser, Edward L., and D.C. Maré. “Cities
and Skill,” Journal of Labor Economics, 19
(2001), pp. 316-42.
Gyourko, Joseph. “Looking Back to Look
Forward: Learning from Philadelphia’s 350
Years of Urban Development,” in G. Burtless and J. Pack, eds., Brookings-Wharton
Papers on Urban Affairs. Washington:
Brookings Institution Press (2005), pp.
1-58.
Haughwout, Andrew. “The Paradox of Infrastructure Investment,” Brookings Review,
18:3 (Summer 2000).
Lin, Jeffrey. “Urban Productivity from Job
Search and Matching,” Federal Reserve
Bank of Philadelphia Business Review (First
Quarter 2011).
Lin, Jeffrey. “Technological Adaptation,
Cities, and New Work,” Federal Reserve
Bank of Philadelphia Working Paper 09-17
(2009).
Mion, Giordano, and Paolo Naticchioni.
“The Spatial Sorting and Matching of
Skills and Firms,” Canadian Journal of Economics, 42 (2009), pp. 28-55.
Moomaw, Ronald L. “Productivity and
City Size: A Critique of the Evidence,”
Quarterly Journal of Economics 96 (1981),
pp. 675-88.
Puga, Diego. “The Magnitude and Causes
of Agglomeration Economies,” Journal of
Regional Science, 50 (2010), pp. 203-19.
Rappaport, Jordan, and Jeffrey Sachs.
“The United States as a Coastal Nation,”
Journal of Economic Growth, 8 (2003), pp. 5-46.
Shapiro, Jesse M. “Smart Cities: Quality of
Life, Productivity, and the Growth Effects
of Human Capital,” Review of Economics
and Statistics, 88 (2006), pp. 324-35.


APPENDIX

As pointed out in the main text, agglomeration economies increase worker productivity, and in
competitive labor markets, this increased productivity will show up in the wages workers are paid.
Thus, it has become customary for economists to estimate a wage equation of the following type:
$$\ln(\mathit{AverageTotalWages}_i) = \alpha + \beta \ln(\mathit{MSAPop}_i) + \gamma\,\mathit{controls}_i + \varepsilon_i$$

where estimates of $\beta$ are the parameters of interest and the controls (such as a city’s college share and its mix of industries) differ in different studies. The findings reported in this study are based on the estimation of the following wage equation:

$$\ln(\mathit{AverageTotalWages}_i) = \alpha + \beta \ln(\mathit{MSAPop}_i) + \gamma\,\mathit{CollegeEducated}_i + \sum_{j=1}^{8} \delta_j\,\mathit{IndustryMix}_{j,i} + \sum_{k=1}^{7} \phi_k\,\mathit{Region}_{k,i} + \mu \ln(\mathit{PlannedHighwayMiles}_i) + \nu \ln(\mathit{NavigableRivers}_i) + \varepsilon_i$$

where

Average Total Wages_i = Total wages and salaries divided by total number of workers for 2005 in MSA i
MSA Pop_i = Two alternative measures are used:
    in Model 1: MSA Pop_i = the level of population in MSA i (either for 2005 or for 1920);
    in Model 2: MSA Pop_i = population density = population in MSA i divided by square miles of land area in MSA i (either for 2005 or for 1920).
College Educated_i = Percent of 1920 population with at least a college degree in MSA i
Industry Mix_j,i = 1970 employment shares in each of nine broad industries in MSA i
Region_k,i = A dummy variable indicating each MSA’s region
Planned Highway Miles_i = 1947 planned miles of interstate highways for MSA i relative to square miles of land area in MSA i
Navigable Rivers_i = Distance from navigable rivers in 1890 for MSA i
The dependent variable refers to average annual total private-sector wages divided by the number of private-sector workers in an MSA in 2005. The dependent variable is a proxy for MSA productivity. In general, deeply lagged
values of the independent variables are used in this article. This reduces the simultaneity and reduces concerns about
direction-of-causation issues, since 2005 values of the dependent variable are not likely to affect deeply lagged values of
the independent ones. Two population measures are used as proxy variables for agglomeration economies. In Model 1,
population size is used because sometimes researchers use MSA population size as a proxy for agglomeration economies.
Alternatively, in Model 2, we use population density as the proxy variable because more recent studies have chosen
density measures over measures of size. For comparative purposes, 2005 values for population size/density are used and
reported. Since 2005 values are likely to be endogenous, we also use MSA population size and MSA population density in 1920.
The industry mix variables consist of the 1970 employment shares in each of nine broad industries: agriculture;
mining; construction; manufacturing; wholesale trade; retail trade; finance, insurance, and real estate; services; and

www.philadelphiafed.org

Business Review Q3 2011 11

APPENDIX (continued)
government (transportation is the excluded sector). The region variables consist of a set of dummy variables to account
for the MSA’s region. The regions are New England; Mideast; Great Lakes; Plains; Southeast; Southwest; and Rocky
Mountain (the Far West is the excluded region). We use planned highway miles as a proxy for urban infrastructure.
Specifically, we use the miles of highways planned for an MSA in the 1947 national interstate highway plan. These
planned highway miles are divided by the square miles of an MSA’s land area to arrive at the proxy variable used for
MSA infrastructure. Finally, we use an MSA’s distance to commercially navigated waterways in 1890 as our proxy for an
MSA’s natural advantages.
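A minimal sketch of this full specification in Python, using the formula interface of statsmodels; the file and variable names below are hypothetical stand-ins for the data described above, offered as an illustration of the specification rather than the code actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; illustrative stand-ins only.
msa = pd.read_csv("msa_data.csv")

# Full specification (column 4 of Table A): 1920 population, 1920 college
# share, eight industry-share controls, region dummies, 1947 planned highway
# miles, and distance from navigable rivers in 1890.
industry_terms = " + ".join(f"ind_share_{j}" for j in range(1, 9))
formula = (
    "np.log(avg_wage_2005) ~ np.log(pop_1920) + college_share_1920 + "
    + industry_terms
    + " + C(region) + np.log(planned_hwy_miles) + np.log(river_distance)"
)

# White robust standard errors, as described in the text.
fit = smf.ols(formula, data=msa).fit(cov_type="HC1")
print(fit.params["np.log(pop_1920)"])  # elasticity; times ln(2) gives the doubling effect
```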
The models were estimated using ordinary least squares (OLS) methods with White robust standard errors to take
heteroskedasticity into account.* The results of the regression using population size are presented in Table A and a
portion of the results is given in the table in the text. All of the variables in the model have the expected sign, and the
coefficients on the variables for population size and college share are highly significant. Since the estimated coefficients
can be interpreted as percentage changes, column 1 of Table A shows that a doubling of an MSA’s population size is
associated with a 6.1 percent increase in average wages. As indicated, our estimate of agglomeration economies can
suffer from reverse causation bias. Therefore, column 2 of Table A shows the results when we use the 1920 level of an
MSA’s population to identify the effect of population on a city’s average wages. After controlling for reverse causation,
the estimate for the effect of a doubling of city population size on average wages falls from 6.1 percent to 3.9 percent.
Next, we add the 1920 college share variable to the regression to control for a sorting bias. Column 3 of Table
A shows that the estimates of the city size wage premium are only slightly affected after controlling for college shares,
falling to 3.8 percent from 3.9 percent. Finally, column 4 of Table A shows that the estimate of the city size wage
premium falls only slightly (from 3.8 percent to 3.6 percent) after controlling for both an MSA’s infrastructure and its
natural advantage.
Table B summarizes the findings for the regression results when we use population density measures instead of
population size measures. The results for density presented in Table B are quite similar to the results reported in Table
A for size. At least for the aggregate data we considered, it makes little difference for the estimates of the urban wage
premium whether size measures or density measures are used to proxy for agglomeration economies.

* Alternatively, we used a two-stage least squares (2SLS) procedure to estimate the parameters of the model. The 2SLS procedure confirmed that
1920 values for population size and population density are strong instruments for 2005 values of these variables. The findings from the 2SLS regressions are mostly similar to those based on the OLS method described in the text, and Hausman tests do not identify any systematic differences
between the OLS and 2SLS coefficients in these regressions. We therefore present the results from the OLS regressions.
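For the 2SLS check described in the note, the two stages can be run by hand; the sketch below (hypothetical column names again) shows the logic, though the second-stage standard errors would need the usual 2SLS correction, which dedicated IV routines apply automatically.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

msa = pd.read_csv("msa_data.csv")  # hypothetical file and column names

# First stage: project log 2005 population on the log 1920 instrument.
Z = sm.add_constant(np.log(msa["pop_1920"]))
first = sm.OLS(np.log(msa["pop_2005"]), Z).fit()
print("first-stage R^2:", first.rsquared)  # high value signals a strong instrument

# Second stage: regress log wages on the first-stage fitted values.
X = sm.add_constant(first.fittedvalues)
second = sm.OLS(np.log(msa["avg_wage_2005"]), X).fit()
print("2SLS point estimate:", second.params.iloc[1])
# Note: done by hand this way, the second-stage standard errors are not the
# correct 2SLS standard errors; dedicated IV routines apply the adjustment.
```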

Table A. Effect on Average Nominal Wages Resulting from a Doubling of an MSA’s Population Size†

                                    (1)      (2)      (3)      (4)
Population, 2005††                  6.1*
Population, 1920††                           3.9*     3.8*     3.6*
Share of 1920 Population with
  a College Degree                                    62.8*    65.9*
1947 Planned Highway Miles††                                   1.1
Distance from Navigable
  Rivers in 1890††                                             -0.004
No. of MSAs                         313      309      309      254
R²                                  0.6630   0.6207   0.6448   0.6567

Table B. Effect on Average Nominal Wages Resulting from a Doubling of an MSA’s Density†

                                    (1)      (2)      (3)      (4)
Population Density, 2005††          7.1*
Population Density, 1920††                   3.4*     3.3*     3.6*
Share of 1920 Population with
  a College Degree                                    61.4*    66.8*
1947 Planned Highway Miles††                                   0.6
Distance from Navigable
  Rivers in 1890††                                             -0.004
No. of MSAs                         313      309      309      254
R²                                  0.6541   0.6060   0.6291   0.6430

* Indicates statistically significant from zero at the 1 percent level.
† Results reported after controlling for the 1970 employment shares in each of nine broad industries and for the MSA’s region.
†† Indicates variable is in logs.

The Effectiveness of Government Spending
in Deep Recessions: A New Keynesian Perspective*
BY KEITH KUESTER

As the recent recession unfolded, policymakers
in the U.S. and abroad employed both
monetary and fiscal stabilization tools to
help mitigate the downturn. One of the tools
that can be used by fiscal policymakers is to actively
purchase more goods and services: the idea being that
the government’s demand can offset the weak demand by
households and firms. For such a policy to be effective,
one needs to know the extent to which government
spending can stimulate the economy. One of the models
frequently used by economists who study business
cycles suggests that the answer depends very much on
the extent to which monetary policy can be employed
to stabilize the economy. In this article, Keith Kuester
reviews the literature on the effectiveness of government
spending during severe recessions.

Keith Kuester is a senior economist in the Research Department of the Philadelphia Fed. This article is available free of charge at www.philadelphiafed.org/research-and-data/publications/.

*The views expressed here are those of the author and do not necessarily represent the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.

The U.S. economy is emerging from the deepest recession since the Great Depression. From late 2007 to the trough in the second quarter of 2009, output fell by more than 5 percent. At its peak, the unemployment rate had more than doubled from pre-recession levels. Many other economies witnessed similar declines. As the recession unfolded, policymakers in the U.S. and abroad employed both monetary and fiscal stabilization tools to help mitigate the downturn. One of the tools used by fiscal policymakers was to actively purchase more goods and services, the idea being that the government’s demand can offset the weak demand by households and firms.1 For such a policy to be effective, one needs to know the extent to which government spending can stimulate the economy, especially when the economy is in a severe recession.

1 Fiscal stimulus packages, such as the American Recovery and Reinvestment Act of 2009, very broadly consist of one or both of two categories: outright purchases of goods or services by the government (government spending henceforth) and changes in transfers or taxes. This article is concerned with government spending.
One of the models frequently
used by economists who study business cycles suggests that the answer
depends very much on the extent
to which monetary policy can be
employed to stabilize the economy.
“Conventional” monetary policymaking typically operates by targeting
a certain level for an overnight interest
rate. In the United States, for example,
the Federal Reserve targets the federal
funds rate. Monetary policy can reduce
this interest rate in a recession to help
stimulate private demand. The figure
on the next page shows the level of the
effective federal funds rate for more
than half a century. Grey areas mark
periods of recession. As can be seen, in
the last recession, the Federal Reserve
cut the federal funds rate essentially to
a level of zero.
At that point, lowering the federal
funds rate further is no longer feasible
because the nominal interest rate
cannot fall below zero.2 The zero lower
bound on nominal interest rates occurs
because cash yields a zero interest rate.
Consider, for example, interest rates
on loans. Imagine that you borrow a
dollar today. You can store it as cash.
If the interest rate is negative, you
pay back less than you borrowed, and
you do not need all of the cash you
received initially in order to repay the
loan. As a result, you would have made
money from nothing. At the same
time, the lender would take a sure loss.
Therefore, no lender would offer loans
with a negative interest rate. Similarly,
interest rates on deposits cannot fall
below zero either. You would be better off keeping your cash rather than
depositing the money into a savings
account that pays a negative interest
rate. For these reasons, nominal interest rates cannot fall below zero.3

2 When interest rates are at zero, central banks can still try to influence aggregate demand using “unconventional” monetary policy tools. In exceptional circumstances, such interventions can be warranted. Central banks can, for example, engage in purchases of financial assets to try to reduce interest rates in certain sectors. For example, during the recent recession, the Fed purchased mortgage-backed securities issued by the federal housing agencies. Fed Chairman Ben Bernanke, in a speech, called this “credit easing.” Central banks can also increase the quantity of money and thereby try to influence aggregate demand, a strategy known as “quantitative easing.” Chairman Bernanke’s speech discusses the set of tools available to the Federal Reserve beyond conventional interest rate policy. The Business Review article by Michael Dotsey assesses some of the alternative policy tools in greater depth.

3 Clearly, the cash would need to be stored and could be stolen or destroyed, a fact that the argument above ignores. People may be willing to pay a fee to avoid the risk and the storage cost. Interest rates on some accounts could therefore fall somewhat below zero to the extent that this fee is reflected in the interest rate. What matters for the logic that follows is that there is a lower bound for interest rates, the existence of which places constraints on what monetary policy can do to stabilize economic fluctuations. Of lesser importance is whether the bound is exactly at zero, as assumed in the exposition that follows, or slightly below zero.
As the figure shows, in December
2008 the federal funds rate reached
this level (of very close to zero) for
the first time in the postwar period.
The lack of historical evidence with
overnight interest rates at zero suggests that previous experience may be
only a limited guide to the effectiveness of government spending when
monetary policy is constrained by the
lower bound on interest rates. In order
to ascertain the efficacy of government spending in the latest recession,
researchers have therefore relied on
theoretical arguments.


FIGURE
Effective Federal Funds Rate
(monthly average of daily data)
[Line chart, 1954-2010: effective federal funds rate, percent annualized. Grey areas mark official recession dates as determined by the National Bureau of Economic Research. Source: Haver.]

The literature reviewed in this
article assumes that only conventional
monetary policy is used. It argues that
in a situation in which monetary policy
is constrained by the lower bound on
interest rates, government spending
may be more effective than it usually
is. This reasoning is based on the class
of so-called New Keynesian models
that have become one of the benchmark models for economists who study
business cycle fluctuations. See, for instance, the article by Richard Clarida,
Jordi Galí, and Mark Gertler for an
introduction to this class of models.4
4 The article by Michael Woodford and the one by Lawrence Christiano, Martin Eichenbaum, and Sergio Rebelo present a more technical overview of the arguments in this article.

THE NEW KEYNESIAN MODEL
In their simplest form, New Keynesian models describe three economic relationships. The first relationship says how firms and households adjust their demand for goods and services in response to changes in the real rate of interest. The real interest rate is the nominal interest rate minus the expected rate of inflation.
Basically, the higher the real interest rate, the more goods consumers
can buy in the future by forgoing a
purchase today. Higher real interest
rates therefore induce households to
consume less and save more. Higher
real interest rates also mean that firms
must earn a higher rate of return on
a project in order for the project to be
cost-effective. A higher real interest
rate therefore means less investment,
too. In sum, a higher real interest rate
means that private demand — the sum
of consumption by households and of
investment by firms — is lower.
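For readers who want the mechanics, this first relationship is often summarized in the literature by a log-linearized "dynamic IS" equation. The notation below is a standard textbook rendering, not something spelled out in this article:

\[ y_t = E_t\, y_{t+1} - \sigma \, ( i_t - E_t\, \pi_{t+1} ), \]

where y_t is private demand (output), i_t is the nominal interest rate, E_t π_{t+1} is expected inflation, so the term in parentheses is the real interest rate, and σ > 0 measures how strongly demand falls when the real rate rises.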
The second relationship in the
New Keynesian model concerns the
link between inflation and how much
firms produce. This relationship is
central to the model and has been given the name the New Keynesian
Phillips curve. This Phillips curve is
derived from a structural model of
firms’ price-setting behavior that has
two key elements. First, firms have
some pricing power. They can choose
to sell more of their product by setting
a lower price, or they can choose to sell
somewhat less but at a higher price.
Second, firms adjust their prices in
response to events that have an impact
on the economy, but the price adjustment is sluggish. That is, not all firms
immediately adjust their prices to the
full extent. These two features of the
model allow monetary policy to affect
output in the short run.5
According to the New Keynesian Phillips curve, if firms face lower
demand for their goods or services,
they will be inclined to reduce their
prices to some extent, since they face
lower costs of production. The costs
tend to be lower, for example, since
less demand means less revenue, which
can allow firms to negotiate lower
wages with their workers. Also, if firms
expect future demand to be weaker or
future inflation to be lower, they will
be inclined to reduce their current
prices in order not to get too far out of
line with their competitors’ prices and
the price level in general. As a result,
according to the model, inflation falls
when aggregate demand (and thus aggregate output) falls.
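In its simplest textbook form (standard notation again, not the article's own), the New Keynesian Phillips curve reads:

\[ \pi_t = \beta\, E_t\, \pi_{t+1} + \kappa\, y_t, \]

where π_t is inflation, the expectation term E_t π_{t+1} captures the forward-looking price setting just described, 0 < β < 1 discounts the future, and κ > 0 measures how strongly inflation responds to current demand and output.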
The third and final relationship
in the model describes how monetary
policy is conducted. To conduct
monetary policy, central banks
generally vary a short-term interest rate
in response to economic conditions.
Indeed, the literature that this article discusses assumes that monetary policy is carried out by using only the conventional monetary policy means of setting the overnight interest rate. This process usually involves lowering short-term interest rates when economic growth is weak or when inflation or expected inflation is
below some desired level. Conversely, it involves raising short-term interest rates when economic growth is strong or when inflationary pressures build up. It has been theoretically shown in a wide class of economic models that low and stable inflation allows the economy to employ resources more efficiently, which, in turn, is conducive to moderate long-term interest rates and maximum employment.6 Such behavior therefore describes good conduct of monetary policy in this model environment.

Another property of well-designed monetary policy is that if inflationary pressures increase, central banks will raise the nominal interest rate by more than the amount by which expectations of inflation increase.7 This behavior implies that the real rate rises when inflationary pressures increase. Such an increase of the real interest rate reduces aggregate demand, as discussed above. This, in turn, brings down inflation through the New Keynesian Phillips curve relationship. Conversely, the central bank typically reduces nominal rates by enough to make sure that the real interest rate falls when inflation falls below the desired level (or if economic activity is depressed). Thus, in general, the deeper a recession, the lower the real interest rate.

5 Keith Sill's Business Review article describes this relationship in much more detail. It also explores the extent to which the resulting price-setting relationship can be used to infer the degree of inflationary pressure in an economy.

6 The goals of monetary policy are spelled out in the Federal Reserve Act, which specifies that the Board of Governors and the Federal Open Market Committee should seek "to promote effectively the goals of maximum employment, stable prices, and moderate long-term interest rates."

7 See, for example, the discussion in the paper by Clarida, Galí, and Gertler.
However, remember that nominal
interest rates cannot fall below zero.
Regardless of how low inflation may be
expected to go or how severe a recession is, the central bank cannot reduce
the nominal interest rate any further
than to a level of zero. Importantly,
once that bound on the interest rate is
reached, the more depressed economic
activity is, and thus the lower inflation
is, according to the model, the higher
is the real rate of interest. Note that
this is the opposite of the relationship
between inflation and the real interest
rate that applies in “normal times.”
The reason is that monetary policy is
constrained when the lower bound on
the nominal interest rate is reached
and cannot follow its usual stabilization practice. In such a circumstance,
when the nominal interest rate is
zero, the real interest rate is just the
negative of the expected inflation rate.
Lower inflation expectations then
mean a higher real rate of interest.
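A compact way to summarize the two regimes (a stylized policy rule, assumed here purely for illustration) is:

\[ i_t = \max\{\, 0,\; r^{*} + \phi_{\pi}\, \pi_t \,\}, \qquad \phi_{\pi} > 1, \]

so that away from the bound the nominal rate moves more than one-for-one with inflation and the real rate i_t − E_t π_{t+1} falls when inflation falls, while at the bound i_t = 0 and the real rate collapses to −E_t π_{t+1}: lower expected inflation then mechanically means a higher real rate, exactly as described above.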
These observations allow us to characterize how private demand is related to inflation. It turns out that
demand is negatively related to inflation in normal times but positively
related if monetary policy is constrained by the zero lower bound. The
explanation for this is as follows: In
“normal times,” when inflation falls
below a level that monetary policymakers deem consistent with price stability,
monetary policy lowers the nominal
interest rate by enough, so that the real
interest rate falls. In response to this,
households save less and demand more
consumption goods. Firms invest more.
In “normal times,” therefore, because
monetary policymakers want to ensure
stable inflation, private demand tends
to rise when inflation falls and tends to
fall when inflation rises.
If the zero lower bound is binding,
in contrast, monetary policy cannot
ensure that this is the case. The relationship between inflation and private
demand — according to the model
— is reversed! In this circumstance,
lower inflation implies a higher (rather
than a lower) real interest rate, since
conventional monetary policy cannot
react by lowering the nominal interest
rate. As a result, lower inflation implies
less aggregate demand for goods and
services.
This puts us in a position to discuss the effect of government spending in the model and to see why that
effect can crucially depend on whether
monetary policy is constrained.
GOVERNMENT SPENDING
RAISES INFLATION IN THE
NEW KEYNESIAN MODEL
The Phillips curve relationship
is important for the logic that follows
because it means that government
spending in the model is inflationary.
The reason is as follows. Higher government spending means that the government buys more goods and services
from firms. Just as with higher private
demand, the additional demand generated by the government means that
firms have to produce more. Workers
work longer hours, and firms use their
capacity more intensively. As a result,
wages and production costs increase,
and firms raise their prices. Therefore
inflation increases. This is so regardless of whether monetary policy is
constrained by the zero lower bound
on interest rates. What differs in the
two regimes is how the real interest
rate and private demand react to this
increase in government spending.
In “normal times,” if inflation
rises, the central bank increases the
nominal interest rate such that the
real interest rate rises. As a result,
households save more and consume
less. Firms invest less. In short, private
demand falls if government spending
rises. Therefore, economic activity
rises by less than the amount by which
government spending has increased:
Government spending has crowded
out private demand because of higher
real interest rates. The model therefore
suggests that, normally, output rises
by less than one dollar if government
spending rises by a dollar. The technical term for this is that the “government spending multiplier” is less than
one.8

8 For a review regarding the existing empirical evidence on the effectiveness of government spending for "normal times," see the paper by Robert Hall.

THE GOVERNMENT SPENDING MULTIPLIER AT THE LOWER BOUND
Suppose now that a negative shock leads to a very strong reduction in private demand. Examples of such shocks are manifold. For instance, a collapse in asset prices could make households feel less wealthy, or financial turbulence could increase credit spreads and risk premiums. Or households' or firms' confidence
in future economic prospects may
be diminished for other reasons.
As private demand crumbles and inflationary pressures subside, the
central bank reduces the nominal
interest rate to counteract the
recessionary impulse.
If the recessionary impulse is
exceptionally deep, the central bank
would want to reduce the nominal
interest rate to less than zero. But
it cannot do so: The lower bound
on nominal interest rates becomes
binding. As a result, unless the central
bank now resorts to nonconventional
monetary policy means, which this
article does not take into account, the
real rate of interest is higher than what
the central bank would like to achieve,
and aggregate demand is lower than
desired.
Let us look at the effect of
government spending under such
circumstances. Higher government
spending means more demand and
thus higher inflation. Since the zero
lower bound is binding, the higher
inflation rate induced by the increase
in government spending means that
the real rate of interest will be lower
than it would have been without the
increase in government spending.
This is so because at the zero lower
bound, the real rate of interest is just
the negative of the expected rate of inflation.
Note that a lower real interest rate is
precisely what monetary policy would
have liked to achieve but could not
by using only conventional monetary
policy means.9 The central bank thus
does not raise the nominal interest
rate in response to an increase in
government spending.

9 This, of course, raises the question of why central banks would want to confine themselves to using only conventional interest rate policy in the first place. Footnote 2 presents a brief discussion of nonconventional policy and provides references for further reading.
The lower real interest rate
means that private consumption and
investment increase. In the model,
government spending, through the
lower real interest rate, thus crowds in
private consumption and investment
when the zero lower bound is binding.
In sum, not only does government
spending rise, but so does private
demand. Aggregate demand and
output thus rise by more than the
amount of government spending.
This is at the core of why government
spending multipliers may be bigger
than one and therefore bigger than
usual if the zero lower bound is
binding.
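In the notation of this literature (a standard definition rather than one written out in the article), the government spending multiplier is the dollar change in output per dollar change in government spending:

\[ \text{multiplier} = \frac{\Delta Y}{\Delta G}, \]

with the model implying a multiplier below one in normal times, when higher real rates crowd out private demand, and potentially above one at the zero lower bound, when lower real rates crowd it in.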
REFINEMENTS AND CAVEATS
The above analysis has ignored
the fact that both households and
firms base their decisions not only on
the current economic environment
but also on their expectations about
the future. The anticipation effects
of fiscal policy are important in the
model environment. For example, in
order to affect private demand today,
government spending need not occur
immediately; a credible announcement of future spending can suffice.
The reason is that a future increase
in spending increases demand in that
period and will therefore increase inflation in that period. This affects the
real interest rate in the future and thus
also the long-term real interest rate
that households and firms face today.
This means that government plans for
future spending can affect saving and
investment decisions today. Broadly
speaking, announcing future government spending “crowds in” private demand today if the zero lower bound is
expected to still be binding at the time
of the higher spending and crowds
it out otherwise. As Robert Hall, for
example, emphasizes, these anticipation effects — at the zero lower bound
— can lead to a stronger (cumulative) response of output for a given dollar
amount of the increase in government
spending, suggesting that the credible
commitment to future government
spending alone can — via the effect
on the long-run real interest rate —
help stabilize current output. However,
if the zero lower bound is not expected
to be binding at the time of future
spending, the long-term real interest
rate rises, and such an announcement
crowds out private demand today. This
would be the case, for example, when
the increase in government spending is
persistent.
The above reasoning helps to
explain some of the quantitative
differences in the effectiveness of
government spending that different
studies find in a zero lower bound
environment. Much of it hinges on
the different timing of the increase in
government spending. For example,
Christiano, Eichenbaum and Rebelo
find multipliers that are much bigger
than one. In a similar model environment, the study by John Cogan, Tobias
Cwik, John Taylor, and Volker Wieland reports multipliers of “just above”
one for the first quarter of spending. Their estimates of the multipliers
fall quickly to levels well below one in
subsequent quarters of fiscal stimulus
through government spending.
The difference between these two
findings can be explained by the differences in the assumptions about the
spending plans. Christiano and his coauthors assume that the government
spending program ends once monetary
policy ceases to be constrained by the

zero lower bound. In contrast, Cogan
and his co-authors look at spending
programs that last well beyond that
point. As a result, in their simulations, there are many periods in which
government spending increases the
real interest rate and thus crowds out
private demand, both at the time of
spending and in the initial periods
in which the zero lower bound is still
binding.
For similar reasons, Christopher
Erceg and Jesper Lindé emphasize that
the size of the government spending
packages matters for their cost-effectiveness. In the New Keynesian model
environment, the bigger the government spending package, the earlier the
zero lower bound may cease to bind.
This means that government spending thereafter will – again – crowd out
private demand. As a result, Erceg
and Lindé stress that the first dollar of
government spending in a zero lower
bound situation increases output by
more than the second dollar and so
forth.
This suggests that if the New
Keynesian model is a good guide for
policy, fiscal stimulus may be most effective if it is well targeted in the sense
that it is contingent on the disruption
in the economy still being present and
the zero lower bound still being binding.10 In line with this, several papers
argue that the deeper the economy is
into a recession and the longer the recession is anticipated to last, the more
effective will be fiscal stimulus through
an increase in government spending.11
10 That the benefit of fiscal stimulus depends on how persistent the economic disruption will be implies that it may be difficult for policymakers to ascertain the appropriate timing and amount of fiscal stimulus.

11 See, for instance, the paper by Michael Woodford.

More recently, fiscal consolidation has received growing interest. Turning
the above arguments upside down, my
paper with Giancarlo Corsetti, André
Meier, and Gernot Mueller argues
that a credible upfront commitment
to cut government spending in some
future period, when the economy has
already left the zero lower bound, can
stimulate demand while the zero lower
bound is still binding. The reason is
that cuts in government spending reduce inflation. If well timed, they can
thus reduce long-term interest rates.
Such a commitment provides further
stimulus to an economy that is still
caught in the zero lower bound (that
is, in times of a deep recession), and
it helps to finance fiscal deficits. My
co-authors and I stress that the timing
of such spending reversals matters,
however. If the consolidation comes
too soon, we argue, the associated
deflationary tendencies occur while
the lower bound on interest rates is
still binding, putting upward pressure
on real interest rates and reducing the
government spending multiplier.
All this said, the above analysis simplifies matters in a number of
dimensions. Therefore, some caveats
seem in order. First, the economic effects of government spending depend
on the entire path of government
spending, not just current spending.


Second, the implications for tax rates
have not been fully explored. If future
declines in government spending do
not offset all of the increase in government spending in earlier periods,
tax rates must eventually increase
to balance the government’s budget.
Taxes, however, distort the economy.
Increased taxes on labor income, for
example, would tend to reduce the
supply of labor. To the extent that
these taxes are expected to be higher
after the zero lower bound ceases to
bind, future productive capacity will
be reduced. In addition, inflationary
pressures increase in the future. Both
effects induce households to consume
less initially, which weakens the effectiveness of the initial fiscal stimulus.12
Third, the arguments are largely based
on theory and model relationships that
have been deduced for “normal times.”
Given that the zero lower bound very
rarely binds, empirical evidence on
government spending multipliers in
such a situation is scarce. This means
that, in practice, macroeconomists remain quite uncertain about the precise
quantitative effects of temporary increases in government spending when
monetary policy is constrained by the
zero lower bound. This is an exciting
avenue for future research.

12 The paper by Erceg and Lindé shows some simulations; the paper by Gauti Eggertsson discusses tax policy at the zero lower bound.
CONCLUSIONS
This article has assessed the effect
of temporary increases in government spending on economic activity
through the lens of a benchmark New
Keynesian model. Several caveats
notwithstanding, the literature finds
that when monetary policy is constrained by the zero lower bound on
interest rates, such fiscal stimulus may
be more effective than in weaker recessions. The literature also highlights
that such a policy must be carefully
designed to have the desired effect:
Fiscal stimulus is most effective if it is
contingent on the disruption to the
economy still being present and the
zero lower bound still being binding.
That said, none of the studies claims that higher government spending is a panacea for tackling the reasons why the economy ended up in a deep
recession in the first place. In addition,
the precise magnitude of the impact of
government spending on the economy
remains uncertain. BR


REFERENCES
Bernanke, Ben S. “The Crisis and the
Policy Response,” speech at the Stamp
Lecture, London School of Economics,
London, England, January 13, 2009;
available at http://federalreserve.gov/
newsevents/speech/bernanke20090113a.
htm.
Christiano, Lawrence J., Martin
Eichenbaum, and Sergio Rebelo. “When
Is the Government Spending Multiplier
Large?” NBER Working Paper 15394
(October 2009).
Clarida, Richard, Jordi Galí, and Mark
Gertler. “The Science of Monetary Policy:
A New Keynesian Perspective,” Journal of
Economic Literature, 37 (December 1999),
pp. 1661-1707.

Cogan, John F., Tobias Cwik, John
B. Taylor, and Volker Wieland. “New
Keynesian Versus Old Keynesian
Government Spending Multipliers,” Journal
of Economic Dynamics and Control, 34:3
(March 2010), pp. 281-95.
Corsetti, Giancarlo, Keith Kuester,
André Meier, and Gernot Mueller. “Debt
Consolidation and Fiscal Stabilization of
Deep Recessions,” American Economic
Review Papers and Proceedings, 100 (2010),
pp. 41-45.
Dotsey, Michael. “Monetary Policy in a
Liquidity Trap,” Federal Reserve Bank
of Philadelphia Business Review (Second
Quarter 2010).
Eggertsson, Gauti B. “What Fiscal Policy
Is Effective at Zero Interest Rates?,” in
Daron Acemoglu and Michael Woodford,
eds., NBER Macroeconomics Annual 2010,
Volume 25. Chicago: University of Chicago
Press, May 2011, pp. 59-112.


Erceg, Christopher J., and Jesper Lindé. “Is
There a Fiscal Free Lunch in a Liquidity
Trap?” CEPR Discussion Paper 7624
(January 2010).
Hall, Robert E. “By How Much Does
GDP Rise if the Government Buys More
Output?” Brookings Papers on Economic
Activity (Fall 2009), pp. 183-231.
Sill, Keith. “Inflation Dynamics and the
New Keynesian Phillips Curve,” Federal
Reserve Bank of Philadelphia Business
Review (Fourth Quarter 2010).
Woodford, Michael. “Simple Analytics
of the Government Expenditure
Multiplier,” American Economic Journal:
Macroeconomics, 3:1 (January 2011), pp.
1-35.


What's It Worth?
Property Taxes and Assessment Practices*
BY TIMOTHY SCHILLER

*The views expressed here are those of the author and do not necessarily represent the views of the Federal Reserve Bank of Philadelphia or the Federal Reserve System.
Residential property taxes are both a major
source of local government financing and
a significant cost of owning a home. Tax
limitation measures and relatively moderate
gains in house prices during most of the 1990s tended to
keep property taxes from rising rapidly in those years. But
from the late 1990s to the mid-2000s, house prices once
again rose sharply. Property taxes followed a similar path,
bringing them to greater public attention once again.
Now that house prices appear to have shifted to a level
or downward trend in most parts of the country, there
seems to be increasing concern that real estate valuations
for property taxes are not promptly reflecting declining
values. In this article, Tim Schiller focuses on how tax
authorities measure value and calculate tax liabilities, the
shortcomings of some of these processes, and the remedies
that have been, or can be, implemented to make real
estate assessment more accurate and equitable.
When he wrote this article, Tim Schiller was a senior economic analyst in the Research Department of the Philadelphia Fed. This article is available free of charge at www.philadelphiafed.org/research-and-data/publications/.

Residential property taxes are both a major source of local government financing and a significant cost of owning a home. Homeowners view rising house prices favorably, but rising
property taxes, which are based on
house values, are not regarded in the
same light. When house prices move
up rapidly, public concern about the
resulting upward pressure on property
taxes increases. Periods of rapid increases in house prices occurred in the
late 1970s and middle 1980s, and state
and local property taxes increased in
those same years. (See Figures 1 and
2.) The rising real estate property tax
burdens during that time led many
states to adopt measures limiting their
growth. An early and widely copied
measure was California’s Proposition
13, enacted in 1978 and amended in
1986 to be even more favorable to
homeowners. Proposition 13 limited
annual increases in assessed value to
the annual change in the consumer
price index or 2 percent, whichever
was lower. Proposition 13 also required
houses to be reassessed at market value
when they were sold.
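As a rough sketch of how such a limit operates (hypothetical numbers; the actual statute contains further details), assessed value grows each year by the lesser of CPI inflation and 2 percent and resets to the market price when the house sells:

def update_assessed_value(assessed, cpi_inflation, sale_price=None):
    """One year of a Proposition 13-style assessment limit.

    Growth is capped at the lower of CPI inflation and 2 percent;
    a sale (sale_price given) resets the assessment to market value.
    """
    if sale_price is not None:
        return float(sale_price)
    return assessed * (1.0 + min(cpi_inflation, 0.02))

# A house assessed at $300,000 in a 4 percent inflation year can be
# reassessed upward by only 2 percent, to $306,000...
print(update_assessed_value(300_000, 0.04))
# ...unless it sells, in which case the sale price becomes the new base.
print(update_assessed_value(300_000, 0.04, sale_price=450_000))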
Tax limitation measures and relatively moderate gains in house prices
during most of the 1990s tended to
keep property taxes from rising rapidly
in those years. But from the late 1990s
to the mid-2000s, house prices once
again rose sharply. Property taxes
followed a similar path, bringing them
to greater public attention once again,
and by 2007, limits on residential
property tax assessments were in place
in 20 states.1 Now that house prices
appear to have shifted to a level or
downward trend in most parts of the
country, there seems to be increasing
concern that real estate valuations
for property taxes are not promptly
reflecting declining values.2

1 See the report by Mark Haveman and Terri Sexton.

2 Analysis of property tax collections and house-price appreciation between 1980 and 2008 indicates that collections increased less rapidly than house prices during this period, in part perhaps because of the limits on increased assessments. Collections increased about 4 percent for each 10 percent increase in house prices for the nation as a whole. However, it appears that tax collections increased more for a given amount of house-price appreciation in areas where appreciation was slower and that tax collections have fallen by less than the 4 versus 10 percent ratio as house prices have declined in recent years. See the article by Byron Lutz.
FIGURE 1
House Price Annual Change
[Line chart, 1976-2008: annual percent change in house prices for the US, DE, NJ, and PA. Source: Federal Housing Finance Agency.]

FIGURE 2
Annual Change in Property Tax Collections
[Line chart, 1971-2007: annual percent change in property tax collections for the US, DE, NJ, and PA. Source: U.S. Census Bureau.]
And whether house prices are rising, falling,
or flat, there are public complaints
that property tax burdens have been
inequitable across property owners,
with similar houses subject to unequal
taxes.
Taxes on real property, such as
houses, are ad valorem taxes; they are
based on the monetary value of the
property. Consequently, a fundamental
issue in the subject of real estate taxation is the valuation, or appraisal, of
properties, which is part of the overall
real estate tax assessment procedure.
The accuracy of valuations at the time
they are made, changes in valuation
over time, and the equity of valuations
among properties are the major points
of concern. With rapid fluctuations
in residential property values over the
past 10 years or so — first rising, then
falling — valuation has attracted increasing attention. This attention
is especially justified during periods
of rapid change in house prices and
fluctuations in the pace of house sales,
both of which make accurate appraisals more difficult.3
This article takes a look at real
estate tax assessment practices that
are common among local government
jurisdictions in the U.S. — counties,
municipalities, school districts, and
special-purpose districts — which
obtain most of their revenue from
property taxes. The focus is on how
tax authorities measure value and calculate tax liabilities, the shortcomings
of some of these processes, and the
remedies that have been, or can be,
implemented to make real estate assessment more accurate and equitable.
FUNDAMENTALS OF
ASSESSMENT
Valuation of properties is a
critical part of property tax assessment.

3 See the article by Leonard Nakamura.
Assessment is the process by which
a taxing authority identifies taxable
properties, determines who is
responsible for paying taxes on them,
assigns values to them for taxation,
and calculates the tax liability of
the property. These last two steps
— valuation and computation of tax
liability — are frequently conflated in
the public discourse on the subject of
property taxes, but it is important to
view them separately when analyzing
the process of property taxation.4
In most states, the responsibility
for property tax assessment resides
with the county government. Among
the three Third District states, this
is the case in Pennsylvania and
Delaware. In a few states, both county
and municipal governments have
assessment authority. This is the case
in New Jersey, the other Third District
state. In most states, a statewide
agency has authority to set assessment
standards, assist local assessors, and
monitor local assessment processes.
However, in a few states that have
small numbers of local assessment
jurisdictions, there are no state-level
supervisory agencies. In the Third
District, Delaware has no state-level
supervision; assessment is conducted
by each of the state’s three counties.
In Pennsylvania, the state supervisory
agency is the State Tax Equalization
Board, and in New Jersey it’s the
Division of Taxation.5
4 See the book by Richard Almy, Alan Dornfest, and Daphne Kenyon.

5 Equalization is a process to ensure that all properties are assessed at the same percent of value. It is discussed in more detail later in this article.

The assessment basis for real estate tax, required by most states' laws, is an estimate of a property's value. There are three approaches to this estimation: market value, rental value, and replacement value. The
market value method (also known as
the sales comparison and capital value
methods) determines the value of the
property on the basis of the price at
which it could be sold in the open
market in an arm’s length transaction
(a sale between unrelated parties
in which there is no discounting or
inflating of value intended to favor
the seller or buyer). The rental value
(also known as the income method)
analyzes the income stream or rent produced by the property to estimate the amount that might be invested in the property in order to obtain the projected income. The replacement or construction cost method estimates the cost of constructing the building to be valued using current costs for similar materials and design features, with an adjustment to account for physical depreciation of the building being valued. Market value is generally used for owner-occupied residential properties for which recent sale prices of a sufficient number of similar properties are available. The rental value approach is, of course, most often used for properties that are commonly rented, such as apartment buildings and commercial buildings. The replacement cost method is usually used for new construction, for which there are too few comparable properties to make a sales comparison approach feasible.
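For example, the income method is often implemented with a simple capitalization formula (standard appraisal practice, though not written out in this article): the estimated value V equals the expected annual net operating income NOI divided by a market capitalization rate r, or V = NOI / r. A building expected to net $24,000 a year in rent, in a market where investors require an 8 percent return, would be valued at $24,000 / 0.08 = $300,000.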
The tax liability of a property is determined by applying the tax rate applicable to that property to the value of that property. Most taxing jurisdictions have more than one tax rate. The most common form of varying tax rates is the classification of property types into groups, usually according to the function the property serves, with different rates for each group. For example, assigning properties to such classifications as residential, commercial, industrial, or agricultural — with different tax rates for each class — is common. Other classifications include historic sites and raw land.
Finally, some uses of property
qualify for total or partial exemption
from property taxes. In many,
if not most, jurisdictions in the
United States, the following types
or uses of property are exempt:
charitable, educational, and religious
organizations; governments; and
hospitals. Exemptions can also apply
to property owners. Common, usually
partial, exemptions of this type are
for homeowners in general (known
as homestead exemptions) or for
homeowners meeting certain criteria
of age, income, or disability. Taxing
jurisdictions use other means to reduce
effective property taxes, such as rebates
and property tax credits against state
income taxes. These are common
across the country, including in the
three Third District states. (Property
tax reductions are discussed in more
detail below.)
THE VALUATION PROCESS
The structure of the assessment
process and the tax rates and classifications used by a taxing jurisdiction
set the framework in which properties are valued and their tax liability
is determined. These broad features
apply in general to all properties, and
they are altered only occasionally.
Valuation, on the other hand, applies to each individual property, and
assigned valuations can be changed
with more frequency than the features
of the overall property tax system.
Thus, valuation is of more immediate
concern to individual property owners,
and the details of the valuation process
are of vital interest to most.
The valuation process has several sequential steps. It begins with
identifying properties and describing
their features, including aspects of the
property that might add to or detract
from their value, such as ancillary
rights and easements. Information
about the property is analyzed in order
to account for all of the features that
affect its value, such as size, age, and
location. The market value of these
features is estimated for the market in
which the property is located. After these preliminary steps, one or a
combination of the valuation methods
described earlier is used to compute
the property’s assessed value. Property
owners may appeal the assessed value
and, if successful, have the property’s
assessed value changed (lowered). The
burden of proof is on the property
owner to show that the assessed value
is too high. Common bases for appeals
are that the assessment used erroneous data about the property or that the
assessed value is greater than that of
comparable properties by more than
the legally allowed variance (commonly 15 percent). After the assessment is
finalized, the tax rate applicable to the
class of property (see below) is applied,
taking exemptions into account, to
compute the tax liability.
Residential properties are not
typically valued individually on a case-by-case basis. Instead, appraisers use
large data sets of residential property information to calculate typical values
for similar properties, and they may apply adjustment factors for some variations in features from one property to
another. This process is known generically as mass appraisal, and when done
with computerized systems, the entire
process is referred to as computer-assisted mass appraisal (CAMA). The use of statistical techniques in this process has increased as computerization of assessment procedures has advanced, and now many jurisdictions, including some within the Third District, use such a technique. Mass appraisal systems are used because they are economically efficient and because they are a means of valuing properties on a consistent, equitable basis.

Under a mass appraisal system, the actual sales price of any given property is not the basis of its value for property tax assessment. Instead, a group of similar properties is evaluated as of a common date using common data elements and a standard — usually statistical — method. Properties included in the group should be those located in the same market area, that is, properties that might be considered by a potential buyer looking to purchase a property in a given geographic area. The data elements used are those features for which market values can be estimated for the properties in the group, such as location, size of lot, and the number of bathrooms, garages, and stories, and so forth. Actual sales price data are obtained for properties in the group of properties subject to mass appraisal. The software then estimates how much each feature contributed to the value of each sold property: so
much for each bath, so much for each
quarter acre of lot size, and so forth.
Then these valuations are applied to
all of the houses in the neighborhood.
Because location is an important factor
in determining a property’s market
value, the geographic neighborhood
should be compact enough to reflect
similar values. Some jurisdictions
have legal requirements that land and
structures on a property be valued
separately. This can be done either by
an independent estimate of the land
value or by using computerized statistical models that include techniques for
separating land and structure values.
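A minimal sketch of the estimation step just described (hypothetical features and prices; production CAMA systems use far richer models and much larger samples) is a linear regression of sale prices on property features, whose fitted coefficients, so much per bathroom, so much per quarter acre, are then applied to unsold houses in the same market area:

import numpy as np

# Each row describes a recently sold property:
# [square feet, bathrooms, lot size in quarter acres]
features = np.array([
    [1500, 2, 2],
    [2100, 3, 4],
    [1200, 1, 1],
    [1800, 2, 3],
    [2400, 3, 5],
], dtype=float)
sale_prices = np.array([210_000, 315_000, 150_000, 255_000, 350_000], dtype=float)

# Estimate each feature's contribution to value by ordinary least
# squares, with an intercept for the baseline value in this market area.
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, sale_prices, rcond=None)

# Apply the fitted valuations to an unsold house in the same neighborhood.
unsold = np.array([1.0, 1650, 2, 2])
print(unsold @ coef)  # estimated market value for assessment purposes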
ACCURACY AND EQUITY IN
APPRAISALS
As noted at the beginning of this
article, equity in property appraisals
is a perennial concern for property
owners, assessors, and the supervisory agencies charged with review of
assessment practices and enforcement of laws regarding property taxation.
Equity is the assurance that similar
properties are similarly appraised. An
essential prerequisite for this is full
and accurate data on properties with
respect to those features that affect
a property's value. An initial step in
ensuring overall accuracy in the assessment process is to make certain
that each property to be assessed is
accurately described in both the data
entered in the mass appraisal system and in the jurisdiction's property tax
records. (These records — called a cadastre — include the location, description, and ownership of the property.)
Assessors or trained data collectors compile these data by physically
inspecting properties. The inspection
focuses on measurable features such
as land area, square footage of the
structure, number of garage spaces,
and so forth, including factors that
affect the market value of properties
in the locations covered (e.g., riparian
rights of riverfront properties, views in
scenic areas, and so forth). In addition
to objective and measurable features,
qualitative features related to such
things as materials used in construction and condition of the structure
need to be taken into account. These
subjective evaluations should be made
by experienced appraisers with the
requisite knowledge.
Data collection should be an
ongoing process and subject to quality control procedures and feedback
from property owners. Typical quality
control edits will produce alerts for
missing or inconsistent data. Periodic
review of recorded data against actual
properties will help ensure that the
data being used for tax assessment are
accurate and current. However, frequent on-site inspections are costly for
assessment agencies and inconvenient
for property owners, so less intrusive
means can be used for updating data,
for example, street-view and aerial

www.philadelphiafed.org

photography. In addition, assessment
agencies can receive copies of building
permits to inform them of additions
and improvements that will prompt
reappraisals. For both initial appraisals and reappraisals, property owners
should receive reports with all of the
relevant appraisal data and be given an
opportunity to verify or correct each
data element.6

A property's appraised value is determined by its actual features and by the market value of those features. So, in addition to the need for accuracy with respect to the physical description of the property (including location features), there is a need for accuracy in determining market value. As noted earlier, the market value approach is the common method of valuing property for taxation in the United States, and this is usually done by using a sales comparison approach, that is, basing the estimated value of a property on actual sale prices of similar properties. However, care must be used in selecting the actual sale prices used, because not all sales are transacted at true market values. In fact, both economic and legal definitions of true market value govern the values that can be used for the sales comparison approach. Basically, both definitions emphasize that only those prices that represent fair sale prices should be used. Fair sale prices are those that obtain in transactions in an open market, between a willing buyer and seller — both acting prudently and knowledgeably — without undue stimulus and with the price unaffected by special financing or sales concessions.7 Furthermore, only prices representing so-called "arm's length" transactions should be used, that is, transactions between unrelated parties in which neither is altering the price to benefit the other.
Despite the emphasis on sale prices in the market value approach, the
recent sale price of a property should
not be used as a basis for reassessing
it. This practice — known as “sales
chasing” — can result in unrepresentative and inequitable appraised values
because some properties (the recently
sold ones) are reappraised, while others
(properties not sold) are not. Furthermore, because at nearly all points in
time most properties in an area have
not been recently sold and therefore
do not have a recent sale price, sales
chasing gives undue weight to the sale
prices of a few (recently sold) properties in the determination of the typical
or representative value of similar properties. The resulting lack of uniformity in valuation will reduce the validity of mass appraisal methods.

6 See the standard on mass appraisal issued by the International Association of Assessing Officers.

7 See the sales validation guidelines in the standard on ratio studies issued by the International Association of Assessing Officers.
Besides the question of the correct
transaction to use in the market value
approach, there is also the question
of the correct selection of properties
to use for comparison. In addition to
using properties with similar physical features, the properties used for
comparison should be in the same
geographic or market area, should be
of similar age or condition, and should
in nearly all respects be considered as
reasonable alternatives for a prospective purchaser.
Market values change over
time. Indeed, it is during periods of
rapidly changing market values that
homeowners and other property
owners are most likely to question
the accuracy of their properties’
appraisals. Thus, just as frequent
appraisals help ensure accuracy with
respect to the data pertaining to the
appraised properties, they also help
ensure that market values are current.
In fact, the International Association
of Assessing Officers recommends
annual assessments when the market
value or sales comparison approach is
used, and most states require taxing
jurisdictions to conduct reassessments
on a regular schedule, ranging from
annually to at least once every two to
five years. However, the practice of
reassessing properties whenever they
are sold — which is the practice in
California and in some jurisdictions
in other states — is detrimental to
equity, especially when overall market
prices are changing rapidly, because
it results in similar properties being
appraised at different values solely on
the basis of whether they have been
recently sold. (In fact, such a practice
is equivalent to sales chasing if it
produces assessments at or near the
sales price of the individual property
rather than the average assessed value
of similar properties.) Instead, short-term general price trends affecting a
group of properties subject to mass
appraisal can be used to obtain rough
estimates of current values for an
annual assessment update, although
for longer periods of time or during
periods of rapid or volatile price
changes, it is preferable to conduct
complete reassessments, including
physical reviews of properties, every
four to six years.
The basic means of evaluating the

accuracy and equity of appraisals is the ratio study, which is in common use throughout the United States. As the name implies, a ratio study measures the ratio of appraised or assessed values to an independent measure of market values, usually represented by sale prices, ideally sales that have occurred in a recent, short period of time. Like mass appraisal, a ratio study is based on a sample of properties in a group for which actual sale prices can be obtained. Also as in mass appraisal, properties sampled in a ratio study can be stratified. Stratification can be by type of property, geographic area, and so forth. The purpose of stratification is to identify and ultimately correct lack of uniformity in appraisal-to-market value ratios that might be found across different strata of properties. Ratio studies are conducted to evaluate the mass appraisal method used in the assessment process, to determine whether statutory requirements for appraisal values are being met, and to determine time trends in market values. As part of a general revaluation of properties, a ratio study is used to review current appraisals, establish preliminary values of new appraisals, and evaluate final appraisals in conjunction with the appeals process for new appraisals.

Because the purpose of the ratio study is to evaluate the validity of the
appraisal process, the sample of properties used should be representative of
the total group of properties covered
by that appraisal process, and it is
important that the sale prices used be
true fair market prices. Furthermore,
the properties used should be reviewed
to make sure that sales chasing has not
occurred. This is because sales chasing
will result in a spurious accuracy of
appraised value — close to actual sale
prices — that is not truly indicative of
the accuracy of the appraisal process
itself.
The ratio study answers two primary questions: 1) How close to 1.00
(when the appraised value equals the
full market value) is the average ratio
of the properties under review? 2) How
much variation is there in the ratio
from property to property?
The first question addresses the
accuracy of appraisal values in general.
If the ratio is 1.00 or close to 1.00, appraisals are generally accurately
measuring market value, although they
might not be equally accurate for each
individual property. It is also important
to determine the average ratio in those
jurisdictions in which there are legal
requirements that the average ratio
be 1.00 or some other legally specified
value (less than 1.00).
The second question, about variation, addresses equity. Two kinds of
equity need to be considered. One,
called horizontal equity, is measured
simply by how much variation there is
in the ratio from property to property, with greater variation indicating
greater horizontal inequity. The other
kind of equity, called vertical equity,
is a measure of possible systematic
differences in the appraisal-to-market
value ratio between high-value and
low-value properties. Greater ratios for
low-value properties are regressive, and
greater ratios for high-value properties
are progressive. Ideally, there should
be neither progressivity nor regressivity in the ratio because the purpose of
appraisals is to establish market value
only, not to indirectly apply differing
tax liabilities.
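As an illustration, both questions can be answered with a few summary statistics computed on a sample of sold properties. The coefficient of dispersion and the price-related differential below are measures commonly used for horizontal and vertical equity, respectively; the article describes the concepts without naming particular statistics, so treat the exact formulas as assumptions:

import numpy as np

appraised  = np.array([95_000, 210_000, 150_000, 480_000, 320_000], dtype=float)
sale_price = np.array([100_000, 200_000, 160_000, 500_000, 310_000], dtype=float)

ratios = appraised / sale_price

# Question 1: how close is the typical appraisal-to-price ratio to 1.00?
median_ratio = np.median(ratios)

# Question 2, horizontal equity: average percentage spread around the
# median ratio (coefficient of dispersion); larger values mean similar
# properties are appraised at unequal fractions of market value.
cod = 100 * np.mean(np.abs(ratios - median_ratio)) / median_ratio

# Question 2, vertical equity: mean ratio over the value-weighted mean
# ratio (price-related differential); values above 1 suggest regressivity,
# i.e., high-value properties appraised at a lower fraction of value.
prd = np.mean(ratios) / (appraised.sum() / sale_price.sum())

print(median_ratio, cod, prd)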
In many states, legal requirements
address acceptable measures of the
ratio with respect to both its level and
its variation. In practice, actual assessed values may or may not be equal
to 100 percent of full market value.
In many states, laws or court rulings
permit a lower ratio, usually known
as the common level ratio, which can
vary from one taxing jurisdiction to
another. However, within a taxing
jurisdiction, little or no variation in
the ratio from property to property is
permitted within each property classification. Supervisory agencies enforce
this requirement in a process known as
equalization (more specifically referred
to as direct equalization), and reference
to deviations from the common level
ratio can be used in the assessment appeal process for individual properties.
As noted earlier, enforcement is one
of the responsibilities of the State Tax
Equalization Board in Pennsylvania
and the Division of Taxation in New
Jersey. Delaware does not have direct
equalization.
Besides its use in determining
assessed values for tax purposes, the
common level ratio is used in determining the distribution of state
government financial assistance to
local school districts in many states,
including the three Third District
states. This process is often referred to
as indirect equalization because it does
not affect assessed values of individual
properties, and it is usually done by the
agencies responsible for direct equalization (where this occurs). Property
values are critical to public school
financing because local property taxes
are a primary source of this financing.
When states provide subsidies to local
school districts, they provide more
funds to those districts that have less
taxable property, measured by total
property value. The state cannot simply use the values determined by local
assessors because localities can have
different assessment ratios. Therefore,
the state must make adjustments to assessed values in order to measure each
district’s total taxable value. This is
done by using the common level ratio
to determine the total value of properties in each district, regardless of the
ratio used for local tax purposes, and
then using this total value to compute
the amount of state aid to which each
district is entitled.
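A simple sketch of the adjustment (hypothetical numbers): if a district's properties carry a total assessed value of $400 million and its common level ratio is 0.80, the state treats the district's taxable base as $400 million / 0.80 = $500 million of market value. A district assessing at 50 percent of value is thereby put on the same footing as one assessing at full value, rather than being mistaken for a district with half the taxable wealth.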
ASSESSMENTS AND TAX
LIMITATION
To calculate the tax liability of
a property once its assessed value is
determined, the tax rate for the class
of property to which it belongs is multiplied by the assessed value. The tax
rate is usually expressed in units called

mills, which represent one-thousandth
of a dollar, so that a millage rate of 1
would mean $1 of tax for each $1,000
of assessed value. As noted at the beginning of this article, public concern
about the burden of property taxes has
grown, and this concern has engendered more critical interest in assessments. However, it is the combination
of the tax rate and the assessed value
that determines the tax bill, and attempting to accommodate all concerns
about the property tax burden by
means of the assessment can be ineffective and even counterproductive.
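A minimal sketch of the computation (hypothetical figures, ignoring classification, credits, and rebates):

def tax_liability(assessed_value, millage_rate, exemption=0.0):
    """Tax owed, where millage is dollars of tax per $1,000 of taxable value."""
    taxable = max(assessed_value - exemption, 0.0)
    return taxable * millage_rate / 1000.0

# A $250,000 assessment, a 30-mill rate, and a $50,000 homestead exemption:
print(tax_liability(250_000, 30, exemption=50_000))  # 6000.0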
Property tax limitation through
limits on assessed values originated in
California with the passage of Proposition 13 in 1978. Besides limiting the
property tax rate, Proposition 13 limited increases in assessed value to the
change in the consumer price index or
2 percent a year, whichever is lower. By
2006, at least 20 states had statewide
or local limits of some sort on the rate
at which assessed values could increase
each year, most often setting a fixed
percentage or an upper limit at the rate
of change in the consumer price index.
None of the three Third District states
has such limits. In states with limits,
residential properties are covered, and
in some states, other types of properties have limits as well. Most states
with limits have exceptions for acquisitions, resetting assessed value to reflect
market value when a property is sold.
A limit on the amount by which
assessed value can be increased might
have appeal as a way to set a limit on
the amount by which the tax burden
can increase, but an assessment limitation has ramifications that can seriously reduce or negate its usefulness in
mitigating tax increases.8 The most obvious drawback is that if the total tax
levy is fixed or rising, any adjustment
that reduces the tax liability of some
properties by lowering their assessed
value below what it would have been
in the absence of a limit must be offset
by increasing the tax liability of other
properties that do not get reductions in
assessed values below what they would
have been in the absence of a limit.
Not obvious is the fact that the tax
burden can be shifted among properties even when they are all covered by
the assessment limitation. This can
occur when properties appreciate at
different rates while the total tax levy
to which the properties are collectively
subject remains unchanged or increases. Properties that appreciate furthest
above the assessment limit will have
their proportion of the total tax levy
reduced below what it would have been
in the absence of the limit, and properties that appreciate less far above the
limit will have their proportion of the
total tax levy increased above what
it would have been without the limit.
This result has in fact occurred in
several states and other taxing jurisdictions. Consequently, some properties
that were intended to benefit by the
limit do not, in fact, get the benefit.
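A stylized example (hypothetical numbers) makes the shift concrete: a fixed $10,000 levy is split between two houses, both initially assessed at $200,000 under a 2 percent assessment cap, after their market values rise by 10 percent and 2 percent, respectively:

levy = 10_000.0
base = 200_000.0
cap = 0.02

# Market values after one year of unequal appreciation.
market = {"fast appreciator": base * 1.10, "slow appreciator": base * 1.02}

# Assessed values may rise by at most the cap.
assessed = {name: min(value, base * (1 + cap)) for name, value in market.items()}

# The fixed levy is apportioned in proportion to assessed value.
total = sum(assessed.values())
for name, value in assessed.items():
    print(name, round(levy * value / total, 2))

# Both houses now pay $5,000, even though one is worth $220,000 and the
# other $204,000: the cap has shifted part of the fast appreciator's burden
# onto the slow appreciator, whose assessment the cap never constrained.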
This reduces the usefulness of assessment limits as a deliberate policy tool
to provide property tax relief. However,
there are other means of doing so that
enable the taxing authority to direct
benefits more precisely to intended
beneficiaries.

8 See the article by Richard Dye and Daniel McMillen.
OTHER FORMS OF TAX RELIEF
Several alternatives to assessment
limits can restrict property tax burdens
— which is the goal of assessment
limits — without unintended consequences. Like all forms of tax relief,
alternatives to assessment limits shift
the tax burden from favored groups to
others. However, these other means of
relief do not operate through fortuitous
changes in property values, as assessment limits do; instead, they can be directed to specific types of property or
property owners.9
Property classification is a method
by which many jurisdictions place
different tax burdens on different
types of properties, with the intent of
placing lighter tax burdens on some
types of property relative to others.
In this method, properties are placed
in different categories depending on

higher taxes on some uses of property are a disincentive to those uses,
whether intended to be so or not.
Tax revenue limits are another
alternative to assessment limits as a
means of constraining increases in
property taxes. Several states, including some with assessment limits, also
have revenue limits. In the Third
District, all three states have revenue

,QWKH7KLUG'LVWULFWFODVVL¿FDWLRQLVQRW
widely used among taxing jurisdictions,
although favorable treatment of
agricultural land is common.
their use. Generally, jurisdictions that
use property classification distinguish
between residential and commercial
uses of property, and some jurisdictions
have other classes, such as agricultural
or charitable uses. With property classification, the taxes imposed on different categories of property are differentiated by applying different assessment ratios (the ratio of assessed value to market value) or different tax rates.
Most jurisdictions that use classification favor residential and other types
of properties with lower assessment
ratios or tax rates and apply higher assessment ratios or tax rates to commercial properties. In the Third District,
classification is not widely used among
taxing jurisdictions, although favorable treatment of agricultural land is
common. A drawback to property classification is that tax burdens will be
disproportional to value; thus, the goal
of tax equity is subordinated to the
goal of tax limitation. Furthermore, higher taxes on some uses of property are a disincentive to those uses, whether intended to be so or not.

9 See the report by Terri Sexton and the article by Joan Youngman.
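The arithmetic of classification can be made concrete with a short sketch. The assessment ratios and tax rate below are hypothetical, not any jurisdiction's actual schedule.

```python
# A minimal sketch of classification arithmetic; the ratios and the
# rate are hypothetical values for illustration.

RATE = 0.02                      # tax rate per dollar of assessed value
RATIOS = {"residential": 0.50,   # assessment ratio = assessed / market value
          "commercial": 1.00}

def tax_bill(market_value, property_class):
    assessed = market_value * RATIOS[property_class]
    return assessed * RATE

# Equal market values, different classes, different bills:
print(tax_bill(200_000, "residential"))  # 2000.0
print(tax_bill(200_000, "commercial"))   # 4000.0
```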
Tax revenue limits are another alternative to assessment limits as a means of constraining increases in property taxes. Several states, including some with assessment limits, also have revenue limits. In the Third District, all three states have revenue limits. (Pennsylvania also has tax rate limits, as do many other states.) Tax
revenue limits set maximum amounts
by which the total property tax levy
in a jurisdiction can be increased.
Revenue limits by themselves affect
only the total tax collection, not the
tax burden on individual properties.
This distinction matters most when property values are changing over time.
For example, if increases in value are
not equal across all properties, those
properties that appreciate more rapidly
will be subject to a greater proportion
of the total tax levy. Thus, just as in
the case of property classification, tax
equity is not addressed by revenue limits. To limit tax burdens on individual
properties, tax revenue limits need to
be supplemented by limits on individual tax liabilities.
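A short sketch, again with hypothetical numbers, illustrates the point: even when the total levy may grow by at most 2 percent a year, uneven appreciation shifts the burden toward the faster-appreciating property.

```python
# A minimal sketch of a revenue limit. The growth cap, initial levy,
# and appreciation rates are hypothetical values for illustration.

LEVY_GROWTH_CAP = 0.02
levy = 10_000.0

values = {"fast": 100_000.0, "slow": 100_000.0}
growth = {"fast": 1.10, "slow": 1.02}   # annual appreciation factors

for year in (1, 2):
    values = {k: v * growth[k] for k, v in values.items()}
    levy *= 1 + LEVY_GROWTH_CAP          # total collection stays capped
    rate = levy / sum(values.values())   # rate implied by the capped levy
    bills = {k: round(rate * v) for k, v in values.items()}
    print(year, bills)  # the "fast" property's bill and share keep rising
```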
Another means of providing
property tax relief is the use of full or
partial exemptions from property tax
liability. Full exemptions are granted
primarily for property owned by
federal, state, and local governments,
and by educational, charitable, and
religious institutions. An exemption
for owner-occupied housing, known as
the homestead exemption, is usually a
partial exemption. Homestead exemptions are one of the oldest and most
common ways in which taxing jurisdictions limit the property tax burden
on owner-occupied housing. They are
available in nearly every state, including the Third District states. In some
jurisdictions, the exemptions are available to all homeowners; in others, they
are available only to certain people,
such as veterans, senior citizens, or the
disabled. The exemption is commonly
applied by reducing the assessed value
of the owner-occupied property by
either a fixed percentage or a fixed
dollar amount. A percentage exemption will limit increases in tax liability
as assessed values increase — a major
concern that motivates efforts to limit
assessments — but a dollar-amount
exemption will not limit tax increases
unless it is raised in line with any rise
in assessed values.
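The difference between the two forms of exemption is easy to see in a sketch; the tax rate, percentage, and dollar amount below are hypothetical.

```python
# A minimal sketch comparing percentage and fixed-dollar homestead
# exemptions; the rate and exemption amounts are hypothetical.

RATE = 0.02  # tax rate per dollar of net assessed value

def tax_with_pct_exemption(assessed, pct=0.20):
    return assessed * (1 - pct) * RATE

def tax_with_dollar_exemption(assessed, amount=20_000):
    return max(assessed - amount, 0) * RATE

for assessed in (100_000, 150_000):
    print(assessed,
          tax_with_pct_exemption(assessed),     # relief scales with value
          tax_with_dollar_exemption(assessed))  # relief erodes as value rises
```

At an assessed value of 100,000 the two exemptions produce the same 1,600 bill; at 150,000 the percentage exemption yields 2,400 while the dollar exemption yields 2,600, showing why a fixed-dollar exemption does not limit tax increases unless it is raised along with assessed values.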
Rebates of property taxes and
credits of property taxes against other
taxes for homeowners are forms of
relief that are similar to exemptions.
In some jurisdictions, these various
forms of relief apply to property types
other than homesteads. In the Third
District, these forms of relief are available to most homeowners, and they are
also provided in different amounts for
the elderly and disabled. Some form of
property tax relief is also provided in
the Third District states (and others)
for some types of property other than
homesteads, such as agricultural land.
When any kind of property tax relief is based on the individual property owner's income, it is known as a circuit
breaker. Circuit breakers are available
in over half the states, although they
may be officially known by some other
name, typically as a rebate or credit. In
most states that have circuit-breaker
programs, the state government
provides revenue to the local jurisdictions to replace funds not collected
from property owners receiving the
tax relief. With circuit breakers the
amount of property tax relief is related
to income in one of three ways: 1)
single threshold; 2) multiple thresholds; or 3) sliding scale. With a single
threshold, the maximum amount
of property tax is limited to a fixed
percentage of income for all property
owners. With multiple thresholds, the
percentage limit rises with income.
This feature imparts some progressivity
to the property tax, increasing it as a
percentage of income as income rises.
With a sliding scale, a range of income
brackets is established, and all property owners whose income falls within
a certain bracket receive the same
percentage reduction in property taxes,
with the percentage of reduction being
greater for lower-income brackets and
less for higher-income brackets. Thus,
sliding scale circuit breakers are also
progressive. In some states, progressivity is introduced by limiting the
amount of tax relief provided by circuit
breakers to houses below an assessed
value limit.
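The three designs can be expressed compactly in a sketch. Every threshold, bracket, and percentage below is a hypothetical assumption, not any state's actual schedule.

```python
# A minimal sketch of the three circuit-breaker designs described above.
# All thresholds, brackets, and percentages are hypothetical.

def single_threshold(tax, income, pct=0.04):
    # Relief equals any tax above a fixed percentage of income.
    return max(tax - pct * income, 0)

def multiple_thresholds(tax, income):
    # The percentage-of-income limit rises with income (progressive).
    pct = 0.03 if income < 20_000 else 0.05 if income < 40_000 else 0.07
    return max(tax - pct * income, 0)

def sliding_scale(tax, income):
    # Everyone in an income bracket gets the same percentage reduction,
    # larger for lower brackets (also progressive).
    share = 0.50 if income < 20_000 else 0.25 if income < 40_000 else 0.10
    return tax * share

for design in (single_threshold, multiple_thresholds, sliding_scale):
    print(design.__name__, design(tax=2_000, income=25_000))
```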
In the Third District, circuit
breakers are available in Pennsylvania
and New Jersey. Pennsylvania’s program, a property tax rebate program,
is available only to the elderly; it is
a sliding scale program with four

brackets and an income ceiling for
eligibility. New Jersey’s program, a
homestead credit/rebate program, is
not restricted by age, although it does
provide more relief to the elderly. It
is a sliding scale program with three
brackets for homeowners 65 years and
older and two brackets for those under
65. It also has an income ceiling. Both
the Pennsylvania and New Jersey programs are available to renters as well
as homeowners in recognition that
part of their rent covers property tax.
In both states, the amount of tax relief
available under the renters’ program is
less than the amount available under
the homeowners’ program.
Another sort of property tax
relief is provided by tax deferral, which
allows property owners to delay paying
property taxes until their property is
sold or their estate is settled. These are
often restricted to elderly, disabled, or
low-income property owners. Deferral
programs are available in taxing jurisdictions in around half of the states,
including Pennsylvania and Delaware
in the Third District.
SUMMARY
Rising property tax burdens
in the latter half of the last century
brought greater public attention to the
issue of residential property assessment. Limits on increases in assessed
value became a major part of efforts to
limit increases in homeowners’ property tax bills. As of 2006, statewide
or local limits on increases in assessed
value of residential property were in effect in 20 states. However, assessment
limits, by themselves, cannot limit tax
bills unless tax rates are also limited.
In fact, unless the total tax burden is
restricted, assessment limits without
tax rate limits can result in increased
tax bills for some homeowners and reduce equity across properties. This has
been the experience in several states
and tax jurisdictions in the wake of assessment limits, as total tax burdens have shifted toward slowly appreciating properties and away from rapidly appreciating ones. There are remedies
for many of the problems associated
with rising assessments and property taxes. Principal remedies are revenue limits, exemptions, rebates, and
deferrals. These measures can limit
increases in the property tax burden in
ways that do not have the unintended
consequences of assessment limits applied without such measures. However,
ultimately, limits on property taxes
can be secured only by substituting
other sources of revenue or by limiting
spending by the taxing jurisdictions
that rely on property taxes. BR

REFERENCES

Almy, Richard, Alan Dornfest, and Daphne Kenyon. Fundamentals of Tax Policy. Kansas City, MO: International Association of Assessing Officers, 2008.

Dye, Richard F., and Daniel P. McMillen. "Surprise: An Unintended Consequence of Assessment Limitations," Land Lines (July 2007), pp. 8-13.

Haveman, Mark, and Terri A. Sexton. "Property Tax Assessment Limits: Lessons from Thirty Years of Experience," Policy Focus Report, Lincoln Institute of Land Policy (2008).

International Association of Assessing Officers. Standard on Mass Appraisal of Real Property. Kansas City, MO: IAAO, 2008.

International Association of Assessing Officers. Standard on Ratio Studies. Kansas City, MO: IAAO, 2007.

Lutz, Byron F. "The Connection Between House Price Appreciation and Property Tax Revenues," National Tax Journal (September 2008), pp. 555-72.

Nakamura, Leonard. "How Much Is That Home Really Worth? Appraisal Bias and House-Price Uncertainty," Federal Reserve Bank of Philadelphia Business Review (First Quarter 2010), pp. 11-22.

Sexton, Terri A. "Property Tax Systems in the United States: The Tax Base, Exemptions, Incentives, and Relief," Center for State and Local Taxes, University of California, Davis (2003).

Youngman, Joan M. "The Variety of Property Tax Limits: Goals, Consequences, and Alternatives," State Tax Notes (November 19, 2007), pp. 541-57.

RESEARCH RAP

Abstracts of research papers produced by the economists at the Philadelphia Fed

You can find more Research Rap abstracts on our website at: www.philadelphiafed.org/research-and-data/publications/research-rap/. Or view our working papers at: www.philadelphiafed.org/research-and-data/publications/.

EXPLORING LIMITED ATTENTION
AND CHECKING OVERDRAFTS
The authors explore dynamics of limited attention in the $35 billion market for
checking overdrafts, using survey content
as shocks to the salience of overdraft fees.
Conditional on selection into surveys, individuals who face overdraft-related questions
are less likely to incur a fee in the survey
month. Taking multiple overdraft surveys
builds a “stock” of attention that reduces
overdrafts for up to two years. The effects
are significant among consumers with lower
education and financial literacy. Consumers
avoid overdrafts not by increasing balances
but by making fewer debit transactions and
cancelling automatic recurring withdrawals. The results raise new questions about
consumer financial protection policy.
Working Paper 11-17, “Limited and
Varying Consumer Attention: Evidence from
Shocks to the Salience of Bank Overdraft
Fees,” Victor Stango, University of California,
Davis, and Jonathan Zinman, Dartmouth
College, and Visiting Scholar, Federal Reserve
Bank of Philadelphia
ENTREPRENEURS AND
AGGREGATE AND IDIOSYNCRATIC
RISK IN THE PRESENCE OF
BORROWING CONSTRAINTS
This paper studies the quantitative
properties of a general equilibrium model
where a continuum of heterogeneous
entrepreneurs are subject to aggregate as
well as idiosyncratic risks in the presence
of a borrowing constraint. The calibrated
model matches the highly skewed wealth
and income distributions of entrepreneurs.
The authors provide an accurate solution
to the model despite the significant nonlinearities that are absent in the economy
with uninsurable labor income risk. The
model is capable of generating the average
private equity premium of roughly 3 percent
and a low risk-free rate. The model also
produces procyclicality of the risk-free rate
and countercyclicality of the average private
equity premium. The countercyclicality of
the average equity premium is largely driven
by tightening (loosening) of financing constraints during recessions (booms).
Working Paper 11-18, “Private Equity
Premium in a General Equilibrium Model of
Uninsurable Investment Risk,” Francisco Covas, Board of Governors of the Federal Reserve
System, and Shigeru Fujita, Federal Reserve
Bank of Philadelphia
IMPACT OF REDUCING TARIFFS
ON WELFARE, TRADE, AND THE
ORGANIZATION OF PRODUCTION
The authors study the effects of tariffs
in a dynamic variation of the Melitz (2003)
model, a monopolistically competitive model with heterogeneity in productivity across
establishments and fixed costs of exporting.
With fixed costs of starting to export that
are on average 3.7 times as large as the costs
incurred to continue as an exporter, the
model can match both the size distribution
of exporters and annual transition in and
out of exporting among U.S. manufacturing establishments. The authors find that the tariff equivalent of
these fixed costs is nearly 30 percentage points. They
use the calibrated model to estimate the effect of reducing tariffs on welfare, trade, and export participation.
The authors find sizeable gains to moving to free trade
equivalent to 1.03 percent of steady-state consumption.
Considering the transition dynamics following the cut
in tariffs, they find that the model predicts economic
activity overshoots its steady state, with the peak in
output coming 10 years after the trade reform. Because
of this overshooting, steady-state changes in consumption understate the welfare gain to trade reform. The
authors also find that simpler trade models that abstract
from these export dynamics provide a poor approximation of the aggregate responses from their more general
model.
Working Paper 11-19, “Establishment Heterogeneity,
Exporter Dynamics, and the Effects of Trade Liberalization,” George Alessandria, Federal Reserve Bank of Philadelphia, and Horag Choi, Monash University
CONSTRUCTING ERROR BANDS FOR
IMPULSE RESPONSES IN VARs
There is a fast growing literature that partially
identifies structural vector autoregressions (SVARs) by
imposing sign restrictions on the responses of a subset
of the endogenous variables to a particular structural
shock (sign-restricted SVARs). To date, the methods
that have been used are only justified from a Bayesian
perspective. This paper develops methods of constructing error bands for impulse response functions of
sign-restricted SVARs that are valid from a frequentist
perspective. The authors also provide a comparison of
frequentist and Bayesian error bands in the context of
an empirical application — the former can be twice as
wide as the latter.
Working Paper 11-20, “Inference for VARs Identified
with Sign Restrictions,” Hyungsik Roger Moon, University
of Southern California; Frank Schorfheide, University of
Pennsylvania, and Visiting Scholar, Federal Reserve Bank
of Philadelphia; Eleonora Granziera, Bank of Canada; and
Mihye Lee, University of Southern California
POLITICAL FRICTIONS AND THE
CONSUMPTION VOLATILITY PUZZLE
Standard real business cycle theory predicts that
consumption should be smoother than output, as observed in developed countries. In emerging economies,
however, consumption is more volatile than income. In
this paper the authors provide a novel explanation of
this phenomenon, the “consumption volatility puzzle,”
based on political frictions. They develop a dynamic
stochastic political economy model where parties that
disagree on the size of government (right-wing and leftwing) alternate in power and face aggregate uncertainty. While productivity shocks affect only consumption
through responses to output, political shocks (switches
in political ideology) change the composition between
private and public consumption for a given output size
via changes in the level of taxes. Since emerging economies are characterized by less stable governments and
more polarized societies, the effects of political shocks
are more pronounced. For a reasonable set of parameters the authors confirm the empirical relationship
between political polarization and the ratio of consumption volatility to output volatility across countries.
Working Paper 11-21, “Partisan Cycles and the Consumption Volatility Puzzle,” Marina Azzimonti, Federal
Reserve Bank of Philadelphia, and Matthew Talbert,
University of Texas, Austin
INVESTIGATING THE TRUST PREFERRED
SECURITIES CDO MARKET
This paper investigates the development, issuance,
structuring, and expected performance of the trust preferred securities collateralized debt obligation (TruPS
CDO) market. Developed as a way to provide capital
markets access to smaller banks, thrifts, insurance
companies, and real estate investment trusts (REITs) by
pooling the issuance of TruPS into marketable CDOs,
the market grew to $60 billion of issuance from its
inception in 2000 through its abrupt halt in 2007. As
evidenced by rating agency downgrades, current performance, and estimates from the authors’ own model,
TruPS CDOs are likely to perform poorly. Using data
and valuation software from the leading provider of
such information, they estimate that large numbers of
the subordinated bonds and some senior bonds will be
either fully or partially written down, even if no further
defaults occur going forward. The primary reason for
these losses is that the underlying collateral of TruPS
CDOs is small, unrated banks whose primary asset is
commercial real estate (CRE). During their years of
greatest issuance from 2003 to 2007, the booming real
estate market and record low number of bank failures
masked the underlying risks that are now manifest. Another reason for the poor performance of bank TruPS
CDOs is that smaller banks became a primary investor
in the mezzanine tranches of bank TruPS CDOs, something that is also complicating regulators’ resolutions
of failed banks. To understand how this came about,
the authors explore in detail the symbiotic relationship
between dealers and rating agencies and how they modeled and sold TruPS CDOs. In their concluding comments, the authors provide several lessons learned for
policymakers, regulators, and market participants.
Working Paper 11-22, “The Trust Preferred CDO
Market: From Start to (Expected) Finish,” Larry Cordell,
Michael Hopkins, and Yilin Huang, Federal Reserve Bank
of Philadelphia
EFFECTS OF ASYMMETRIES IN RE-ELECTION
PROBABILITIES ON PUBLIC POLICY AND
THE ECONOMY
This paper studies the effects of asymmetries in
re-election probabilities across parties on public policy
and their subsequent propagation to the economy. The
struggle between opposing groups — that disagree on
the composition of public consumption — results in
governments being endogenously short-sighted: Systematic underinvestment in infrastructure and overspending on public goods arise, as resources are more
valuable when in power. Because the party enjoying
an electoral advantage is relatively less short-sighted, it
devotes a larger proportion of government revenues to
productive public investment. Political turnover, together with asymmetric policy choices, induces economic
fluctuations in an otherwise deterministic environment.
The author characterizes the long-run distribution of
capital and shows that output increases on average with
political advantage, despite the fact that the size of the
government expands as a percentage of GDP. Volatility,
on the other hand, is non-monotonic in political power
and is an additional source of inefficiency.
Working Paper 11-23, “The Dynamics of Public
Investment Under Persistent Electoral Advantage,” Marina
Azzimonti, Federal Reserve Bank of Philadelphia
CORE INFLATION MEASURES AS
PREDICTORS OF TOTAL INFLATION
Policymakers tend to focus on core inflation
measures because they are thought to be better
predictors of total inflation over time horizons of
import to policymakers. The authors find little support
for this assumption. While some measures of core
inflation are less volatile than total inflation, core
inflation is not necessarily the best predictor of total
inflation. The relative forecasting performance of
models using core inflation and those using only total
inflation depends on the inflation measure and time
horizon of the forecast. Unlike previous studies, the
authors provide a measure of the statistical significance
of the difference in forecast errors.
Working Paper 11-24, “Core Measures of Inflation
as Predictors of Total Inflation,” Theodore M. Crone,
Swarthmore College; N. Neil K. Khettry, Murray, Devine
& Company; Loretta J. Mester, Federal Reserve Bank
of Philadelphia, and the Wharton School, University of
Pennsylvania; and Jason A. Novak, Federal Reserve Bank
of Philadelphia
