
The Great Miscalculator

25 Mar 2019

Arnold Kling

In the minds of economists, market outcomes are deterministic.
The price system is an efficient calculating machine, leading
households and firms to make reliable decisions. To be sure,
economists can identify “market failures,” but these are isolated,
predictable, and amenable to correction with taxes or regulation.
This standard economic story takes the fundamental soundness of the
market for granted.

But in the real world, the market cannot possibly make the sort
of reliable calculations that economists expect from it. Market
outcomes are highly contingent on strategies, beliefs, and past
choices that are somewhat arbitrary. The market is not as well
informed as we would like to believe, which in turn makes
policymaking more problematic than we would like to think. Actual
markets miscalculate an awful lot.

This distance between our expectations of markets and their
actual abilities has numerous implications. It argues for humility
about economic analysis and public policy, and for a sense of
perspective about what the tools of economists offer us. The work
of economists and policymakers is not entirely without such
humility, of course. And the distance between economic theory and
practice is hardly an unknown problem. But the most important
implication of this view of the market as a great miscalculator is
actually badly underappreciated. More fully accounting for the
limits of markets as calculators would suggest that policy should
focus, above all, on the fragility of the economy.

Our prosperity has come to rely on certain key companies,
business practices, habits of thought, and assumptions about
government finances. When there is a major shift in beliefs or
conventions, these institutions can come under sharp, severe, and
sudden stress. This will happen; there is no getting around it. But
when it does, just how will these important firms, practices, and
norms handle the problem? Whether they will degrade gracefully or
fail catastrophically ought to be a foremost concern of
policymakers.


THE SYSTEM OF EQUATIONS

Anyone who attempts to learn economics in college will encounter
a curriculum that uses mathematics. The further one advances in the
subject, the greater the reliance on mathematical models and
equations.

Economists have come to have faith in mathematics because they
see the market as a mechanism for solving one giant math problem,
to be taken up in two key steps. In step one, they take certain
conditions as given: the initial endowments of resources held by
individuals, as well as their tastes; the engineering relationships
that determine the feasible production outputs; and the rules,
regulations, and tax policies set by government.

In step two, they find a set of prices that will balance supply
and demand for every form of input and output. When prices have
adjusted to the point where no consumer wishes to buy more and no
firm wishes to supply more at the prevailing prices, the economy is
said to have arrived at equilibrium.
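
In textbook notation, this is the market-clearing problem the price system is imagined to solve. The sketch below is the standard formulation in its most minimal form, with D and S standing for aggregate demand and supply; it is not specific to any one model.

```latex
% For every good and input i, find prices p = (p_1, ..., p_n) such that
% demand equals supply, given endowments, tastes, and technology:
D_i(p) = S_i(p), \qquad i = 1, \dots, n
```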

This approach leaves no room for contingency. Historians are
often keenly aware of idiosyncratic factors at work, as shown by
such expressions as “alternative history” or “historical accident.”
But economic analysis is conducted outside of time and history.
Nearly all models are written as if one could arrive here from Mars
and predict and explain economic outcomes by looking only at
conditions as they exist today, with no knowledge of the path that
got us here.

In short, historians understand intuitively that they are
describing processes that are too complex to be captured by
scientific laws and mathematical models. Economists pretend
otherwise. But economics is history. We observe
circumstances that are peculiar to a particular time and place.
Every firm and each household is following habits and norms that
were developed in response to past experiences, many of which have
long been forgotten.

On paper, one can find the equilibrium in which supply and
demand are everywhere balanced by solving a set of equations; this
is what economists do when they articulate their theories. In the
real world, as Friedrich Hayek pointed out, the information needed
to describe tastes and technologies is too dispersed for any one
person to actually carry out the computation. The market itself
collects the information and finds the solution. The mathematical
economist is merely performing a (partial) simulation.

According to the neoclassical or mainstream economic paradigm,
the market grinds out its solution through marginal analysis. For
every input, the market in effect calculates the additional output
that one more unit of that input would produce, known as its
marginal product, and the value of that output. If the value of the
marginal product of, say, an apple picker exceeds that of an
automobile assembler, then market prices will guide one or more
workers out of automobile plants and into apple orchards.
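
In the standard textbook notation (a sketch of the conventional formulation, with subscripts chosen here only for illustration), the marginal product is a derivative, and reallocation stops when the value of the marginal product is equalized across uses.

```latex
% Marginal product of labor in activity j: the extra output from one more unit of labor.
MP_j = \frac{\partial Q_j}{\partial L_j}
% Workers move from auto assembly (C) to apple picking (A) as long as
% p_A \cdot MP_A > p_C \cdot MP_C; reallocation stops when the values are equal:
p_A \cdot MP_A = p_C \cdot MP_C
```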

Both market-friendly and interventionist economists share the
assumption that individual productivity is well defined and can be
calculated by the households and firms responsible for allocating
resources. Economists go on to identify situations in which market
calculations will yield suboptimal results, as when a factory owner
does not bear the social costs of the pollution that the factory
causes. Such situations are deemed “market failures.”
Interventionists suggest that government can correctly identify
market failures and undertake policies to steer the market toward a
better social outcome. Market-friendly economists focus instead on
the flaws in the policy process and on the hope that private
entrepreneurs will see opportunities to start businesses that
reduce the waste that market failures otherwise would generate.

But even market-failure theory rests on a foundation of
mathematically precise calculations of individual productivity.
Although a market might be imperfect in the theoretical sense, it
is still treated as generating predictable, deterministic outcomes.
The case for government intervention is based on the presumption
that taxes or regulation can shift the outcome from one that is
suboptimal to one that is better in a predictable and deterministic
way.

In reality, market outcomes are not nearly this predictable and
deterministic. They are contingent. A given set of pre-conditions,
including government policies, does not entail a predictable
economic outcome. Many alternative outcomes can arise, depending on
individuals’ strategies and beliefs. This is a straightforward
fact, and it would not be easy for economists to deny it in
particular situations. And yet the practical premise of much of
contemporary economics is rooted in denying it.

OVERHEAD LABOR

Marginal-productivity theory is at the center of mainstream
economics. It asserts that economic decisions are based on the
measurement of the incremental output produced by an additional
hour of work. In theory, everyone’s work can be converted into the
equivalent of the number of bushels of apples picked or the number
of cars assembled in an hour.

But think of yourself and your associates. Do any of you produce
measurable output? It is more likely that you are engaged in
intellectual or managerial work that does not directly yield
output.

In 2018, there were 150 million Americans employed in the
non-farm business sector. Of these, only 9 million were production
and non-supervisory workers in manufacturing. That is, just 6% of
the non-farm labor force consisted of workers directly producing
goods. In 1948, these blue-collar workers were 28% of the labor
force.

Over 90% of employed Americans are not blue-collar production
workers. What are they doing? Some service-sector workers, such as
manicurists or lawn-care workers, produce output that can be readily
counted. But the majority of us are providing indirect support to
the provision of goods and services. Project teams at firms, for
example, often are not creating measurable outputs; they are
building capabilities that the firms hope to use to generate
revenue. From the security guard to the graphic designer to the tax
accountant, we are overhead labor.

For a traditional manufacturing firm, the number of production
workers is closely tied to unit sales. Production labor can be
incrementally increased or decreased as needed. But overhead labor
is not adjusted strictly according to sales volume.

Some overhead is necessary regardless of the level of output;
you cannot get rid of tax accountants just because sales are down
10%. Other overhead is discretionary. Suppose that your company is
undertaking an initiative to develop a new product or service. If
sales of existing offerings are down and your financial position is
less robust than you expected, you may choose to cancel the
initiative in order to conserve cash. But you also have the option
of continuing with the initiative and retaining the overhead
workers who are undertaking the task.

Important segments of the economy are dominated by overhead
costs. For example, pharmaceutical companies spend relatively
little actually manufacturing pills. Research, testing, and
marketing are all more important cost components. For an airline,
the cost of flying an additional passenger is trivial compared to
the cost of equipment, fuel, personnel, maintenance facilities, the
reservation system, and so on. For a telecommunications-service
provider, the cost of transmitting an additional gigabyte of data
is trivial compared to the cost of building and maintaining the
firm’s infrastructure. For a hospital, the cost of undertaking an
additional diagnostic test or procedure is small relative to the
cost of managing, equipping, and maintaining the facility.

Businesses in these industries cannot present their customers
with prices that reflect the marginal costs of production. If every
consumer paid the marginal cost of manufacturing a pill or flying
on an airplane, the revenue would not be sufficient to cover
overhead costs.
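
Some illustrative arithmetic makes the gap concrete. The figures below are hypothetical, chosen only to show the shape of the problem rather than drawn from any actual carrier.

```latex
% Hypothetical flight: fixed (overhead) cost F, marginal cost c per passenger, N seats.
F = \$50{,}000, \qquad c = \$20, \qquad N = 200
% Revenue if every seat is sold at marginal cost, versus the total cost of the flight:
N \cdot c = 200 \times \$20 = \$4{,}000, \qquad F + N \cdot c = \$54{,}000
% Marginal-cost pricing leaves the entire \$50{,}000 of overhead unrecovered.
```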

The significance of overhead costs relative to incremental
production costs has greatly increased in the internet era. News
and entertainment used to require such media as paper or vinyl
discs, which were costly to produce and ship. Now, the cost of
distribution to an additional customer over the internet is
essentially zero, so the challenge is to recover the cost of
creating the content. Writers, editors, artists, and producers must
obtain revenue through digital subscriptions, advertising,
donations, or other means.

When a firm’s costs are dominated by overhead, price
discrimination becomes an attractive strategy, even a necessity.
The airline will try to attract price-sensitive customers with a
low price while charging a higher price to those customers who are
more committed to flying at a particular time rather than searching
for a bargain. The hospital bill will include superficially
outrageous charges for products like orange juice or aspirin,
because the hospital is arbitrarily allocating its overhead costs
to these items. If it were forced to charge only for the cost of
certain items or procedures, it would have to raise the fees for
other billable goods or services.
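
A brief sketch of the logic, continuing the hypothetical flight from above: the segment sizes and fares here are invented purely to illustrate how segmented pricing can recover overhead that uniform marginal-cost pricing cannot.

```python
# Hypothetical flight: price discrimination to cover overhead.
# All figures are illustrative, not data from any actual airline.

OVERHEAD = 50_000       # fixed cost of operating the flight
MARGINAL_COST = 20      # cost of carrying one more passenger

# Two customer segments with different willingness to pay.
segments = {
    "price_sensitive": {"passengers": 150, "fare": 150},  # bargain hunters
    "time_committed":  {"passengers": 50,  "fare": 700},  # must fly at this time
}

revenue = sum(s["passengers"] * s["fare"] for s in segments.values())
total_passengers = sum(s["passengers"] for s in segments.values())
total_cost = OVERHEAD + MARGINAL_COST * total_passengers
marginal_cost_revenue = MARGINAL_COST * total_passengers

print(f"Revenue at marginal-cost pricing: ${marginal_cost_revenue:,}")  # $4,000
print(f"Revenue with segmented fares:     ${revenue:,}")                # $57,500
print(f"Total cost of the flight:         ${total_cost:,}")             # $54,000
print(f"Covers overhead? {revenue >= total_cost}")                      # True
```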

There is a sense in which the dominant role of overhead costs
creates market failures. That is, the market is not allocating
resources by making marginal calculations. The quantity of overhead
labor is adjusted by management discretion, rather than being
shifted up and down directly in response to incremental demand
fluctuations. Consumers face prices that are marked up to cover
overhead costs, and these prices can far exceed the incremental
cost of supplying more of the good or service.

But unlike textbook market failures, the problem of heavy
overhead costs cannot be corrected by a policymaker who understands
the source of the failure. Each firm must try to develop management
priorities that make effective use of overhead labor. Each firm
must come up with pricing strategies that exploit those most
willing to pay, in order to recover overhead costs. There is no tax
or regulation that can solve these problems more easily than the
firms themselves can.

This also complicates the problem of treating ordinary market
failures. For example, suppose that the government wishes to use a
tax on airline fuel as a tool to get passengers to internalize the
pollution cost of flying. If the airline allocates this additional
cost to price-insensitive passengers and leaves its discounts for
price-sensitive passengers in place, then the total air miles flown
may remain approximately unchanged in response to the tax.

Because most labor is overhead labor, the market’s calculations
must be viewed as more approximate than conventional economics
assumes. That in turn means that policies that try to correct
textbook market failures risk achieving less of their intended
effect, and producing more unintended consequences, than would
be the case if markets operated as the textbooks claim.

UNDERSTATING CONTINGENCY

For neoclassical economists, when consumer tastes are given,
material conditions determine what output is produced and what
inputs are used to produce it. There can be only one equilibrium
outcome. But in reality, strategies and beliefs exert powerful
effects. Outcomes are highly contingent.

I am old enough to remember when service-station attendants
pumped gas into your car. Then, in the 1970s, when oil prices shot
up, stations adopted the self-serve model wherever it was legal to
do so. This change in strategy was not reversed when oil prices
plummeted in the 1980s. It seems that the timing of the sudden
conversion to self-serve was accidental rather than determined by
material conditions.

The restaurant business, to take another example, is one of the most
competitive industries in the United States. Yet we do not observe
prices determined by marginal cost. Instead, these businesses
typically use the strategy of trying to recover much of their
overhead cost by charging higher markups on beverages than on
food.

The evolution of business practices and industry structure can
seem inevitable in hindsight. But this is misleading. The
personal-computer industry is famous for the role of start-ups,
including Apple, Microsoft, and Dell. But with slightly different
business decisions, it could instead have been the province of
Xerox and IBM.

Until the mid-1990s, consumer online access was dominated by
proprietary services, such as CompuServe and America Online, each
with its own separate content. But eventually these gave way to the
inter-operable networks known as the internet. Yet in today’s
social networking, we see no such inter-operability. Instead, we
see separate platforms, notably Twitter, LinkedIn, and
Facebook.

It is not obvious why some business strategies succeed and
others fail. Google has attempted to extend its reach from web
search to email, web browsing, computer-operating systems, mapping,
video, and social networking. Its success in each realm has varied.
With different approaches to strategy and execution, it might have
dominated social networking and flopped in email, rather than the
other way around. None of this was inevitable, or even readily
predictable.

Google’s approach has usually been to offer software and content
at no charge to consumers, with revenue coming from advertising.
Indeed, under industry leaders Google and Facebook, much of the
content on the internet is supported by advertising. But in theory
there are other plausible business models, including micropayments,
bundled subscriptions, and patronage. We could have arrived at an
outcome where one of these alternative business models prevailed.
That in turn might have led to a very different industry
structure.

Contingency plays a particularly large role in finance.
Financial markets can arrive at many different outcomes, depending
on the pattern of self-fulfilling beliefs. The financial crisis of
2008 reflected a sharp swing in investor sentiment regarding
mortgage-related securities and other debt instruments backed by
bank loans. Prior to the crisis, there was overconfidence in these
instruments. At the height of the crisis, in the fall of 2008,
there was probably an excess of pessimism about what many
securities were worth.

Investor beliefs also affect the performance of businesses.
Amazon would never have gained such a prominent role in retail
without the confidence of investors who kept pouring capital into
the company, patiently enduring years in which earnings were low or
non-existent.

In macroeconomics, most economists posit a tight relationship
between the money supply and the overall price level. But money’s
role in inflation is also mediated by beliefs. Households and firms
have many choices regarding the means of payment. Credit cards and
electronic funds transfers are increasingly important means for
carrying out transactions. Young people in the United States rarely
conduct business in cash. Even for purchases under $10, they prefer
to employ credit cards. They are starting to use smartphone apps to
make payments, and in fact some other countries may be ahead of the
U.S. in their adoption of mobile-payment technologies.

The widespread use of paperless payment mechanisms has broken
the direct link between the supply of money and the ability of
people to undertake transactions. As a result, there is no tight
mechanical relationship between the quantity of money as controlled
by the Federal Reserve and the overall behavior of prices.
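
The "tight relationship" that economists posit is usually expressed through the textbook quantity equation; the point here is that its velocity term stops being stable once payment habits change.

```latex
% Quantity equation (a textbook identity): money M times velocity V
% equals the price level P times real output Y.
M \cdot V \equiv P \cdot Y
% A mechanical link from M to P requires V (and Y) to be predictable;
% paperless payments and shifting habits make V anything but.
```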

Instead, prices are determined in large part by habit. People
accept payment in dollars and sign contracts for future payments
based on what they assume they will be able to buy with those
dollars in the future. They know that the general trend in some
industries, such as health insurance and college education, is for
prices to rise. They know that in other industries, such as
computers and communication, quality-adjusted prices are
decreasing. They know that the prices of commodity-based goods,
such as gasoline, can experience more short-term fluctuations than
other prices. But in general, people assume that the purchasing
power of a dollar will be about the same next month as it is today.
These expectations become self-fulfilling, as many businesses keep
their prices unchanged for long periods.

The U.S. government could, by running a substantial budget
deficit financed by money creation, eventually dislodge these
habitual expectations for prices and generate high and variable
inflation. But short of that, most wages and prices are likely to
continue to move within a narrow range.

This degree of contingency - and the importance of preferences,
habits, and expectations - suggests that the social order studied
by economists is not nearly as stable and predictable as mainstream
theories assume. And it should move policymakers to ask whether our
economy is prepared for that stability to be shaken or lost.

THE QUESTION OF FRAGILITY

The neoclassical economist sees the economy in a deterministic
equilibrium and asks how that equilibrium can be improved. If
instead we looked at economic outcomes as contingent, we would ask
how catastrophic failure can best be prevented. Instead of assuming
that the economy is robust, we would look for sources of fragility.
Instead of hunting for market failures to avert, we would look for
fragilities to mitigate.

The infrastructure for delivering electrical power offers a
helpful analogy. If we assume that the grid is robust, we might
look for ways to squeeze more efficiency out of it. If instead we
see it as fragile, we would look for ways to introduce redundancy
and to isolate points of potential failure.

In the economy, we might identify three sources of potential
catastrophic failure: the government budget; the financial sector;
and large firms in the technology sector.

At the moment, investors treat government debt as a safe asset.
Everyone who owns government bonds expects to be paid in good funds
on time. This belief is self-fulfilling, in that it allows
government to “roll over” its debt, meaning that it can pay off
debts as they come due by undertaking new borrowing. A sovereign
debt crisis occurs when enough investors doubt that a government
will always be able to roll over its debt. No one wants to be
holding government debt just before it goes into default. Fear of
default will make investors reluctant to hold government debt.
Thus, the fear causes the event that is feared to take place.

At present, the U.S. government budget is on an unsustainable
path, as the Congressional Budget Office repeatedly indicates.
Barring a change in the outlook for taxes and spending, deficits
will get larger and larger until eventually there will not be
enough tax revenue available even to cover interest on the
debt.

For now, investors take the view that a crisis is a long way
off. They assume that, for the near future, government debt will be
repaid. But if that expectation were to
change, it would trigger a self-fulfilling crisis. This would send
the interest rate on debt soaring, forcing the government to
immediately correct its fiscal path by some combination of reneging
on spending obligations, sharply raising taxes, partially
defaulting on the debt, and engaging in inflationary finance.

The social and political consequences of a sovereign debt crisis
are severe. Germany’s social fabric did not recover from the
hyperinflation of the 1920s, which wiped out the savings of many
middle-class households. Greece was torn apart by its sovereign
debt crisis, even though it was given substantial support from the
European Union. Of course, the United States is much larger than
Greece, but this cuts both ways: It means our economy is more
robust, but also that there is no entity that could bail out this
country in a crisis.

The best way to address the fragility of the budget would be to
put government finances on a sustainable path. The most important
step would be to rein in future entitlement spending. Other options
include reducing non-entitlement spending and raising taxes.
Needless to say, our political system is in no mood to do any of that.
But leaders looking to mitigate the risks of fragility would make
it a priority.

The financial sector contends with another form of fragility. As
we have seen, the failure of significant financial entities can
cause a general freeze-up, as the creditors of that firm and the
creditors of similar firms try to clarify their balance sheets and
reduce exposure. It can take a long time to resolve the bankruptcy
of a large, complex financial institution, and until things are
sorted out, the creditors of that firm cannot know how much they
will be paid.

It is far from clear what can be done to mitigate this risk.
Throughout our country’s history, regulators have attempted many
approaches to this problem, and none of them has eliminated
fragility. As Charles Calomiris and Stephen Haber point out in
their 2014 book, Fragile by Design, until the 1980s banking
policy was dominated by the public’s fear of large national banks.
The result was a fragmented financial system, with banks unable to
cross state lines. While the small size of individual banks limited
the consequences of any single bank failure, American banks were
not robust, and the system as a whole was not well diversified. In
the 1930s, many banks failed at once. In the 1980s, many savings
and loan associations failed at once.

By 2008, the United States had a much more concentrated
financial system, including large national banks as well as other
major financial institutions - from Bear Stearns, Lehman Brothers,
and Freddie Mac to Fannie Mae and AIG. These firms had obligations
to other firms that were so voluminous and so complex that doubts
about their ability to meet those obligations threatened many other
financial institutions, both here and abroad.

The Dodd-Frank legislation that was enacted in the wake of the
financial crisis of 2008 envisions two approaches to the problem of
large, complex financial entities. One approach is tighter
regulation of firms designated as systemically important. The other
approach has such firms develop “living wills” that would specify
how they would be reorganized should they fail. The hope was that
implementing such reorganizations would obviate the need for
bailouts.

There are good reasons to be skeptical of both of these
approaches. It is unlikely that mere human beings at government
agencies can develop sufficient expertise, insight, and proficiency
to render any firm too regulated to fail. As for the “living
wills,” it is hardly straightforward to break up and reorganize a
major bank when everything is going well, let alone during a
crisis.

Many economists believe that the capital structure of banks, and
perhaps of non-financial institutions as well, is too heavily
weighted toward debt and away from equity. These economists cite
the relatively minor economic impact of the stock-market crash of
2000-2001 in comparison to the financial crisis of 2008. A decline
in stock prices is a more graceful form of failure than a debt
crisis.

But it is not easy to force a more robust financial structure on
banks. A bank with a low ratio of debt to equity can still be
highly leveraged. For example, a bank with low debt but a large
exposure to financial derivatives would be susceptible to failure
that throws counterparties into confusion, with consequences quite
similar to those of the failures of financial institutions in 2008.
It might be better to aim for a system in which financial firms can
fail gracefully. Some scholars have proposed adding a new chapter
to the bankruptcy code, which would allow for the rapid transfer of
ownership of a failed bank to a single class of debt-holders,
leaving other creditors of the bank unaffected.

For the system as a whole, it might be better to reduce the size
of the largest institutions. In less than 50 years, we have gone
from a system that was too fragmented to one that is probably too
concentrated. Even though the Canadian banking system is highly
concentrated, the largest U.S. banks now have much larger balance
sheets than those of their counterparts in Canada. The U.S. could
discourage the growth of the largest banks by, for example,
limiting the total amount of insured deposits permitted for any one
banking entity.

Next, consider large firms in a different sector: technology.
The traditional anti-trust approach is to ask how industry
concentration affects consumers. But consumer welfare is difficult
to assess in industries with high fixed costs and low marginal
costs. In the case of advertising-supported content, for example,
the consumer may be paying nothing directly. But do Google or
Facebook exploit monopoly power with advertisers, and if so, how
can we tell how this affects social welfare?

It may be impossible to answer such questions. And perhaps they
are not the right questions to be asking in the first place.
Instead, we might wish to ask the same questions of large tech
companies that we ask of large financial firms: Could they fail
gracefully, or would the failure of such a firm create an
economy-wide crisis?

From this perspective, it seems that we need not fear Facebook
or Twitter. The rest of the economy does not seem to be intimately
tied to those firms. But for Google, Amazon, and Apple, it may be a
different story. Each of those firms is embedded in a large business
ecosystem. Could each ecosystem largely survive the failure of the
core firm? If the answer is “no,” then policymakers should look for
ways to try to promote greater redundancy and resiliency in those
ecosystems. Steps might include making it easier for other firms to
compete in businesses that are at the center of those ecosystems.
We might want to try to avoid a situation in which the economy
depends heavily on a single provider of cloud computing or
streaming video.

FAILING GRACEFULLY

These observations point toward a fundamentally novel conception
of the role of economic policy, and especially economic regulation.
The idea that policymakers and regulators should seek to help
markets that are highly effective calculators avert predictable
failures is based on a conception of the economy that cannot
withstand scrutiny, and that perhaps no one really believes.

A more plausible conception of what markets are good at would
suggest not more interventionist regulation but an approach that
seeks to shore up the economy where it is most fragile and weak.
Knowing that markets are far from infallible calculators, and that
contingent judgments have immense implications, policymakers should
be on the lookout for weak points, and should hunt down and
eradicate sources of especially concentrated and dangerous fragility.

Economists and policymakers tell themselves that they are
dealing with a robust, predictable system with easily recognized
points of failure and that they have reliable regulatory tools for
steering it toward more optimal results. In reality, they are
dealing with a fragile, complex system. It is less appropriate to
seek to optimize this system, and more appropriate to worry about
keeping it from failing catastrophically.

Arnold Kling is an adjunct
scholar with the Cato Institute and a member of the Financial
Markets Working Group at the Mercatus Center at George Mason
University.

The full article appeared in the Cato Journal.