The Region

Theory Ahead of Rhetoric: Economic Policy for a "New Economy"

An excerpt from the essay in the Cleveland Fed's annual report discusses the nature of business cycles and the role of monetary policy in the so-called "new economy."

Published September 1, 2000  |  September 2000 issue

In the last issue of The Region, Minneapolis Fed President Gary Stern discussed the nonaccelerating inflation rate of unemployment (NAIRU), and suggested that while it may be too early to discard the notion that labor market conditions may be a precursor of inflation, the recent performance of the economy has shifted the burden of proof to those who continue to assert NAIRU's value in understanding and predicting inflation. The June Region also included three articles that established a framework for thinking about the so-called "new economy" in terms of policymaking. The following article extends these discussions and further investigates the nature of business cycles and the role of monetary policy. This article is an excerpt from the Federal Reserve Bank of Cleveland's 1999 Annual Report.

The language of monetary policy is replete with concepts and empirical constructs inherited from an era when damping business-cycle fluctuations was the sine qua non of successful economic policy. The deep theoretical weaknesses of these ideas—embodied in notions such as "potential" output, "the" noninflationary rate of unemployment, growth "speed limits" and the like—have manifested themselves with a vengeance over the past decade, prompting casual observers to hail the so-called new economy. In fact, it's not that the economy is new, but that the policy lexicon is old.

Numerical estimates of a nation's economic potential began as simple trend lines drawn, after the fact, through the ups and downs of the aggregate data, as Paul Samuelson discusses in his classic college textbook on economics:

"If we draw a smooth trend line or curve, either by eye or by some statistical formula, through the growing components of NNP [net national product], we discover the business cycle in the twistings of the data above and below the trend line."

In 1961, a novel technique for estimating the economy's potential was developed by Arthur M. Okun and later was given official sanction by the President's Council of Economic Advisers. Okun's procedure connected the "problem" of the business cycle to the "problem" of unemployment. He conjectured that "a 4 percent unemployment rate is a reasonable target under existing labor market conditions," and estimated that for every percentage point the unemployment rate rises above this optimal level, the economy (measured by real GNP) will fall 3 percent below its potential. Using this three-to-one rule, Okun argued that policymakers could translate the rate of joblessness into a measure of actual output in relation to national potential.
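Okun's three-to-one rule is simple arithmetic, and a short sketch (the function name and the sample rates below are ours, for illustration only) makes the translation from joblessness to output explicit:

```python
# Okun's three-to-one rule, as stated above: each percentage point of
# unemployment above the 4 percent target puts real GNP an estimated
# 3 percent below the economy's potential.
def okun_output_gap(unemployment_rate, target=4.0, multiplier=3.0):
    """Estimated percent shortfall of real GNP from potential output."""
    return multiplier * (unemployment_rate - target)

print(okun_output_gap(6.0))  # 6% unemployment -> output 6.0% below potential
print(okun_output_gap(4.0))  # at the 4% target, no estimated gap: 0.0
```

The rule's appeal to policymakers was exactly this mechanical translation: a single observable statistic, the unemployment rate, stood in for the unobservable gap between actual and potential output.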

At about the same time Okun was giving policymakers a target for national economic performance, a New Zealander named Alban W. Phillips was documenting a negative correlation between the rate of joblessness and another undesirable economic condition, inflation. In what is now known as the "Phillips curve," economists observed that underperforming economies tend to see inflation fall, while overperforming economies see inflation rise. Eventually, Okun's and Phillips' ideas would be connected and captured in what economists call the "nonaccelerating inflation rate of unemployment," or NAIRU. According to this labor-market indicator of potential GDP, sustained movements in measured unemployment below the NAIRU portend accelerating prices, while unemployment rates above the NAIRU precede disinflation.

The costly experiment

A major problem with the use of potential output and NAIRU as the basis for economic policy emerged in the 1960s. In 1964, despite three years of strong growth (averaging about 4 percent annually after inflation), the Council of Economic Advisers estimated that the economy was operating well under its "potential." In its 1964 report, the Council claimed that "only a significant acceleration of expansion can enable the Nation to make full use of its growing labor force and productive potential." The report proposed a major tax reduction program that "would add $30 billion to total output and create 2 to 3 million extra jobs" and called for monetary policy to work in conjunction with the fiscal authority to stimulate demand conditions.

By 1966, the Council reported that "the economy [had] caught up with its potential" and heralded the closing of the gap as "a great achievement." But in subsequent reports, the Council noted that the economy had probably overshot its potential in mid-1965 and was operating above it during the latter half of the 1960s. That view was based not only on GNP statistics, but also on the unexpected acceleration of inflation during 1968-69.

During the first two years of the 1970s, attempts were made to curb inflation by restraining the demands that were presumably pushing the economy above its potential. But those steps proved less effective than hoped. In 1971, inflation was slightly above its level of two years earlier, and the unemployment rate had nearly doubled. In August 1971, President Nixon took more drastic measures by imposing a 90-day freeze on wages and prices, followed by still other price-control measures that continued through the spring of 1974. In the end, the dismal economic performance of the 1970s—a succession of fits and starts leading to ever-higher unemployment and inflation—introduced the term "stagflation" into public discourse.

What went wrong? Economists now accept that the policy prescriptions suggested by the Phillips curve failed to account for the important role that expectations play in the observed inflation-unemployment trade-off. As inflation's trend escalated, people changed their behavior. The patterns in the data that economists had used to derive their trade-off theories—and that policymakers had relied on in responding to economic conditions—did not remain stable when inflation expectations changed. Specifically, the lower rates of joblessness that policymakers believed could be "bought" with higher inflation were not realized for long, as employees adjusted their wage demands upward to compensate for their rising cost of living.
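The mechanics of this breakdown can be sketched with a stylized, expectations-augmented Phillips curve. The coefficients and rates below are our own illustrative assumptions, not estimates from the essay: when expectations adapt to realized inflation, holding unemployment below its equilibrium rate buys no lasting reduction, only ever-rising inflation.

```python
# Stylized expectations-augmented Phillips curve with adaptive expectations:
#   inflation = expected_inflation + a * (u_star - u)
# Holding unemployment (u) one point below the equilibrium rate (u_star)
# pushes inflation up period after period as expectations catch up.
def inflation_path(u, u_star=5.0, a=0.5, expected=2.0, periods=5):
    path = []
    for _ in range(periods):
        # The trade-off holds only for a given level of expectations...
        inflation = expected + a * (u_star - u)
        path.append(inflation)
        # ...but workers adjust wage demands to the inflation they observe.
        expected = inflation
    return path

print(inflation_path(u=4.0))  # [2.5, 3.0, 3.5, 4.0, 4.5] -- accelerating
print(inflation_path(u=5.0))  # [2.0, 2.0, 2.0, 2.0, 2.0] -- stable at u_star
```

The pattern in the data shifts as expectations shift, which is precisely why the statistical trade-off policymakers thought they could exploit did not remain stable.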

It became clear—painfully so—that there is no fixed mapping of the rates of unemployment and inflation that is independent of the public's inflationary expectations. In the 1975 Economic Report of the President, the Council declared, "In the long run ... there would not appear to be a mechanism linking the rate of unemployment to any one rate of stable wage or price increase." Although this statement seems, in isolation, to cast off the Okun's law-NAIRU-Phillips curve troika as a meaningful policy guide, that certainly wasn't the result. This passage laments not the Phillips-curve framework but the inability to use it better.

This belief persists today. A growing number of economists are coming to the conclusion that the policy failures of the late 1960s and 1970s (and perhaps other episodes) can be attributed less to the inadequacy of the framework than to the inherent uncertainty of determining the economy's potential. To many, the undisputed improvement in monetary policy from the 1980s through the 1990s was the happy consequence of simply learning the economy's true potential. The promise for sustaining this improvement, then, was to be found in better statistical techniques and enhanced information collection.

There is another interpretation: Neither potential output nor the NAIRU can be made a more useful concept, even with better measurement or better econometrics. The policy successes of the past two decades have resulted not from more precise knowledge of NAIRU or potential GDP, but from a more determined concentration on long-term goals and a deeper appreciation of the dynamic forces driving modern economies.

Losing the forest amid the trees

An intriguing analogy to the postwar history of U.S. monetary policy can be found in the Forest Service's war against fires. It began with a simple enough question: How do we reduce the number of forest fires? Many solutions resulted, each with a measurable degree of success: educate the public about the harm caused by forest fires, put more resources into fighting them and encourage the development of fire-retardant technologies. And fires were, in fact, reduced—initially.

Unfortunately, it turned out that reducing forest fires had the unexpected consequence of allowing underbrush to grow more dense, creating an unnatural change in the ecological balance of the forests. Fires are naturally occurring phenomena that serve to clean up the accumulated debris on the forest floor, thereby creating opportunities for wildlife and growth that would otherwise have been squeezed out by the heavy undergrowth.

Even more ironic, the excess buildup of debris increased the severity of fires when they did occur, so that the occasional fire was more catastrophic than the smaller fires the Forest Service had hoped to contain. In the end, the well-intended policy considered too narrow a model of the forest. Instead of asking how to prevent forest fires, the Forest Service should have asked, what is the function of fires in the forest ecology?

The lesson of this example is that it is easy to lose the forest amid the trees—in this case, literally. It is absolutely understandable that the dominant question to come out of the Depression would be, how do we avoid a catastrophic collapse of economic activity? Likewise, it was reasonable that the creation of the Federal Reserve System would be motivated by the question, how do we avoid a catastrophic collapse of the financial sector?

However, as we understand them today, these two questions are likely related. In an important and influential paper published in 1983, Ben Bernanke of Princeton University proposed that the systemic collapse of financial intermediation converted what might have been a significant, but otherwise unexceptional, downturn into the Great Depression. Embracing this view leads one to ask about reforming the institutional structure of financial institutions and markets, questions far removed from that of how to eliminate the business cycle as it has been understood since Keynes.

In fact, the post-Depression view that ups and downs in economic activity are, by and large, pathological sidestepped the real question: What is the role of business-cycle fluctuations in the macroeconomic ecology? It would be some 40 years before economists would address this question in earnest, but attendant on its answer came a discernible shift toward the establishment of long-term goals for monetary policy.

Lessons in long-run policy dynamics

If you ask us to name the three theoretical developments that have had the most significant influence on economic policy thinking in the past 30 years, we answer: rational expectations, time inconsistency and "real" business cycles.

The first two would raise few eyebrows among academics. Rational expectations, brought to modern macroeconomics by Nobel Laureate Robert E. Lucas Jr., introduced forward-looking behavior into policy discussions in a formal and systematic way. This sounded the death knell for the Phillips curve as an exploitable tool of policy and spawned a rich, varied literature on the vital role of expectations in the dynamics of economic activity.

Related to rational expectations, time inconsistency predicted adverse consequences from economic policies that failed to commit to clear and consistent long-term objectives. This was an old but underappreciated principle that applied to the formulation of economic policies. Because expectations are forward-looking, policies that individually appear reasonable (if not optimal) in the short run are decidedly less than optimal when considered over time.

These two contributions emphasize the importance of rules, as opposed to discretion, in economic policy. But not any rule will do. The policy rule must commit to future actions today and the policymaker must be held accountable to them. In the case of monetary policy, the problem of time inconsistency implies that the monetary authority should emphasize transparent, credible policies regarding the future purchasing power of money. Without commitment, the rule on which inflation expectations are formed is not credible, since the public knows that at any point, the monetary authority will be tempted to renege on its long-run promise in the interest of short-run expedience.

Clearly these ideas have taken hold, and they provide much of the current intellectual underpinnings of central banks' behavior all over the world—not least because they explain how policy had previously erred. In the United States, the economic stabilization policies of the 1960s and 1970s, which caused instability in the purchasing power of money, produced a reduction in national welfare. Inflation, the nation learned, redistributes wealth capriciously. If the general price level unexpectedly rises because of an excess supply of money, people who made decisions based on the expectation of a stable purchasing power of money lose. Savers come to realize that they lent money at too small a return when they are paid back in dollars that have less purchasing power than before. And employees will regret that their dollar-denominated earnings did not anticipate the drop in the dollar's purchasing power. These are just two examples of the countless bad decisions caused by unexpected inflation.
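The savers' loss described above is a matter of simple arithmetic. A brief sketch (the rates here are hypothetical) shows how unexpected inflation turns a seemingly adequate nominal return negative in real terms:

```python
# Hypothetical numbers: a saver lends at 5% nominal interest expecting 2%
# inflation. If inflation instead runs at 6%, the repaid dollars buy less
# than the dollars lent, so the realized real return is negative.
def real_return(nominal, inflation):
    """Real rate of return via the exact Fisher relation."""
    return (1 + nominal) / (1 + inflation) - 1

print(round(real_return(0.05, 0.02) * 100, 2))  # anticipated real return: 2.94%
print(round(real_return(0.05, 0.06) * 100, 2))  # realized real return: -0.94%
```

The gap between the anticipated and realized rows is the capricious redistribution the essay describes: the saver's loss is the borrower's windfall, and neither was the result of any productive decision.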

At this point, the importance of dynamics is revealed as a crucial shortcoming of the original Phillips-curve approach. Losers, it turns out, don't like to lose. Once people have experienced a loss caused by capricious changes in the purchasing power of their money, they take precautions to prevent future losses. That is, they alter their behavior and redirect their resources to protect against losses from future inflation, leaving the economy with fewer resources to devote to production.

These reallocations can take many forms: People may buy land or homes as an inflation hedge, or financial institutions may raise borrowing rates to compensate for the risk associated with the uncertain purchasing power of a dollar. Indeed, any decision with a dollar-denominated outcome will involve an added cost associated with uncertainty about the future purchasing power of money. In short, knowing that the purchasing power of a dollar is stable will lead to better allocation of resources than is possible in an environment that suffers from inflation.

The real business cycle approach to economic modeling

While the ideas of rational expectations and time inconsistency have had a profound impact on monetary policy over the past two decades, can the same be said of real business cycle theory? After all, here is a line of research originating in two articles—Kydland and Prescott (1982) and Long and Plosser (1983)—that pointedly omitted money altogether. That is, these models had no clear role for the monetary authority.

Real business cycle theory now refers generically to a class of models in which aggregate outcomes are the sum of the decisions made by individual firms and households operating in fully dynamic environments with explicitly modeled constraints, opportunities, market structures and coordination mechanisms. These models incorporate money, taxes and a variety of market frictions and imperfections.

Despite a promising body of research incorporating the older Keynesian notions of market imperfections—sticky prices and such—the lessons of the original real business cycle models have survived. These models are still "real" in the sense that their economic fluctuations come from informed decisions of perfectly competitive, efficiently functioning households and businesses as they respond to changes in productivity. Real business cycle models can account for the economic patterns we actually observe—large fluctuations in output around a statistical trend. Furthermore, these fluctuations are quantitatively significant, suggesting that the bulk of typical business-cycle fluctuations might best be characterized as the economy's optimal response to random external forces that—fortunate or unfortunate—are not appropriate objects of policy response.

Indeed, the real business cycle framework leads to the conclusion that the concept of potential output is hollow. It is always possible to measure some average or trend level of output after the fact. But if one views the path of the economy, approximately and excepting extreme circumstances, as the dynamic unfolding of a sequence of optimal outcomes given the inherited structure of the economy, then actual and potential output become one and the same.

Further theoretical advances have subjected the NAIRU to the same fate as potential output. So-called "search-theoretic" models, of the kind pioneered by Mortensen and Pissarides (1994), generate variations in equilibrium unemployment analogous to output fluctuations in the real business cycle tradition, making the notion of NAIRU equally vacuous. As with potential output, it is always possible (after the fact) to correlate some level of unemployment with accelerating inflation. But without an explicit description of how economic policies can be used to alter the matching of workers and jobs in the labor market, that correlation is meaningless to economic policymakers.

Aligning rhetoric with reality

A critical feature of the real business cycle framework and its offspring is the intentional and explicit connection to the theory of economic growth. The economist or policymaker viewing the world through the lens of dynamic general-equilibrium intuition is never far removed from the long-run consequences of his or her reasoning. And this is the true legacy of the empirical failure of traditional postwar thinking and the attendant theoretical advances in macroeconomics from the early 1970s on: the breakdown of support for activist stabilization policies in favor of policies and institutional structures that tether the short-run behavior of policymakers to long-run economic welfare.

That monetary policy can wreak havoc on financial markets and can be a disruptive influence on the economy is unquestioned. This was a hard lesson learned. But whether a central bank can systematically and predictably "create" prosperity is another matter entirely.

This is not to say that monetary policy does not have an important role to play in the economy, but that "good policy" is not synonymous with accurate demand management. An effective policy is one that aims to promote long-run national growth, not one that manages movements around a statistical growth trend.

In the short run, it is important to strike a balance between the quantity of money demanded in the economy and the amount the central bank supplies. Such a balance keeps the purchasing power of money constant. If policy is backed by commitment, thus making it time consistent, the Federal Reserve promotes economic prosperity by reducing the risk associated with dollar-denominated decisions. In so doing, it helps to promote the creation of wealth. While Congress requires the Federal Reserve to promote effectively the goals of maximum employment, stable prices and moderate long-term interest rates, it does not specify how these objectives are to be accomplished. Over time many Federal Reserve officials have come to regard the attainment of price stability as the most effective means of achieving these legislated goals.

We contend that this perspective has been absolutely pervasive in U.S. monetary policy over the past decade. The resolutely forward-looking focus on potential price pressures reflects the increasingly popular view that maintaining a relatively stable and predictable purchasing power of money is the primary welfare-enhancing role of monetary policy. The increasing openness of Federal Reserve decision-makers—reflected in announced policies aimed at more rapid and transparent dissemination of Federal Open Market Committee decisions—needs to be appreciated in light of the established importance of credibility in the policymaking process. And policymakers' growing reluctance to respond aggressively to output and unemployment levels merely because they diverge from presumed estimates of potential and the NAIRU, absent discernible inflationary pressures, suggests the waning influence of these ideas on the conduct of economic policy.

If the principles guiding monetary policy have changed, why do some analysts still talk about "overheating," "growth above potential," unemployment rates that are "too low" and "wage pressures"? One explanation is that our assertion is wrong, and old-style stabilization policy is still the order of the day, at least for some policymakers.

Another explanation is that the rhetoric of monetary policy has failed to keep pace with theory and practice. Although policymakers may have conquered the fine-tuning impulse, they have yet to fully abandon the language that accompanies it. In a world where expectations matter, the language of policymakers can have consequences. As we confront the real challenges that financial innovation, rapid globalization and the new economy will bring, these are complications we can ill afford. It is time to align rhetoric with reality.
