Interview conducted June 15, 2010
All scholars strive to make important contributions to their discipline. Thomas Sargent irrevocably transformed his.
In the early 1970s, inspired by the groundbreaking work of Robert Lucas, Sargent and colleagues at the University of Minnesota rebuilt macroeconomic theory from its basic assumptions and micro-level foundations to its broadest predictions and policy prescriptions.
This “rational expectations revolution,” as it was later termed, fundamentally changed the theory and practice of macroeconomics. Prior models had assumed that people respond passively to changes in fiscal and monetary policy; in rational expectations models, people behave strategically, not robotically. The new theory recognized that people look to the future, anticipate how governments and markets will act, and then behave accordingly in ways they believe will improve their lives.
Therefore, the theory showed, policymakers can’t manipulate the economy by systematically “tricking” people with policy surprises. Central banks, for example, can’t permanently lower unemployment by easing monetary policy, as Sargent demonstrated with Neil Wallace, because people will (rationally) anticipate higher future inflation and will (strategically) insist on higher wages for their labor and higher interest rates for their capital.
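The mechanism can be stated in a single textbook equation (a standard rendering of the policy-ineffectiveness logic, not the exact specification of the Sargent-Wallace papers):

```latex
% Expectations-augmented Phillips curve with rational expectations:
\[
  u_t \;=\; u^{*} \;-\; \theta\,\bigl(\pi_t - E_{t-1}\pi_t\bigr) \;+\; \varepsilon_t,
  \qquad \theta > 0 .
\]
% Under rational expectations, E_{t-1}\pi_t already incorporates any systematic,
% rule-based part of monetary policy, so only unforecastable policy surprises
% (\pi_t - E_{t-1}\pi_t) can push unemployment u_t away from its natural rate u^*.
```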
This perspective of a dynamic, random macroeconomy demanded deeper analysis and more sophisticated mathematics. Sargent pioneered the development and application of new techniques, creating precise econometric methods to test and refine rational expectations theory.
But by no means has Sargent limited himself to rational expectations. Among his dozen books and profusion of research articles are key contributions to learning theory (the study of the foundations and limits of rationality) and to economic history, including influential work on monetary standards and international episodes of inflation.
Interviewed here by now-retired Research Director Art Rolnick, a colleague since the 1970s at the University of Minnesota and Minneapolis Fed, Sargent explores issues ranging from polar models of banking regulation and crisis to causes of persistently high unemployment to a compelling defense of modern macro. Underlying the entire conversation is the “vocabulary of rational expectations,” observes Sargent. “In our dynamic and uncertain world, our beliefs about what other people and institutions will do play big roles in shaping our behavior.”
Modern macroeconomics under attack
Rolnick: You have devoted your professional life to helping construct and teach modern macroeconomics. After the financial crisis that started in 2007, modern macro has been widely attacked as deficient and wrongheaded.
Sargent: Oh. By whom?
Rolnick: For example, by Paul Krugman in the New York Times and Lord Robert Skidelsky in the Economist and elsewhere. You were a visiting professor at Princeton in the spring of 2009. Along with Alan Blinder, Nobuhiro Kiyotaki and Chris Sims, you must have discussed these criticisms with Krugman at the Princeton macro seminar.
Sargent: Yes, I was at Princeton then and attended the macro seminar every week. Nobu, Chris, Alan and others also attended. There were interesting discussions of many aspects of the financial crisis. But the sense was surely not that modern macro needed to be reconstructed. On the contrary, seminar participants were in the business of using the tools of modern macro, especially rational expectations theorizing, to shed light on the financial crisis.
Rolnick: What was Paul Krugman’s opinion about those Princeton macro seminar presentations that advocated modern macro?
Sargent: He did not attend the macro seminar at Princeton when I was there.
Rolnick: Oh.
Sargent: I know that I’m the one who is supposed to be answering questions, but perhaps you can tell me what popular criticisms of modern macro you have in mind.
Rolnick: OK, here goes. Examples of such criticisms are that modern macroeconomics makes too much use of sophisticated mathematics to model people and markets; that it incorrectly relies on the assumption that asset markets are efficient in the sense that asset prices aggregate information of all individuals; that the faith in good outcomes always emerging from competitive markets is misplaced; that the assumption of “rational expectations” is wrongheaded because it attributes too much knowledge and forecasting ability to people; that the modern macro mainstay “real business cycle model” is deficient because it ignores so many frictions and imperfections and is useless as a guide to policy for dealing with financial crises; that modern macroeconomics has either assumed away or shortchanged the analysis of unemployment; that the recent financial crisis took modern macro by surprise; and that macroeconomics should be based less on formal decision theory and more on the findings of “behavioral economics.” Shouldn’t these be taken seriously?
Sargent: Sorry, Art, but aside from the foolish and intellectually lazy remark about mathematics, all of the criticisms that you have listed reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished. That said, it is true that modern macroeconomics uses mathematics and statistics to understand behavior in situations where there is uncertainty about how the future will unfold from the past. But a rule of thumb is that the more dynamic, uncertain and ambiguous is the economic environment that you seek to model, the more you are going to have to roll up your sleeves, and learn and use some math. That’s life.
Rolnick: Putting aside fear and ignorance of math, please say more about the other criticisms.
Sargent: Sure. As for the efficient markets hypothesis of the 1960s, please remember the enormous amount of good work that responded to Hansen and Singleton’s ruinous 1983 JPE [Journal of Political Economy] finding that standard rational expectations asset pricing theories fail to fit key features of the U.S. data.1 Far from taking the “efficient markets” outcomes for granted, important parts of modern macro are about understanding a large and interesting suite of asset pricing puzzles, brought to us by Hansen and Singleton and their followers—puzzles about empirical failures of simple versions of efficient markets theories. Here I have in mind papers on the “equity premium puzzle,” the “risk-free rate puzzle,” the “Backus-Smith” puzzle, and on and on.2
These papers have put interesting new forces on the table that can help explain these puzzles, including missing markets, enforcement and information problems that impede trades, difficult estimation and inference problems confronting agents, preference specifications with novel attitudes toward the timing and persistence of risk, and pessimism created by ambiguity and fears of model misspecification.
Rolnick: Tom, let me interrupt. Why should we at central banks care about whether and how those rational expectations asset pricing theories can be repaired to fit the data?
Sargent: Well, there are several important reasons. One is that these theories provide the foundation of our ways of modeling the main channels through which monetary policy’s interest rate decisions affect asset prices and the real economy. To put it technically, the “new Keynesian IS [investment-savings] curve” is an asset pricing equation, one of a form very close to those exposed as empirically deficient by Hansen and Singleton. Efforts to repair the asset pricing theory are part and parcel of the important project of building an econometric model suitable for providing quantitative guidance to monetary and fiscal policymakers.
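For readers who want the equations, a standard textbook version of the link Sargent describes (not his exact specification) is the following:

```latex
% Consumption-based asset pricing (Euler) equation with CRRA utility, the kind
% of restriction Hansen and Singleton tested:
\[
  1 \;=\; E_t\!\left[\beta \left(\frac{C_{t+1}}{C_t}\right)^{-\gamma} R_{t+1}\right].
\]
% Log-linearizing the same condition yields the "new Keynesian IS curve" relating
% the output gap x_t to the policy rate i_t, expected inflation and the natural
% real rate r^n_t:
\[
  x_t \;=\; E_t x_{t+1} \;-\; \frac{1}{\gamma}\,\bigl(i_t - E_t\pi_{t+1} - r^{n}_t\bigr).
\]
% The IS curve is the same asset pricing equation in log-linear disguise, which
% is why its empirical failures matter for quantitative policy models.
```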
Another important reason for caring is that monetary policymakers have often been urged to arrest bubbles in asset markets. Easier said than done. Before you can do that, you need a quantitatively reliable theory of asset prices that you can use to identify and measure bubbles.
Rolnick: Before I interrupted, you had begun responding to those criticisms of modern macro. Please continue.
Sargent: I have two responses to your citation of criticisms of “rational expectations.” First, note that rational expectations continues to be a workhorse assumption for policy analysis by macroeconomists of all political persuasions. To take one good example, in the spring of 2009, Joseph Stiglitz and Jeffrey Sachs independently wrote op-ed pieces incisively criticizing the Obama administration’s proposed PPIP (Public-Private Investment Program) for jump-starting private sector purchases of toxic assets.3 Both Stiglitz and Sachs executed a rational expectations calculation to compute the rewards to prospective buyers. Those calculations vividly showed that the administration’s proposal represented a large transfer of taxpayer funds to owners of toxic assets. That analysis threw a floodlight onto the PPIP that some of its authors did not welcome.
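A stylized sketch of the kind of calculation Stiglitz and Sachs performed, with hypothetical payoffs and leverage terms rather than the actual PPIP parameters, is the following:

```python
# Hypothetical example in the spirit of the Stiglitz/Sachs op-eds (numbers and
# leverage terms are made up, not the actual PPIP terms). A toxic asset pays 100
# or 20 with equal probability, so its expected value is 60. If a buyer can
# finance the purchase price mostly with a nonrecourse government loan, the buyer
# keeps the upside while the loan absorbs the downside, so a rational buyer can
# bid well above 60; the gap is an expected transfer from taxpayers.

GOOD, BAD, PROB_GOOD = 100.0, 20.0, 0.5
LEVERAGE = 6.0                      # hypothetical nonrecourse debt per unit of equity

def buyer_expected_profit(price):
    equity = price / (1.0 + LEVERAGE)
    debt = price - equity
    payoff = lambda value: max(value - debt, 0.0)   # nonrecourse: walk away if value < debt
    return PROB_GOOD * payoff(GOOD) + (1 - PROB_GOOD) * payoff(BAD) - equity

# Find the highest price at which a rational leveraged buyer still breaks even.
price = 60.0
while buyer_expected_profit(price + 0.01) >= 0.0:
    price += 0.01
print(f"fundamental value = 60.00, break-even bid with nonrecourse leverage = {price:.2f}")
# prints roughly 87.5: far above the asset's expected value of 60
```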
And second, economists have been working hard to refine rational expectations theory. For instance, macroeconomists have done creative work that modifies and extends rational expectations in ways that allow us to understand bubbles and crashes in terms of optimism and pessimism that emerge from small deviations from rational expectations. An influential example of such work is the 1978 QJE [Quarterly Journal of Economics] paper by Harrison and Kreps.4 You should also look at a fascinating paper that builds on Harrison and Kreps, written by José Scheinkman and Wei Xiong in the 2003 JPE.5 As I mentioned earlier, for policymakers to know whether and how they can moderate bubbles, we need to have well-confirmed quantitative versions of such models up and running. We don’t yet, but we are working on it.
Rolnick: And the other criticisms?
Sargent: OK. The criticism of real business cycle models and their close cousins, the so-called New Keynesian models, is misdirected and reflects a misunderstanding of the purpose for which those models were devised.6 These models were designed to describe aggregate economic fluctuations during normal times when markets can bring borrowers and lenders together in orderly ways, not during financial crises and market breakdowns.
By the way, participants within both the real business cycle and new Keynesian traditions have been stern and constructive critics of their own works and have done valuable creative work pushing forward the ability of these models to match important properties of aggregate fluctuations. The authors of papers in this literature usually have made it clear what the models are designed to do and what they are not. Again, they are not designed to be theories of financial crises.
Rolnick: What about the most serious criticism—that the recent financial crisis caught modern macroeconomics by surprise?
Sargent: Art, it is just wrong to say that this financial crisis caught modern macroeconomists by surprise. That statement does a disservice to an important body of research to which responsible economists ought to be directing public attention. Researchers have systematically organized empirical evidence about past financial and exchange crises in the United States and abroad. Enlightened by those data, researchers have constructed first-rate dynamic models of the causes of financial crises and government policies that can arrest them or ignite them. The evidence and some of the models are well summarized and extended, for example, in Franklin Allen and Douglas Gale’s 2007 book Understanding Financial Crises.7 Please note that this work was available well before the U.S. financial crisis that began in 2007.
Rolnick: I’ll come back to that in a second, but you haven’t said anything yet about what is to be gained in terms of understanding financial crises from importing insights of behavioral economics into macroeconomics.
Sargent: No, I haven’t.
Financial crises
Rolnick: OK then. Well, what useful things does macroeconomics have to say about financial crises, what causes them, how to manage them after they start and what can be done to prevent them?
Sargent: A lot. In addition to the formal literature summarized in the Allen and Gale book, I want to mention the example of the 2004 book by Gary Stern and Ron Feldman, Too Big to Fail.8 That book doesn’t have an equation in it, but it wisely uses insights gleaned from the formal literature to frame warnings about the time bomb for a financial crisis set by government regulations and promises. Indeed, one of the focuses of Gary Stern’s long tenure as president of the Minneapolis Fed was steadily to draw attention to financial fragility issues and what the government does either to arrest crises or, unfortunately as an unintended consequence, to incubate them.
Rolnick: Thanks for the nice words about Gary, but please elaborate further on macro scholarship and financial crises.
Sargent: I like to think about two polar models of bank crises and what government lender-of-last-resort and deposit insurance do to arrest them or promote them. Both models had origins in papers written at the Federal Reserve Bank of Minneapolis, one authored by John Kareken and Neil Wallace in 1978 and the other by John Bryant in 1980, then extended by Diamond and Dybvig in 1983.9 I call them polar models because in the Diamond-Dybvig and Bryant model, deposit insurance is purely a good thing, while in the Kareken and Wallace model, it is purely bad. These differences occur because of what the two models include and what they omit.
The Bryant and Diamond-Dybvig model starts with an environment in which banks can do things that are very worthwhile socially; namely, they provide maturity transformation and liquidity transformation activities that improve the efficiency of the economy. They enable coalitions of people, namely, the banks’ depositors, to make long-term investments—loans, mortgages and the like—while those same depositors hold demand deposits, bank liabilities that are short term in duration because they can be withdrawn at any time. Banks thereby facilitate risk-sharing among people with uncertain future liquidity needs. These are all good things.
But there is a potential problem here because for the long-term investments to come to fruition, enough patient depositors must leave their funds in the bank to avoid premature liquidation of a bank’s long-term investments. Without deposit insurance, situations can arise that induce even patient depositors to want to withdraw their funds early, causing the banks prematurely to liquidate the long-term investments, with adverse effects on the realized returns.
What triggers a bank run is patient depositors’ private incentive to withdraw early when they think that other patient investors are also choosing to withdraw early. Technically speaking, that amounts to multiple Nash equilibria. There are situations in which I run (i.e., withdraw from the bank early) because I expect you to run, and when you also run because you expect me to run. But there are other situations in which we both trust that the other person isn’t going to run and we don’t run. Which equilibrium prevails is anyone’s guess, or something resolved only by an extraneous random device for correlating behavior, a device that economists sometimes call a “sunspot.”
So without deposit insurance, the economy is vulnerable to bank runs. The situations where depositors don’t run lead to good outcomes, but when there are bank runs, outcomes are bad. The good news in the Diamond-Dybvig and Bryant model, however, is that if you put in government-supplied deposit insurance, that knocks out the bad equilibrium. People don’t initiate bank runs because they trust that their deposits are safely insured. And a great thing is that it ends up not costing the government anything to offer the deposit insurance! It’s just good all the way around.
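The coordination logic can be made concrete with a minimal two-depositor sketch; the payoff numbers below are hypothetical, chosen only to exhibit the two equilibria and the way insurance removes the bad one.

```python
# Minimal, illustrative two-depositor version of the Bryant/Diamond-Dybvig
# coordination game. Payoffs are hypothetical; for simplicity, insurance here
# protects only a depositor who waits while the other runs.

from itertools import product

R, EARLY, SALVAGE, SPLIT = 1.5, 1.0, 0.5, 0.8   # mature return, early withdrawal, salvage, split on a joint run

def payoff(me, other, insured):
    """Payoff to a patient depositor choosing 'wait' or 'run'."""
    if me == "wait" and other == "wait":
        return R                      # long-term investment matures
    if me == "run" and other == "run":
        return SPLIT                  # forced liquidation, proceeds split
    if me == "run":
        return EARLY                  # I withdraw my deposit early
    # I wait while the other runs: I get only the salvage value,
    # unless a deposit insurer tops me up to my full deposit.
    return max(SALVAGE, EARLY) if insured else SALVAGE

def nash_equilibria(insured):
    strategies = ["wait", "run"]
    eqs = []
    for a, b in product(strategies, repeat=2):
        best_a = payoff(a, b, insured) >= max(payoff(s, b, insured) for s in strategies)
        best_b = payoff(b, a, insured) >= max(payoff(s, a, insured) for s in strategies)
        if best_a and best_b:
            eqs.append((a, b))
    return eqs

print("No insurance:  ", nash_equilibria(insured=False))   # (wait, wait) and (run, run)
print("With insurance:", nash_equilibria(insured=True))    # only (wait, wait) survives,
                                                            # and the insurer never pays
```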
Rolnick: Do you think that an abstract model like this ever influences policymakers?
Sargent: I believe that the Bryant-Diamond-Dybvig model has been very influential generally, and in particular that it was very influential in 2008 among policymakers. A perhaps oversimplified but I think largely accurate way of characterizing the vision of many policy authorities in 2008 was that they correctly noticed that a Bryant-Diamond-Dybvig bank is not just something that has “B A N K” written on its stationery and front door. It’s any institution that executes liquidity transformation and maturity transformation, thereby offering a kind of intertemporal risk-sharing.
So in 2008, there were all sorts of institutions that were really banks in the economic sense of the Bryant-Diamond-Dybvig model but that did not have access to explicit deposit insurance, institutions like money market mutual funds, shadow banks, even hedge funds that were doing exactly those maturity-transforming and risk-transforming activities.
When monetary policy authorities, deposit insurance authorities and others looked out their windows in the fall of 2008, they saw Bryant-Diamond-Dybvig bank runs all over the place. And the logic of the Bryant-Diamond-Dybvig model persuaded them that if they could arrest the runs by effectively convincing creditors that their loans—that is, their short-term deposits—to these “banks” were insured, that could be done at little or no eventual cost to the taxpayers. You could nip the run in the bud and really prevent the next Great Depression. This is a very optimistic view of those 2008 interventions enlightened by the Bryant and Diamond-Dybvig model.
But Diamond and Dybvig themselves were cautious about promoting such optimism. In the last part of their 1983 JPE paper, Diamond and Dybvig recommend that their readers take seriously the message of a 1978 paper (written at the Minneapolis Fed, as I mentioned earlier) by Kareken and Wallace. That paper includes something important that Diamond and Dybvig recognize that they left out: moral hazard.
Rolnick: And the Kareken-Wallace story?
Sargent: The main idea is that when a government is in the business of being a lender of last resort or a deposit insurer, depending on how it regulates banks, it affects the risk that banks take and the probability that the government is actually going to be required to exercise lender-of-last-resort and bail out facilities. Neil and Jack call it the “moral hazard” problem, which is the idea that when you insure a bank, you alter its incentives to undertake risks.
In the Kareken-Wallace model, deposit insurance is purely a bad thing. Kareken-Wallace envisions a different economic setting than Bryant and Diamond-Dybvig. Of course, like all models, it’s an abstraction; it simplifies things in order to isolate key forces. The Kareken-Wallace setting has complete markets. There are markets in all possible risky claims. There are also some people who wanted to hold risk-free deposits.
Kareken and Wallace compare two different situations. In one, there is no deposit insurance; depositors are on their own and know that their deposits are uninsured. If they want to hold risk-free deposits, they’d better hold them in banks that are holding risk-free portfolios. Some very conservative banks emerge that can issue safe deposits because the bank portfolio managers themselves hold assets that allow these banks to pay depositors in all possible states of the world.
Kareken and Wallace compare that no-deposit-insurance situation to another situation in which a government agency provides deposit insurance that is either free or is priced too cheaply, meaning that it’s not priced with a proper risk-loading. Kareken and Wallace show that in that situation, banks have an incentive to become as risky as possible, and as large as possible. Therefore, with a positive probability, banks will fail and taxpayers will have to compensate banks’ depositors. It is in banks’ shareholders’ interest that the banks organize themselves this way. This lets them gamble with the insurers’ and depositors’ money.
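A back-of-the-envelope version of that incentive, with made-up numbers, looks like this:

```python
# Hypothetical illustration of the Kareken-Wallace moral hazard point: with
# underpriced (here, free) deposit insurance and limited liability, shareholders
# prefer a riskier portfolio even when its expected return is lower. All numbers
# are invented for illustration.

DEPOSITS, EQUITY, DEPOSIT_RATE = 100.0, 5.0, 1.02    # insured deposits pay the safe rate
ASSETS = DEPOSITS + EQUITY

portfolios = {
    "safe":  [(1.0, 1.04)],                  # (probability, gross return)
    "risky": [(0.5, 1.30), (0.5, 0.60)],     # expected return only 0.95, but a big upside
}

for name, outcomes in portfolios.items():
    equity_value = sum(p * max(ASSETS * r - DEPOSITS * DEPOSIT_RATE, 0.0)
                       for p, r in outcomes)          # limited liability: floor at zero
    insurer_cost = sum(p * max(DEPOSITS * DEPOSIT_RATE - ASSETS * r, 0.0)
                       for p, r in outcomes)          # shortfall falls on the insurer
    print(f"{name:5s}: expected equity payoff = {equity_value:6.2f}, "
          f"expected insurer cost = {insurer_cost:5.2f}")
# safe : equity payoff  7.20, insurer cost  0.00
# risky: equity payoff 17.25, insurer cost 19.50  -> shareholders choose 'risky'
```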
The Kareken and Wallace model’s prediction is that if a government sets up deposit insurance and doesn’t regulate bank portfolios to prevent them from taking too much risk, the government is setting the stage for a financial crisis. On the basis of the Kareken-Wallace model, Jack Kareken wrote a paper in the Federal Reserve Bank of Minneapolis Quarterly Review referring to the “cart before the horse.”10 He pointed out that if you’re going to deregulate financial institutions, which we in the United States did in the late ’70s and early ’80s (deregulation is the cart), you’d better reform deposit insurance first (that’s the horse). You’d better make it clear that financial institutions that take these risks are not allowed to have access to lender-of-last-resort facilities. But the U.S. government didn’t do that.
So, of those two models, the Kareken-Wallace model makes you very cautious about lender-of-last-resort facilities and very sensitive to the risk-taking activities of banks. The Diamond-Dybvig and Bryant model makes you very sensitive to runs and very optimistic about the ability of insurance to cure them. Both models leave something out, and I think in the real world we’re in a situation where we have to worry about runs and we also have to worry about moral hazard. As you know, an important theme of research for macroeconomics in general and at the Minneapolis Fed in particular has been about how to strike a good balance.
Rolnick: Jack and Neil concluded their 1978 paper with a proposal for dealing with this tension, and that was to require much more capital than was required at the time. Now the government actually requires even less capital than it did when Jack and Neil wrote. If you go back prior to FDIC insurance, turn-of-the-century banks were holding, by some estimates, 20 percent, maybe 30 percent, capital. Capital-equity ratios were that high.
What would you recommend? You just observed that if deposit insurance isn’t priced properly, that leads you in one direction. And Jack and Neil had this idea of making sure there’s a lot more skin in the game, meaning much closer to what banks used to hold when there was no deposit insurance, no too-big-to-fail.
Sargent: The function of capital is exactly to protect against making risky loans. Another proposal is the narrow banking proposal of Milton Friedman and [other economists at the University of] Chicago, which is a proposal to force deposit banks to hold safe portfolios.
Rolnick: Well, given large banks, too-big-to-fail concerns and deposit insurance, I would make the case for tiering capital requirements based on size. Jack and Neil made the point, I believe, that shareholders of large banks can diversify, but shareholders of smaller banks find it harder to diversify, so they tend to be more risk-averse. Their prediction would therefore have been, I think, that moral hazard is more likely to manifest itself in larger banks—and I think that’s what we saw in the 2007-09 financial crisis. How seriously would you take the relevance of the historical evidence that I cited?
Sargent: I would take it very seriously. I recommend a very interesting paper by Warren Weber presented at the Minneapolis Fed conference in honor of Gary Stern this past April in which Warren compared different private insurance arrangements for managing banks’ risk-taking before the U.S. Civil War.11
The 2009 fiscal stimulus
Rolnick: A January 2009 article quotes you as saying, “The calculations that I have seen supporting the stimulus package are back-of-the-envelope ones that ignore what we have learned in the last 60 years of macroeconomic research.”12 What calculations had you seen?
Sargent: I said something like that to a reporter. I had just read a document from the Obama administration’s Council of Economic Advisers, e-mailed to me by my friend [Stanford University economist] John Taylor.13 I agreed with John that the CEA calculations were surprisingly naive for 2009. They were not informed by what we learned after 1945.
But I suspect that the council was asked to do something quickly, and they did what they thought was “good enough for government work,” as some of us said during my days at the Pentagon in 1968 and 1969. Back-of-envelope work can be a useful starting point or benchmark. But it does mischief when it is oversold.
In early 2009, President Obama’s economic advisers seem to have understated the substantial professional uncertainty and disagreement about the wisdom of implementing a large fiscal stimulus. I recall President Obama saying at the time that while there was ample disagreement among economists about the appropriate monetary policy and regulatory responses to the financial crisis, the vast majority of informed economists agreed that a big fiscal stimulus was warranted. His advisers surely knew that was not an accurate description of the full range of professional opinion. President Obama should have been told that there are respectable reasons for doubting that fiscal stimulus packages promote prosperity, and that there are serious economic researchers who remain unconvinced.
Rolnick: Do any New Keynesian models provide any support for the CEA numbers?
Sargent: Some do; some don’t. I recommend looking at calculations by John Taylor and his pals.14 Based on that work, John remains very skeptical of the 2009 CEA calculations. But Christiano, Eichenbaum and Rebelo have used variants of a New Keynesian model together with particular assumptions about paths of shocks to create quantitative examples of situations in which fiscal multipliers can be as big as those assumed by the CEA.15
Persistent unemployment in Europe (and now the United States?)
Rolnick: Let me go on to another set of questions that I have struggled to answer. Is U.S. unemployment in this recession special? Is it different from the previous 10 recessions? If so, do you have any explanation for why that might be the case? Why it went so high and why it’s staying there as long as it is, relative to the pattern of other recoveries?
I haven’t heard many economists expound on this, but clearly the labor markets are behaving much differently than they did in previous recoveries, and it’s not obvious to me why. I’m curious what you might say about that.
Sargent: May I talk about this by linking to some of my work with Lars Ljungqvist on European unemployment?
Rolnick: By all means.
Sargent: I have little new to say about the details of the big rise in U.S. unemployment since 2008; the financial crisis was a huge adverse shock to the labor market, so I suspect that we’ll be able to explain the rise. But the main thing that concerns me is the threat of persistent high unemployment, and here the European experience of the last three decades fills me with dread.
Let me begin by explaining what motivated Lars Ljungqvist and me to study European unemployment, why we have been obsessed by it for 15 years. To Lars and me, Europe’s high unemployment rate during the last three decades represents an enormous waste of human resources and individuals’ well-being, what we think is a tragedy in the lives of the people who have not been able to participate in the labor market.
Early explanations in the 1980s for Europe’s high unemployment were that it was due to insufficient demand and wage rigidities. But soon those explanations came to be regarded as unsatisfactory because they couldn’t explain the persistence of unemployment. Some theories blamed Europe’s labor market institutions with their generous government-supplied unemployment insurance and strong government-mandated job protection.
But those theories were decisively criticized by Paul Krugman and others who pointed out that the European institutions that liberally subsidized unemployment and disability retirement were there also in the 1950s and ’60s, periods when Europe had lower unemployment rates than the United States. Therefore, Krugman and others concluded that you can’t blame those generous European social safety nets for the high unemployment rates that Europe has experienced since 1980.
Here’s how Lars and I have attacked the problem. We believe that despite Krugman’s observation, Europe’s generous unemployment compensation system has made an important contribution to sustained high European unemployment, but that those adverse effects came to life only after there occurred what seem to have been permanent changes in the microeconomic environment confronting individual workers. So the culprit was the interactions of those altered microeconomic conditions with those generous European social safety nets.
Rolnick: What changes in microeconomic conditions do you have in mind?
Sargent: Empirical microeconomists have documented that, despite what macroeconomists called the “Great Moderation” in macroeconomic volatility before 2007, individual workers have experienced more turbulent labor market outcomes since the late 1970s and early ’80s. Empirical studies have documented increased volatility of both the transient and permanent components of individuals’ labor earnings. Peter Gottschalk and Robert Moffitt, Costas Meghir and Luigi Pistaferri, and others have documented that.16 David Autor and Larry Katz have assembled a convincing catalogue and critical summary of the evidence.17 So if you look at instances when a job separation causes an individual’s earnings to suffer a big reduction, usually that individual must live with a substantial reduction for a long time.
Lars and I use the shorthand “increased turbulence” to refer to this increased volatility and magnitude of adverse earnings shocks at the time of job loss. In the context of several rational expectations models with human capital dynamics and labor market frictions that impede the ability of displaced workers to find new jobs, we have found that an increase in economic turbulence generates persistently high unemployment when combined with a generous welfare system.
Furthermore, the same government-financed social safety net could actually produce lower unemployment in a low-turbulence environment like the 1950s and 1960s. It could do this through strong government-mandated job protection. But when the microeconomic turbulence increases to the high-turbulence post-1980 environment, that same safety net can unleash persistently higher unemployment. An important element of our analysis is the view that a worker’s human capital tends to grow when he or she is employed, but deteriorates when he or she is not employed. We analyzed these mechanisms in detail in two papers, one in the JPE in 1998, another in Econometrica in 2008.18
That’s our explanation for the higher unemployment rate observed in Europe from 1980 to 2007. Our vision is that an increase in microeconomic turbulence of individual earnings processes occurred in both Europe and the United States. Displaced American workers faced stingier unemployment compensation systems, stingier in both their more limited durations and their lower monthly payments.
Rolnick: When did the microeconomic turbulence begin?
Sargent: The empirical evidence is that it increased substantially sometime in the late 1970s. It happens that it increased just about when high and persistent unemployment broke out in Europe. This is what attracted us to it as a key part of the explanation for the persistent jump in unemployment in Europe relative to the United States.
Rolnick: So turbulence broke out in Europe, OK, but you get the impression that the Great Moderation—a decline in economic volatility—was taking place here in the United States.
Sargent: Well, the so-called Great Moderation really refers to a decrease in macroeconomic volatility. That’s why I stress the difference between individual and aggregate volatility by emphasizing the term microeconomic. The Great Moderation is indeed there in the aggregate data. An econometrician would think about running a simple auto-regressive process for aggregate data and then looking at the error variance. For aggregate data (until 2007), that error variance decreased. But for the micro or individual-level data, just the opposite happened: For individual workers, the error variance—or less technically, unpredictable volatility in earnings—increased.
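Sketched in code with simulated data (not the actual U.S. series), the econometric exercise Sargent describes looks like this:

```python
# Schematic illustration: fit an AR(1) by OLS on two subsamples of each series
# and compare innovation (error) variances before and after a break date. The
# data here are simulated solely to mimic the pattern Sargent describes.

import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, sigmas, n_per_regime=200):
    """AR(1) whose innovation standard deviation switches across regimes."""
    y, prev = [], 0.0
    for sigma in sigmas:
        for _ in range(n_per_regime):
            prev = rho * prev + rng.normal(0.0, sigma)
            y.append(prev)
    return np.array(y)

def ar1_error_variance(y):
    """OLS fit of y_t = rho * y_{t-1} + e_t; return the residual variance."""
    x, z = y[:-1], y[1:]
    rho_hat = (x @ z) / (x @ x)
    return (z - rho_hat * x).var()

# 'Aggregate' series: volatility falls after the break (the Great Moderation).
aggregate = simulate_ar1(rho=0.9, sigmas=[1.0, 0.5])
# 'Individual earnings' series: volatility rises after the break (more turbulence).
individual = simulate_ar1(rho=0.9, sigmas=[1.0, 2.0])

half = len(aggregate) // 2
for label, y in [("aggregate", aggregate), ("individual", individual)]:
    print(f"{label:10s} error variance: pre = {ar1_error_variance(y[:half]):.2f}, "
          f"post = {ar1_error_variance(y[half:]):.2f}")
```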
Rolnick: And why did microeconomic earnings volatility increase?
Sargent: Lars and I believe that when people now become unemployed, they’re taking a more or less permanent hit to their level of human capital, a larger one than they might have received before 1980. We have a theory that people build up human capital while they’re working on a job, but lose human capital when they’re displaced from a job. We think that after 1980, people in Western economies started suffering bigger drops in their human capital at the moment that they suffer a job displacement. Some of the forces leading to this outcome come from various technological changes going under the umbrella name of “globalization.”
Thomas Friedman’s 2005 book The World Is Flat has many stories testifying to such forces.19 By positing increased turbulence in this sense at the microeconomic level, Lars and I have been able both to come to grips with the observations on aggregate unemployment across Europe and the United States and also to explain some of the micro observations collected by Gottschalk and Moffitt and others. So the Great Moderation seems not to have been occurring at the individual level. Just the opposite.
Our theory goes beyond the aggregate unemployment rate and focuses on individuals. Our models have cohorts of aging heterogeneous workers. Our models imply that people in Europe, especially older workers, are suffering from long-term unemployment because of the adverse incentives brought about by a generous social safety net when it interacts with these human capital dynamics. Unfortunately, the data bear this out. In Europe, there has been a long-term unemployment problem especially affecting older workers.
Rolnick: In your model, what type of labor market frictions impede people who want to work from immediately finding a job?
Sargent: The models that we like best for our purposes view unemployment as an “activity” distinct from “work” and “leisure.” We’ve cast the heart of our theory in several contexts, including, for example, search models in the spirit of George Stigler and John McCall,20 where finding a job requires a time-consuming activity of sorting through offers for jobs with various levels of pay and compensating differences; and also Diamond-Mortensen-Pissarides21 matching models where an aggregate matching function imposes a congestion externality on workers’ activity of waiting for a match and firms’ activity of waiting for vacancies to be filled. The same forces come through across a variety of structures, so we think there’s a lot of robustness to our basic story.
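As a concrete illustration of the first class of models, here is a minimal McCall-style search sketch with a hypothetical wage distribution; it is meant only to show how a more generous benefit raises the reservation wage and lengthens search, not to reproduce the Ljungqvist-Sargent models.

```python
# Minimal McCall search model: an unemployed worker draws one wage offer per
# period, accepts (and keeps the job forever) or rejects and collects benefit c.
# Parameters and the uniform wage distribution are hypothetical.

import numpy as np

def reservation_wage(c, beta=0.95, wages=np.linspace(10, 60, 51)):
    """Solve the worker's Bellman equation by iterating on the value of unemployment."""
    probs = np.full(wages.size, 1.0 / wages.size)    # uniform offer distribution
    v_accept = wages / (1 - beta)                    # value of accepting wage w forever
    u = 0.0                                          # value of remaining unemployed
    for _ in range(1000):
        u_new = c + beta * probs @ np.maximum(v_accept, u)   # reject -> benefit + continuation
        if abs(u_new - u) < 1e-8:
            break
        u = u_new
    return wages[np.argmax(v_accept >= u)]           # lowest acceptable wage offer

for c in (5.0, 25.0):                                # stingy vs. generous benefit
    print(f"benefit c = {c:4.1f} -> reservation wage = {reservation_wage(c):.1f}")
# a more generous benefit raises the reservation wage, so search lasts longer
```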
Rolnick: OK, so part of your story for lower U.S. unemployment in the past has to be that in the United States, especially for the older workers, the safety net wasn’t as generous. They had to go back and get retrained or whatever; therefore, they chose to be more active in the labor market than their European cousins did.
Sargent: Yes. In a 2003 paper in a volume to honor Edmund S. Phelps, Lars and I exhibited simulations of our model illustrating this.22 What would a typical, say, 50-year-old worker do if he or she loses his or her job and then immediately gets hit by a human capital loss? What differences in behavior would be exhibited by otherwise similar workers, one facing European benefits versus another facing U.S. benefits?
Our simulations exhibit a force that traps the European worker in unemployment. Unemployment compensation systems typically award you compensation that’s linked to your earnings on your last job; those past earnings reflect your past human capital, not your current opportunities or current human capital. That can make collecting unemployment compensation at rates reflecting your past (and now obsolete) human capital more desirable than accepting a job whose earnings reflect a return on your current depreciated level of human capital. This mechanism sets an incentive trap that induces the European worker to withdraw from active labor market participation.
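In stylized numbers (all hypothetical), the trap looks like this:

```python
# Stylized numerical example of the incentive trap: benefits are indexed to the
# pre-displacement wage (old human capital), while job offers are priced off the
# worker's depreciated human capital. All numbers are invented.

past_wage     = 50_000                               # earnings on the lost job
skill_loss    = 0.40                                 # human capital destroyed at displacement
current_offer = past_wage * (1 - skill_loss)         # 30,000: wage on the next available job

for replacement_rate in (0.30, 0.70):                # stylized "U.S." vs. "European" benefit
    benefit = replacement_rate * past_wage
    choice = "take the job" if current_offer > benefit else "stay on benefits"
    print(f"replacement rate {replacement_rate:.0%}: benefit = {benefit:,.0f}, "
          f"offer = {current_offer:,.0f} -> {choice}")
# 30%: benefit 15,000 < offer 30,000 -> take the job
# 70%: benefit 35,000 > offer 30,000 -> stay on benefits
```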
Rolnick: Earlier, you said that the European experience with persistently high unemployment over the last three decades fills you with dread about the prospects for the United States.
Sargent: The prospect that concerns me might sound like I’m hardhearted, but that’s just the opposite of my feelings. What you’ve seen in the recent recession—and it’s quite natural because it’s been so severe—is a tendency of Congress to expand unemployment benefits, over and over again. What Lars’s and my theory tells us is that if, in the United States, we create a system where unemployment and disability benefits are permanently extended in their generosity and their duration, we will inadvertently put ourselves into the situation that much of Europe has suffered for three decades.
I don’t know enough about politics to predict whether that’s likely to happen. The unfortunate thing is you can see a multiple equilibrium trap here. Low unemployment rates enabled the United States politically to sustain a modest unemployment compensation system. But the politics of the current situation can imply that so long as unemployment is high, we’re going to extend the duration and generosity of benefits. And that extension, done out of the best of motives, is exactly what can lead to the trap of persistently high unemployment. An intriguing thing is that some European countries like Sweden and Denmark are now moving exactly in the opposite direction.
Europe and “unpleasant arithmetic”
Rolnick: Let me ask another question about events in Europe. Some people believe there’s a serious conflict between fiscal and monetary policy, that it’s the result of the Europeans having asked monetary policy to do things it can’t without real fiscal discipline. And as you and Neil pointed out 30 years ago—was it that long ago?!—in “Some Unpleasant Monetarist Arithmetic,” you’d better worry about those links. Is that the way you would interpret what’s going on in Greece, or Europe in general, and concern over Europe’s ability to maintain the euro, that they face some unpleasant arithmetic that could undermine the euro?
Sargent: The people who set up the euro clearly knew about the unpleasant arithmetic, and they strove to set things up to protect the euro from any adverse consequences of that arithmetic. Indeed, the whole system was designed to force governments to balance their budgets in a present value sense, adjusting appropriately for growth. The Maastricht Treaty actually put in fiscal rules that amounted to overkill in the interests of creating a fail-safe system.
What I mean is that it put in place more restrictive rules on fiscal policy than were needed to express the requirement that a government’s budget had to be balanced in the present-value sense with little or no contributions coming from seigniorage revenues from the inflation tax. The treaty built redundancy into the rules by restricting both debt-to-GDP ratios and deficit-to-GDP ratios.
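In the standard notation of that literature (a textbook statement, not the exact equation of the Sargent-Wallace paper), present-value budget balance with negligible seigniorage reads:

```latex
% Real value of outstanding debt must equal the present value of future primary
% surpluses s plus seigniorage \sigma, discounted at the real interest rate r:
\[
  \frac{B_{t-1}}{P_t}
  \;=\;
  \sum_{j=0}^{\infty}
  \frac{s_{t+j} + \sigma_{t+j}}{\prod_{k=1}^{j}\bigl(1+r_{t+k}\bigr)},
  \qquad
  \sigma_{t+j} \;=\; \frac{M_{t+j}-M_{t+j-1}}{P_{t+j}} .
\]
% "Balanced in a present-value sense with little or no seigniorage" means this
% holds with the \sigma terms approximately zero: outstanding debt is backed by
% future primary surpluses rather than by the inflation tax.
```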
Remember that under the gold standard, there was no law that restricted your debt-GDP ratio or deficit-GDP ratio. Feasibility and credit markets did the job. If a country wanted to be on the gold standard, it had to balance its budget in a present-value sense. If you didn’t run a balanced budget in the present-value sense, you were going to have a run on your currency sooner or later, and probably sooner. So, what induced one major Western country after another to run a more-or-less balanced budget in the 19th century and early 20th century before World War I was their decision to adhere to the gold standard.
Rolnick: What does the gold standard have to do with the euro in 2010?
Sargent: The euro is basically an artificial gold standard. The fiscal rules in the Maastricht Treaty were designed to make explicit the present-value budget balance that was unspoken under the gold standard. In terms of the monetarist arithmetic, the rules made sense.
Rolnick: So what’s the problem now?
Sargent: Here is what went haywire. In the 2000s, France and Germany, the two key countries at the center of the Union, violated the fiscal rules year after year. Of course, an intriguing thing about the unpleasant arithmetic is that it’s about present values of government primary deficits, and not just deficits for one, two or three years. And remember that the overkill Maastricht Treaty rules are sufficient but not necessary to sustain present-value budget balance, adjusted for real economic growth, so maybe there was no cause for alarm at that time.
But in hindsight, there was cause for alarm. The reason is that France and Germany lost the moral authority to say that they were leading by example. They lost the moral high ground to hold smaller countries to the fiscal rules intended to protect monetary policy from the need to monetize government debt.
Rolnick: And so …
Sargent: So, a number of countries at the European Union economic periphery—Greece, in particular—violated the rules convincingly enough to unleash the threat of unpleasant arithmetic in those countries. The telltale signs were persistently rising debt-GDP ratios in those countries. Of course, the unpleasant arithmetic allows them to go up for a while, but if that goes on too long, eventually you’re going to get a sovereign debt crisis.
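The debt-ratio arithmetic behind that statement is standard accounting:

```latex
% Law of motion for the debt-to-GDP ratio b_t, with real interest rate r, real
% growth rate g, and primary surplus s_t (as a share of GDP):
\[
  b_t \;=\; \frac{1+r}{1+g}\, b_{t-1} \;-\; s_t .
\]
% With r > g and persistent primary deficits (s_t < 0), b_t rises without bound;
% the ratio can drift up for a while, but eventually the arithmetic forces some
% combination of future surpluses, default or renegotiation, or monetization.
```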
Rolnick: What could the European Central Bank do then?
Sargent: Well, here is one thing that you can imagine the ECB doing (which it hasn’t). It could take the stance, “If the government of Greece wants to try to issue euro-denominated bonds, let them do it, or try to do it. And if investors want to hold euro-denominated bonds that are understood to be liabilities of the Greek government, and not of the ECB, let them do it. It’s not any of the ECB’s business. If those bonds threaten to go bad, if Greece just isn’t a good risk, that’s the bondholders’ problem. Let the investors bear that risk. And if Greece defaults or renegotiates, that’s the investors’ problem, not the ECB’s problem.”
Rolnick: Of course, the ECB hasn’t said that, or at least not yet!
Sargent: Well, one reason the ECB hasn’t said that yet is that after the financial crisis of 2008, what seemed to some European banks to be a promising source of higher-yielding instruments was sovereign debt in the form of euro-denominated bonds issued by countries like Greece. The banks located in the center of the euro area, in France and Germany, hold Greek government debt, so a threat of default on that debt threatens the portfolios of those banks in other European countries. And because the ECB is the lender of last resort, that makes it the ECB’s business.
Rolnick: Tom, this reminds me of an example of a breakdown in one of the lines between monetary and fiscal policy that you wrote about in your paper “Where to Draw Lines” that you presented at the Stern conference we held in April.23
Sargent: Yes, this is a big breakdown in a line between fiscal and monetary policy intended to be set by the Maastricht Treaty in order to enforce that artificial gold standard, as I view the euro to be. Once the monetary authority starts assisting the fiscal authorities of these countries, you’ve drifted from the original conception of the euro.
Rolnick: Would you argue that Jack and Neil’s analysis comes back into play here, the one about too-big-to-fail and moral hazard?
Sargent: Unfortunately, yes, that’s what I was trying to suggest.
Rolnick: Did things have to get to this point?
Sargent: Ultimately, that’s a question about politics, about which I know too little. But in purely economic terms, things could have gone differently. Here’s a “virtual history” of what could have happened:
France and Germany stay “holier than thou” from beginning to end, and always respect the fiscal limits imposed by the Maastricht Treaty. They thereby acquire the moral authority to lead by example, and the central core of euro-area countries are running budgets that without doubt are balanced in a present-value sense. Therefore, the euro is strong. The banks of the core countries (France and Germany again) are well regulated (the message of Kareken and Wallace has been heard), so the banks in France and Germany are not holding any dodgy bonds issued by governments of dubious peripheral countries that have adopted the euro but that flirt with violating the Maastricht Treaty rules.
In this virtual history, the ECB could play tough and let the Greek government default on its creditors by renegotiating terms of the debt. For the euro, letting the Greek bondholders suffer would actually be therapeutic; it would strengthen the euro by teaching peripheral countries that the ECB means business.
Rolnick: Right. Although if that scenario had been foreseen, Greece might not have been able to issue that debt in the first place.
Sargent: Aha! The plot thickens. So then we confront again the issue of how separate can monetary and fiscal policy be? In the spirit of your observation, remember that there were huge capital gains on Italian debt after it became clear that it would be allowed to join the euro area. So, what really was the reason for those capital gains? Were they based on expectations of a reformed and more disciplined fiscal policy in Italy? Or was it rather an expectation that by joining the euro, Italy had gained access to bailouts from other euro-zone countries?
Note that a related point pertains to the 2009 stress tests in the United States. What did it truly mean when a bank passed the stress test? Did it mean that the bank’s balance sheet was solid? Or did it mean that since the Fed said that bank had passed the stress test, the Fed would make sure that henceforth that bank would have access to lender-of-last-resort facilities?
It’s difficult to sort these things out. But notice that throughout our discussion, Art, we’ve been using the vocabulary of rational expectations. In our dynamic and uncertain world, our beliefs about what other people and institutions will do play big roles in shaping our behavior.
Rolnick: Indeed. Thank you again, Tom.
—Art Rolnick
June 15, 2010
More About Thomas J. Sargent
Current Positions
- William R. Berkley Professor of Economics and Business, New York University, since 2002
- Senior Fellow, Hoover Institution, Stanford University, since 1987
- Research Associate, National Bureau of Economic Research, 1970–73 and since 1979
Previous Positions
- Donald Lucas Professor of Economics, Stanford University, 1998–2002
- David Rockefeller Professor of Economics, University of Chicago, 1991–98
- Visiting Scholar, Hoover Institution, Stanford University, 1985–87
- Visiting Professor of Economics, Harvard University, 1981–82
- Ford Foundation Visiting Research Professor of Economics, University of Chicago, 1976–77
- Adviser, Federal Reserve Bank of Minneapolis, 1971–87
- Associate Professor of Economics, University of Minnesota, 1971–87; Professor from 1975
- Associate Professor of Economics, University of Pennsylvania, 1970–71
- First Lieutenant and Captain, U.S. Army; served as Staff Member and Acting Director, Economics Division, Office of the Assistant Secretary of Defense, 1968–69
Professional Affiliations
- President, American Economic Association, 2007; President-elect, 2006; Vice President, 2000–01; Executive Committee, 1986–88
- President, Econometric Society, 2005; First Vice President, 2004; Second Vice President, 2003; Council, 1995–99, 1987–92; Fellow, 1976
- President, Society for Economic Dynamics and Control, 1989–92
- Member, Brookings Panel on Economic Activity, 1973
Honors and Awards
- Moore Distinguished Scholar, California Institute of Technology, 2000–01
- Marshall Lecturer, Cambridge, England, 1996
- Erwin Plein Nemmers Prize in Economics, Northwestern University, 1996–97
- Fellow, American Academy of Arts and Sciences, 1983
- Fellow, National Academy of Sciences, 1983
- Mary Elizabeth Morgan Prize for Excellence in Economics, University of Chicago, 1979
- Most Distinguished Scholar, University of California, Berkeley, Class of 1964
- Phi Beta Kappa, 1963
Publications
- Author of a dozen books, including Robustness (with Lars Peter Hansen), 2008; Recursive Macroeconomic Theory (with Lars Ljungqvist), 2d ed., 2004; The Big Problem of Small Change (with François Velde), 2002; The Conquest of American Inflation, 1999; and Bounded Rationality in Macroeconomics, 1993. Author of more than 170 research papers focusing on macroeconomic theory, time-series econometrics, learning theory, fiscal and monetary policy, and economic history.
Education
- Harvard University, Ph.D., 1968
- University of California, Berkeley, B.A., 1964
Endnotes
1 Hansen, Lars Peter, and Kenneth J. Singleton. 1983. “Stochastic Consumption, Risk Aversion, and the Temporal Behavior of Asset Returns.” Journal of Political Economy 91(2), pp. 249–65.
2 Mehra, Rajnish, and Edward C. Prescott. 1985. “The Equity Premium: A Puzzle.” Journal of Monetary Economics 15(2), pp. 145–61.
Weil, Philippe. 1989. “The Equity Premium Puzzle and the Risk-Free Rate Puzzle.” Journal of Monetary Economics 24(3), pp. 401–21.
Backus, David K., and Gregor W. Smith. 1993. “Consumption and Real Exchange Rates in Dynamic Economies with Non-Traded Goods.” Journal of International Economics 35(3/4), pp. 297–316.
3 Stiglitz, Joseph E. 2009. “Obama’s Ersatz Capitalism.” New York Times, April 1.
Sachs, Jeffrey. 2009. “Obama’s Bank Plan Could Rob the Taxpayer.” Financial Times, March 25.
4 Harrison, J. Michael, and David M. Kreps. 1978. “Speculative Investor Behavior in a Stock Market with Heterogeneous Expectations.” Quarterly Journal of Economics 92(2), pp. 323–36.
5 Scheinkman, José A., and Wei Xiong. 2003. “Overconfidence and Speculative Bubbles.” Journal of Political Economy 111(6), pp. 1183–1219.
6 “New Keynesian economics” refers to the school of thought that has refined Keynes’ original theories in response to the Lucas critique and rational expectations. Advocates of New Keynesian theory rely on the idea that prices and wages change slowly (referred to as “sticky” prices and wages) to explain why unemployment persists and why monetary policy can influence economic activity. But their models are adapted from the dynamic general equilibrium models developed by new classical economists such as Sargent and Lucas. See http://www.econlib.org/library/Enc/NewKeynesianEconomics.html for elaboration.
7 Allen, Franklin, and Douglas Gale. 2007. Understanding Financial Crises. Oxford and New York: Oxford University Press.
8 Stern, Gary H., and Ron J. Feldman. 2004. Too Big to Fail: The Hazards of Bank Bailouts. Washington, D.C.: Brookings Institution Press.
9 Kareken, John H., and Neil Wallace. 1978. “Deposit Insurance and Bank Regulation: A Partial-Equilibrium Exposition.” Journal of Business 51(July), pp. 413–38. (Also at http://minneapolisfed.org/research/sr/SR16.pdf.)
Bryant, John. 1980. “A Model of Reserves, Bank Runs, and Deposit Insurance.” Journal of Banking & Finance 4(4), pp. 335–44.
Diamond, Douglas W., and Philip H. Dybvig. 1983. “Bank Runs, Deposit Insurance, and Liquidity.” Journal of Political Economy 91(3), pp. 401–19.
10 Kareken, John H. 1983. “Deposit Insurance Reform or Deregulation Is the Cart, Not the Horse.” Federal Reserve Bank of Minneapolis Quarterly Review 7(Spring), pp. 1–9.
11 Weber, Warren E. 2010. “Bank Liability Insurance Schemes Before 1865.” Research Department Working Paper 679, Federal Reserve Bank of Minneapolis. (See also “An Antebellum Lesson” in the September 2010 issue of The Region.)
12 Chicago Tribune. 2009. “The Stimulus Rush.” Jan. 13. (Also in Brannon, Ike, and Chris Edwards. 2009. “The Troubling Return of Keynesianism.” Tax & Budget Bulletin 52, Cato Institute.)
13 Romer, Christina, and Jared Bernstein. 2009. “The Job Impact of the American Recovery and Reinvestment Plan.”
14 Cogan, John F., Tobias Cwik, John B. Taylor and Volker Wieland. 2009. “New Keynesian versus Old Keynesian Government Spending Multipliers.” Working Paper 47, Stanford University; for a newer version of this paper (Jan. 8, 2010), see http://www.stanford.edu/~johntayl/CCTW_100108.pdf.
15 Christiano, Lawrence, Martin Eichenbaum and Sergio Rebelo. 2009. “When Is the Government Spending Multiplier Large?” Working Paper, Northwestern University.
16 Gottschalk, Peter, and Robert Moffitt. 1994. “The Growth of Earnings Instability in the U.S. Labor Market.” Brookings Papers on Economic Activity 2, pp. 217–72.
Meghir, Costas, and Luigi Pistaferri. 2004. “Income Variance Dynamics and Heterogeneity.” Econometrica 72(1), pp. 1–32.
17 Katz, Lawrence F., and David H. Autor. 1999. “Changes in the Wage Structure and Earnings Inequality,” in Handbook of Labor Economics, vol. 3A. Oxford and New York: Elsevier Science, North-Holland, pp. 1463–1555.
18 Ljungqvist, Lars, and Thomas J. Sargent. 1998. “The European Unemployment Dilemma.” Journal of Political Economy 106(3), pp. 514–50.
Ljungqvist, Lars, and Thomas J. Sargent. 2008. “Two Questions about European Unemployment.” Econometrica 76(1), pp. 1–29.
19 Friedman, Thomas L. 2005. The World Is Flat: A Brief History of the Twenty-First Century. New York: Farrar, Straus and Giroux.
20 Stigler, George J. 1962. “Information in the Labor Market.” Journal of Political Economy 70(2), pp. 94–105.
McCall, John J. 1970. “Economics of Information and Job Search.” Quarterly Journal of Economics 84(1), pp. 113–26.
21 Diamond, Peter A. 1982. “Wage Determination and Efficiency in Search Equilibrium.” Review of Economic Studies 49(2), pp. 217–27.
Mortensen, Dale T. 1982. “Property Rights and Efficiency in Mating, Racing, and Related Games.” American Economic Review 72(5), pp. 968–79.
Pissarides, Christopher A. 1992. “Loss of Skill During Unemployment and the Persistence of Employment Shocks.” Quarterly Journal of Economics 107(4), pp. 1371–91.
Mortensen, Dale T., and Christopher A. Pissarides. 1999. “Unemployment Responses to ‘Skill-Biased’ Technology Shocks: The Role of Labor Market Policy.” Economic Journal 109(455), pp. 242–65.
22 Ljungqvist, Lars, and Thomas J. Sargent. 2003. “European Unemployment: From a Worker’s Perspective,” in Knowledge, Information, and Expectations in Modern Macroeconomics: In Honor of Edmund S. Phelps. Philippe Aghion, Roman Frydman, Joseph Stiglitz and Michael Woodford, eds. Princeton, N.J.: Princeton University Press, pp. 326–50.
23 Sargent, Thomas J. 2010. “Where to Draw Lines: Stability versus Efficiency.” (See also http://www.minneapolisfed.org/research/events/2010_04-23/index.cfm.)