Reflecting on what motivates his research, Berkeley economist Jón Steinsson invokes a cautionary parable.
“There’s this old saying that you are ‘looking for your keys under the lamppost,’” Steinsson said. “And there is a significant amount of that within the economics profession.” Also called the streetlight effect, the fable traces back to 13th-century Turkey. It warns of the human tendency to search for answers where it is easy—not necessarily where we will find the truth.
Steinsson connects it to the concept of “theory-induced blindness,” popularized by psychologist Daniel Kahneman in his book Thinking, Fast and Slow. “In order to write down a simple model, economists often make a simplifying assumption that assumes away some part of reality because you want to make some different point,” Steinsson said. “But then everybody starts using the model, they forget about that assumption, and they may think that the world actually works that way.”
Steinsson’s research applies new sources of data and creative analysis to unpack notions that economic models can take for granted, with far-reaching consequences for economic policies: Just how sticky are prices? What do people actually hate (and what might they fail to appreciate) about inflation? How does monetary policy transmit its effect through the economy? How well do financial markets really anticipate the actions of the Fed?
“I try to work in areas where I feel like there’s a significant gap between what’s happening within the economics profession and what’s happening in the world,” Steinsson said. He is a frequent co-author with his wife, fellow Berkeley economist Emi Nakamura. He is also a former visiting scholar with the Opportunity & Inclusive Growth Institute at the Minneapolis Fed and an associate editor at Econometrica and the Quarterly Journal of Economics.
After writing scores of op-eds for his native Iceland, Steinsson has pivoted to engage head-on with public policy questions in the U.S., including through his steady presence on Twitter. Steinsson is a co-director of the Monetary Economics Program at the National Bureau of Economic Research, and he has become a prominent commentator on the powerful monetary policy tool of central bank communication.
We spoke on November 3, 2022.
The unfinished evolution of forward guidance
You and I are talking the day after an FOMC (Federal Open Market Committee) meeting in which financial markets clearly read one thing into the written statement, and then half an hour later they tanked when they heard something very different in the press conference. There’s so much effort involved in trying to get forward guidance just right. Do you feel like we’re getting any better at it?
When I was in college in the late 1990s, it was a huge step for the Fed even to make any statement. They were extremely worried that if they said something about the future—and then circumstances changed—they wouldn’t want to do what they said before. There was very little communication, and I think that was bad for policy.
The thinking has shifted enormously over the last 20 to 30 years. We’re obviously in a much better place now, where it is taken for granted that the Fed should be guiding the market’s expectations about the path of policy over some period of time. This is important because even though we sometimes talk about the federal funds rate—the short-term interest rate that the Fed directly acts to control—clearly the interest rates that matter in the economy are longer-term interest rates: mortgage rates, the rates that firms borrow at over longer horizons. Longer-term interest rates are determined by the path of policy over a longer horizon.
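The link Steinsson describes between the policy path and longer-term rates can be sketched with the simple expectations hypothesis, under which an n-period rate is roughly the average of the expected short rates over those n periods (ignoring term premia). A minimal illustration with hypothetical numbers:

```python
# Expectations-hypothesis sketch (ignoring term premia): an n-period rate is
# approximately the average of expected short-term policy rates over those
# n periods. All numbers below are hypothetical.

def long_rate(expected_short_rates):
    """Approximate a long rate as the average of the expected short-rate path."""
    return sum(expected_short_rates) / len(expected_short_rates)

# A hypothetical expected federal funds rate path (percent per year):
path = [4.0, 4.5, 5.0, 5.0, 4.5, 4.0, 3.5, 3.0]
print(long_rate(path))       # the 8-period rate reflects the whole path
print(long_rate(path[:2]))   # a shorter rate reflects only the near-term path
```

The point of the toy: changing today’s short rate alone barely moves the long rate; shifting market beliefs about the whole path does.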
In some sense, the entire discussion should be about the path. That’s why I think the Federal Reserve has only incompletely shifted toward a point where it goes without saying that all discussions should be about the path of policy. Now they’re in this state where every other meeting they put out a Summary of Economic Projections (SEP) with the “dot plot,” and those are very clear about the path. But in the other meetings there’s not a dot plot, there’s no SEP. So then, as you were discussing in your question, they have to put a lot of effort into these much-less-perfect ways of conveying what they mean about the path.
I think the reluctance to release an SEP with each meeting comes from the concern that markets will construe those statements as commitments and therefore the Fed will be boxed into that later. We need to treat the market as adults and get everybody used to the fact that forward guidance is something the Fed does every time. It is not a commitment; it’s what they think is the appropriate policy going forward, and that’s going to change based on incoming data every meeting.
We have also seen a real—if gradual—revolution in Federal Reserve Governors and Bank presidents speaking and being available to the media between FOMC meetings. In a way, it allows people to try to guess at what those little dots are whenever they pick up the newspaper or search for what the FOMC participants had to say in the latest interview. What do you make of that change?
There are many people who disagree with me on this, but I think that hearing from the different members of the FOMC is positive. It’s good for the public to see that there’s a real give-and-take, that people with different views are on the Committee. The fact that the FOMC members are going out there and giving speeches that reflect the different perspectives helps the general public understand that those perspectives are being heard and thought about within the deliberation process.
Now, there is a view that these speeches will move the market. Suppose a hawkish person on the Committee goes out and gives a speech and the market moves in a hawkish direction. If the Fed is reluctant to surprise the market, then that has now led the whole Committee to be more likely to act in a hawkish way, and that is not good. There has to be an understanding that the consensus view might be different from these speeches, and the FOMC has to become comfortable with the notion that it surprises the market if the market is reading too much into these speeches.
To that point, you recently concluded a long Twitter thread by saying that it is “inevitable” that the Fed will “continue to need to surprise the market on a regular basis.” Given the conventional wisdom that we generally want to avoid surprises, I wanted to drill a little deeper into what you’re thinking of there.
The meetings only happen every six weeks or so. And in between those meetings there are all kinds of announcements and new data that are coming in. And for each of these, the market has to think about how it thinks the Fed is going to react to that new piece of data.
“It’s inevitable that the Fed is going to surprise the market, and it needs to be willing to surprise the market. Otherwise, it's putting the market in the driver’s seat of monetary policy.”
If we lived in a world where the Fed’s “reaction function”—the way it reacts to new information—was completely known to everybody, then there would never arise a discrepancy between what the market thinks the Fed thinks, and what the Fed actually thinks. But we don’t live in that world. I’d be surprised if the members of the FOMC could even, in their mind, fully articulate their reaction function. It’s just too complicated to do.
Differences between the market and the Fed are going to compound over those six weeks. At the next meeting, the Fed is going to want to tell the market again: “This is actually what we think, this is how we view the incoming data over the last six weeks, and this is what we think we’re going to do going forward.”
It’s inevitable that the Fed is going to surprise the market, and it needs to be willing to surprise the market. Otherwise, it’s putting the market in the driver’s seat of monetary policy—which doesn’t actually even conceptually make any sense because it’s circular, but certainly is not good policy. In that sense, I think it’s important that the Fed not be afraid of surprising the market.
The powerful Fed “information effect”
In your research, you worked to isolate the difference between the policy shocks from Fed actions and the information effects of those actions. What is the difference and why does that matter?
After a meeting when the Fed puts out a press release and the chair gives a press conference, let’s think about the information that is revealed by those statements and words.
One thing being conveyed is the Fed’s view about policy going forward. But another thing being revealed is how they interpret various other things that have been happening. So the Fed statement might say, “XYZ developments have occurred, and we think that means that the economy is doing very well and the labor market is at risk of overheating.” The Fed might think this more than the markets do, so that’s basically being revealed as news: “Oh, the Fed thinks the economy is very strong.”
“That’s the information effect: What the Fed conveys about how the economy is doing affects the private sector’s beliefs, independent of the news about monetary policy.”
Now, one thing the markets can do is say, “Well, we have our own research departments—and even if the Fed thinks that, we think something else so this doesn’t change our views very much.” If everybody had infinite resources to make up their minds, then perhaps whether the Fed thinks this or that wouldn’t change their mind.
But another view is that the Fed has a lot of researchers thinking about how the economy is doing: “If the Fed has put all those resources to work and come to the conclusion that the economy is at risk of overheating, maybe we should change our minds.” That’s the information effect: What the Fed conveys about how the economy is doing affects the private sector’s beliefs, independent of the news about monetary policy.
When you tried to measure the balance of policy and information effects, restricting your view to the half-hour around an FOMC policy announcement, I recall you found that the information effect is roughly two-thirds of the total.
In our paper, we found that information effects were very large, and we’re not the first to do that. There are other researchers within the Fed system, in particular a team of researchers at the Chicago Fed, who have found similar results.
Those results are controversial. There are many researchers who are skeptical. Even when you go to different branches of the Federal Reserve, the researchers at the New York Fed have a different view, I have found, than the researchers at the Board of Governors. I think the dust hasn’t settled.
It’s a heavy responsibility to think about, that the Fed is not just announcing, “Here’s the target for the federal funds rate,” but issuing a report on the state of the economy that people take extremely seriously.
One important thing about that is that it can lead to a situation where the Fed might be reluctant to ease policy. Suppose you’re in a recession and the Fed surprises the market by easing policy. But then the Fed is worried that by doing that, it’s going to make the private sector more pessimistic about the economy, and that added pessimism is going to make the economy do worse.
You might get a situation where easing policy has the opposite effect from the one you want it to have. That’s one of the things that you need to start thinking about once these information effects are at play.
I think ultimately the Fed has to realize that if it starts systematically not revealing what it thinks, then the market will realize that it’s doing that and try to backward-induct what the Fed really thinks, depending on what it actually says. So, I think efforts to strategically withhold information, in the end, are going to be futile. But there is this temptation, especially in a deep recession, to not be fully forthcoming about the seriousness of the situation.
Inflation: Weighing our anchors
The market seems to be conditioned by a certain amount of hope or wishful thinking—or so it seems recently in the six-week gaps between FOMC meetings. It feels like investors, and certainly households, are looking for reasons for optimism in the data that rolls in. You see the wind blow that way and then it’s reset with the Fed meeting.
My own research on this particular issue, I think, points maybe a little bit differently. There might be a bias in terms of people being too optimistic. But there’s another interpretation of this pattern that you’re referring to, which is that over the last 30 years we’ve seen consistently low inflation and, over the last decade or so, very low interest rates. And that conditions market participants to think, “Well, the future’s going to look like the past, we think the world is going to go back to normal,” which is low inflation and low interest rates. They’re expecting kind of “a return to normalcy,” and they’re surprised by how maybe what they think is normal is not normal.
If you go further back to the ’70s and ’80s, what was considered normal was very different. If you look at the ’80s, the market was always thinking that interest rates will come back up to what, at that time, seemed normal: high interest rates and high inflation. But that didn’t happen, and the market was surprised in the other direction over and over again for about 15 years.
This has to do with the notion that recent experience anchors what the market thinks is normal. We have a paper where we show that even rational agents can behave like this over fairly substantial horizons, in an environment where the world is very complicated and it’s hard to learn.
This relates to the question of what actually anchors our inflation expectations. There is, of course, a lot of obsession right now over them becoming unanchored.
This is certainly a topic that I think is under-researched and poorly understood. In certain settings, inflationary expectations are anchored by recent experience like what I was just discussing, or by some belief in the credibility of an institution. The notion that the Fed has an inflation target and is committed to go back to the inflation target might anchor beliefs.
“There’s a strong belief that the Fed is committed to bringing inflation back down to 2 percent, and that does anchor long-term inflation expectations. … But we can’t take that for granted. If economic policy misbehaves sufficiently, that will erode and it might flip.”
In many periods that kind of an anchoring of beliefs can be extremely stable. But then in other situations we can see a very rapid flip in that. This happens in situations where you have, for example, a fixed exchange rate which has high credibility for a long period of time, and then very rapidly it unravels and there’s a speculative attack and no amount of intervening by the central bank can avoid a currency devaluation.
Sometimes the bond market is convinced that government debt is safe for a very long time, and then very rapidly that confidence flips and the yields on bonds skyrocket extremely rapidly. We saw a little bit of that in Britain recently. It has happened in more spectacular fashion in other countries.
I think inflationary expectations are similar. At the moment, I think there’s a strong belief that the Fed is committed to bringing inflation back down to 2 percent, and that does anchor long-term inflation expectations. If you look at long-term inflation expectations, both by professional forecasters and from the TIPS (Treasury inflation-protected securities) market, they still seem to not be moving very much.
But we can’t take that for granted. If economic policy misbehaves sufficiently, that will erode and it might flip, and we might have things happen in the United States that we think can only happen in other countries. That’s one of the reasons I think that the Fed is correct in saying that it wants to err on the side of raising rates too much rather than raising too little at the moment to bring inflation back down.
Is unanchoring something where we’ll just know it when we see it? It sounds like you imagine something of a tipping point—it’s not something that just sneaks up on us slowly.
I think we will know it when we see it. The signs will be the kinds of things that happened in Britain this fall: bond yields at the long end rising rapidly, the currency depreciating. Luckily, that’s not what we’re seeing in the United States at the moment.
When you look at countries that have weaker credibility and a history of crises, loss of confidence happens all of a sudden. That’s the scary thing about crises, and that’s also why the people that are on the other side of this, the doves, certainly have a point where they say, “You can scaremonger and scaremonger, but you have no idea how close we are to this. You’re overblowing this risk.” There’s very little one can say about that because that is true: It might take an enormous amount of time—or misbehavior by the Fed or the Treasury—to reach that point. It’s hard to know.
Setting prices: Micro movements, macro consequences
I want to talk with you about your research on pricing, which has been a huge part of your career. A lot of your work with Emi Nakamura has given us a better understanding of how prices move by tapping into much more granular data than people had in the past. Why has it been so valuable for bigger macroeconomic questions to understand how prices move at this extremely “micro” level?
The workhorse framework for thinking about the effects of monetary policy is the Keynesian model. The biggest distinction between the Keynesian model and a neoclassical model is the notion that prices are sticky. Prices don’t react to changes in costs perfectly. They react sluggishly to shocks. There’s a huge literature about which type of model is more realistic and so on. One of the ways in which you can try to get at that question is to think about the core differences in assumptions—which has to do with price rigidity, and whether that phenomenon really exists in the world.
And so, that’s what our research was about. Earlier research had come to the conclusion that prices were somewhat sticky, but not very sticky, and that was influencing the debate in the direction of saying the Keynesian model was less realistic and maybe not the right way to think about monetary policy.
Our particular work came to the conclusion that prices were stickier than found in the papers that preceded our paper—and in that sense may have shored up confidence in the Keynesian model. But that one piece of evidence is only one little piece and certainly doesn’t resolve the issue.
In particular, I have another set of work which has to do with how steep the Phillips curve is: When unemployment falls, how much does inflation react to changes in unemployment? In a Keynesian model, the Phillips curve is relatively flat. In a neoclassical model it’s very steep. In our work—and various other authors have come to similar conclusions—you tend to estimate a very flat Phillips curve. So, the world looks very Keynesian if you do it that way.
Actually, it looks so Keynesian that you would need an enormous amount of price stickiness to get the Phillips curve to be that flat. You need something more than just price rigidity to make a model with a Phillips curve that is as flat as you see in the real world. The workhorse idea here is coordination problems between price setters: Even if you’re going to change your price, you don’t change it as much as you would otherwise, because you’re worried about your competitors. If they’re not changing their price, you might not change your price very much because you don’t want your price to move very far away from your competitors’ prices. You can get amplification of these price rigidities through those kinds of coordination problems.
As this current inflation settled in, I kept waiting for my barber to raise his prices. In my view, he held off much longer than I would’ve expected. Finally, they taped up new prices on their board, but almost in a way that they would be able to remove them if they needed to. I was thinking about that as I was looking at your work on prices: We’re living in the middle of an experiment, an opportunity to learn.
That’s right. One of the things that hampered our work on price rigidity originally was the fact that the sample period we had access to when we originally studied price rigidity was one where there was little variation in inflation. Then we did some extra work and we found these very old data about price rigidity, and we extended the sample back to the 1970s. We were able to study a period with higher inflation.
Now we have another period of high inflation, and it is very surprising how much prices remain sticky even in the face of pretty substantial amounts of inflation. I can understand people who are skeptical about price rigidities from an a priori point of view, and that’s why it’s important to do empirical work and see if this phenomenon really exists in the real world.
Your work illuminated the fascinating way in which businesses schedule or coordinate sales and how we should not be taken in by the rapid price movement that we appear to see, because stuff is going on and off sale all the time.
That played a very important role in our original work on price rigidity. Maybe you’ll see the price is $5.99, and then it drops to $3.99, and then it goes back to $5.99. And maybe that happens all the time, so you see one hundred price changes.
“How costly is inflation? We know that people in the world, they hate inflation with a passion. Economists have had a hard time understanding precisely why people hate inflation as much as they do.”
But really what was happening was that you were going back and forth between $3.99 and $5.99, and in that sense the regular price had remained unchanged for a long period of time. That’s exactly what we see in the real-world data. It’s not that temporary sales impart no flexibility to prices, but it’s certainly not what you would see in a model where prices are perfectly flexible like a neoclassical model.
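The sale-filtering logic described here can be sketched in a few lines. This is an illustrative toy, not the authors’ actual algorithm: it treats a price cut as a temporary sale whenever the pre-sale price reappears later in the series.

```python
# Toy "regular price" filter, in the spirit of the approach described above
# (not the authors' actual algorithm). A price cut counts as a temporary sale
# here if the pre-sale price reappears later in the series.

def regular_prices(prices):
    """Carry the pre-sale price through V-shaped temporary discounts."""
    regular = [prices[0]]
    for i in range(1, len(prices)):
        p, prev = prices[i], regular[-1]
        if p < prev and prev in prices[i + 1:]:
            regular.append(prev)  # temporary sale: regular price unchanged
        else:
            regular.append(p)     # genuine price change
    return regular

# Five observed price changes, but only one change in the regular price:
observed = [5.99, 5.99, 3.99, 5.99, 3.99, 3.99, 5.99, 6.49]
print(regular_prices(observed))
```

Run on the $5.99/$3.99 example, the filter keeps the regular price at $5.99 through every sale and registers only the final move to $6.49, so measured stickiness depends heavily on whether sales are filtered out.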
Pricing and the pain of inflation
Your work also has insights about price volatility and price dispersion, and we can talk about both of those in this inflationary moment.
One very important question having to do with inflation is, how costly is inflation? We know that people in the world, they hate inflation with a passion. Economists have had a hard time understanding precisely why people hate inflation as much as they do.
One of the ways in which, in our models, inflation is costly, is that it messes up the way in which the price mechanism works in the economy. You have goods that should have the same price because they have the same cost of production, but just from the vagaries of which price has recently been changed, one is much higher than the other, and therefore there’s much more demand for the cheaper good and much less demand for the more expensive good. If they have the same cost of production, this is a very inefficient situation.
This price dispersion is one potential reason why inflation would be costly—and actually the one that is easiest to model and therefore is incorporated into economists’ analyses most commonly. In one of our papers, we set out to study whether this cost of inflation really is substantial in the real world. And we came to the conclusion that it’s not. We tried to estimate the volatility of prices and how that volatility changed when inflation changed. And if this cost was large, you should see the dispersion of prices of similar goods rising as inflation rises. And we didn’t find that.
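One minimal way to quantify the price dispersion discussed here is the coefficient of variation of prices across similar goods; if this cost of inflation were large, that measure should rise with inflation. A toy sketch with hypothetical prices (the paper itself uses far richer data and methods):

```python
# One simple measure of relative-price dispersion for goods with the same
# cost of production: the coefficient of variation of their posted prices.
# Illustrative only; the prices below are hypothetical.
from statistics import mean, stdev

def price_dispersion(prices):
    """Coefficient of variation: standard deviation relative to the mean."""
    return stdev(prices) / mean(prices)

same_cost_goods = [2.49, 2.59, 2.39, 2.49]  # hypothetical posted prices
print(price_dispersion(same_cost_goods))
```

The empirical test is then whether this kind of dispersion statistic systematically increases in high-inflation periods; the finding described above is that it does not.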
That’s not to say that inflation is not costly. It’s just that the cost of inflation must be something else. When economists survey people about why they hate inflation, the most common thing that people will say is that inflation makes them poorer: Their wage hasn’t changed, and then prices are going up and this is making them poor. This is a difficult thing to make inference about, because in the very short run, this is of course true: Your real wage, the amount you can buy with your wages, has gone down and you’re poorer.
But then the question in the longer term is whether that is a short-term asynchrony: Your wage will catch up with the prices, it just happens with a little bit of a lag. So, it’s possible that people’s perceptions of how much poorer inflation is making them are overblown relative to the actual reality.
“The most common thing that people will say is that inflation makes them poorer: Their wage hasn’t changed, and then prices are going up. … It’s possible that people’s perceptions of how much poorer inflation is making them are overblown relative to the actual reality.”
We should be able to look at the data at this point and assess whether that seems true. Do wages ultimately catch up over time, just with a lag?
Well, it’s very hard to figure that out. In the current episode, real wages have fallen. You could infer from this that the current inflationary episode has been making people poorer. But it’s not so easy to come to that conclusion, because you would need a counterfactual: What would wages have done without the inflation that we saw?
In the United States, maybe it’s hard to point to any reason why real wages would have fallen in this period. And so maybe you should come to the conclusion that it is the fault of inflation. But look at Europe at the moment. There’s a war in Ukraine that’s having a very large effect on energy prices in Europe. So that is a very negative shock. It’s just the case that things are more expensive. You are not going to be able to buy as much stuff because of this war. And so that’s a case where part of it is due to inflation, and part of it is actually due to the thing that is causing the inflation, which is the war in Ukraine. So that’s the thing that is difficult in terms of making inference about this, and why the economics profession hasn’t been able to fully work this out.
It’s also not even clear that this particular channel is necessarily the biggest cost of inflation. The fact that inflation is affecting the real value of all kinds of assets, of course, is a cost. People entered into contracts with certain expectations, and then inflation messes with those expectations in important ways. And that is a source of inefficiency. The tax system is also not indexed; that’s a source of inefficiency.
But many of us have fixed-rate 30-year mortgages. These are not indexed to inflation, so anybody that has a fixed-rate 30-year mortgage is benefiting from the inflation. When real wages fall, it benefits certain people by making it easier to get a job, even as it hurts other people who have the good jobs with the high wages at the moment. People who are benefiting from inflation are maybe not attributing that to inflation as much as the people who are being hurt. In many situations, people are more aware of the bad things that happen to them than the good things.
I do agree that inflation is costly, and I firmly agree that the Fed should try to bring inflation back down to its target. But I sometimes think that inflation is blamed for more than is actually due to inflation.
An encouraging turn for macroeconomics
I heard you compare trying to understand macroeconomic events to efforts to understand volcanos. You’re originally from Iceland, so you speak from experience on both topics! I believe you contrasted that with hurricanes—and I’ll hand it back to you to take your metaphor from here.
We economists often get criticized for our inability to predict how things are going to turn out. I think people don’t fully appreciate the fact that we’re trying to predict something that is not only pretty complicated, but we’ve only seen very few instances of this thing we’re trying to predict.
In the United States in the post-war period, we’ve seen maybe a dozen recessions. We’ve seen inflation really rise three or four times. This is a little bit like a weather forecaster who’s trying to predict the weather but has only ever seen 12 storms.
“The volcanologists are on TV every day being asked to predict when the next eruption is going to happen. … I felt like it was very similar to us economists who are being asked the same questions about events that only happen every decade or so.”
Now I’m overdoing it a little bit of course; there are many countries and we can learn from other countries. But countries are correlated. And the total number of events in terms of recessions and increases in inflation that we’ve seen—maybe on the order of a few hundred. It’s a fairly modest amount of data that we’re using to make inference about something extremely complicated. And recessions are heterogeneous: The COVID recession is very different from the Great Recession, which is very different from the Volcker recession.
There’s a similar thing going on in Iceland at the moment, because there’s this volcano that is rumbling and actually has erupted twice in the last two years. But this volcano hadn’t erupted for 800 years. The volcanologists are on TV every day being asked to predict when the next eruption is going to happen, how long is the eruption going to last, is it going to get bigger, is it going to get smaller.
I felt like it was very similar to us economists who are being asked the same questions about events that only happen every decade or so. I think the public is more understanding of the volcanologist than they are of the economist in this respect! I wanted to draw that distinction, because I think it’s important for people to understand that the amount of data we have to go on is very finite, and that’s playing a role in why we can’t forecast everything perfectly.
You have written about how exceptionally difficult it is to isolate causal effects in macroeconomics. I took away that there’s an awful lot that we think we know that we probably don’t actually know.
Yes—although there’s an undercurrent among a lot of people that there’s something wrong with macroeconomics, and I actually don’t agree with those people as much as they might think I do.
I think the state of macro is much better than it was 10 years ago. One of the biggest problems that macro has faced over the last generation is a lack of balance between theory and empirics. The rational expectations revolution was a major theoretical advance. But something that maybe people don’t realize enough is that it turned macro away from empirical work. There was a whole generation of people that I think misunderstood the Lucas critique to mean that we couldn’t do empirical work based on natural experiments and quasi-experimental methodologies—that the only way to do empirical work was to estimate full structural models.
At the same time, in other parts of economics, that kind of work exploded. Macro was behind in that regard for a while, but I see more and more younger scholars excited about doing quasi-experimental causal inference in macro. As you say, it is very hard to do that because identifying causal effects in a general equilibrium system is difficult.
Just to define this for a wider audience: When we talk about quasi-experimental, we’re talking about policy changes, things changing in the real world—it’s not in a sealed environment. We haven’t randomly selected people into one group or the other, but we’re finding ways to make the best of the data that we have.
We can’t run randomized trials in macro, so we have to come up with ways of figuring out counterfactuals in credible ways. The idea of natural experiments is to use quirky things that affect maybe one state but not another state, or one sector but not another sector. You can use the other state or the other sector as a control group that was not affected by the policy and this gives you kind of a counterfactual.
“We can’t run randomized trials in macro, so we have to come up with ways of figuring out counterfactuals in credible ways. … Without using those kinds of methods, you will just run out of steam as a science.”
It’s often difficult to make this argument, but it’s not impossible. Emi and I have been trying to do work along these lines, and I think that’s super important because without using those kinds of methods, you will just run out of steam as a science.
What is, for example, the marginal propensity to consume? Suppose I randomly give you a dollar. How much of that extra dollar are you going to spend within a year or within a quarter? It’s very difficult to figure that out without some kind of quasi-experimental or even, in that case, truly experimental variation.
Marginal propensity to consume (MPC) is a great practical example. It sounds kind of jargony, but it is something that applies to everybody. Economists used to think that it was quite small. Starting to recognize that it actually might be fairly significant has huge implications for government stimulus, for example.
Exactly. If there’s a very low marginal propensity to consume, then a fiscal stimulus check is not going to do anything.
This is my favorite example of “theory-induced blindness” because it applies to myself. Many of the models you encounter in introductory courses in economics imply a very low marginal propensity to consume. If you think, “Well, the marginal propensity to consume is very small because all the models I’ve ever seen imply that,” then you have potentially wrong beliefs about the efficacy of various policies.
That’s exactly an example where I think empirical work has really helped. We now have a pretty large body of empirical work suggesting that MPCs are pretty large, and that has affected the debate about whether fiscal stimulus checks, for example, make any sense as a policy.
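As a back-of-the-envelope illustration of why the MPC matters for stimulus policy, the first-round spending generated by a check scales directly with it. The numbers below are invented for illustration, not estimates from any study:

```python
# Illustrative only: first-round consumption response to a stimulus
# check under different marginal propensities to consume (MPC).
# The MPC values and check size are hypothetical.

def first_round_spending(check_size: float, mpc: float) -> float:
    """Dollars of new consumption generated directly by the check."""
    return check_size * mpc

for mpc in (0.05, 0.25, 0.50):
    spent = first_round_spending(1200.0, mpc)
    print(f"MPC={mpc:.2f}: ${spent:.0f} of a $1,200 check is spent")
```

If the MPC were as small as some textbook models imply, almost none of the check would show up as spending; the larger MPCs found in the empirical work Steinsson describes change that arithmetic substantially.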
Optimism, climate change, and one more volcano
I flagged one recent tweet of yours, where you wrote that “we are on the verge of eradicating extreme poverty in the world.” Around the same time, you retweeted some charts and said we are “closer to solving climate change than many realize.” What explains your rampant optimism?
Over the last few years, I’ve become much more optimistic about climate change than I was before, and it’s based on technological advances. It’s very easy to look at the graph of world emissions, and it feels like we’re not even close to solving the problem. The problem is getting worse and worse every year—emissions are going up. And if that’s your mindset, then I think that is not right, because underlying this is the fact of the growth of renewables. Renewables are growing at a really, really rapid rate, and the costs are coming down at a really rapid rate.
“It’s very easy to look at the graph of world emissions, and it feels like we’re not even close to solving the [climate change] problem. … If that’s your mindset, then I think that is not right, because underlying this is the fact of the growth of renewables.”
Obviously, when something is small, even a 15 percent growth rate keeps it small for a while. But once it gets big and keeps growing at 15 percent per year, it’s going to get really big, really fast. And I think we’re right at that moment where renewables are big and are still growing at 15 percent per year. A lot of people haven’t quite caught on to that. I think you’re going to start seeing that in the emissions numbers—I wouldn’t be surprised if they’re going to peak very soon and start coming down faster than many people expect.
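The compounding arithmetic behind that point is simple: at a constant 15 percent annual growth rate (the rate Steinsson cites), a quantity doubles roughly every five years and grows tenfold in under seventeen. A quick sketch:

```python
import math

GROWTH = 0.15  # 15% per year, the rate mentioned above

def years_to_scale(factor: float, rate: float = GROWTH) -> float:
    """Years needed to multiply by `factor` at a constant growth rate."""
    return math.log(factor) / math.log(1 + rate)

print(f"Doubling time: {years_to_scale(2):.1f} years")   # ~5.0 years
print(f"Time to grow 10x: {years_to_scale(10):.1f} years")  # ~16.5 years
```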
I felt like I learned this year that we are closer to a complete package of technologies on the climate front. At the beginning of the year, I thought, solar and wind are great sources for producing renewable electricity, but the storage issue is a real problem. Lithium-ion batteries are a good short-term storage solution, but I didn’t see where we would get the long-term storage. We need a lot of long-term, high-capacity storage of energy to deal with seasons, but also to deal with day-to-day things, like there’s a storm and no sun for four days.
I became more convinced this year that we had a technology on the brink of becoming an important part of the solution, which is green hydrogen. I don’t know—maybe it’s not the ultimate solution. But I felt like that quartet of technologies—wind, solar, lithium-ion batteries, and green hydrogen—might be all we need, essentially. If we get that fourth one in the bag, then we might technologically have solved the whole ballgame—apart from some carbon capture, which we will probably need at some point. That made me more optimistic about climate change, I think, than the average person.
We’ve talked about quasi- and natural experiments. We’ve talked about volcanoes. Maybe fittingly, we can end by talking about a volcano that provided you a very handy experiment. This paper connects back to your roots in Iceland, and I thought it was fascinating.
So, there’s an island off the coast of Iceland: Heimaey, part of the Westman Islands. And there’s an important, vibrant town on the island, because it has the only harbor on the south coast of Iceland and a lot of fishing goes on out of it. Then in 1973, out of the blue, there was a volcanic eruption only 300 yards from the edge of the town. Everybody was evacuated. But over the course of several months, half of the town was engulfed by lava and the rest was buried in ash.
In January 1973, the Eldfell volcano erupted with little warning on the Icelandic island of Heimaey. A lava field entirely consumed some homes but not others, providing a rare natural experiment to study the long-term effects of forced relocation on adults and children. (Photo by Terry Disney/Express/Hulton Archive/Getty Images)
They dug the houses out of the ash. But where houses were engulfed by lava, there was just a smoldering lava field left. Those owners were compensated by the government for the value of their houses, but they were much more likely to leave the town and never come back.
This provides us with a treatment and control group. The treatment group, in this case, is the people who lost their houses, and the control group is the people on the other side of the street who didn’t lose their houses. We’re interested in the question: How does being induced to leave this town affect people’s lifetime earnings?
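The simplest estimator implied by this design is a difference in mean outcomes between the two groups. A minimal sketch with fabricated toy earnings (nothing here comes from the paper’s actual data):

```python
# Toy difference-in-means estimate of the effect of being "treated"
# (losing one's house to the lava) on later earnings.
# All numbers are invented for illustration.

treated = [410, 520, 480, 450]   # earnings, house-lost group
control = [400, 390, 420, 410]   # earnings, house-spared neighbors

def mean(xs):
    return sum(xs) / len(xs)

effect = mean(treated) - mean(control)
print(f"Estimated treatment effect: {effect:.1f}")
```

In the actual study, losing a house only makes moving more likely rather than certain, so this raw comparison would be an intent-to-treat effect; recovering the effect of moving itself requires scaling by the difference in moving rates between the groups. The sketch shows only the first step.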
There’s a similar literature in America, including a very famous study called Moving to Opportunity, where people were given vouchers to move out of very low-income, high-crime areas. In that setting, maybe it’s not so surprising that the people who get the vouchers do better. But in our setting, the town is actually the highest-income town in Iceland, because fishing is a very profitable industry there.
Basically, you’re inducing people to leave an extremely prosperous place, so you might think they will be worse off by being pushed out. And we find that is true for the adults: For people older than 25 at the time of the eruption, lifetime earnings fall if they’re induced to leave.
But the children and the unborn children—the descendants of these people who are induced to leave—their lifetime income is much, much higher than that of the children in the control group. How do you make sense of this? How can it be good to be pushed out of a prosperous town?
The way we make sense of this is by thinking about a model with comparative advantage. This is a prosperous town, but its economic opportunities are concentrated in a particular industry: fisheries. If you are born into this town and your actual skills lie elsewhere—you’re really a good computer programmer or a legal scholar or something else—then being induced to leave, even though all your friends and family are there and leaving is hard, actually leads you to a much higher income.
You call it the “gift of moving.” You have to love that paper, how it all comes together. This has been very enjoyable, and thank you for making time for us.
Thanks for having me. I enjoyed it as well.