Across the Fed, a mindful exploration of AI is underway

While the wider world is focused on ChatGPT, a broader toolkit is helping economists and data scientists meet their missions

September 22, 2023

Author

Jeff Horwich, Senior Economics Writer
Illustration: Graphics of charts, newscasters, and bills float on the left, pass through an AI filter, and emerge as an orderly collection on the right (Cara Ewing/Minneapolis Fed)

Article Highlights

  • AI-related tools help economists transcend “hunt-and-peck” to see mathematical relationships beyond the realm of human comprehension
  • Natural language processing tools open vast amounts of text-based data to Fed data scientists
  • AI holds potential for macroeconomic forecasting to inform monetary policy, but only after extensive testing

Since the public got its first taste of ChatGPT in late 2022, the promises and perils of human-like “generative AI” have dominated discussions about technology and economic growth. Artificial intelligence is driving a surge of investors hoping to ride a wave of world-changing innovation. Along the way, AI has weirded us out, threatened our jobs, and changed the way we order at White Castle.

But well before this current year of AI, tools under the wide umbrella of artificial intelligence were making steady inroads in economics.1 While ChatGPT might—for the moment—still struggle to pass an undergraduate econ exam, its less-chatty cousins are enhancing how some Fed economists, data scientists, and bank regulators do their work.

The end of “hunt-and-peck”

Minneapolis Fed economist Amanda Michaud studies the millions of Americans whose limited work histories leave them ineligible for unemployment insurance. But she faces a common issue: These workers can’t be identified directly in the data. Every four years, the U.S. Census Bureau runs a survey covering unemployment insurance. “But if you want to know anything about the unemployment experiences of people in the other years, you don’t have those survey questions available,” said Michaud, a senior research economist with the Opportunity & Inclusive Growth Institute.

Traditionally, an economist might make educated guesses about the characteristics of these workers. For Michaud, tools of “machine learning”—one major branch of AI—offer a different path.2 A computer algorithm essentially studies the characteristics of the subset of workers whose outcomes are known, learning to predict who is eligible for unemployment insurance. The rules it derives—likely far more complex than anything a human economist might conjure—can then be applied to estimate eligibility among the entire U.S. population.
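
In code, the workflow is short. The sketch below is illustrative only—simulated data and made-up variable names, not Michaud’s actual pipeline—but it shows the shape of the technique: fit a model where eligibility is observed, then predict where it isn’t.

```python
# A minimal sketch of the general approach, not Michaud's pipeline.
# Workers with observed UI eligibility (the survey years) train a classifier
# that then fills the gap for workers in years without the survey questions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 10_000
tenure = rng.exponential(24, n)                # months at last job (simulated)
earnings = rng.lognormal(10, 0.6, n)           # prior annual earnings (simulated)
X = np.column_stack([tenure, earnings])
eligible = (tenure > 6) & (earnings > 5_000)   # stand-in for true state rules

observed = rng.random(n) < 0.25                # the survey-year subsample
model = GradientBoostingClassifier().fit(X[observed], eligible[observed])

# Fill the data gap: predicted eligibility for everyone else
prob_eligible = model.predict_proba(X[~observed])[:, 1]
print(prob_eligible[:5])
```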

“You go from kind of hunt-and-peck to this world where you can do this prediction very quickly and fill in data gaps,” Michaud said.

She cautions that AI tools can go astray without a human expert holding the reins. An algorithm will not inherently know the state-by-state rules to qualify for unemployment insurance, for example, or understand why data during the Great Recession will be fundamentally different from other years. “You can’t just feed the machine learning algorithms garbage,” Michaud said. “But if you can feed them informed variables, using economic theory or an understanding of institutions, they perform much better.”

Pros and cons of the black box

To researchers like Michaud who are fluent in advanced statistical techniques, it is not always clear where standard econometrics ends and AI techniques like “machine learning” begin.

In a recent paper musing on the origins of AI, former Minneapolis Fed economic advisor and Nobel laureate Thomas Sargent describes AI simply as “computer programs that recognize patterns and make decisions”—but do so on par with the great thinkers of human history. “By artificial intelligence I mean computer programs that are designed to do some of the ‘intelligent’ things that creative people like Galileo, Darwin, and Kepler have done,” Sargent writes—scientists whose brilliant leaps of logic vaulted forward the state of knowledge. Sargent says that by digesting enormous amounts of data, armed with probability theory and calculus, AI can infer patterns that transcend normal human intuition.

AI-based tools can identify patterns and connections we might never see for ourselves—or might only approximate with great effort and luck. They can help economists investigate problems with multiple possible solutions or uncover relationships that are nonlinear (imagine a graph with a sharp kink, sudden break, or multiple changes of direction).

The correlations and mathematical relationships identified by AI are often beyond the realm of human conception, even for economists who do math for a living. This lack of easy interpretability—a “black box problem”—is an inherent trade-off that accompanies AI’s core strength: It can zero in on important relationships and connections (correlation), even if it cannot tell us why those connections exist (causation).

“Machine learning is really, really good for prediction,” said recent Institute visiting scholar Stefania Albanesi. “As opposed to standard econometrics and regression analysis, which is more about explaining the data.” In her research, the economist at the Miami Herbert Business School uses one black box to counter another: the opaque, proprietary credit-scoring algorithms that dictate the interest rate we pay on a loan or whether we get an apartment. With co-authors, Albanesi has built AI-driven credit scoring systems that appear fairer to younger consumers, low-income people, and people of color.

“Based on our model, conventional credit scores misclassified 30 to 50 percent of consumers, meaning they put them in the wrong risk category,” Albanesi said. “We found that our model does a lot better, and you could improve credit scores by using more advanced prediction technologies than apparently the big credit scoring companies are using.”
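
The contrast Albanesi draws—prediction versus explanation—can be made concrete with a small, hedged example on synthetic data (nothing here is her credit-scoring model): a logistic regression produces coefficients a human can read, while a machine learning model is judged purely on how well it ranks borrowers by risk.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for borrower characteristics and default outcomes
X, y = make_classification(n_samples=5_000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

explainer = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
print("coefficients:", explainer.coef_[0])    # readable: "explaining the data"

predictor = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, predictor.predict_proba(X_te)[:, 1]))
```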

Tapping into a world of “unstructured” data

Economists and policymakers thrive on numbers. Unfortunately, governments and the private sector bury lots of valuable information among mountains of words. One of the most common and practical applications of AI in economic research is turning these words into quantitative data—generally referred to as “natural language processing” (NLP).

NLP opens a new universe of public records to Kim-Eng Ky, a senior data scientist in Community Development and Engagement at the Minneapolis Fed. Ky got her grounding in AI through prior jobs analyzing enormous datasets for public transit and a major health insurer. At the Minneapolis Fed, she recently turned to NLP-based methods to tackle an important local policy issue: the rise of investor-owned homes.

“We don’t have good data on which single-family homes are owned by investors—especially large, corporate investors that own single-family rentals all across the U.S.,” Ky said. “We wanted to try to estimate this using property tax records that are available within our area.”

Collecting the records was straightforward. Analyzing them, not so much. Each county has different information requirements, and property owner names are “unstructured” data: The same owner might appear with or without “LLC,” for example, or with variations of first and middle names. A human brain could reasonably judge at a glance whether two properties correspond to the same owner. But it would take an absurd amount of time for a person to compare more than 500,000 non-homestead houses in the Twin Cities.

A computer algorithm calibrated by Ky, however, can do the job in a few hours. She ran and reran the program, adjusting its code until it matched records with suitable accuracy. The algorithm now stands ready to update the Minneapolis Fed’s investor-owned property-data tool efficiently as new records arrive each year.
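
The core of such a matcher can be surprisingly compact. Below is a stripped-down, hypothetical sketch—standard-library string similarity, rather than whatever Ky’s production code uses—of how unstructured owner names can be normalized and compared:

```python
import re
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    name = re.sub(r"[^A-Z ]", " ", name.upper())                # case, punctuation
    name = re.sub(r"\b(L L C|LLC|INC|CORP|CO|LP)\b", "", name)  # entity suffixes
    return " ".join(name.split())

def same_owner(a: str, b: str, threshold: float = 0.9) -> bool:
    # Flag likely matches by similarity score instead of exact comparison
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(same_owner("SMITH HOMES LLC", "Smith Homes, L.L.C."))     # True
print(same_owner("SMITH HOMES LLC", "JONES PROPERTIES INC"))    # False
```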

“We are borrowing some of the tools that people use in natural language processing to do this very, very simple counting,” said Ky. “It seems like an extremely simple problem, but apparently very difficult to solve.”

Although Ky developed more advanced AI applications in prior jobs, the rollout of AI in her current role is gradual, mindfully balanced against other goals. Her division is focused on informing policymakers in the Ninth Federal Reserve District—few of whom are computer scientists. “Because of our audience, if we can’t explain to them what exactly is going on, it could be hard to convince people to believe us,” Ky said.

For another recent project on the cost of homeownership versus renting, Ky considered an AI tool with the provocative name “random forest.” The preliminary results were not enough of an improvement to justify using it over traditional data methods. “It’s a lot harder to explain to people what ‘random forest’ is versus a closed-form formula where you can actually write down exactly what it does,” Ky said.
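
Her point about explainability is easy to see side by side. In this made-up sketch, the linear model’s fit reduces to a formula anyone can write down; the random forest’s does not:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 3))          # e.g., price, interest rate, taxes
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1_000)

linear = LinearRegression().fit(X, y)
print(linear.coef_, linear.intercept_)   # cost ≈ 2.0·x1 − 0.5·x2: a formula
forest = RandomForestRegressor(random_state=0).fit(X, y)
print(forest.predict(X[:1]))             # accurate, but hundreds of trees: no formula
```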

Proof-of-concept stage for forecasting and monetary guidance

A similarly measured approach applies to data that directly inform the Fed’s own policymakers. “In an academic paper, you can have these pie-in-the-sky ideas,” said Thomas Cook, a data scientist in the Economic Research Department at the Kansas City Fed. “What you want for informing policy and informing a Bank president is something that is very rigorously tested, very well thought out.”

Five years before the world had heard of ChatGPT, Cook published the Kansas City Fed working paper “Macroeconomic Indicator Forecasting with Deep Neural Networks,” in which networks of computational nodes, roughly analogous to a brain’s neurons, consistently outperformed the Survey of Professional Forecasters in predicting the U.S. unemployment rate. Today, Cook regularly convenes curious researchers from across the Federal Reserve System to discuss AI-related methods.
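
A toy version of the idea—emphatically not Cook’s model, which draws on many macroeconomic indicators—fits a small neural network to map recent values of a simulated unemployment series to a one-step-ahead forecast:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rate = 5 + np.cumsum(rng.normal(scale=0.1, size=300))  # simulated monthly rate

lags = 12                                 # features: the last 12 observations
X = np.array([rate[t - lags:t] for t in range(lags, len(rate))])
y = rate[lags:]                           # target: the next observation

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2_000, random_state=0)
model.fit(X[:-24], y[:-24])               # hold out the last two years
print(model.predict(X[-24:]))             # pseudo out-of-sample forecasts
```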

He imagines a day when the Fed’s economic forecasts might, for example, integrate insights culled minute by minute from the talking heads, chart graphics, and on-screen chyrons of cable business news. But for now, “in Bank time and in policy time, we’re still pretty early,” Cook said. “Eventually, we’ll probably get [to] using much larger, more sophisticated machine learning techniques. But it’ll be baby steps to get there, because the risks of getting something wrong are too high.”

Cook says powerful computers, more efficient algorithms, and accessible AI software have come together today to make these tools a practical option for economic researchers at the Fed. For monetary policy, though, they will need to prove their worth. “Over time, I’ll be able to go back and say, ‘This thing I have [run] over the last couple years has outperformed in x, y, or z ways,’” Cook says. “At that point, it’s easier for me to say this might be a good candidate for including in a policy briefing.”

In the meantime, Cook demonstrates proof of concept through collaborations with colleagues around the System. To address AI’s black box issue, Cook has explored tools to help better interpret and understand machine learning output. This research showcased AI’s ability to identify nonlinearities in housing data (such as the point at which the age of a house stops becoming a liability and begins adding to home value).
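
One standard tool of this kind is a partial-dependence curve, which traces a model’s fitted effect of a single variable and can expose exactly such a kink. The sketch below uses simulated housing data, not Cook’s research code:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
age = rng.uniform(0, 120, size=3_000)
sqft = rng.uniform(800, 4_000, size=3_000)
# Built-in nonlinearity: value falls with age until about 70 years,
# then vintage homes start gaining value
value = 0.1 * sqft - 2 * age + 0.1 * np.maximum(age - 60, 0) ** 2
X = np.column_stack([age, sqft])

model = RandomForestRegressor(random_state=0).fit(X, value)
result = partial_dependence(model, X, features=[0], grid_resolution=15)
print(result["average"][0])   # fitted effect across the age grid: down, kink, up
```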

Cook’s in-process research with Board of Governors economist Nathan Palmer explores the potential to endow hypothetical actors in an economic model with neural networks as they interact within a simulated economy. Modern macroeconomic models generally presume people are forward-looking and adjust their behavior accordingly (they have “rational expectations”). But pre-AI models, said Cook, must flatten this process to an unrealistic degree. “If in the long run people come to that understanding, how long does it take them to get there? How long does it take for people to adjust their expectations?” A branch of AI called “reinforcement learning” could help economists explore that question.
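
A minimal, hedged illustration of that question—far simpler than the Cook–Palmer setup—gives a single agent a stochastic-approximation learning rule, the kind of error-driven update reinforcement learning builds on, and counts how long its inflation expectation takes to converge:

```python
import numpy as np

rng = np.random.default_rng(0)
true_inflation = 0.02
belief, step = 0.10, 0.05                 # start with a badly wrong expectation

for t in range(1, 1_000):
    observed = true_inflation + rng.normal(scale=0.005)  # noisy outcomes
    belief += step * (observed - belief)                 # learn from the error
    if abs(belief - true_inflation) < 0.002:
        print(f"expectations adjusted after {t} periods")
        break
```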

With colleagues at the Kansas City and Richmond Feds, Cook is preparing to publish the insights gleaned from applying natural language processing to the earnings conference calls of banks following the turmoil of early 2023. “We’re interested in looking at how the discussion in those transcripts changed or evolved in response to the banking stress starting last March with [the collapse of] SVB and First Republic Bank,” Cook said. “These are data points that are hard to cleanly extract simply by looking at market prices and fluctuations.”
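
A bare-bones version of the transcript exercise might score each earnings call for stress-related language, turning the discussion into a time series. The file layout and word list below are hypothetical, and the actual research applies richer NLP than keyword counting:

```python
import re
from pathlib import Path

STRESS_TERMS = ["deposit outflows", "liquidity", "unrealized losses",
                "held-to-maturity", "contagion"]

def stress_score(text: str) -> float:
    words = max(len(text.split()), 1)
    hits = sum(len(re.findall(term, text.lower())) for term in STRESS_TERMS)
    return 1_000 * hits / words                  # mentions per 1,000 words

# Hypothetical directory of transcripts, e.g., transcripts/2023Q1_bank.txt
for path in sorted(Path("transcripts").glob("*.txt")):
    print(path.stem, round(stress_score(path.read_text()), 2))
```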

While models for routine macroeconomic forecasting wait in the wings, proving their worth, Cook expects much more of this rapid-response AI application in the near term. “I think you might continue to see the ability of economic researchers to be much more responsive in terms of reacting to recent events and doing analysis on them,” he said.

Along with Cook’s work in Kansas City, the Philadelphia Fed has notably established a “machine learning initiative” with a small hive of economists focused on AI methods.

The Fed looks at ChatGPT (and ChatGPT looks at the Fed)

None of the applications above are forms of generative AI, à la ChatGPT. That kind of application would, hypothetically, not just return results and quantitative analysis but draft its own written summary and conclusions.

Despite their impressive capabilities, generative AI engines are known to make baffling, potentially harmful errors. When approaching any new technology, institutions in a position of public trust must proceed with care. One as powerful as ChatGPT warrants particular caution—even as economists begin to test its potential.

A new paper by Richmond Fed economists Anne Lundgaard Hansen and Sophia Kazinnik brings the latest generative AI tools into the picture, turning ChatGPT back on the voluminous transcripts generated by the Fed itself.

They find the latest iterations of ChatGPT perform on par with human interpretations of “Fedspeak”—the carefully calibrated language employed by Fed officials to characterize their policy actions and observations about the U.S. economy. Their tests include rating Fed statements on a scale from hawkish to dovish, and identifying when Fed communications indicate a genuine monetary policy shock (an intentional action by policymakers to combat inflation).
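
Replicating that style of test takes only a few lines with the OpenAI Python client. The prompt, model name, and statement below are illustrative, not drawn from Hansen and Kazinnik’s paper:

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

statement = ("The Committee decided to maintain the target range for the "
             "federal funds rate and will continue to assess incoming data.")

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumption: any current chat-capable model
    messages=[
        {"role": "system",
         "content": "Rate this FOMC statement on a hawkish-to-dovish scale "
                    "and justify the rating in one sentence."},
        {"role": "user", "content": statement},
    ],
)
print(response.choices[0].message.content)
```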

“Does this result mean that experts and researchers are now obsolete?” they write. “On the contrary. Our results show that there is a tremendous potential for boosting the capabilities of researchers in the realm of qualitative analysis.”

The economists propose that the ability of today’s generative AI to interpret Fedspeak holds potential to improve the “clarity, transparency, and effectiveness” of Fed policy messaging, a goal that is “highly relevant in an era where central banks are becoming more focused on making their communication more accessible to the public.”

While noting that generative AI tools “are not infallible” and require considerable guidance, “the implications of this ‘renaissance’ are profound,” they conclude. “It opens up possibilities for a new era of research and knowledge discovery that combines the depth of human insight with the breadth and speed of AI.”


Endnotes

1 While economists have been writing about the potential of artificial intelligence for decades and dabbling in early tools, a recent survey of economic literature found the use of AI-based tools in published economic research was negligible until around 2015, when it began to grow exponentially. As of 2020, the researchers found, AI-based tools appeared in 2 to 3 percent of economics papers.

2 The types, terminology, and tools of AI are fluid and overlapping, including concepts such as machine learning, deep learning, neural networks, natural language processing, and generative AI (such as ChatGPT). Rather than attempt comprehensive definitions, this article uses the terms in context and reflects the language as applied by the respective researchers.

Jeff Horwich
Senior Economics Writer

Jeff Horwich is the senior economics writer for the Minneapolis Fed. He has been an economic journalist with public radio, a commissioned examiner for the Consumer Financial Protection Bureau, and director of policy and communications for the Minneapolis Public Housing Authority. He received his master’s degree in applied economics from the University of Minnesota.