The Federal Reserve of the Next 100 Years: The Promise of "Big Data"
Edina High School
Over the next 100 years, “big data” will revolutionize the Federal Reserve’s conduct of monetary policy. Currently, Federal Reserve policy is based on economic data that at any given time are “only partially known, as key information on spending, production, and prices becomes available only with a lag.”1 As a result, policymakers may be forced to “act on the basis of misleading information.”2 In the future, the availability of vast amounts of data, along with the computing power to interpret and analyze it—so-called big data3—will allow the Federal Reserve to react more quickly and effectively to changes in the U.S. economy. While there will still be uncertainties regarding the timing and magnitude of the economy’s response to Federal Reserve policy, lags and misleading information will no longer be significant impediments to policymakers.
The impact of lags and gaps in economic data can be seen in the lead-up to the financial crisis of 2008-09. On Sept. 16, 2008, the day after the collapse of Lehman Brothers, the Federal Open Market Committee kept its target for the federal funds rate at 2 percent.4 While the transcript of that meeting shows considerable uncertainty on the part of FOMC members about what was currently happening in key sectors of the economy,5 the FOMC ultimately concluded that the “current stance of monetary policy is consistent with a gradual strengthening of economic growth” beginning in 2009.6 The Fed’s economists also projected a stabilization of the housing market.7 As we now know, the U.S. economy was on the edge of a precipice. While the preliminary August 2008 payroll report released 11 days before the meeting showed a decline of 84,000 jobs,8 revised numbers for this period showed a decline three times as large.9 Far from stabilizing, the downturn in the housing market was accelerating, with rapidly declining prices and rising mortgage delinquencies.10 Initial estimates showed fourth quarter 2008 GDP contracting at an annual rate of 3.8 percent.11 Later estimates for this period show that the contraction actually occurred at an annual rate of 8.9 percent.12
In September 2008, information that could have provided an accurate, up-to-the-minute assessment of the economy did exist: It consisted of the many transactions occurring in every sector of the economy, recorded in real time in the computer networks and accounting systems of private sector companies and government agencies. Access to this real-time information on payroll tax payments, unemployment filings and average hours worked would have provided the FOMC with insight into the actual, not perceived, employment situation. Information on daily retail sales and prices would have revealed spending and growth trends. Information on mortgage payment delinquencies, which were rising rapidly, would have corrected any impression of a housing market recovery. However, while the information existed, the means to collect, analyze and provide it to FOMC policymakers, on a real-time basis, did not. Their response to the deteriorating economy was hampered by the substantial “recognition lag”13 to which the economic information they needed was subject. A key to better economic policy, especially critical in times of financial crisis and dislocation, is access to this information in real time and on a larger scale, coupled with analytical tools to enable policymakers to interpret it quickly and accurately. This is what big data offers.14
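The kind of real-time monitoring described above can be sketched in a few lines of Python. The fragment below maintains a rolling average over an incoming stream of daily readings (say, an index of payroll tax receipts) and flags sharp deviations the day they occur, rather than weeks later when an official release is revised. The data series, window length, and alert threshold are illustrative assumptions, not any actual Federal Reserve procedure.

```python
from collections import deque

def rolling_monitor(stream, window=5, threshold=0.05):
    """Yield an alert whenever a new reading falls more than
    `threshold` below the trailing `window`-day average."""
    recent = deque(maxlen=window)
    for day, value in stream:
        if len(recent) == window:
            baseline = sum(recent) / window
            if value < baseline * (1 - threshold):
                yield day, value, baseline
        recent.append(value)

# Illustrative daily payroll-tax receipts (index, 100 = normal):
# the last two readings mimic a sudden deterioration.
readings = list(enumerate([101, 99, 100, 102, 98, 100, 90, 88]))
alerts = list(rolling_monitor(readings))
for day, value, baseline in alerts:
    print(f"day {day}: reading {value} vs. trailing average {baseline:.0f}")
```

With these numbers, the monitor stays quiet through the normal readings and raises alerts on days 6 and 7, the first days the downturn appears in the stream. This immediacy, applied across many data streams at once, is what would shrink the recognition lag.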
The promise of big data in improving economic policy can already be seen in the Billion Prices Project, which tracks prices in the United States and other countries on a daily basis by using “web scraping” techniques to gather, from publicly available sources, the prices of certain identified goods.15 By comparison, the traditional consumer price index is still computed from data collected manually (and more expensively) for approximately 80,000 items,16 with CPI data for a particular month available only after a lag of approximately two weeks (chained CPI is not final until more than a year later).17 Over the past five years, BPP data have closely tracked the CPI.18 The BPP detected not only drops in prices that occurred as soon as two days after the collapse of Lehman Brothers, but also the price recovery that began in January 2009, well before the same information became available through the CPI.19
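As a rough illustration of the web-scraping approach, the sketch below parses posted prices out of retailer-style HTML and computes a simple average price relative against a base period. The page structure, items, and prices are all hypothetical; a real system like the BPP fetches thousands of pages daily and matches items far more carefully.

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Collect the text of any element tagged class="price"."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if ("class", "price") in attrs:
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:
            # Strip the currency symbol and convert to a number.
            self.prices.append(float(data.strip().lstrip("$")))
            self._in_price = False

def daily_index(pages, base_prices):
    """Average price relative of today's scraped prices vs. a base period."""
    parser = PriceParser()
    for page in pages:
        parser.feed(page)
    relatives = [p / b for p, b in zip(parser.prices, base_prices)]
    return 100 * sum(relatives) / len(relatives)

# Two hypothetical product pages scraped on the same day.
pages = [
    '<div><span class="name">Milk</span><span class="price">$3.30</span></div>',
    '<div><span class="name">Bread</span><span class="price">$2.10</span></div>',
]
base = [3.00, 2.00]  # prices for the same items in the base period
print(round(daily_index(pages, base), 1))  # prints 107.5
```

Because each day's scrape produces a fresh index reading, a price drop shows up the next morning, not two weeks later when an official release arrives.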
However, big data will not automatically lead to better economic reports or forecasts. Correct interpretation and modeling of data by economists and statisticians will still be necessary. Big data has enormous potential, but without careful analysis and modeling, the information it provides may be inaccurate. An example of a large data set that nonetheless produces an apparently flawed result is the monthly ADP private sector employment forecast. While ADP’s report is based on 23 million payroll records from over 400,000 employers,20 in the past six months it has varied by an average of 65,000 jobs per month, or 35 percent, from the authoritative final monthly numbers provided by the Bureau of Labor Statistics.21 It may be that the ADP sample is not broad enough or is not properly modeled. In any case, the discrepancy illustrates the caution needed when a large sample appears to have produced a reliable result. Economic data are inherently noisy, and providing good reports and forecasts to policymakers requires separating out “the signal from the noise.”22 The availability of big data does not remove the need for common sense, economic theory or careful research design.23 There will still be a need for the discretion of experienced FOMC officials in making economic policy.
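The ADP-versus-BLS comparison is, at bottom, a simple error calculation: the average absolute miss between a forecast series and the final official figures, expressed as a share of those figures. The sketch below makes that calculation explicit; the monthly numbers are hypothetical stand-ins chosen only to echo the magnitudes cited in the text, not actual ADP or BLS releases.

```python
# Hypothetical monthly job gains, in thousands (NOT actual releases).
adp_forecast = [150, 220, 175, 235, 135, 220]
bls_final    = [210, 150, 230, 160, 205, 160]

# Average absolute miss, and that miss as a share of the final numbers.
errors = [abs(a - b) for a, b in zip(adp_forecast, bls_final)]
mean_abs_error = sum(errors) / len(errors)
mean_final = sum(bls_final) / len(bls_final)

print(f"average miss: {mean_abs_error:.0f}k jobs/month "
      f"({100 * mean_abs_error / mean_final:.0f}% of the final figure)")
# prints: average miss: 65k jobs/month (35% of the final figure)
```

A miss of that size dwarfs the month-to-month changes policymakers care about, which is why sample size alone, without sound modeling, is no guarantee of a reliable signal.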
The Federal Reserve should seek real-time access to government and private-company data on economic activity and should invest in the human and computing power necessary to fully utilize these data.24 Access to databases will need to be subject to carefully designed protections for proprietary business information and consumer privacy. The types of relevant data are virtually limitless, but some of the more important would be daily sales information from bricks-and-mortar and online retailers, payroll tax payments, energy use, and mortgage and credit card payments and delinquencies. In the Ninth District alone, information from companies such as Target, UnitedHealth Group, Best Buy and U.S. Bancorp, and from government sources such as the Minnesota Department of Revenue, would provide valuable real-time economic data.
In September 2008, a Federal Reserve economist wondered whether he could take the current retail sales report at face value, noting that “we’ve been head-faked a number of times by the retail sales data, which are subject to some pretty substantial revisions.”25 With comprehensive, accurate real-time data, he would have had the answer to his own question. The information provided by big data will lead to better policymaking by the Federal Reserve of the next 100 years.