Speaking of the Economy
March 11, 2021

Tracking the Economy in Real Time

Audiences: Economists, General Public, Policymakers
Download MP3 (18.6 MB, 20:20)
Three economists discuss how they have used high-frequency datasets to track the health of the economy during the COVID-19 pandemic — Marios Karabarbounis and Nicholas Trachter of the Richmond Fed and Adam Blandin of Virginia Commonwealth University. Karabarbounis and Trachter compile the Pandemic Pulse, while Blandin runs the Real Time Population Survey with Alexander Bick of Arizona State University.

Transcript


Jessie Romero: I'm Jessie Romero, director of Research Publications at the Richmond Fed. In this episode of "Speaking of the Economy," I'm talking with Adam Blandin, an economist at Virginia Commonwealth University, and with Richmond Fed economists Nico Trachter and Marios Karabarbounis.

Over the past year, they have all used new, high-frequency datasets, such as payroll software and cell phone data, to track the health of the economy through the COVID-19 pandemic. With coauthor Alexander Bick of Arizona State University, Adam developed the Real Time Population Survey. Nico and Marios have used their data to produce the Pandemic Pulse, a series of charts that are updated weekly on the Richmond Fed's website, richmondfed.org.

I hope you enjoy our conversation today, and if you do, you can find more episodes and subscribe through Apple Podcasts.

So Adam, let me start with you. Employment and GDP are among the most widely cited indicators of the health of the economy. But, despite how important they are, the jobs report comes out only once a month and GDP only comes out on a quarterly basis. Why is that?

Adam Blandin: Thanks, Jessie.

The monthly jobs report by the Bureau of Labor Statistics is actually based on two different labor market surveys. One is a survey of firms and the other is a survey of households.

The headline unemployment number that attracts the most attention comes from the household survey, which is called the Current Population Survey, or CPS for short. That's a regular survey that's been run for a long time. It surveys about 60,000 households each month. To interview so many households and to compile all the data associated with that requires hundreds of staff and tens of millions of dollars a year. So, to do it more often would just require even more resources.

A similar story applies to measurements of GDP. That's only measured once a quarter, and just collecting, compiling, and reporting the data from the large number of sources that you need can be very costly. There is just a limit to what our government can produce.

Romero: Under normal circumstances, does it matter very much that there's a lag between when we get the data and the period of time the data cover?

Blandin: Let's start by thinking about where the lag comes from. When you're looking at data, it's always going to be slightly out of date. The question, you know, is how out of date?

Because the jobs report, for example, is only conducted once a month, that means at certain times of the month the data is going to be at least four or five weeks out of date. But it's actually even a little bit larger of a lag than that. It takes the government several weeks from when they conduct the survey until they release the survey because of the processing time.

This means that information on the labor market can be almost two months out of date at certain points. For example, the most recent jobs report refers to the week of January 10, 2021. The next jobs report we're not going to get until March 5.

Now, even being almost two months out of date isn't such a big deal in normal times. The labor market normally doesn't dramatically change from one month to the next. So, it's rare for the unemployment rate, for example, to move by more than half a percentage point in any direction between any two months.

Romero: So that's during normal times. Obviously, things haven't felt normal in quite a while. So, Nico, let me turn to you and ask, what about during a crisis like the COVID-19 pandemic? What challenges do these kind of data lags pose for policymakers when we are in a situation like the current one?

Nicholas Trachter: Hi, Jessie. Yeah, that's a good question.

The pandemic had a massive impact on the real economy. A huge number of workers lost their jobs very quickly and businesses were closing their doors very, very quickly, too. More than 22 million jobs were lost between February and April.

When the economy is unraveling that fast, it requires similarly fast policy action, and the long data lag that Adam was talking about means that policymakers have to act without the most recent, relevant information. So alleviating the effects of the pandemic requires acting fast, and that requires high-frequency data.

Marios Karabarbounis: Hi, Jessie. Let me add to what Nico said.

A delayed action could be especially costly in the COVID-19 recession. The downturn happened in just a few months, extremely fast from a historical perspective. Also, employment losses were concentrated in a few sectors like accommodation and food services that typically pay lower wages and also employ younger people. So, a delayed policy response could seriously affect their ability to make ends meet.

Romero: Thanks for adding that, Marios.

Nico and Marios, you have taken advantage of some new and unique datasets to track the economy throughout the pandemic. Could you describe some of them?

Trachter: Sure, Jessie. Let me just tell you about three of them.

The first one is named Homebase. It is a scheduling and time-tracking tool that's used by around 100,000 small and local businesses across the U.S. Mostly they are restaurants and other small businesses. The data tell us things like how many local businesses are open, how many hours their employees are working [and] how much income their employees have lost due to the pandemic. Overall, it gives us insights about the labor market.

Another one is Kronos. Kronos is time-keeping software for hourly employees. Every week, employees punch in or out at a large number of firms spanning many industries.

Finally, another one is called SafeGraph. This one uses smartphone location data to track consumer foot traffic, basically when and how frequently they visit various stores and restaurants. With this we can go and measure how individuals changed their habits and patterns due to the pandemic.

Romero: So, what did you learn from all of these various sources?

Trachter: We could see that the pandemic hit especially hard small businesses and, in particular, restaurants. Now, I suppose this is not surprising, but at the time we didn't know what was going on.

Also, we saw some interesting trends. For example, activity didn't decline equally across cities. Also [there was] a bigger decline on weekends relative to weekdays, maybe because people are spreading out their consumption or shopping across time or are trying to avoid all of the other people going to do shopping on the weekends.

Karabarbounis: We also looked at whether social distancing helps in reducing infections. Surprisingly, we found that measures of mobility were only weakly related to infections. That was true when we compared different countries — for example, the U.S. with countries in Europe — and also when we compared some regions within the U.S. that, at the time, were experiencing a surge of the infection rate like Florida and Arizona with the rest of the U.S.

Trachter: Just to add to this, all of this information was very important to us, to other researchers, and to policymakers. It allowed all of us to have a timely picture of which communities and citizens were hurting the most without the need to wait until the standard low-frequency information was published.

Romero: Are there any advantages to datasets like this beyond just the fact that they're more timely?

Karabarbounis: Yes. Traditional datasets are typically based on surveys and, as Adam explained, they're difficult and costly to collect. Moreover, they are subject to measurement error.

All the datasets that we use employ some kind of tracking technology. SafeGraph and Google are using geolocation tracking, while Kronos and Homebase are using online log-in tools. With these datasets, we can gather more information about more people, relatively quickly and cheaply.

Romero: Are there any disadvantages?

Karabarbounis: The biggest concern is how applicable these results are to the broader economy. All the datasets provide a clear but narrow picture of the overall economy. For example, Google geolocation services are more likely to be used by younger people in metro areas, so you cannot necessarily assume that the same trends will hold for other groups of people or for other regions. That is exactly why we used many datasets from many sources in order to have a comprehensive picture of the economy.

Trachter: Also, let me add that there are multiple confounding reasons that can explain changes in a particular time series. As we measure data at a higher frequency, the confounding reasons naturally expand. For example, there is seasonality in shopping behavior around weekends, which we wouldn't see if we measure shopping behavior on a monthly basis. Thus, depending on the question you are trying to explore and answer, you need to pay special attention to why the data are moving.

I mentioned the changes we saw in shopping behavior. Consider the case where we collect data on a daily basis and we see a change in shopping behavior on Saturday. Do we conclude the change is because of the pandemic, or is there something else going on around weekends?
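[Editor's note: The weekend confound described above is a standard seasonality problem. As a hypothetical sketch (the function, visit counts, and baseline values below are invented for illustration), one simple way to separate a day-of-week pattern from a genuine pandemic shift is to compare each day against its own day-of-week average from a pre-pandemic baseline.]

```python
# Sketch: separating day-of-week seasonality from a level shift in daily data.
# All numbers here are made up for illustration.

def day_of_week_adjust(values, weekdays, baseline_means):
    """Subtract each observation's day-of-week baseline mean, so what
    remains is the deviation from the normal weekly pattern."""
    return [v - baseline_means[d] for v, d in zip(values, weekdays)]

# Hypothetical daily visit counts (Mon..Sun) during the pandemic,
# and hypothetical pre-pandemic average visits for each day of the week.
pandemic_visits = [80, 82, 81, 79, 90, 60, 55]
weekdays = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
baseline = {"Mon": 100, "Tue": 100, "Wed": 100, "Thu": 100,
            "Fri": 120, "Sat": 150, "Sun": 140}

adjusted = day_of_week_adjust(pandemic_visits, weekdays, baseline)
# A raw Saturday reading of 60 looks only modestly below the weekday readings,
# but relative to its own baseline of 150, the weekend decline is far larger.
```

In this toy example, the raw Saturday count (60) is close to the weekday counts, yet after adjusting for the usual weekend surge it shows by far the largest drop, which is the kind of distinction the speakers are pointing to.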

Another thing I want to point out is, by definition, working with high-frequency data requires a fast turnaround of analysis. A natural implication is that one has less time to think hard about the details in the analyses. And, in research, the devil is always in the details.

Blandin: I'll just second that. In my own experience, research papers often take years to put together. In the spring and summer, we were doing analyses that were being turned around in a few days or a week or so. That fast pace is a very different environment than, I think, what normal academics are used to.

Romero: Yes, it was a very different pace for people at the Fed as well.

Blandin: I bet, yeah.

Romero: So, together with Alexander Bick of Arizona State University, you've developed what you've called a "real time population survey." What inspired you to create it?

Blandin: There were a few weeks right when the pandemic was starting in the U.S. that really felt like the world was falling down around us. The stock market was crashing. Major sports events were being cancelled. A lot of us weren't allowed to go into the office or to bring our kids to school or to daycare. As an economist, it was pretty clear that this was going to have bad consequences for the labor market, but the question was, kind of, how bad?

Just taking a quick look at the calendar, my coauthor Alex and I realized that the first jobs report wouldn't come out until May 7 with information about the full extent of job losses after the pandemic. And, I don't know if you remember, but back in March and April, a day or a week seemed like it just stretched for an eternity. So, thinking about waiting almost two months for new data seemed kind of crazy. So, a lot like Marios and Nico, we wondered if there were ways of getting more up-to-date data, and that led us to create the Real Time Population Survey, or RPS for short.

Romero: So what is the RPS and how is it different from the CPS that you talked about earlier?

Blandin: The basic idea of the RPS was to try to replicate the core of the government survey. The idea was that by replicating the government survey, we wanted to try to ensure that our numbers were comparable. You might be skeptical of the quality of a new survey or wonder what it is really telling you, and one way we tried to build confidence in our measures was to follow a really high-quality survey that many economists have experience working with.

The difference was, rather than employing a whole army of staff to personally interview tens of thousands of people, we used an online survey company to send our interview to about 2,000 respondents. This had a few advantages. First, it was much cheaper than the government survey — it was something we could fund ourselves, with some support from our institutions. Second, we could process the results almost immediately because everything was conducted online. This allowed us to release our results much sooner and to run the survey twice as often as the government survey.

Also, because we designed the survey, it allowed us to include some additional data that was not in the government survey. For example, we asked some questions about firm tenure, whether firms were recalling workers that they had temporarily laid off, changes in worker pay over short durations, and about work from home. Some of that information is in the government survey, but we were able to ask more tailored, more detailed questions than what the government could.

Romero: The CPS is 60,000 households, which is a really big sample size, and then you go down to 2,000 people. How do you guarantee the validity of the results when the sample size is so much smaller?

Blandin: That's a great question. That's one of the things we thought a lot about.

The first thing to point out is there are going to be inherent limitations with a small sample size. We can only cut the data so finely. So, most of our analyses are at the aggregate level or only looking at broad groups of workers — for example, men vs. women — whereas with a larger sample size you can look at more narrow groups of people.
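[Editor's note: The tradeoff Blandin describes can be made concrete with a back-of-the-envelope calculation. Assuming simple random sampling (real surveys like the CPS and RPS use weighting schemes that change these numbers), the 95 percent margin of error for an estimated proportion shrinks with the square root of the sample size.]

```python
import math

def moe_95(p, n):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# An unemployment rate around 10% measured with 2,000 vs. 60,000 respondents:
small = moe_95(0.10, 2_000)    # roughly +/- 1.3 percentage points
large = moe_95(0.10, 60_000)   # roughly +/- 0.24 percentage points
```

Under these assumptions, a 2,000-person sample pins down the aggregate unemployment rate to within a point or so, which is useful for headline numbers, but the error grows quickly once the sample is cut into narrow subgroups.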

Another thing that we did was we collected information about workers' labor market experience several months prior to when the interview was being conducted. In particular, we asked people about their work experience in February 2020, which was just before the pandemic hit.

One reason we did that was we wanted a reference point to see how workers' experiences had changed since the pandemic. Another thing it allowed us to do was to validate our measures of what people in our surveys were telling us in February with the actual February report from the government, which we already had. So we did some sort of retrospective validation exercises.

There were some differences from the government survey. But to our surprise, a lot of the numbers ended up lining up relatively closely with it. That gave us some confidence as well.

Romero: There was a lot of alignment in February as you just described. In the months that followed, did your survey tell a different story than the CPS did?

Blandin: Along many dimensions, the data pretty closely agreed with the government survey. For example, we show that similar patterns in employment and hours worked are coming through both in our survey and in the government survey.

There are some differences. One interesting thing we found is that the unemployment rate is a bit higher in our survey versus the government survey. One reason the unemployment rate could be higher in our survey is that in order to be classified as unemployed, you need to be actively searching for work. The way the government survey collects this information is they ask you what you did to search for work, whereas in our survey, since it's an online survey, we can't have people responding to open-ended questions. Instead, we give them a list of options that they can choose from. We're finding more people actively searching for work possibly because we're prompting them to do so.

Despite those differences, I think the key advantage of our numbers is that they come out several weeks ahead of the government survey.

Romero: Have you seen policymakers or businesses take advantage of your data, given that they are coming out a few weeks ahead?

Blandin: Yeah. Policymakers have used our survey as one of many signals about the state of the labor market. There have been lots of advances in high-frequency data since the start of the pandemic. Work like Marios and Nico's has played an important role. The Census Bureau came out with its own high-frequency online survey called the Census Pulse. Along with those other measures, our survey was one tool to help fill in the gaps for policymakers while we waited for the government survey to come out.

Another area of our survey that has been fairly widely used is data on work from home. [This data] is relevant for a wide range of questions. Just to take one example that was surprising to me at least. Apparently, radio stations had a lot of trouble selling ad space during the pandemic because advertisers didn't know if anyone was driving in to work and listening to the radio. Our data showed that, after spiking in the spring, commuting actually picked up during the summer. Some radio stations were using our data as evidence that advertisers should continue to buy ad space because people were, in fact, driving to work.

Romero: Well, thank you for your contributions to more ads on the radio. [Laughter]

Technology like smartphones and GPS allows us to collect and track all kinds of detailed real-time data. What opportunities do you see on the horizon for future data sources?

Karabarbounis: There are many opportunities for social scientists to get access to personalized data in order to make aggregate inferences. As the world becomes more connected, these datasets will likely become more available.

Trachter: As previously mentioned, it is now possible to track the location of people in real time. This allows researchers to study spatial consumption patterns, or commuting patterns, or how much time people spend in locations with other people. Also, we can learn a lot about life habits of people, for example, if they like going to parks, the movies, restaurants, or any other thing.

Romero: Nico, why would it be important for economists to know if people like going to parks or to the movies?

Trachter: Well, I guess it depends on the question in mind. In general, we may want to know about that just to think about what goods people like to buy and shop for. In particular, in terms of COVID, we worry about people getting together, and we care about businesses, and we want to understand what kinds of things people want to do and what the risks are in order to take that into account when we do policy.

Blandin: Just to take the flip side of what Nico said. On the one hand, cell phone data has definitely made it easier to track people geographically, and that can provide lots of useful information. But cell phones have also made it easier for people to screen calls and to avoid interviews. For example, my iPhone now has a feature that will automatically screen a call if it doesn't recognize the number. If that was someone trying to call me to conduct a poll, that's just one more person they haven't been able to recruit.

So, changes in technology bring new opportunities, but they also present new challenges. I think overall it's just an exciting frontier to take part in.

Romero: Well, thank you all so much for joining us to talk about your work. I really appreciate your time and have really enjoyed our conversation.

Blandin: Thanks very much. It was great.

Trachter: Thank you.

Karabarbounis: Thank you very much.
