
Applied Macro-Econometrics: A Conference Recap

Economic Brief
October 2022, No. 22-43

Lessening the computational difficulty of large Bayesian vector autoregressions, using prediction pools to average impulse responses, and removing the predictable component of interest rate changes around FOMC meetings: These were among the methods addressed by economists during a recent research conference.


Economists from the Richmond Fed and research universities met for a conference in Richmond during CORE Week in September. Conference participants presented and discussed papers on a variety of topics related to applied econometrics. This Economic Brief summarizes those presentations.

High Dimensional Block Vector Autoregressions

Bayesian vector autoregressions are widely used for macroeconomic analysis and forecasting. (For background, see the article "Computer Models and the Fed.") Until recently, however, economists have mostly confined their use to models with a relatively small number of variables. This is because larger models often run up against computational limitations: The number of parameters to be estimated grows with the square of the number of variables in the model.
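To see how quickly the parameter count grows, consider a back-of-envelope calculation. The sketch below is a simple illustration of this scaling, not a computation from any of the conference papers:

```python
# An n-variable VAR with p lags has n*n*p slope coefficients,
# n intercepts and n*(n+1)/2 free error-covariance terms.
def var_param_count(n, p):
    return n * n * p + n + n * (n + 1) // 2

for n in (4, 20, 100):
    print(n, var_param_count(n, p=4))  # 78, 1,830 and 45,150 parameters
```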

Despite these challenges, large Bayesian vector autoregressions have generally performed well in forecasting. Consequently, many papers have explored methods of lessening their computational difficulty, usually by imposing restrictions on or shrinking a model's covariance matrix:

  • Popular restrictions include banding, tapering and thresholding.
  • Common shrinkage methods include optimal shrinkage, penalized likelihood, Bayesian methods and regularization of principal components.

Drew Creal of the University of Notre Dame presented a novel approach to the estimation of large vector autoregression models in his paper "High Dimensional Block Vector Autoregressions," co-authored by Jaeho Kim of Hanyang University.

The researchers develop Bayesian estimation methods for multiple linear regression models where the covariance matrix of the errors has a block structure. They derive the posterior and marginal likelihood for a Gaussian regression model when the number of blocks and the assignment of each series to a block are both known. When the block structure is unknown, they build a random partition model that assigns a prior distribution over the space of partitions of the data into blocks.

Although the methods in their paper do not directly impose sparsity (or zeros) in the covariance matrix, a block covariance matrix becomes sparse after the original data are appropriately rotated according to their method. By searching model space over block matrices, their approach implicitly searches over this space of rotations and the degree of sparsity without imposing sparsity directly on the covariance matrix.
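One way to see the rotation idea is with a small numerical example. The sketch below is my own illustration, not code from the paper: it builds a covariance matrix with an exchangeable block structure and rotates each block with a Helmert-type orthogonal matrix constructed from the partition alone. The rotated matrix is sparse even though the original is dense:

```python
import numpy as np

def helmert(k):
    """Orthonormal basis whose first row is the normalized mean vector;
    the remaining rows span the within-block contrast space."""
    H = np.zeros((k, k))
    H[0] = 1.0 / np.sqrt(k)
    for i in range(1, k):
        H[i, :i] = 1.0
        H[i, i] = -i
        H[i] /= np.sqrt(i * (i + 1))
    return H

# Two blocks of sizes 3 and 4 with constant variance, constant
# within-block covariance and constant cross-block covariance.
k1, k2, v, rho_in, rho_x = 3, 4, 1.0, 0.6, 0.2
S = np.full((k1 + k2, k1 + k2), rho_x)
S[:k1, :k1] = rho_in
S[k1:, k1:] = rho_in
np.fill_diagonal(S, v)

# Block-wise rotation built from the partition alone: no covariance
# values are used to construct Q.
Q = np.zeros_like(S)
Q[:k1, :k1] = helmert(k1)
Q[k1:, k1:] = helmert(k2)

R = Q @ S @ Q.T
print(np.round(R, 8))  # sparse: diagonal except one entry linking block means
```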

The paper applies the new methods to two common settings:

  • Factor models in finance
  • Vector autoregressions in empirical macroeconomics

Uniform Priors for Impulse Response Functions

Structural vector autoregressions are popular for estimating dynamic causal effects in macroeconomics. To solve the identification problem associated with such models, economists often impose sign and/or zero restrictions on the structural parameters or some function of the structural parameters, such as an impulse response function (IRF). However, the use of such restrictions has been criticized by some researchers.  

Daniel Waggoner of Emory University presented research that addresses some of these criticisms. His paper, "Uniform Priors for IRFs," was co-authored with Jonas Arias of the Philadelphia Fed and Juan Rubio-Ramirez of Emory University and the Atlanta Fed. The paper makes three main points:

Conditional Prior Distributions of Responses

While the researchers concede that the conditional prior distributions of individual responses are nonuniform, they argue that the problem is not as severe as some critics suggest. They show that:

  • The conditional prior distributions of individual impulse responses are different from unconditional prior distributions of individual impulse responses.
  • The unconditional prior distributions of individual impulse responses do not drive unconditional posterior distributions of individual impulse responses.

Priors Over Identified Sets

The authors acknowledge the validity of some economists' concern that nonuniform priors over identified sets are undesirable. They point to research, however, that suggests this problem is less severe in tightly identified models. They also show that the conventional method implies uniform joint prior and posterior distributions over the identified set for the vector of impulse responses.
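For context, the conventional method referenced here draws orthogonal rotation matrices uniformly with respect to the Haar measure, typically via a QR decomposition of a random Gaussian matrix. Below is a minimal sketch of that draw (a standard construction, not code from the paper):

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Draw an n-by-n orthogonal matrix uniformly (Haar measure) by
    QR-decomposing a standard normal matrix and fixing column signs."""
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))  # scale column j by sign(R[j, j])

rng = np.random.default_rng(0)
Q = haar_orthogonal(3, rng)
print(np.round(Q @ Q.T, 10))  # identity matrix: Q is orthogonal
```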

Prior Research

They present an argument against the contention of the 2015 paper "Sign Restrictions, Structural Vector Autoregressions, and Useful Prior Information" that "Because the objects of interest in structural VARs are highly nonlinear functions of the underlying parameters, the quest for 'noninformative' priors for structural VARs is destined to fail."

Averaging Impulse Responses Using Prediction Pools

Macroeconomists compute impulse responses to trace out the effects of structural shocks on macroeconomic aggregates. When estimating these impulse responses, economists face many choices between methodological approaches: vector autoregression versus local projection, Bayesian versus frequentist, and so on. It is well known that these choices can generate significantly different results. While there is a growing literature discussing conditions under which one approach might be preferred over another, many of the conditions are likely to be difficult for researchers to verify in practice.

Paul Ho of the Richmond Fed presented "Averaging Impulse Responses Using Prediction Pools," co-authored with Thomas Lubik, also of the Richmond Fed, and Christian Matthes of Indiana University. The paper introduces a method to average impulse responses from different estimators by extending the optimal prediction pools studied in the 2007 paper "Combining Density Forecasts" and the 2011 paper "Optimal Prediction Pools." This new method is designed to appeal to empirical macroeconomists who may find it difficult to choose between different methods for estimating impulse responses.

The researchers find the optimal weights that maximize the weighted average log score function for forecasts conditional on the structural shock of interest. The only input required is a set of forecast densities that trace out the model-specific effects of the shocks of interest. The individual impulse responses can be based on any method that delivers such a conditional forecast density for a given variable at a given horizon.
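In the spirit of the optimal pools literature the paper builds on, the weights solve a low-dimensional convex problem: maximize the average log score of the mixture of forecast densities. The sketch below illustrates that optimization with fabricated density values; in the paper's application, the inputs would be forecast densities conditional on the structural shock of interest:

```python
import numpy as np
from scipy.optimize import minimize

def optimal_pool_weights(density_evals):
    """density_evals: (T, M) array whose (t, i) entry is model i's forecast
    density evaluated at the period-t realization. Returns simplex weights
    that maximize the pool's average log score."""
    T, M = density_evals.shape

    def neg_log_score(w):
        return -np.mean(np.log(density_evals @ w + 1e-300))

    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    result = minimize(neg_log_score, np.full(M, 1.0 / M),
                      bounds=[(0.0, 1.0)] * M, constraints=cons)
    return result.x

# Toy example with two hypothetical models (numbers are placeholders).
rng = np.random.default_rng(0)
densities = np.column_stack([rng.uniform(0.1, 1.0, 200),
                             rng.uniform(0.1, 1.0, 200)])
print(optimal_pool_weights(densities).round(3))
```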

The paper highlights several broad messages for estimating impulse responses using prediction pools:

  • The theoretical properties of individual models are not sufficient criteria for the choice of optimal weights in the prediction pool.
  • Misspecified models can dominate correctly specified (or more flexible) models in finite samples.
  • Models that produce tighter estimates need not receive greater weight.
  • The choice of models and their weights depends on the entire predictive distribution and not only the point estimates.

Monetary Policy Surprises and High-Frequency Identification

Over the past two decades, high-frequency interest rate changes around Federal Open Market Committee (FOMC) announcements have become a standard tool for identifying the effects of monetary policy. These rate changes — which are commonly interpreted as monetary policy surprises — provide researchers with important clues to help identify the effects of monetary policy on asset prices and the macroeconomy.

A few recent studies, however, have questioned whether these so-called surprises possess the desirable properties (particularly exogeneity) that the literature has typically assumed. Indeed, researchers have found that these surprises may not actually be surprises after all, since they are correlated with macroeconomic and financial data that is publicly available prior to FOMC announcements.

Eric Swanson of the University of California, Irvine presented research that addresses these concerns. His paper, "A Reassessment of Monetary Policy Surprises and High-Frequency Identification," was co-authored by Michael Bauer of Universität Hamburg.

The researchers take two approaches to addressing the problem. First, they substantially expand the breadth of data available for analysis by including monetary policy surprises around an expanded set of events. In addition to including interest rate changes around FOMC announcements, they include changes around press conferences, speeches and testimony by the Fed chair.

Second, for this expanded set of events, they address the exogeneity issue by attempting to remove the predictable components of interest rate changes that occur around the events. They do this by regressing interest rate changes against a variety of economic and financial indicators that were known by the market prior to the event. The researchers believe that the residuals from these regressions — which they refer to as orthogonalized monetary policy surprises — will provide better estimates of monetary policy's true effects.
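The orthogonalization step amounts to an ordinary least squares projection: regress the raw surprises on the pre-announcement indicators and keep the residuals. A schematic version follows, with hypothetical variable names and simulated data standing in for the paper's dataset:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical inputs: mps holds high-frequency rate changes around policy
# events; X holds indicators observable before each event (simulated here).
rng = np.random.default_rng(1)
T, k = 300, 4
X = rng.standard_normal((T, k))
mps = X @ np.array([0.10, -0.05, 0.02, 0.0]) + 0.2 * rng.standard_normal(T)

# Remove the predictable component; the residual is the
# "orthogonalized" monetary policy surprise.
fit = sm.OLS(mps, sm.add_constant(X)).fit()
mps_orth = fit.resid
print(fit.params.round(3))
```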

Reassessing previous high-frequency event studies, the researchers find that the use of their new dataset does not greatly change estimates of the effects of monetary policy on financial markets. However, the new data appear to substantially change estimates of the effects of monetary policy on the macroeconomy.

Event-Day Options

Economists have long used options prices to draw inferences about market participants' ex-ante views about market volatility. The widely used CBOE Volatility Index (VIX), for example, tracks 30-day volatility implied by S&P 500 index options. But the VIX's relatively lengthy horizon, like that of many options contracts, impairs its usefulness for measuring the uncertainty associated with specific events.

However, the Chicago Mercantile Exchange (CME) at least partially alleviated this problem in 2011 when it introduced weekly options expiring each Friday on S&P 500 and Treasury futures. (The CME followed up in 2017 by adding weekly options expiring each Wednesday.) Jonathan Wright of Johns Hopkins University presented his paper, "Event-Day Options," which takes advantage of these one-week options and the more granular measures of uncertainty that they provide.

He uses weekly options to measure uncertainty as of the day before major events, particularly employment reports and FOMC meetings. For an option scheduled to expire on an employment-report Friday, he uses the implied volatility as of the previous day's close as a measure of the ex-ante uncertainty specifically associated with that employment report. Likewise, for options scheduled to expire on an FOMC-announcement Wednesday, he uses the implied volatility as of the previous day's close as a measure of the ex-ante uncertainty specifically associated with that FOMC meeting.

Wright recognizes that jobs data are not the only news hitting the market on employment-report Fridays (and FOMC-announcement Wednesdays). He attempts to control for this by observing the behavior of implied options volatilities prior to non-employment-report Fridays (and non-FOMC-announcement Wednesdays). He uses these observations as benchmarks for comparison.
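One crude way to operationalize this benchmark comparison (a rough illustration, not Wright's actual estimator) is to compare the implied daily variance ahead of event Fridays with the corresponding figure for ordinary Fridays:

```python
import numpy as np

# Hypothetical prior-day implied volatilities (annualized) for weekly
# options expiring on Fridays; all numbers here are illustrative.
rng = np.random.default_rng(2)
iv_event = 0.16 + 0.01 * rng.standard_normal(40)   # employment-report Fridays
iv_plain = 0.13 + 0.01 * rng.standard_normal(120)  # other Fridays

def daily_var(iv, trading_days=252):
    return iv ** 2 / trading_days  # implied variance per trading day

# Excess implied daily variance attributable to the report.
excess = daily_var(iv_event).mean() - daily_var(iv_plain).mean()
print(f"extra implied daily variance around the report: {excess:.6f}")
```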

His analysis provides evidence that employment reports and FOMC meetings lead to elevated implied options volatilities and risk premia. In his view, the large volatility risk premium in Treasuries around FOMC announcements suggests that monetary policy uncertainty is a significant driver of the overall Treasury volatility risk premium.

The S-Curve and the Dynamics of Worldwide Financial Liberalization

One of the most important developments over the past four decades has been the growing willingness of governments to open their countries' financial sectors to market forces. Few policy choices are as fundamental and controversial as the choice of whether to engage in or resist financial liberalization.

Tao Zha of Emory University presented research on the worldwide evolution of financial reform. His paper, "The S-Curve: Understanding the Dynamics of Worldwide Financial Liberalization," was co-authored with Nan Li and Chris Papageorgiou, both of the International Monetary Fund, and Tong Xu of the Southwestern University of Finance and Economics.

The researchers used a novel and comprehensive database of domestic financial reforms that covers 90 countries over the period 1973-2014. Using this data, they document that global financial liberalization followed an S-curve path: Reforms were slow and gradual in early parts of the period, accelerated substantially during the 1990s and slowed after 2000.

They estimate a learning model that explains these dynamics. In the model, policymakers update their beliefs about the growth effects of financial reform by learning from their own and other countries' experiences. One of the researchers' notable findings is that emerging and low-income countries learned from the liberalization experiences of advanced economies. Also, the learning process had more to do with information diffusion than with geographic proximity. The authors estimate that if emerging economies had learned only from experiences among themselves — and not from those of advanced economies — they would have adopted fewer financial reforms, and their resulting GDP per capita would have been 8 percent lower on average as of 2014.

According to their estimates, policymakers in most countries held pessimistic views about the growth effects of financial liberalization during the early part of the period. The situation changed dramatically by the 1990s after policymakers in emerging and low-income countries had observed positive post-liberalization outcomes in some advanced economies. Conversely, the authors find that the 2008 global financial crisis caused a negative reversal of belief about the effects of financial reforms on growth, especially in emerging and low-income economies.
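At its core, this belief dynamic is Bayesian updating. The following stylized normal-normal sketch conveys the mechanism; it is a drastic simplification of the structural learning model the paper actually estimates:

```python
# Stylized conjugate learning about the growth effect of financial reform.
mu, tau2 = -0.5, 1.0      # prior mean and variance: initial pessimism
sigma2 = 0.5              # noise in each observed post-reform outcome
observed_effects = [0.4, 0.6, 0.3, 0.5]  # hypothetical foreign experiences

for g in observed_effects:
    # Precision-weighted average of the prior and the new observation.
    post_prec = 1.0 / tau2 + 1.0 / sigma2
    mu = (mu / tau2 + g / sigma2) / post_prec
    tau2 = 1.0 / post_prec
    print(f"posterior mean {mu:+.3f}, posterior variance {tau2:.3f}")
```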

Zoomers and Boomers: Asset Prices and Intergenerational Inequality

Large and persistent income inequality across generational cohorts has been observed in Western economies over the past century. For example, the median U.S. male who turned 25 in the late 1960s had substantially higher career earnings (adjusted for inflation) than the median U.S. male who turned 25 during the early 1980s.

Leland Farmer of the University of Virginia presented research that explores the causes and implications of this phenomenon. His paper, "Zoomers and Boomers: Asset Prices and Intergenerational Inequality," was co-authored with Roger Farmer of UCLA.

The researchers construct a dynamic stochastic general equilibrium (DSGE) model in which people have finite lives and a constant probability of death. Models with this property are referred to as perpetual youth models. Agents in the model trade with each other in dynamically complete asset markets but are unable to buy or sell securities contingent on the state of the world they are born into.
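The constant death probability is what makes the demographics tractable: cohort shares decline geometrically with age, and expected lifetime is the reciprocal of the death probability. A quick numerical check, with an illustrative parameter value not taken from the paper:

```python
import numpy as np

p = 0.02                              # hypothetical per-period death probability
ages = np.arange(400)
cohort_share = p * (1.0 - p) ** ages  # stationary age distribution
print(cohort_share.sum().round(4))    # shares sum to (nearly) 1
print(1 / p)                          # expected lifetime: 50 periods
```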

Focusing on long-lived agents with Epstein-Zin preferences, the researchers present the first discrete-time solution to the problem as a function of the moments of the endowment profile and of current and future prices. Previous macro models that use Epstein-Zin preferences had exploited the representative agent assumption to simplify the solution.
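For reference, a textbook form of the Epstein-Zin recursion separates risk aversion (gamma) from the intertemporal elasticity of substitution (psi). The sketch below uses placeholder parameter values and may differ from the paper's exact specification:

```python
def epstein_zin_value(c, ev_next, beta=0.99, psi=1.5, gamma=10.0):
    """One step of the Epstein-Zin recursion:
    V = [(1-b) c^(1-1/psi) + b (E[V'^(1-gamma)])^((1-1/psi)/(1-gamma))]^(1/(1-1/psi)),
    where ev_next stands for E[V'^(1-gamma)] under next period's distribution."""
    rho = 1.0 - 1.0 / psi                    # one minus the inverse IES
    cont = ev_next ** (rho / (1.0 - gamma))  # certainty-equivalent term
    return ((1.0 - beta) * c ** rho + beta * cont) ** (1.0 / rho)

print(epstein_zin_value(c=1.0, ev_next=1.0))  # 1.0 along a constant unit path
```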

They prove the existence of multiple autarkic steady-state equilibria in the perpetual youth model when agents have a hump-shaped endowment profile and when the intertemporal elasticity of substitution is less than a critical value. They establish that one of the autarkic steady states is dynamically inefficient and demonstrate that this fact permits the construction of equilibria that are driven principally by non-fundamental shocks to beliefs. By exploiting sunspots and indeterminacy, they explain three asset market puzzles:

  • The low risk-free rate of interest
  • Excess volatility of asset prices
  • A large equity premium

Their model captures the large and persistent movements in generational inequality that have been seen in the U.S. It provides a plausible explanation for the fact that members of Generation Z are the first Americans in recent history to be worse off than their parents, with little or no prospect of accumulating significant financial assets, whereas many Baby Boomers were already homeowners in their 30s.


John Mullin is a senior economics writer in the Research Department at the Federal Reserve Bank of Richmond.


To cite this Economic Brief, please use the following format: Mullin, John. (October 2022) "Applied Macro-Econometrics: A Conference Recap." Federal Reserve Bank of Richmond Economic Brief, No. 22-43.


This article may be photocopied or reprinted in its entirety. Please credit the author, source, and the Federal Reserve Bank of Richmond and include the italicized statement below.

Views expressed in this article are those of the author and not necessarily those of the Federal Reserve Bank of Richmond or the Federal Reserve System.
