The New Neoclassical Synthesis and the Role of Monetary Policy

Honoring Marvin Goodfriend

Goodfriend, Marvin, and Robert G. King. 1997. "The New Neoclassical Synthesis and the Role of Monetary Policy." In NBER Macroeconomics Annual 1997, edited by Ben S. Bernanke and Julio J. Rotemberg, 231-283. Cambridge and London: National Bureau of Economic Research.

Marvin Goodfriend is probably best known for his contributions to practical policy analysis during his long and distinguished career as an economist and economic advisor within the Federal Reserve System. But his influence was also great outside the Fed, and indeed outside the community of central bankers. Marvin made fundamental contributions to the modern theory of monetary policy, which have greatly influenced the scholarly literature as well. He was unusual in his ability to bridge the worlds of practical policy debate and scholarly analysis, providing academics like myself insight into the issues that needed to be addressed in order for the academic literature to be of greater relevance for policy discussions, while also playing a crucial role in translating the conclusions from economic models for policymakers. My own work was deeply influenced both by my study of Marvin's writing and by the many conversations that I was privileged to have with him about our shared concerns.

His paper with Bob King, "The New Neoclassical Synthesis and the Role of Monetary Policy," is a landmark in the development of the modern, welfare-based theory of monetary policy. It was one of two papers1 published in the NBER Macroeconomics Annual for 1997 that advocated a new approach to monetary policy analysis, using DSGE models with a basic architecture taken from real business cycle (RBC) theory,2 but introducing sticky prices in order to allow for real effects of monetary policy.

The two papers, while written independently, were largely complementary in the approaches that they proposed. However, the emphases of the two papers were different. The primary aim of Rotemberg and Woodford (1997) was to demonstrate the possibility of the kind of econometric policy evaluation of specific quantitative policy rules promoted (most notably) by John Taylor (1979, 1993a), while deriving both model equations and policy objectives from explicit microeconomic foundations as in RBC models. Goodfriend and King instead focused primarily on the conceptual foundations of the new type of model; on the general principles that should inform a welfare-based approach to monetary policy analysis; and on the desirability of a particular kind of target for monetary policy, without reference to the kind of interest rate reaction function that might be involved in implementing it.3

These new papers can usefully be considered in the context of Julio Rotemberg's review of the emerging "New Keynesian Microfoundations" a decade earlier.4 That paper had highlighted a shift from an emphasis on nominal wage rigidity (in the models of authors such as Ned Phelps, Stan Fischer, and John Taylor in the late 1970s) to models of sticky prices, an emphasis continued in the first wave of monetary DSGE models. Rotemberg also emphasized the emergence of models in which price adjustment results from the explicitly modeled optimizing decisions of firms, rather than being specified by a posited dynamic response of "the market" to imbalances between supply and demand or assuming that prices are predetermined by some shadowy "auctioneer" at a level that is "expected to clear the market" at the time that the prices are set. In this connection he argued for the value of modeling individual suppliers as monopolistic competitors. Consideration of the price-setting decisions of suppliers naturally led to an emphasis on the relationship between individual prices and firms' (actual or anticipated) marginal costs rather than on a gap between supply and demand; it also made it natural to consider the role of firms' expectations regarding future market conditions as a central determinant of pricing dynamics. The paper briefly reviewed popular dynamic models of staggered wage or price adjustment based on nominal commitments for a fixed period of time, as in the influential models of Taylor (1980) and Blanchard (1983) and early models of state-dependent pricing. 
For purposes of econometric modeling of aggregate time series, however, Rotemberg advocated two approaches that both allowed flexible variation in the degree of stickiness of prices while preserving tractability of the analysis of dynamics: a model with quadratic costs of price adjustment5 and one in which individual prices remain fixed for random intervals, with a constant hazard for reconsideration at any point in time.6 The 1997 NBER Macroeconomics Annual papers represent a further stage of development of the program sketched by Rotemberg.

A New Neoclassical Synthesis

Goodfriend and King call the approach they advocate a "New Neoclassical Synthesis."7 The terminology recalls Paul Samuelson's proposal of a "neoclassical synthesis" in the mid-20th century, intended as a way to reconcile the use of Keynesian models for practical policy analysis with the Walrasian model of competitive equilibrium, the canonical model of a market economy among economic theorists. Samuelson proposed that the Walrasian model correctly described the long-run equilibrium of a market economy, once prices and wages have all adjusted in response to market forces, while the Keynesian model (or more specifically, its IS-LM formulation by John Hicks) described equilibrium in a short run over which wages and/or prices were predetermined. This formulation allowed economists to regard each model as valid in its own (carefully delimited) sphere of application, but it didn't really integrate the two approaches; there was no accepted account of the dynamics of wage and price adjustment that should lead from one situation to the other, and this left considerable ambiguity about exactly when (if ever) either of the two limiting cases should be empirically relevant. The lack of a model of wage and price adjustment meant that the framework had little to say about the causes or consequences of inflation, a weakness that became glaring by the 1970s; and the lack of any explicit modeling of dynamics made it hard to say much about the determinants and effects of expectations, an increasing focus of attention by the 1970s as well. As Goodfriend and King discuss, these weaknesses made the original neoclassical synthesis particularly unsuitable as a guide to the conduct of monetary policy.

RBC theory offered a different answer to the question of how to integrate a model of short-run fluctuations in business activity with a model of long-run growth by developing a Walrasian model of a complete intertemporal equilibrium (rather than using Walrasian competitive equilibrium only as a model of an essentially static "long run"), with fluctuations in response to exogenous random disturbances to productivity. The Kydland-Prescott (1982) model offered a complete description of the dynamics of the economy's response to a shock, with no artificial separation of "short-run" from "long-run" analysis, and at the same time provided complete choice-theoretic foundations for all of the model's equations, so that there was a clear answer (at least in theory) to the question of which equations should be considered "structural" in the face of a change in government policies. However, RBC models of this kind provided no guidance for monetary policy. Indeed, Kydland and Prescott argued against any role for monetary policy as a determinant of economic activity, even in the short run, so that a quarterly model of business fluctuations could safely ignore nominal variables altogether. But the econometric estimation of the real effects of monetary policy became an increasing focus of study in the late 1980s and throughout the 1990s, and most of this literature (reviewed in Christiano et al., 1999) found real effects of identified monetary policy shocks that were nontrivial both in size and persistence. The new generation of models developed in the mid-1990s sought to make DSGE models consistent with these facts.

Goodfriend and King argue that this new kind of model represents an updated (and more articulated) version of Samuelson's neoclassical synthesis. A Walrasian model of market equilibrium (essentially, an RBC model) is still at the heart of the synthesis model and represents a limiting case of it (one in which the parameter that determines the delay in price adjustment is set to zero). Moreover, even more than in the original neoclassical synthesis, all model structural relations are derived from explicit analysis of the optimization problems of households and firms (including an analysis of optimal price setting, on those occasions when monopolistically competitive suppliers reconsider their prices), just as in Walrasian general equilibrium models (and RBC models). Yet the fact that prices are not continually reoptimized means that the short-run effects of shocks reflect the consequences of optimizing behavior when some prices or wages are predetermined. This means, as in the original neoclassical synthesis, that aggregate demand — and, crucially for Goodfriend and King, monetary policy — becomes an important determinant of economic activity in the short run, even though the economy's long-run growth path is determined by factors such as productivity growth, growth of the labor force, and incentives for capital accumulation, which are essentially independent of monetary policy. The model also retains important features of an RBC model in that "supply-side" factors (such as random variations in productivity growth) continue to play an important role in the economy's short-run dynamics.

Goodfriend and King introduce nominal rigidities into a DSGE model using a variant of the model of staggered price adjustment originally proposed by Calvo (1983) and adapted to a discrete-time DSGE framework by Yun (1994, 1996). In the Calvo-Yun model, firms are monopolistically competitive suppliers of differentiated goods and set the prices of their own product so as to maximize the value to the owners of the firm of the flow of profits generated by its pricing policy. Thus, the model is one in which prices are determined on the basis of an optimizing decision, as advocated by Rotemberg (1987), rather than being arbitrarily specified or adjusting "in response to market pressures" through some arbitrarily specified process that is unclear about who actually arranges for prices to change. This makes clear the role of factors such as firms' degree of market power (as well as their information when decisions about prices are made) in price determination. The real effects of monetary disturbances result from an assumption that prices are not continuously reconsidered, and, as in the more ad hoc models of Taylor (1980) and Blanchard (1983), the persistence of these real effects is amplified by staggering of the times at which different firms reconsider their prices. Yun (1994, chap. 1) further showed that the empirical realism of the adjustment dynamics implied by such a specification (when combined with an RBC core model) was improved by assuming random intervals between price adjustments rather than fixed-length price commitments as in the models of Taylor or Blanchard. 
This made the Calvo-Yun specification convenient for use in parsimoniously parameterized monetary DSGE models that were intended to be compared with aggregate time series, such as King and Watson (1996), Yun (1996), and Rotemberg and Woodford (1997).8 The Calvo-Yun model also had the advantage of allowing a flexible specification of the time required for prices to adjust, while requiring only a small number of state variables, so that analytical solutions remained possible in the case of sufficiently simple policy rules (as illustrated, for example, in Woodford, 1996 and 1999).
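The tractability comes from the constant hazard itself: the length of a price spell is geometrically distributed, so a single parameter pins down the entire duration distribution and the expected duration. A minimal sketch (the survival probability chosen here is a common illustrative calibration, not a value from the paper):

```python
import numpy as np

theta = 0.75  # per-period probability that a price is NOT reconsidered (illustrative)

# With a constant hazard (1 - theta) of reconsideration each period, the
# duration of a price spell is geometric: P(duration = k) = (1 - theta) * theta**(k-1)
k = np.arange(1, 200)
pmf = (1 - theta) * theta ** (k - 1)

# The expected duration converges to 1 / (1 - theta): four periods when theta = 0.75
expected_duration = np.sum(k * pmf)
print(expected_duration)
```

One state variable (the aggregate price level) then suffices to track the whole cross-section of prices, which is why analytical solutions remain feasible under simple policy rules.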

Goodfriend and King discuss how the Calvo-Yun framework can be further generalized, to endogenize the timing of firms' price adjustments rather than treating these as exogenously specified. The approach they suggest (citing an early version of Dotsey et al. [1999], in which the analysis is more fully developed) incorporates elements of "state-dependent" pricing models while retaining much of the tractability of the Calvo-Yun framework. In addition to providing more complete microfoundations for the specification of price adjustment dynamics, this richer framework also nests models such as those of Taylor and Blanchard as special cases, thus allowing a more unified treatment of the literature on this topic.

This kind of microfounded model of price adjustment had consequences beyond those relating to the tractability of calculations, the interpretability of macroeconomic structural relations in terms of measurable microeconomic variables, and the possibility of parameterizing the model to allow for substantial persistence. One that was to prove important for subsequent policy discussions followed from the fact that firms are assumed to set prices in a forward-looking way, recognizing that they are unlikely to reconsider their prices again immediately, though it may already be predictable that market conditions are changing. This makes expectations, and more specifically expectations about other firms' likely price increases over the near term, a crucial factor in price setting, as Goodfriend and King emphasize. To the extent that one accepts the realism of the assumptions of this kind of model, it provides a powerful case for the potential value for stabilization policy of credible, public, and easily interpretable advance commitments about future policy, such as official inflation targets;9 it also suggests that more ad hoc announcements about future policy, as in the case of "forward guidance" in response to a crisis, can be effective.10

Another general implication of NNS models, highlighted by Goodfriend and King, is that they imply that an increase in relative price dispersion has adverse effects similar to a negative productivity shock and that instability of the general price level should increase such dispersion. (To establish this result, they leverage the explicit demand aggregation provided by Dixit-Stiglitz aggregators and the specific production aggregation result developed by Yun [1994, 1996].11) This provides a rigorously microfounded basis for concern about the stability of the general price level. While this is probably not the only reason that a variable price level complicates economic decision-making and hence creates distortions, the Goodfriend-King model provides strong support for the importance of price stability, even without taking into account these other potential reasons.

Inflation and welfare

The greatest strength of a model of business fluctuations, and of the short-run effects of monetary policy, with explicit microeconomic foundations is that it becomes possible to evaluate alternative approaches to the conduct of monetary policy not simply in terms of positive predictions (i.e., the extent to which various variables turn out to be more or less stable under a given policy), but in terms of economic welfare (i.e., the extent to which people more successfully achieve their private objectives, the ones revealed by their behavioral choices). Thus the theory of monetary policy can be treated as a branch of welfare economics, using methods similar to, and fully consistent with, the ones that had already been used for decades in theoretical public finance (including the dynamic extensions of the theory that figured extensively in the more recent literature).

It is in their discussion of the implications of the New Neoclassical Synthesis (NNS) framework for a normative theory of monetary policy that Goodfriend and King break the greatest amount of new ground. Sections 7 and 8 of the paper take up a broad range of central issues in the theory of monetary policy and provide novel insights about most of them. Here I will mention only a few of the most striking of these insights.

Many economic theorists have noted that, in principle, the money prices charged for real goods and services should be of no significance for decisions about quantities (only relative prices should matter), and they have asked why, if that is so, the inflation rate (the rate of change of prices in general) should be a matter of concern at all. Goodfriend and King point out that the NNS model provides an answer by showing how the inflation rate is inevitably connected with changes in relative prices that distort the allocation of resources (even on the assumption that households and firms are all perfectly rational and thus not subject to "money illusion"). First of all, as already mentioned, their model of staggered price setting implies that under any path of the general level of prices other than perfect price stability, the fact that different firms revise their prices at different times will result in relative price differences (that do not reflect any differences in production costs or utility from consumption of the different goods) and hence in deadweight losses of the same kind as those resulting from distorting taxes.

Second, and more subtly, they point out that their model of optimal price setting implies a structural relationship between price changes and the gap between a good's current supply price and the firm's current marginal cost of supplying the good. Hence there is a tight connection between variations in the overall inflation rate and variations in the average markup of prices over marginal cost at each point in time. The markup also has effects on the equilibrium allocation of resources that are closely analogous to the effects of a tax distortion, as standard public finance analyses of the deadweight losses associated with monopoly power have long emphasized.

These insights provide the basis for an analysis of what monetary policy should seek to achieve, one based on consideration of the consequences of monetary policy for the deadweight losses associated with relative price distortions rather than taking as primitive policymakers' concerns for macroeconomic objectives such as control of inflation or reduction of unemployment. Goodfriend and King draw two important conclusions. The first is that monetary policy should be used to ensure an average inflation rate near zero. This is based on a consideration of the effects of steady inflation or deflation on the average markup on the one hand and on the degree of dispersion of relative prices on the other.

With regard to relative price dispersion, they show that it has an effect like a downward shift of aggregate productivity owing to the fact that the "composite good" that matters for consumers' utility from consumption is produced using a less efficient mix of individual goods (owing to their differing prices). This plainly reduces welfare (for any assumed path of aggregate output, measured in terms of the "composite good"), and it is easy to show that in their model of staggered price setting, relative price dispersion (and hence the productivity reduction) is minimized when the inflation rate is always zero. (In this case, all firms can maintain identical prices even though they reconsider the optimality of their prices at different points in time.) Hence from the standpoint of this consideration, taken in isolation, an inflation rate of exactly zero is clearly optimal.12

But it is also necessary to consider the consequences of different constant (average) inflation rates for the average markup of prices over marginal costs of supply. Here Goodfriend and King show that inflation has two offsetting effects. On the one hand, for given expectations regarding future inflation, a higher inflation rate (a greater rate of increase of prices on average between period t-1 and period t) implies a lower average markup in period t, because the firms that do not reconsider their prices will fall further below the prices that they would wish to set at that time (which is to say, their prices fall relative to their marginal costs of supply to a greater extent), while those that do increase their prices are simply keeping up with the faster growth of nominal marginal costs (that must grow faster in order to bring about a higher inflation rate). But on the other hand, for a given current rate of inflation, a higher expected future rate of inflation (between periods t and t + 1) will be associated with a higher average markup in period t, because firms reconsidering their prices (and realizing that most likely they will not reconsider them again as soon as period t + 1) will raise them by more than the amount by which nominal marginal costs have already risen to take account of the higher costs (and higher competitors' prices) that they expect in period t + 1. These two forces roughly balance one another, so that changes in the average rate of inflation around zero don't much change the average markup (assuming that the rate at which firms discount future profits is relatively low).
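In the now-standard log-linear notation (my gloss on the argument, not the paper's own notation), both effects appear in a single equation relating inflation to the log deviation of the average markup:

```latex
% New Keynesian Phillips curve, with \widehat{mc}_t the log deviation of average
% real marginal cost and \theta the Calvo non-adjustment probability:
\pi_t \;=\; \beta\,\mathbb{E}_t\pi_{t+1} \;+\; \lambda\,\widehat{mc}_t,
\qquad \lambda \;=\; \frac{(1-\theta)(1-\beta\theta)}{\theta}.
% The average markup is the inverse of real marginal cost, so
% \widehat{\mu}_t = -\widehat{mc}_t, and therefore
\widehat{\mu}_t \;=\; \frac{\beta\,\mathbb{E}_t\pi_{t+1} - \pi_t}{\lambda}.
% Given expectations, higher current inflation \pi_t lowers the markup; given
% current inflation, higher expected future inflation raises it. At a constant
% inflation rate \bar{\pi}, \widehat{\mu} = (\beta - 1)\bar{\pi}/\lambda \approx 0
% when the discount factor \beta is close to 1.
```

Reading the second equation term by term reproduces Goodfriend and King's verbal argument: the two forces offset, so modest changes in the average inflation rate around zero leave the average markup roughly unchanged when firms discount future profits at a low rate.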

This much they are able to establish analytically using log-linearized structural equations relating the average markup to the path of inflation, which hold for an inflation rate not too far from zero.13 Goodfriend and King go further and numerically solve for the deterministic steady state of their model for different assumed constant inflation rates, using the exact nonlinear model equations, and show in their calibrated model that while the steady-state markup is relatively constant for a small range of inflation rates, it becomes significantly higher in the case of inflation rates that are either much below zero or much above zero, owing to nonlinearities.14 Hence consideration of relative price distortions and of average markups leads to roughly similar conclusions: distortions should be larger if the average rate of inflation is very far from zero in either direction. Goodfriend and King accordingly argue that policy should strive to keep the average rate of inflation near zero.
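A comparative steady-state calculation of this kind can be sketched for the Calvo-Yun case using the exact nonlinear steady-state relations; the calibration below is illustrative rather than the paper's:

```python
import numpy as np

# Illustrative calibration (not the paper's): Calvo survival probability,
# discount factor, and elasticity of substitution across goods
theta, beta, eps = 0.75, 0.99, 6.0

def steady_state(Pi):
    """Exact Calvo-Yun steady state at constant gross inflation Pi
    (requires theta * Pi**eps < 1 for the dispersion sum to converge)."""
    # Relative reset price implied by the price-index identity
    p_star = ((1 - theta * Pi ** (eps - 1)) / (1 - theta)) ** (1 / (1 - eps))
    # Average markup from the reset-price first-order condition
    markup = ((eps / (eps - 1))
              * (1 - beta * theta * Pi ** (eps - 1))
              / ((1 - beta * theta * Pi ** eps) * p_star))
    # Steady-state relative price dispersion (Yun's aggregation term)
    Delta = (1 - theta) * p_star ** (-eps) / (1 - theta * Pi ** eps)
    return markup, Delta

for Pi in (0.96, 0.98, 1.00, 1.02, 1.04):
    m, D = steady_state(Pi)
    print(f"Pi = {Pi:.2f}: markup = {m:.4f}, dispersion = {D:.4f}")
```

At zero inflation (Pi = 1) the markup equals the flexible-price value eps/(eps-1) and dispersion equals one; both rise for trend inflation sufficiently far from zero in either direction, mirroring the nonlinearity that Goodfriend and King find numerically.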

Their discussion of the issue is based on a comparison of alternative possible stationary equilibria with constant inflation, but as subsequent literature was to show, the conclusion is also true if one asks what inflation rate one should commit to maintain in the long run, even when transition dynamics are taken into account. Indeed, under such an optimal policy commitment, one can show that in a model like the one proposed by Goodfriend and King, the inflation rate should converge in the long run to exactly zero rather than to a slightly higher value as suggested by the comparative steady-states analysis given in their paper. This was first shown in a related NNS model (with price commitments that last for exactly two periods) by King and Wolman (1999) and in a model with Calvo-Yun staggered price adjustment by Woodford (2003, chap. 7).15

But perhaps more notably, the broad conclusion of Goodfriend and King — that the optimal inflation target cannot be too far from zero — has proven to be remarkably robust to the addition of a variety of further complications to their basic monetary DSGE model, as reviewed by Schmitt-Grohé and Uribe (2011).16 Nowadays, the general consensus is that an inflation target of a couple of percentage points above zero is preferable to a target of zero. But the modern literature, even when providing arguments for the preferability of a moderately positive inflation rate, continues to use the basic method pioneered by Goodfriend and King: analyzing the implications of different average inflation rates for the microeconomic distortions associated with different degrees of misalignment of relative prices and prices relative to costs by considering how trend inflation interacts with optimal price setting by individual firms.17

Stabilization policy and welfare

The arguments just reviewed concern the average rate of inflation but do not yet consider the extent to which it may be desirable to allow inflation to vary around its average (or trend) rate in response to the shocks that give rise to short-run fluctuations in business activity. The second important conclusion of Goodfriend and King addresses this issue. They argue for a conception of "neutral monetary policy" under which monetary policy is used to keep the average markup constant at all times. Under at least some circumstances (which they describe as their "benchmark" case), this corresponds to maintaining a constant price level despite the occurrence of real shocks of various types. Thus their prescription calls not only for an average inflation rate near zero, but also for complete stabilization of the inflation rate.

This conclusion again follows from a consideration of how monetary policy affects the economy through its implications for the path of the average markup. Goodfriend and King argue that monetary policy cannot have much of an effect on the long-run average markup18 (that is, its average over time, as opposed to the average across firms at a point in time), but that it can determine how the average markup (across firms) varies around this long-run value in response to different kinds of shocks. With regard to the latter issue, they argue that in their model, absolutely any time path for the average markup consistent with the long-run average level can be achieved by a suitably state-contingent monetary policy.

They then ask how one should want the average markup to vary with shocks and argue that since the average markup has effects on the allocation of resources similar to a distorting tax (such as a tax on labor income), the familiar result in theoretical public finance that it is desirable to smooth tax rates over time (and across states associated with different shocks) suggests that it should similarly be optimal to smooth the average markup over time and across states. The structural relationship between the path of inflation and the average markup can then be used to show that the average markup is constant, at the level that occurs in a flexible price equilibrium, if and only if the inflation rate is zero at all times. But an inflation rate of zero at all times means that firms' prices do not get out of alignment simply because some firms reconsider their prices and others do not; hence optimal policy creates a situation with no relative price distortions and no differences in the markups of different firms. It is thus not only the average markup that must equal the flexible price markup, but each and every firm's markup at each point in time. Hence firms' prices are at all times exactly the ones that they would choose if they were able to continuously update their prices, and the equilibrium allocation of resources under optimal policy will be the same as in an equilibrium with perfectly flexible prices.

Thus the predictions of RBC theory remain relevant in the view of Goodfriend and King. These are not simply the way that output, hours worked, consumption, and so on would vary in response to real shocks if prices were (counterfactually) fully flexible;19 they are also the way that these variables should evolve, given the way that the economy actually does work, in the case of a "neutral monetary policy" — which Goodfriend and King suggest should be the welfare-maximizing monetary policy.

The proposed argument from an analogy with the theory of tax smoothing is an important one but somewhat incomplete as presented. The fact that inflation variations must correspond to variations in the average markup, and that the average markup has consequences similar to a tax on production or on variable inputs, makes it relevant to ask about the welfare consequences of variability of such a "tax rate." But this distortion is not the only one created by variations in inflation; inflation variations also create relative price distortions, and so an analysis of the way in which it is optimal for inflation to vary in response to shocks has to consider the welfare consequences of these effects as well.20 Nonetheless, in Goodfriend and King's baseline case, consideration of the distortions created by variation in the average markup leads to a conclusion that policy should fully stabilize the price level, regardless of the shocks hitting the economy; this is also the policy that minimizes the distortions created by relative price dispersion. Hence even when one takes account of the relative price distortions as well, one can conclude (under certain circumstances) that complete price stability is optimal.21

The result that complete price stability is optimal, despite the occurrence of a variety of types of exogenous disturbances to "demand" and "supply" factors, is perhaps less counterintuitive once one realizes that this policy results in an equilibrium allocation of resources that is identical to the one in a flexible price economy that is subject to the same exogenous shocks.22 At least in the case of a perfectly competitive RBC model, the equilibrium allocation maximizes the expected utility of the representative household, even in the presence of many types of exogenous disturbances. The first welfare theorem does not hold, however, in the case of a flexible price model with monopolistic competition. And the result that perfect price stability is the optimal monetary policy is also no longer quite correct once one adds staggered pricing to a model with monopolistic competition.23 Nonetheless, if the degree of market power is not too great, the welfare-optimal responses of quantities to real disturbances are still fairly close to the flexible price equilibrium responses, and a policy of maintaining price stability is not too bad an approximation of the optimal policy.

Goodfriend and King go on to discuss an important case in which complete stabilization of inflation is not optimal — though the exception further demonstrates the fruitfulness of their general approach. This is the case of an oil price shock, which they model as an increase in production costs (a negative productivity shock) in the oil-producing sector. They further assume that the oil-producing sector has flexible prices, while prices are sticky (with Calvo-Yun staggered price setting) in the non-oil sector. Their analysis of this case proceeds by first positing that, in the case of this kind of sectoral productivity shock as well, the flexible price equilibrium (i.e., the RBC equilibrium) should represent a welfare optimum.24 They then ask whether monetary policy can achieve this outcome.

If the oil sector has perfectly flexible prices, the answer is that it can, by using monetary policy to ensure a completely stable index of prices in the non-oil sector (i.e., the sticky price sector). In this case, all firms in the non-oil sector set the same prices as they would in the flexible price economy, as do all of the oil-producing firms; hence the equilibrium is the same as in the flexible price economy. Of course, stabilizing the price index of the sticky price sector is not equivalent to using policy to stabilize a broader price index that includes the oil price; the broad price index must be allowed to rise. Thus, one can think of the policy as one in which "headline inflation" is allowed to rise in response to a "cost-push shock" in order to avoid having to contract activity more in the sticky price sector. Alternatively, one can describe it as a strict inflation-targeting regime in which, however, the inflation target is defined in terms of a measure of "core inflation" rather than the headline rate of inflation.25 Interestingly, not only is the counterfactual flexible-price allocation still useful as a normative benchmark in this case, but the optimal policy can still be described as "neutral monetary policy" in the sense that Goodfriend and King propose — that is, the monetary policy that maintains a constant level for the average markup (corresponding to the markup in a flexible price equilibrium). If we assume a similar degree of market power in both sectors (so that the flexible price markup is the same for both kinds of firms), then the fact that prices are flexible in the oil-producing sector means that markups there are always equal to the flexible price markup, regardless of monetary policy.
Achieving an average markup for the economy as a whole equal to the flexible price markup then requires that monetary policy ensure a constant average markup in the non-oil sector that is also equal to the flexible price markup; this is achieved by stabilizing the price index for the sticky price sector.
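The logic of this result can be sketched with a standard New Keynesian Phillips curve for the sticky-price (non-oil) sector. The notation below — a Calvo-based slope $\xi$ and a sectoral markup gap $\hat{\mu}_{s,t}$ — is mine rather than Goodfriend and King's, and the derivation is only a log-linear approximation.

```latex
% NKPC for the sticky-price sector s, in log deviations from steady state:
%   \pi_{s,t}       : sectoral inflation rate
%   \hat{\mu}_{s,t} : gap between the sector's average markup and the
%                     flexible-price markup
%   \xi > 0         : slope determined by the Calvo frequency of adjustment
\pi_{s,t} \;=\; \beta \, \mathbb{E}_t \pi_{s,t+1} \;-\; \xi \, \hat{\mu}_{s,t}

% A policy achieving \pi_{s,t} = 0 in every period therefore implies
% \hat{\mu}_{s,t} = 0: the sticky-price sector's markup equals the
% flexible-price markup. Since the flexible-price oil sector always has a
% zero markup gap, the economy-wide average markup is constant -- which is
% exactly Goodfriend and King's "neutral monetary policy."
```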

The analysis provided by Goodfriend and King depends on assuming that prices are perfectly flexible in the oil-producing sector. This is not a bad assumption in the case of the oil sector, but one might also be concerned about the "cost-push" effects of other kinds of asymmetric real disturbances that similarly affect the relative costs of supplying different goods, in cases where none of the goods have perfectly flexible prices. In this more general case, it will generally not be possible for any monetary policy to bring about the allocation of resources corresponding to a flexible price equilibrium; instead, one must consider the trade-off between mitigating or exacerbating distortions of several types, which cannot all be reduced to zero.26

Yet even then, "neutral monetary policy" as defined by Goodfriend and King can provide a reasonable approximation to welfare optimal policy. In calibrated numerical examples, Woodford (2003, chap. 6) finds that in a model with two sticky price sectors subject to asymmetric disturbances, a monetary policy that completely stabilizes a particular price index provides a close approximation to the second-best optimal policy; however, as in the discussion of oil shocks by Goodfriend and King, the price index that one should stabilize is not in general the one that weights prices in the two sectors in proportion to their share in the consumption basket of the representative household.27 Instead, the nearly optimal policy stabilizes a price index that puts greater weight on prices in the sector with stickier prices, but it does not put sole weight on prices in only one sector except in the extreme case of perfect price flexibility in one sector.28 Moreover, the principle of putting more weight on prices in the sector with stickier prices is exactly what would follow from using monetary policy to stabilize the economy-wide average markup, since in the sector with more flexible prices, a given range of variation in the sectoral inflation rate corresponds to smaller variations in markups in that sector.
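The nearly optimal index just described can be characterized more concretely. In the two-sector model referenced here, the welfare-based loss function takes (to a second-order approximation, and omitting a term in the sectoral relative-price gap that arises when sectors are asymmetric) a form like the following. The specific symbols — $n_k$ for sector size, $\xi_k$ for the sectoral Phillips curve slope — are my notation for the result established in Benigno (2004) and Woodford (2003, chap. 6).

```latex
% Approximate per-period welfare loss, two sticky-price sectors k = 1, 2:
%   x_t       : aggregate output gap
%   \pi_{k,t} : sector-k inflation
%   n_k       : sector-k expenditure share
%   \xi_k     : slope of sector k's Phillips curve (smaller when prices
%               are stickier)
L_t \;=\; \lambda_x x_t^2 \;+\; \sum_{k=1,2} \lambda_k \pi_{k,t}^2,
\qquad \lambda_k \;\propto\; \frac{n_k}{\xi_k}

% Because stickier prices imply a smaller slope \xi_k, the loss function
% (and hence the price index that policy should stabilize) weights the
% stickier sector more heavily than its expenditure share n_k alone would
% imply. In the limit in which sector-k prices become perfectly flexible,
% \xi_k -> infinity and \lambda_k -> 0, recovering the Goodfriend-King
% oil-sector case in which that sector's prices receive no weight.
```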

Obtaining a more precise characterization of optimal policy, and dealing with a larger number of complications (additional types of heterogeneity and additional market frictions), requires one to go beyond the relatively informal discussion of welfare objectives provided in this paper and develop a quantitative analysis in which the trade-offs between distortions of different types can be explicitly represented. However, the distortions identified by Goodfriend and King remain central to analyses of monetary stabilization policy, even when these make use of much more complex models. Even more importantly, the spirit of their analysis — insisting not only on explicit microeconomic foundations for the structural relations that define what policy can possibly achieve and explicit microeconomic interpretations of the "shocks" that shift those relationships among aggregate variables, but also on using microeconomic analysis of the distortions created by misaligned prices as the basis for welfare judgments with regard to macroeconomic outcomes — has continued to guide much subsequent work. The paper remains a classic contribution to the theory of monetary policy, and one from which much can be learned even today.


Adam, Klaus, and Henning Weber. 2019. "Optimal Trend Inflation." American Economic Review 109, no. 2 (February): 702-737.

Aoki, Kosuke. 2001. "Optimal Monetary Policy Responses to Relative Price Changes." Journal of Monetary Economics 48, no. 1 (August): 55-80.

Benigno, Pierpaolo. 2004. "Optimal Monetary Policy in a Currency Area." Journal of International Economics 63, no. 2 (July): 293-320.

Benigno, Pierpaolo, and Michael Woodford. 2005. "Inflation Stabilization and Welfare: The Case of a Distorted Steady State." Journal of the European Economic Association 3, no. 6 (December): 1185-1236.

Blanchard, Olivier J. 1983. "Price Asynchronization and Price-Level Inertia." In Inflation, Debt, and Indexation, edited by R. Dornbusch and M.H. Simonsen, 3-25. Cambridge: MIT Press.

Calvo, Guillermo A. 1983. "Staggered Prices in a Utility-Maximizing Framework." Journal of Monetary Economics 12, no. 3 (September): 383-398.

Christiano, Lawrence J., Martin S. Eichenbaum, and Charles L. Evans. 1999. "Monetary Policy Shocks: What Have We Learned and to What End?" In Handbook of Macroeconomics, Volume 1A, edited by J.B. Taylor and M. Woodford, 65-148. Amsterdam: North-Holland.

Christiano, Lawrence J., Martin S. Eichenbaum, and Charles L. Evans. 2005. "Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy." Journal of Political Economy 113, no. 1 (February): 1-45.

Dotsey, Michael, Robert G. King, and Alexander L. Wolman. 1999. "State-Dependent Pricing and the General-Equilibrium Dynamics of Money and Output." Quarterly Journal of Economics 114, no. 2 (May): 655-690.

Eggertsson, Gauti B., and Michael Woodford. 2003. "The Zero Bound on Interest Rates and Optimal Monetary Policy." Brookings Papers on Economic Activity 1: 139-211.

Goodfriend, Marvin, and Robert G. King. 2001. "The Case for Price Stability." National Bureau of Economic Research Working Paper 8423, August. Also published in Why Price Stability? Proceedings of the First ECB Central Banking Conference, edited by A. Garcia-Herrero, V. Gaspar, L. Hoogduin, J. Morgan, and B. Winkler. Frankfurt: European Central Bank.

Kimball, Miles S. 1995. "The Quantitative Analytics of the Basic Neomonetarist Model." Journal of Money, Credit and Banking 27, no. 4, part 2 (November): 1241-1277.

King, Robert G., and Mark W. Watson. 1996. "Money, Prices, Interest Rates and the Business Cycle." Review of Economics and Statistics 78, no. 1 (February): 35-53.

King, Robert G., and Alexander L. Wolman. 1996. "Inflation Targeting in a St. Louis Model of the 21st Century." Federal Reserve Bank of St. Louis Review 78, no. 3 (May/June): 83-107.

King, Robert G., and Alexander L. Wolman. 1999. "What Should the Monetary Authority Do When Prices Are Sticky?" In Monetary Policy Rules, edited by John B. Taylor, 349-404. Chicago: University of Chicago Press.

Kydland, Finn E., and Edward C. Prescott. 1982. "Time to Build and Aggregate Fluctuations." Econometrica 50, no. 6 (November): 1345-1370.

Long, John B., and Charles I. Plosser. 1983. "Real Business Cycles." Journal of Political Economy 91, no. 1 (February): 39-69.

Rotemberg, Julio J. 1982. "Monopolistic Price Adjustment and Aggregate Output." Review of Economic Studies 49, no. 4 (October): 517-531.

Rotemberg, Julio J. 1987. "The New Keynesian Microfoundations." In NBER Macroeconomics Annual 1987, Volume 2, edited by Stanley Fischer, 69-104. Cambridge: MIT Press.

Rotemberg, Julio J., and Michael Woodford. 1997. "An Optimization-Based Econometric Framework for the Evaluation of Monetary Policy." In NBER Macroeconomics Annual 1997, Volume 12, edited by Ben S. Bernanke and Julio J. Rotemberg, 297-361. Cambridge: MIT Press.

Schmitt-Grohé, Stephanie, and Martín Uribe. 2010. "The Optimal Rate of Inflation." In Handbook of Monetary Economics, edited by Benjamin M. Friedman and Michael Woodford, Chapter 13. Amsterdam: Elsevier.

Svensson, Lars E.O. 2003. "What Is Wrong with Taylor Rules? Using Judgment in Monetary Policy through Targeting Rules." Journal of Economic Literature 41, no. 2 (June): 426-477.

Svensson, Lars E.O. 2010. "Inflation Targeting." In Handbook of Monetary Economics, edited by Benjamin M. Friedman and Michael Woodford, Chapter 22. Amsterdam: Elsevier.

Taylor, John B. 1979. "Estimation and Control of a Macroeconomic Model with Rational Expectations." Econometrica 47, no. 5 (September): 1267-1296.

Taylor, John B. 1980. "Aggregate Dynamics and Staggered Contracts." Journal of Political Economy 88, no. 1 (February): 1-23.

Taylor, John B. 1993a. Macroeconomic Policy in a World Economy: From Econometric Design to Practical Operation. New York: W.W. Norton.

Taylor, John B. 1993b. "Discretion versus Policy Rules in Practice." Carnegie-Rochester Conference Series on Public Policy 39 (December): 195-214.

Woodford, Michael. 1996. "Control of the Public Debt: A Requirement for Price Stability?" NBER Working Paper 5684, July.

Woodford, Michael. 1999. "Optimal Monetary Policy Inertia." NBER Working Paper 7261, August.

Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton: Princeton University Press.

Woodford, Michael. 2010. "Optimal Monetary Stabilization Policy." In Handbook of Monetary Economics, edited by Benjamin M. Friedman and Michael Woodford, Chapter 14. Amsterdam: Elsevier.

Cite as: Woodford, Michael. 2022. "The New Neoclassical Synthesis and the Role of Monetary Policy." In Essays in Honor of Marvin Goodfriend: Economist and Central Banker, edited by Robert G. King and Alexander L. Wolman. Richmond, Va.: Federal Reserve Bank of Richmond.


1. Along with Rotemberg and Woodford (1997).


2. Kydland and Prescott (1982); Long and Plosser (1983).


3. In focusing on the quantitative evaluation of alternative interest-rate feedback rules, Rotemberg and Woodford work within a program advocated by Taylor (1993b). The discussion of principles for the conduct of monetary policy by Goodfriend and King is instead more in line with the growing adoption by central banks throughout the 1990s of well-defined inflation targets, without a commitment to specific operating procedures through which the targets should be hit. On the advantages of formulating rules for monetary policy as "targeting rules," see Svensson (2003).


4. Rotemberg (1987).


5. Rotemberg (1982).


6. Calvo (1983).


7. Others working on related models around the same time proposed a variety of names for the new style of modeling. Kimball (1995) called it "neomonetarist," and King and Wolman (1996) also stressed the monetarist influence on their model. I had preferred the term "neo-Wicksellian" (Woodford, 2003), but the term that eventually stuck was "New Keynesian," probably because of its popularity as an epithet among critics of the new approach.


8. Rotemberg and Woodford modify the basic Calvo-Yun model of price setting to assume that when prices are reconsidered, the new price that takes effect in quarter t must be set on the basis of the economy's state in quarter t-1. This assumption makes their theoretical model consistent with an identifying assumption in their structural VAR estimation of the effects of monetary policy shocks, which interprets contemporaneous correlation between inflation and interest rate innovations as necessarily reflecting an effect of current inflation on the Fed's interest rate target rather than any possible effect of a policy surprise on price setting in that quarter. A similar time lag is assumed in Christiano et al. (2005), for the same reason.


9. On the role of models of this kind in the theoretical case for inflation targeting, see in particular Svensson (2010).


10. See for example Eggertsson and Woodford (2003).


11. These are also the basis for the penalty for inflation variability in the microfounded loss function derived by Rotemberg and Woodford (1997), as discussed further in Woodford (2003, chap. 6).


12. The result depends on assuming that prices remain unchanged in nominal terms between the occasions on which they are reconsidered. Yun (1996) proposes a more complex model in which prices are automatically increased to reflect some "normal" rate of inflation between the occasions on which they are reconsidered; in that version of the model, price dispersion is minimized by choosing a steady inflation rate equal to the "normal" rate that firms expect, which need not be zero. The argument that zero inflation results in minimal price dispersion also tacitly assumes that one starts from a situation with zero price dispersion. If one starts from a different distribution of relative prices — for example, because one has had positive inflation up until now — then the policy that minimizes price dispersion will not be one that jumps immediately to a zero inflation rate, though it should converge to zero inflation eventually.
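The role of price dispersion in this argument can be made explicit. Under Dixit-Stiglitz demand with elasticity of substitution $\theta$, one can define a dispersion measure (the notation $\Delta_t$ follows the treatment in Woodford, 2003) that scales up the labor required to produce a given index of output:

```latex
% Relative-price distortion under Dixit-Stiglitz demand with elasticity \theta:
%   p_t(i) : price of good i,   P_t : the corresponding price index
\Delta_t \;\equiv\; \int_0^1 \left( \frac{p_t(i)}{P_t} \right)^{-\theta} di
\;\ge\; 1,

% with \Delta_t = 1 if and only if all prices are identical (the inequality
% follows from Jensen's inequality, given the definition of P_t). Aggregate
% labor demand is proportional to Y_t \Delta_t, so dispersion acts like a
% negative productivity shift. Under Calvo pricing \Delta_t evolves as a
% function of \Delta_{t-1} and current inflation, which is why zero inflation
% minimizes dispersion only if inherited dispersion is already minimal.
```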


13. The two counterbalancing effects are essentially the same as those that can be observed in the relationship between inflation and the output gap in the familiar "New Keynesian Phillips curve."


14. For the purposes of their numerical analysis, Goodfriend and King assume Taylor-style staggered price commitments that last for four quarters and calculate the effect of steady-state inflation on relative price distortions, the reset price chosen by adjusting firms, and two measures of the markup.


15. See Benigno and Woodford (2005) for a more complete treatment of this issue.


16. Goodfriend and King themselves discussed some of these extensions in a follow-up paper for the ECB's First Central Banking Conference on the theme "Why Price Stability?" (Goodfriend and King, 2001). See the discussion of this contribution by Vitor Gaspar and Frank Smets, elsewhere in the current volume.


17. For a recent example, see Adam and Weber (2019).


18. That is, as discussed above, they show that it is not possible to make the markup be on average much lower than the markup associated with price stability, which would also be the markup in a flexible-price economy (reflecting the market power of monopolistically competitive suppliers). It is possible to use monetary policy to make the average markup significantly higher than this, but that would not be desirable, since even the average markup level associated with price stability distorts the equilibrium allocation of resources away from the social optimum.


19. Actually, the flexible-price limiting case of the NNS model is not exactly a canonical RBC model, because it would still involve monopolistic competition, and hence a positive markup, while the RBC model of Kydland and Prescott is a perfectly competitive economy. Nonetheless, the logic of equilibrium determination is extremely similar in the two types of flexible-price DSGE models, and even their quantitative predictions are similar if market power is not too extreme.


20. In the approach introduced in Rotemberg and Woodford (1997), the expected utility of the representative household is approximated by a quadratic loss function (derived using a perturbation expansion around the zero inflation steady state). The loss function has terms of two sorts each period: one proportional to the squared deviation of aggregate output from its "natural rate," and the other proportional to the squared deviation of the inflation rate from zero (the optimal long-run rate). The deviation of the "average markup" from its steady-state value, emphasized by Goodfriend and King, is (to a log-linear approximation) proportional to the deviation of output from its natural rate, as indeed Goodfriend and King note. Thus their consideration of the welfare losses associated with fluctuations in the average markup corresponds to the terms in the Rotemberg-Woodford loss function that penalize fluctuations in the output gap. But a full consideration of the welfare consequences of optimal policy must take account of the terms proportional to the squared inflation rate as well. These terms represent a quadratic approximation to the impact on productivity of the relative price distortions created by inflation variation, as noted above.
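For concreteness, the approximation described in this note delivers a loss function of the following familiar form, up to a positive scale factor and terms independent of policy; the weight $\lambda$ shown is the one derived for the basic one-sector Calvo-Yun model in Woodford (2003, chap. 6).

```latex
% Quadratic approximation to the representative household's welfare losses:
%   \pi_t  : inflation rate
%   x_t    : output gap (output relative to its natural rate)
%   \kappa : slope of the New Keynesian Phillips curve
%   \theta : elasticity of substitution among differentiated goods
\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t
\left[ \pi_t^2 \;+\; \lambda \, x_t^2 \right],
\qquad \lambda \;=\; \frac{\kappa}{\theta}.

% The x_t^2 term corresponds to the fluctuations in the average markup
% stressed by Goodfriend and King; the \pi_t^2 term captures the
% productivity cost of the relative-price dispersion created by inflation.
```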


21. See the discussion of this point in Woodford (2003, chap. 7).


22. King and Wolman (1996) had earlier described this result when using an NNS model to analyze strict inflation targeting (i.e., price level targeting).


23. The subsequent work of King and Wolman (1999), using a model with two-period price commitments, and of Benigno and Woodford (2005), using a model with Calvo-Yun price setting, analyzes optimal policy taking explicit account of the distortions due to monopolistic competition that remain even in the steady state.


24. Once again, this would be true if one were talking about a flexible-price model in which both sectors are perfectly competitive. If there is instead monopolistic competition in the non-oil sector, it is not quite true, though the idea remains useful as an approximation.


25. The welfare analysis leading to this conclusion is developed more fully in Aoki (2001).


26. Even in a one-sector model with only aggregate disturbances, such trade-offs exist, of course, if the degree of market power is nonnegligible, as shown by Benigno and Woodford (2005).


27. See in particular Figures 6.2 and 6.3, and the discussion of these figures. The results are based on the analysis of the optimal inflation target for a monetary union subject to region-specific shocks in Benigno (2004).


28. Woodford (2010) provides further insight into why such a policy approximates an optimal policy commitment, showing analytically that the optimal policy commitment implies long-run stability of a particular price index. In general, the second-best optimal policy involves transitory fluctuations in this index in response to shocks but no permanent changes in it, even when the shocks result in permanent shifts in the relative price of goods supplied by the two sectors.
