Jonathan A. Parker

Econ Focus
Third/Fourth Quarter 2016
Interview

Economists are sometimes pegged as either theorists or empiricists. But often this dichotomy is overstated. Many economists bring together theory and empirical analysis to study a broad range of questions. For Jonathan Parker, this approach is perhaps the defining characteristic of his work.

Parker, the Robert C. Merton (1970) Professor of Finance at the Massachusetts Institute of Technology's Sloan School of Management, uses data in novel ways to better understand a host of economic issues and the theories that underpin them. For instance, the economic stimulus program of 2008 offered the potential to examine the way households respond to an influx of liquidity — and with it, whether people smooth their consumption, as theory would predict. But to realize that potential required developing some investigational tools — in Parker's case, designing surveys for households belonging to the Nielsen Consumer Panel to better understand what they did with the payments they received and why.

Parker has also looked at such issues as whether people can hold incorrect but nonetheless utility-optimizing beliefs; which segments of the income distribution are most affected by economic shocks and how that has changed over time; and whether households respond to good economic news in proportion to the way they respond to bad economic news. As he says, he's an applied microeconomist, an asset pricer, a macroeconomist, a public finance economist, and a behavioral economist. Which one depends on the question at hand and the methods required to answer it.

Prior to joining the MIT faculty, where he is also the co-director of the Golub Center for Finance and Policy, Parker taught at Northwestern University, Princeton University, and the University of Wisconsin, and he was a research fellow at the University of Michigan. He edits the National Bureau of Economic Research's Macroeconomics Annual, serves on the board of editors of the American Economic Review, and is a member of the Congressional Budget Office’s Panel of Economic Advisers. Aaron Steelman interviewed Parker at his office at MIT in December 2016.


EF: Among your work on economic stimulus programs is a recent paper with Daniel Green, Brian Melzer, and Arcenis Rojas on the Car Allowance Rebate System (CARS) of 2009, popularly known as "Cash for Clunkers." Could you discuss the empirical findings of that paper as well as potential implications for structuring stimulus programs given what we know from participation in CARS?

Parker: One of the interesting things we saw about that program was that it was massively oversubscribed. The government originally allocated $1 billion to a three-month program and exhausted that $1 billion in about a week. It then reauthorized the program for another $2 billion and still ran out of funds two months early. The other notable thing was that the program provided liquidity: it paid households $3,500 or $4,500 to trade in and scrap an old vehicle, which was really enough liquidity for a down payment. So we wanted to know: Can we link these two, the provision of liquidity and the high take-up rate? Also, there was interesting existing research on the program, specifically work by Atif Mian and Amir Sufi, which produced a nice aggregate impact measure of the program but nothing at the micro level of how individual households were responding. And I wondered about the reversal of the impact, which is one of their main findings: The program generated sales, but within six to nine months afterward there was no cumulative difference in purchases between people eligible for the program and people who weren't.

We got access to the Bureau of Labor Statistics' Consumer Expenditure Survey data and made a precise measure of eligibility of vehicles based on fuel efficiency and used car value by make, model, and year. We then mapped the program responses to eligibility and the economic subsidy associated with any given car. If you owned a car, the economic subsidy was the program payment minus the value you could get for your car on the used car market. So a car worth $4,500 on the used car market would get, in effect, no subsidy from the program, but a car worth $1,000 would get a $3,500 subsidy. We mapped from car value to the program response to see if people with really junky old cars used the program much more strongly. And indeed we found that to be the case. Typically, about $1,000 of used car value reduced your probability of participating in the program by about half a percentage point. That suggests the government could have gotten as large a response with slightly smaller subsidies because the program ran out of funds and there was a lot of response from people with moderate-valued vehicles.
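
To make the subsidy arithmetic concrete, here is a minimal sketch in Python. It is only an illustration of the mapping described above; the function name and example values are ours, not figures from the paper's data.

```python
# Illustrative sketch of the CARS economic subsidy described above:
# the program payment minus the scrapped vehicle's used-car market
# value, floored at zero. The $4,500 payment is one of the two
# program amounts; the example car values are assumptions.
def cars_economic_subsidy(used_car_value, program_payment=4500):
    """Economic subsidy from trading in and scrapping a vehicle."""
    return max(program_payment - used_car_value, 0)

for value in (1000, 3500, 4500):
    print(f"used-car value ${value:,}: subsidy ${cars_economic_subsidy(value):,}")
# used-car value $1,000: subsidy $3,500
# used-car value $3,500: subsidy $1,000
# used-car value $4,500: subsidy $0
```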

EF: But how can you know people's sensitivity to the subsidy in advance?

Parker: Exactly the right question. Because the program ran out of money, it's not a program for which we observe an unconstrained, equilibrium response. Instead, it was a response constrained by the funding amount. So we don't absolutely know; what we do know is that the subsidies were more generous than they needed to be to generate that many sales. And what we also know is, had the subsidy been lower, it probably would have been the people with the lousiest cars who would have traded them in and that would have resulted in less destruction of more valuable used cars. That's all easy to look at and say after the fact. But there was this massive underestimation of the response to the program, and we think that's because of liquidity.

We think that an economic subsidy should generate intertemporal substitution; it's a temporary price subsidy to a durable good. In this case, we figured out how the liquidity dimension could be measured somewhat separately from the economic subsidy. The economic subsidy is not the same for everyone, but for most people it is the same as the liquidity provided by the program. But some people have loans on their program-eligible vehicles. If a vehicle is securing a loan, then when it's brought into the dealer and scrapped as part of the program, the household has to pay off that loan, and so they lose some of the liquidity benefit of the program. In our study, we estimate this liquidity effect separately from the intertemporal substitution effect of the economic subsidy, and we find that the effect of the program was much smaller on vehicles that were securing loans. In fact, it's almost nonexistent. So we find the impact of liquidity to be very strong — it was an accelerant for the economic subsidy in the target population.

We also find very weak evidence consistent with the reversal effect that Mian and Sufi first discovered, which feeds into the question: Is this a worthwhile sort of program to do? It was a program that caused, at an annual rate, a $44 billion increase in personal consumption expenditures on durable goods in the third quarter of 2009, which was the quarter in which the recession ended and in which GDP grew by about $44 billion. And in the previous quarter GDP declined by about $44 billion. So it looks pivotal. On the other hand, half of the content of the vehicles purchased under the program was imported, so that means that one has to take the number of new purchases and divide by two to get an estimate of the partial-equilibrium impact on demand. So really it wasn't pivotal at moving us from no growth to growth, and also the program seems to have been reversed over six to nine months because there's no cumulative impact in sales. On the other hand, it generated all that spending for a relatively small fiscal cost of only $3 billion ($12 billion at an annual rate). But these are all accounting, partial-equilibrium calculations.
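
The accounting in the preceding paragraph can be checked with back-of-the-envelope arithmetic; the sketch below simply restates the rounded figures quoted above, not new estimates.

```python
# Back-of-the-envelope partial-equilibrium accounting for CARS,
# using only the rounded figures quoted in the interview.
durables_increase = 44e9  # annualized rise in durable-goods spending, 2009Q3
import_share = 0.5        # roughly half of vehicle content was imported
fiscal_cost_q = 3e9       # quarterly program outlays

demand_impact = durables_increase * (1 - import_share)
print(f"domestic demand impact: ${demand_impact/1e9:.0f}B at an annual rate")
print(f"fiscal cost: ${fiscal_cost_q/1e9:.0f}B "
      f"(${4*fiscal_cost_q/1e9:.0f}B at an annual rate)")
```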

For this to be optimal from a stabilization perspective, you need to believe that the government multiplier is much larger in the quarter in which CARS is run than six months later. And this is a period when we were having a slow recovery. So the net benefit of the program is ultimately a general equilibrium question that other people would need to answer, but the hurdle is high given that one has to believe in such a large swing in the size of the multiplier between those two periods. If one wants to do a similar program again, and similar programs have now been run in countries all over the world, our results generally emphasize that the liquidity was a crucial part of the program — not just people substituting over time due to a temporary price subsidy — and as such, our findings relate to the literature on investment tax credits for firms, where liquidity also seems to be important.


WEB EXCLUSIVE

EF: What were some of the differences in the way households responded to CARS and the way they responded to the Economic Growth and Tax Relief Reconciliation Act of 2001, which sent tax rebates of $300 or $600 to most households?

Parker: There's a difference and a similarity. First, the difference. The tax rebates of 2001 had a much smaller ratio of spending to government outlays. The CARS program was $3 billion in government outlays for $11 billion in new car purchases. So that's a very shovel-ready, if you will, way to generate spending. The tax rebates generated less spending than their fiscal cost but without a spending reversal in the short term. The similarity is the importance of liquidity. For the tax rebate question, we really think it's the households with little liquidity that turn around and spend at high rates, whereas households with high liquidity have low rates of spending. Similarly, for a CARS-type program, it seems that liquidity provision is crucial — the people who were most eager to take advantage of it were those who seemed to value the liquidity the most.

The interesting postscript to that is when we analyzed the 2008 payments, which were about $950 per household on average, we found much larger spending on new vehicles — we detected absolutely none in 2001. Why the difference between 2008 and 2001? One possibility is that the true spending responses really were different. The 2008 economic stimulus payments came at a time when gas prices were high and a lot of people and media were talking about trading in old vehicles for more fuel-efficient vehicles. Another difference is that in 2008 the payments were about twice as big as in 2001, at least in nominal terms, and so were a much more significant part of a down payment on a new vehicle, which may have contributed to quite a different spending response. Of course, there is just a lot of statistical uncertainty in the survey when you're studying these tax rebates, particularly when you're looking at durables purchases, which tend to be lumpy and infrequent. So the other possibility is just that the true economic effects are more similar and that the data just happen to measure the effects of the first rebate program one way and the second another way.


EF: I would like to go back to some of your earlier work on household financial decisionmaking — in particular, your 2002 Econometrica paper with Pierre-Olivier Gourinchas. It seems consistent with the standard life cycle theory of saving and consumption. [See this issue's Jargon Alert for more about this theory.] Would you say that's a fair characterization of that paper?

Parker: From the perspective of today, I think the contribution of that paper is in some sense more methodological. We worked out a framework for taking cohort-level analysis of microdata, which had been used nicely before by Angus Deaton and Christina Paxson, Orazio Attanasio, and others, and combining it with a structural model of an income fluctuation problem so as to estimate the parameters governing the behavior of households using a simulated method of moments estimator. That said, as you noted, the paper fits the life cycle profiles of consumption and saving with a model in which households differ solely based on their history of income shocks and their age. So age is a major determinant of the propensity to spend. Since then, the research has expanded in many ways to endogenize the choices we made exogenous in that paper or assumed away, including portfolio choice, labor supply, illiquid retirement saving, government programs, and housing, with some very nice work by Mariacristina De Nardi and Eric French and co-authors on retirement. People are also now building into structural models the liquidity of different investments and stochastic credit constraints, all of which we pushed away, but the method remains a very useful one for evaluating these models.
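
For readers unfamiliar with the method, here is a stylized simulated-method-of-moments sketch in Python. It uses a deliberately toy consumption rule with a single parameter — not the Gourinchas-Parker model — and every number is an assumption; the point is only the estimation loop: simulate the model at candidate parameters, compute age-profile moments, and minimize the distance to the data moments.

```python
# Schematic simulated method of moments (SMM) on a toy model:
# households consume a constant fraction `mpc` of cash-on-hand each
# period; we estimate `mpc` by matching mean consumption by age.
import numpy as np
from scipy.optimize import minimize_scalar

AGES, N = 40, 5000  # working-life periods, simulated households

def simulate_profile(mpc, seed):
    """Mean consumption by age implied by the toy consumption rule."""
    rng = np.random.default_rng(seed)
    assets = np.zeros(N)
    profile = np.empty(AGES)
    for t in range(AGES):
        income = np.exp(rng.normal(0.0, 0.3, N))  # lognormal income shocks
        cash = 1.02 * assets + income             # cash-on-hand
        consumption = mpc * cash
        assets = cash - consumption
        profile[t] = consumption.mean()
    return profile

# Stand-in "data" moments (in the paper: cohort means from the CEX).
data_moments = simulate_profile(0.85, seed=1)

def smm_objective(mpc):
    # Fixed simulation seed (common random numbers) keeps the
    # objective smooth in the parameter; identity weighting matrix.
    gap = simulate_profile(mpc, seed=0) - data_moments
    return gap @ gap

res = minimize_scalar(smm_objective, bounds=(0.5, 0.99), method="bounded")
print(f"estimated mpc: {res.x:.3f} (value used to generate 'data': 0.850)")
```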

EF: You've revisited some questions fundamental to life cycle theory in your recent paper, "Why Don't Households Smooth Consumption?"

Parker: In that paper, I use Nielsen Consumer Panel data to design and run my own survey on households to measure the effect of what was then the second of these large randomized experiments run by the U.S. government, the economic stimulus program of 2008. The key feature of that program was that the timing of the distribution of payments was determined by the last two digits of the Social Security number of the taxpayer, numbers that are essentially randomly assigned. So the government effectively ran a $100 billion natural experiment in 2008, distributing money randomly across time to people, and this policy provides a way to measure quite cleanly how people respond to infusions of liquidity.
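
Here is a stylized sketch of that identification strategy with simulated rather than actual data: because the disbursement date is as good as random, a regression of spending on an indicator for receipt (with time effects) recovers the spending response. Every number below is an assumption for illustration, not an estimate from the study.

```python
# Simulated illustration of identification from randomized payment
# timing; household counts, months, and the "true" response are made up.
import numpy as np

rng = np.random.default_rng(42)
H, T = 2000, 12                        # households, months
payment = 950.0                        # avg 2008 stimulus payment
payment_month = rng.integers(3, 9, H)  # randomized disbursement month

base = rng.normal(2000.0, 300.0, (H, T))           # baseline spending
received = np.arange(T) == payment_month[:, None]  # receipt indicator
spending = base + 0.25 * payment * received        # true response: 25 cents/$

# OLS of spending on receipt, demeaning by month (time fixed effects)
y = spending - spending.mean(axis=0)
x = received - received.mean(axis=0)
beta = (x * y).sum() / (x * x).sum()
print(f"estimated receipt-month spending response: ${beta:.0f} "
      f"(true: ${0.25 * payment:.0f})")
```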

The goal of "Why Don't Households Smooth ..." is to provide evidence on the structural model underlying the observed importance of liquidity for household spending behavior. In theory, while the buffer-stock model might correctly match the behavior, it also might be that people spend expected income gains only when they arrive because of problems stemming from self-control, inattention, inability to plan, some sort of rule-of-thumb or mental accounting behavior, or the like. So I designed a bunch of questions trying to get at these alternative behaviors. I should clarify that they are not really alternatives, in the sense that they all interact with liquidity constraints.

The first thing I found out is that illiquidity is still a tremendous predictor of who spends more when a predictable payment arrives. But it's not only liquidity. People with low income have a very high propensity to spend, and not just people who have low income today, as would be associated with the standard buffer-stock model. You can imagine a situation where you've had a bad income shock, you happen to have low liquidity, and you spend a lot. But illiquidity one or even two years prior to the payment is just as strongly associated with a propensity to spend out of the payment as illiquidity at the time of the payment. These same people with persistently high propensities to consume are also the people who characterized themselves as the type who spend for today rather than save for tomorrow when I asked them specifically about their type, not their situation. They are also the people who report that they have not sat down and made financial plans.

What you end up with is that a high propensity to consume correlates with low liquidity, which is useful for theorizing but also presents a little bit of a chicken-and-egg problem. Is it different preferences, objectives, or behavioral constraints that are causing both the low liquidity and the propensity to spend, or is it the low liquidity that is causing the lack of planning and high spending responses? So for many purposes, what I take my findings to mean is that the buffer-stock model is a quite reasonable model with one critical ingredient. The critical difference relative to the way I modeled households in the 2002 paper with Gourinchas is that I think there's much more heterogeneity in preferences across households. While in that paper we looked at differences in preferences across occupation and industry, I think there's just much more persistent heterogeneity in behavior, consistent in the buffer-stock model with differences in impatience. Partly I say this because I do not find a big relationship between age and propensity to spend in a number of studies, and partly because of the persistence of the high spending propensities I find in this recent paper. But it's also visible in some sense in even older data. Low liquidity, or low financial wealth, is a very persistent state across households, suggesting the propensity to spend is not purely situational. A lot of it is closer to an individual-specific permanent effect than something transient due to temporary income shocks.

EF: Did people generally understand the magnitude of the 2008 stimulus program prior to receiving payments? And if they didn't, did that show up in consumption patterns?

Parker: In my study, one of the questions I asked people was: So you got this economic stimulus payment, did you expect it? Was it more than you expected? Was it less than you expected? Was it a surprise? First of all, about 80 percent of households got basically what they expected. That means you're never going to explain the spending response by people being surprised, as, say, in some versions of an inattention model. That is a nonstarter. Expectations about the program were reasonably accurate, with the important caveat that people may not be answering the survey truthfully. Interestingly, there is a slightly higher propensity to spend, though not statistically significantly so, among those who were surprised and received more than they expected. But there is also exactly the same response among those who got less than they expected. So it looks more like the people who weren't expecting the right thing are worse at consumption smoothing.

EF: How do you define the distinction between "optimal expectations" and "rational expectations"? What are the differences in the ways agents with each set of expectations tend to behave? And if agents with optimal expectations may make "poorer" decisions, in some sense, how may that ultimately be advantageous or desirable?

Parker: In some sense, the starting point for my work with Markus Brunnermeier came from a number of observations in the social psychology literature that people just tend to be optimistic or overconfident, the type of behavioral biases that lead people to believe they're better drivers than average — that sort of optimism. Looking at the objective functions that we usually consider, the simplest way to maximize the expected present discounted value of anything is to put more probability on better outcomes — simply to be more optimistic. You can see how that can be a source of happiness today. If we think about how good we are at many different things, it's nice to have confidence and believe you're maybe better looking or smarter than you actually are. On the other hand, to the extent that you actually allow yourself these sorts of enjoyable biases, you're likely to make slightly worse decisions. You might leave insufficient time to complete a project, for instance, which would make you worse off.

So the basic idea of that optimal expectations paper is to think of the optimal trade-off between those two — the idea that you will get more expected future utility today by expecting better outcomes, but on the other hand you're going to make some decision errors because of that expectation. It turns out that this simple trade-off has many interesting implications. The first is basically that you're always somewhat optimistic. The reason is that moving a small amount of probability from, say, the worst state out in the future to the best leads to a first-order gain in the expected present discounted value of utility flows of consumption. But a small change in probability causes a small change in behavior, and a small change in behavior from the optimal has very small — second-order — welfare costs. So, overall, the benefits outweigh the costs.
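
Schematically, and in our notation rather than the paper's: let $\varepsilon$ be the probability mass shifted from the worst future state to the best, $a^{*}$ the action that is optimal under objective beliefs, and $V$ the true value function. The trade-off Parker describes is

$$
\underbrace{\varepsilon\,\bigl[u(c_{\text{best}}) - u(c_{\text{worst}})\bigr]}_{\text{anticipatory gain: first order in } \varepsilon}
\quad\text{versus}\quad
\underbrace{\tfrac{1}{2}\bigl|V''(a^{*})\bigr|\bigl(a(\varepsilon) - a^{*}\bigr)^{2}}_{\text{decision cost: second order, since } a(\varepsilon)-a^{*}=O(\varepsilon)}
$$

so for small $\varepsilon$ the first-order gain dominates the second-order cost, and some optimism is always optimal.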

There are also some interesting implications that come from the fact that optimism is situational. For example, when considering investing, one way to be optimistic is to think the stock market's going to go up more than everybody else believes, and to go long. But you can also be optimistic by shorting the market and believing it's going to crash when everybody else thinks it's going to go up. So when do you short? It turns out that there are conditions under which you will actually invest in an unfair bet if it's positively skewed enough. That gives you a theory that looks like people buying lottery tickets, which are unfair gambles with a very small probability of a very high positive payout. They provide a very nice future state to believe in at a pretty low dollar cost today. So the observed unfair gambles that people accept, like lottery tickets, are exactly the sort our theory predicts people should prefer. This type of behavior looks like the big short. That's a theme that runs through several of our results: People with optimal expectations want something that has a very high positive payoff to dream about that at the same time isn't very costly to invest in.

There is also a natural nonconvexity in the model, which we didn't expect. When I am more optimistic about a certain outcome in the future, that means I want to buy more consumption in that state of the world. When I buy more consumption in that state, that means I want to be more optimistic about it, which in turn means I want to buy more consumption there. And this natural nonconvexity means that people are going to do something like hold a reasonably well-diversified portfolio and then invest excessively in a particular asset, such as one or two individual stocks. We didn't expect that sort of behavior to pop out, but that's what the model taught us. This leads to our work with Christian Gollier that looked at the conditions under which the model generated disagreement and could raise the return on negatively skewed assets.


WEB EXCLUSIVE

EF: In the paper, you consider some evolutionary arguments related to optimal expectations.

Parker: We were asked in the review process to think about why this might occur or how people might come to have these beliefs. In the paper, we have a couple of paragraphs that I think I'm still a little uncomfortable with, but the arguments run something like this: How might you get to optimal beliefs? You start out with an optimistic assessment of how easy something will be, and then you think about it a little bit, and if the costs of being wrong are significant, you start to downgrade your optimism. If you don't see any big costs, then you don't. So it suggests you approach decisions with natural optimism and then consider the consequences, and you bring beliefs back toward reality if you need to. In terms of evolution, people do lots of matching with friends, with colleagues, with potential spouses, in which they project confidence about the value of matching with them, of working with them, of marrying them — and a credible, stronger belief in themselves may be useful in that process. That's not in the theory, per se, but these are stories that might help us (or a referee) believe that there's something there.

I think it's worth noting that one of the reasons I think this paper has been controversial (at least relative to our belief in the theory!) — it has gotten good citations, but it hasn't led to a lot of subsequent literature — is that it is a behavioral paper that contradicts a common belief among behavioral economists that the mistakes people make are potentially very large. Our model delivers exactly the reverse, which is that the mistakes people make are the ones that satisfy or generate these biases but do not cause or risk large negative payoffs. So it's behavioral economics that the behavioral economists don't like.

I don't think that every theory has to explain every behavior, but I also think our theory can incorporate situations in which there's an awfully large belief bias, or in which things are extreme, if one moves away from the particular frictionless, stationary, full-information environment in which we studied biases. It might be that there are explicit costs associated with moving beliefs away from rationality. This approach might also make the model more, not less, empirically useful. And in the way we worked out optimal expectations theory, people are meta-smart — they know the true probabilities and work from those to these biased probabilities. There are situations where people may really not understand the truth at all.

Let me come at this a slightly different way. There's a set of behavioral models in which there is a belief bias and it is invariant to the costs and payoffs. And you see that pretty clearly rejected, I think, in the world and in labs. So our paper gets something right in terms of biases being disciplined by costs. The models in which biases are fixed and not responsive have the problem that people can be turned into money pumps and can make very severe errors in certain, regularly occurring states of the world. Some economists are very comfortable with the idea that people do regularly make major mistakes. Our paper lets people optimally tone down their optimistic bias and so rules out regular, really costly mistakes. But some people find that a bug and not a feature.

EF: I would like to move on to some of your work on consumption inequality. By some accounts, consumption inequality did not increase substantially during the latter half of the 20th century. Do you think that is largely accurate or a function of measurement issues?

Parker: My reading of the literature is that it's significantly measurement issues. I would point to papers by Mark Aguiar and Mark Bils, and to evidence suggesting that the longer-run changes have been pretty well tracked where we can measure them, such as papers by Orazio Attanasio and Steven Davis and others. It is also true that it's very difficult to measure consumption at the very high-income end, and that's where the inequality has really taken off. High-income households maybe consume shorter commutes, so that's reflected in house prices; they consume a lot of amenities indirectly. There are also things like charitable donations that we don't usually consider consumption, so it's very hard to measure the ways in which the very wealthy consume. With a different purpose, I tried to measure some of these things in my work with Yacine Ait-Sahalia and Motohiro Yogo.


EF: How would you describe the changes we have seen in the way high-income and high-consumption households have become exposed to aggregate economic fluctuations over roughly the last 30 years?

Parker: Due to the data issues we just discussed, I have not really been able to track the consumption of high-consumption households, but in work with Annette Vissing-Jorgensen we have looked at how the labor income of high-income households has changed significantly. What we zoomed in on is that high-income households used to live a relatively quiet life, in the sense that the top 1 percent would earn a relatively stable income, more stable than the average income. When the average income dropped by 1 percent, the incomes of the top 1 percent would drop by only about six-tenths of a percent. In the early 1980s that switched, so that in a recession if aggregate income dropped by 1 percent, the incomes of the top 1 percent dropped more like 2.5 percent — quadrupling the previous cyclicality. So now they're much more exposed to aggregate fluctuations than the typical income. We also show that decade by decade, as the top income share increased, so did its exposure to the business cycle in the 1980s, 1990s, and 2000s. And as you go further and further up the income distribution — not just the top 1 percent, but the top 10th of a percent and the top 100th of a percent — there's been a bigger increase in inequality and a bigger increase in exposure to the business cycle.
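
In elasticity terms (again our notation, not the paper's), the exposure being described is

$$
\beta_{\text{top}} \;=\; \frac{\%\Delta\,\text{income of the top 1 percent}}{\%\Delta\,\text{aggregate income}},
$$

which these estimates put at roughly 0.6 before the early 1980s and roughly 2.5 afterward.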

EF: What's the story for that?

Parker: First of all, we used to think the income cyclicality was exactly the reverse, because low-income workers would lose their jobs in recessions and upper-income workers would not. And so while high-income households might get lower raises in recessions, their incomes wouldn't actually go down to zero. Since job losses are concentrated among lower-earning workers, you have much greater cyclicality in overall incomes among low-wage workers. In another paper I did with Annette Vissing-Jorgensen, we looked at cross-country evidence in the recent decades of high inequality. The countries with the biggest earnings inequality were also the countries with the largest high-income cyclicality relative to the average. So what explains these sorts of findings? We thought there were two leading hypotheses.

First, starting around the end of the 1980s, we see the adoption of incentive-based pay for CEOs and other highly placed managers. Incentive compensation rises over this time, and it happens to be that the incentive compensation is based not on relative performance, which would difference out what goes on in the macroeconomy, but on absolute performance. And in the U.S. case, that could partly be due simply to what counts legally as incentive-based compensation and so is not subject to corporate profits tax. Pay in the form of stock options, for example, counts as incentive-based compensation. Pure salary does not, and so salary above $1 million is effectively taxed as corporate profits.

The other possibility is that it's purely technological. Something like incentive-based compensation may be a sideshow. The idea is that new information and communication technologies allow the best managers to manage more people, to run bigger companies, and therefore to earn more; the best investment managers to manage more money and to make more for themselves; the best entertainers and performers to reach more people and therefore earn a larger share of the spending on entertainment goods. High earners have become small businesses. While it is not universally true that such a shift to high-volume low-markup profits for the winners necessarily leads to greater cyclicality, it is true for some reasonable functional forms of production.

We do know that increased cyclicality in income among high earners can't come simply from the financial sector. That sector just isn't quantitatively big enough, and you see the increase in earnings share and in cyclicality across industries and occupations. It's not the case that just the top hedge fund managers have become the high earners and they're very cyclical; Oprah is also.


WEB EXCLUSIVE

EF: Could you discuss what you view as the strengths of the Bureau of Labor Statistics' Consumer Expenditure Survey relative to other similar surveys and how it might be further improved without sacrificing its core virtues?

Parker: The BLS is revising the CE Survey now. It's called the Gemini Project, and I have been involved a little with advising how to revamp it. Surveys in general have been experiencing problems with participation and reporting. The CE is suffering from these problems, and so the Gemini Project is trying to address them. The CE has the huge benefit of being a nationally representative survey done by the Census Bureau; almost all of the alternative datasets that we're now using, administrative sources that are not strictly survey datasets, are less representative. So reducing the CE's problems with participation and reporting could potentially have a very large payoff. Of course, the cost of the change is that the CE Survey as it stands now is a very long panel dataset that has had the same format throughout the whole time. So we're going to break that and no longer be adding new time periods to an intertemporally comparable dataset. But I think that's probably a cost worth paying at this point.

What the BLS is planning is to change dramatically the way the CE Survey is conducted. They're going to gather data in quite different ways than they have in the past, including measuring some spending categories with what are almost administrative sources. What I have been pushing for is maintaining some panel dimension in the new version of the CE Survey. If you don't have a panel dimension, then for lots of macro-type questions, you can track people only at the group level. And since groups are usually affected differently by other things going on in the world, you lose a lot of ability to identify stuff that might be interesting — tracking someone who had a specific policy exposure in one period and seeing how they're doing a month or a year later. If the BLS eliminates the panel dimension, researchers couldn't do anything like what I did in my tax rebate work, nor any other work that looks at treatments happening at the individual level. But I'm hoping that the new, state-of-the-art version of the CE Survey will last another 35 years and be just as good.

EF: In your discussion of the paper, "Stimulating Housing Markets," by David Berger, Nicholas Turner, and Eric Zwick at the summer 2016 NBER meetings, you raised two interesting, related questions: First, the first-time homebuyer tax credit seems to encourage more debt and leverage — does it do so efficiently? Second, should we, more generally, seek to support house prices? I was wondering what your thoughts were on those issues.

Parker: Let me start with one interesting issue that isn't addressed in that paper, which is the way the program is structured and the relative roles of liquidity versus intertemporal substitution. It is actually possible, though difficult, I believe, to get the credit at closing. So it is possible to use the tax credit to relieve liquidity or down payment constraints. In some ways one could view the CARS payment and this first-time homebuyer tax credit as the government giving a first-loss loan or a second-lien loan, but not even trying to recover it. Eighty percent of cars are purchased using financing and nearly 100 percent of houses. So the household, the purchaser/borrower, needs to come up with a down payment such that the collateral is sufficient to cover the loan. 

Returning to your questions, as an economist, I'm generally skeptical of policies to support prices of specific items absent clearly measured externalities. And my reading of the literature on foreclosures is that there's almost no evidence that the externality is significant; however, there is some evidence in a paper by Brian Melzer that there is a debt overhang problem, that people are not taking care of their homes if their mortgage exceeds their home value because of the possibility they might end up losing their homes. This can lead to substantial inefficient underinvestment in home maintenance because someone might be in such a situation for two years before they're actually foreclosed on. But it's not obvious that the first-time homebuyer tax credit raises house prices sufficiently to provide much quantitative help with this type of inefficiency.

Another question is whether there is a big social benefit from having a person buy a house rather than having that same person rent that house. If there are a lot of people who can afford to rent a house, but they just don't have the cash for a down payment, then the first-time homebuyer tax credit might facilitate transactions that the private credit markets can't quite complete, at a loss to the taxpayer. But I can't quite see the major benefit of ownership versus renting.

EF: It seems that the case turns on how much more homeowners are likely to take care of a property relative to a renter and the externalities that result.

Parker: Yes, but there are benefits and costs of homeownership at all times. I am not sure we have this right. Ownership can also promote NIMBY (Not In My Back Yard) type behavior, and that's not necessarily good. Working for better schools, yes. Lobbying for tighter zoning so that no one else can build houses, no. If we don't understand these costs and benefits in normal times, do we understand them better during a financial crisis? On the one hand, transfers to households are probably welfare-enhancing on distributional grounds relative to transfers to banks, but there are lots of possible policies to help housing, some of which, like fixing the mortgage securitization process, seemed more directly targeted at places that we could see were in some ways broken.

EF: For those not familiar with the NBER conferences, could you please discuss the annual conference on macroeconomics — how topics are chosen and what you aim to achieve with the conference?

Parker: The conference accepts six papers each year. Most come from submitted proposals, but sometimes we go out and find people doing interesting work and pull them in. We try to take risks in that we're accepting proposals or selecting people and asking them to write papers that may or may not work out but are on big topics that we view as promising. We are guaranteeing publication, and the upside is hopefully worth the downside. We try to be topical, either related to current big issues in the real world or to new lines of research. For example, we've had a couple of papers on survey measures of expectations, which are new sets of data that one might think about incorporating into macro models, and on what we can learn from them. We've had papers on the euro and on monetary policy at the zero lower bound.

The discussants for the papers are usually experts who weigh in not only on the paper, but also on the field or the policy in general. They typically do a really great job. In recent years, we've added a dinner speaker, whose address is sometimes published, and that person tends to be someone who has served high up in the policy world and is also an economist and an NBER member or former member. Recently, we've also added a lunch panel. We ran it last year and we had someone from government, someone from academia, and someone from industry talk about gas and natural resource prices and the cycles of those. The three different perspectives they brought and then the general discussion were informative and a lot of fun.

Most importantly, we have also innovated in dissemination. We have posted to the conference website videos of the paper presentations and of the lunch panel, along with all the papers, so we're trying to be as open and accessible as we can be for a somewhat small conference with limited resources. I encourage people to look at the conference website.

EF: Do you think conferences like these give people a chance to do work that they might want to but otherwise wouldn't, and which will prove valuable to the profession?

Parker: It's true that we do encourage a little bit more of the "big think" paper, in the sense that an author might be able to say, here's the real case for this, without perfectly proving it precisely each step of the way, like you have to for a top journal. Realistically, we don't get many papers of that type, but when we do, I think they make nice contributions. More commonly, we get pretty tightly executed papers that stimulate discussion among the folks who are there and open up a wide-ranging conversation. A good example: a few years ago, we got a very nice paper on national highway funds spent in different states, which yielded a provocative discussion about the government multiplier in general and what all the evidence, taken together, was showing. And that, I think, is the real value of the conference.


EF: What do you think are the most important unanswered or understudied questions in household behavior and household financial decisionmaking?

Parker: The big one is: Do we need a different model than the canonical stochastic life cycle model with credit constraints to understand consumer behavior? Do we need to introduce inattention or hyperbolic discounting, for instance, to make it richer? My sense is that for a lot of questions so far, the answer is still no, but we now have a few pieces of evidence that in a few places the answer is yes. As we get better data, and we think about questions like credit market equilibria and consumer financial regulation, we have the information to evaluate rich models of behavior and the need for models that are as complete as possible in describing behavior.

In my work, liquidity is first order, consistent with the buffer-stock model. But liquidity almost seems to explain too much. In the Nielsen study that we discussed earlier, people don't spend the money the week before it shows up — they spend it the week it shows up. And it seems like you're going to have a lot of difficulty quantitatively fitting that little foresight into a life cycle model unless people are often literally liquidity constrained, absolutely at their debt limits.

In the Cash for Clunkers program, liquidity mattered critically. One interpretation is that this importance is consistent with the canonical model in which some people lack the liquidity for a down payment. But there is an alternative interpretation. Again, our main finding is that people who have outstanding loans on their vehicles are much less likely to participate in the program, presumably because to buy a new car using the program, they would have to put some cash down along with the payment in order to make the down payment. Such people are much less likely to take advantage of the program than people who don't have loans on their vehicles but instead have unsecured debt, like on a credit card. Sounds like liquidity. But perhaps the people who have the secured debt could walk into the dealer and turn into that other person — that is, use their credit card to buy the car, so they leave the dealer with unsecured debt, just like the other person. In this case, liquidity matters, but maybe not according to strictly the life cycle model with liquidity constraints. Instead, such behavior sounds more like people using heuristics or mental accounts. The big question: In what combination do we need each ingredient – rationality and heuristics? And where do the heuristics come from?

The other question that I think research is really exploring, which I mentioned above, is what equilibria look like for saving and borrowing. What equilibrium supports high-fee mutual funds, index funds, and so on, and how does that change the flow of funds between the corporate and household sector and the pricing of risk? How does the market for lending to households evolve as risk is repriced and interest rates move, and how does this feed back into spending? The interplay between borrowers and lenders in these markets is a very interesting and active area of research because we're getting a lot of the data on mortgages, credit cards, retail investment, and financial accounts. These data are allowing us to look at and understand the equilibria in those markets, which is really fun.


WEB EXCLUSIVE

EF: What share of the population do you think is credit constrained?

Parker: Literally? I would say it's close to zero in that everybody always has a few bucks. But the perfect zero constraint is somewhat of an idealized thing. So the question is how many people are influenced by constraints in practice. Is their marginal propensity to consume noticeably influenced by the fact that they might be constrained next month or in six months? I would say that's quantitatively important for roughly half of the population.

EF: How do you think that looks over time?

Parker: I don't think there's a lot of transition between the people who would consistently hit these constraints or be concerned about them and the people for whom they're not that relevant.

EF: What are you working on currently?

Parker: I'm digging into a number of different proprietary datasets. In particular, I have two ongoing projects. In one project, with longtime collaborator Nick Souleles, I'm comparing the spending people say they do to what econometric methods reveal that they do. Economists generally want to use revealed behavior to test and estimate theories, but there's another methodology, not unrelated to the use of survey expectations data, which is to just ask people what they did or what they might do in response to a certain policy. That is much less expensive than actual experimentation, and it might be quite informative. Marketing firms, for example, do lots of such surveys, and they get paid lots of money for their findings. An area of research in marketing, called applied conjoint analysis, is focused on predicting behavior from survey responses to hypothetical questions.

Another project uses account-level data to look at people's responses to tax refunds and owing and paying income taxes. We are able to observe when people file their taxes as well as whether they owe taxes or have tax refunds. And because we have panel data, we can observe the previous year's refunds or taxes due, which can actually give us a very good proxy for what those people should expect. So we can look at changes in spending on the date that the news is fully in, the date that they file, but also over the weeks before they file, as they learn about whether they owed more or less taxes than last year. We also look at how spending adjusts when a refund is received or taxes paid, so we can look at not just positive incoming liquidity, like most previous research, but also negative liquidity. And we can contrast this response with the response to positive or negative news. What's kind of interesting is that so far we're finding that negative news and negative liquidity produce almost no responses — that people are quite good at smoothing through the bad, and what they tend to respond and react to is the good news and positive cash flow.

EF: Which economists have influenced you the most?

Parker: Certainly my advisers at MIT, Olivier Blanchard and Ricardo Caballero, and before MIT, at Michigan, Gary Solon and Matt Shapiro were big influences. And then at Princeton, Angus Deaton and Chris Sims and the theorists were phenomenal just to listen to, and I have been very influenced by them. I also really enjoyed my colleagues and the macro group at Northwestern. There are so many other people, it's hard to even say — certainly all my co-authors. One of the great things about being in departments that have heterogeneity in approaches is that you hear so many different perspectives and learn the different ways people have of thinking through problems and really just describing them. That may be especially true for my work given that I cross quite a few fields. I'm an applied microeconomist; I'm an asset pricer; I'm a macroeconomist; I'm a public finance economist; I'm a behavioral economist. So being in a place where you can listen to the theorists and the econometricians and the labor guys, and they all have different ways of learning about the world — that's just a lot of fun.


Jonathan A. Parker 

Present Position

Robert C. Merton (1970) Professor of Finance, Massachusetts Institute of Technology 

Previous Faculty Positions

Northwestern University (2007-2013); Princeton University (1999-2007); University of Wisconsin (1997-1999)

Education

Ph.D. (1996), Massachusetts Institute of Technology; B.A. (1988), Yale University

Selected Publications

"Why Don't Households Smooth Consumption? Evidence from a 25 Million Dollar Experiment," American Economic Journal: Macroeconomics, forthcoming; "Consumer Spending and the Economic Stimulus Payments of 2008," American Economic Review, 2013 (with co-authors); "Who Bears Aggregate Fluctuations and How?" American Economic Review, 2009 (with Annette Vissing-Jorgensen); "Optimal Expectations," American Economic Review, 2005 (with Markus Brunnermeier); "Consumption Over the Life Cycle," Econometrica, 2002 (with Pierre-Olivier Gourinchas)


Readings

Jonathan A. Parker's MIT website

NBER Annual Conference on Macroeconomics
