On how rapid advances in AI might reshape the nature of work and how economists can help society prepare
EF: When and how did you first get interested in researching AI?
Korinek: I had been an avid programmer from an early age. When advances in deep learning started to accelerate in the early 2010s, that background led me to appreciate the significance of what was happening. It started becoming clear that advances in AI were proceeding faster than people in the '90s had expected. If you have the perspective that our brains are very complicated neural networks, then it is kind of natural to ask yourself the question: If the artificial neural networks that we have now can already do pretty powerful things, then will they also be able to do all the other things that our brains can do? And if so, then how soon?
I started trying to wrap my head around these questions and think about how AI would affect our world. This became even more personally relevant for me in 2015 after the birth of my first child. At the time, I thought that AI systems would become very powerful within the next couple of decades — potentially within my lifetime and almost certainly within my daughter's lifetime. As it turns out, AI has gotten very powerful within one decade, much more quickly than almost anybody expected.
EF: You've been a big proponent of using AI in economic research, having authored a regularly updated guide for economists interested in incorporating large language model (LLM) AI agents into their workflows. As you mentioned before we started this conversation, you even used Anthropic's LLM, Claude, to help you prepare for this interview! When did you start incorporating these tools into your workflow?
Korinek: At first, I didn't, because in 2015 the AI systems that were available were still very narrow. They were good for very specific things. If you did empirical analysis that involved images, then AI would have been very helpful for you, but it wasn't useful for my research. It was not until the advent of general purpose LLMs, which became powerful enough to be somewhat useful in 2022, that I personally started to incorporate AI into my research workflows. Once these LLMs emerged, I quickly became aware of how important they would be. I had my first demonstration of a modern chatbot in September 2022, and I was quite blown away. I realized that we were going to have these powerful general-purpose systems much sooner than I had expected. It was truly shocking, and I decided I needed to think about how I wanted to spend the time that I have remaining before machines can do better research than I can.
I decided to focus on two main categories of work. The first category is all the interesting and important economic questions that advances in AI are bringing up. Especially, how can we make sure that these advances lead to a future with broadly shared prosperity? And the second category is, if these systems are already quite powerful, and we can anticipate that they will get more powerful until eventually they eclipse most of our cognitive abilities, then how can I make sure that I'm at the cutting edge of using them? There are synergies between these two categories, but my work following and documenting the capabilities of AI and writing about it for other economic researchers arose from the second category. My hope is, firstly, that these tools can help other economists do their work more effectively and more productively and, secondly, that this work makes more people realize how rapidly AI is advancing.
EF: What are some ways that AI has made you more productive?
Korinek: I use it essentially at all stages of the research process. It starts with ideation, the brainstorming part. I use it to help me with background research. I use it a lot as a writing assistant, giving it bullet points to steer it in a direction and letting it write a few paragraphs based on the points that I provide. I use it to derive economic models because, by methodology, I'm an applied theorist. More recently, since around fall 2024, the latest generation of reasoning models has become very powerful at doing formal math, and that has saved me a lot of time in performing derivations and proving results in economic models.
I use AI quite a bit for coding as well. I'm not currently working on any computational project, but I'm using it to code AI tools to perform text analysis, for example. In some sense, the line between making the AI do work and coding is blurring because I ask AI systems to perform all kinds of tasks, and in some cases, the AI writes code and then executes it. Since I've been a programmer for more than three decades now, it's nice to let the AI do the lower-level stuff and for me to direct where it is going at a higher level in natural language — what people call "vibe coding."
EF: What kind of response have you gotten from your colleagues when you talk to them about using AI tools?
Korinek: Almost all economists I talk to have come to appreciate that these tools can be very helpful. Economics is a very instrumentalist discipline. When economists realize that something is economically useful, they won't put up a lot of barriers against it. That said, it's important to acknowledge that we need to be careful with these tools because they do sometimes make mistakes. They need to be overseen. It's kind of like working with a research assistant. We would not take everything that research assistants produce for us without checking it, and it's the same with our AI systems.
EF: Some professionals have raised concerns that relying on AI tools may diminish our own skills over time. Do you worry that relying on these tools will degrade your ability to do research?
Korinek: If you and I had had this conversation 100 years ago, we would probably not have been economists. Most likely we would have been farmers working hard in a field every day, and we would have strong muscles from the daily effort that we put into that. Now, we are economists, and we don't need those muscles. Many of us now go to the gym to work out so we don't atrophy too much physically, and I think it's going to be the same with intellectual tasks under AI. There are many things that we won't need in the same way. We don't need to do math in our heads as much as we did before we had pocket calculators. As AI improves, that's going to become the case with more and more aspects of cognitive work. And just like we work out in the gym because natural physical effort isn't a part of our jobs anymore, maybe there are things that we want to practice so that our brains don't atrophy in those domains. But, by and large, I think it's natural that if we don't need something anymore, we don't need to train those muscles as much.
EF: A lot of your research has focused on exploring the potential effects of artificial general intelligence, or AGI. How might AGI reshape the economy, particularly the nature of work?
Korinek: Definitions for AGI vary, but OpenAI defines it as "a highly autonomous system that outperforms humans at most economically valuable work." What's interesting about that definition is that a lot of work that we humans do is physical. So, OpenAI's definition encompasses physical work, which means it requires not only very smart AI systems, but also advanced robotics. As the cognitive capabilities of AI are improving, the value of AI being able to act in the physical world is also rising rapidly, and that means the value of robotics is going up very fast. While recent advances in the cognitive abilities of AI models have been on everyone's mind lately, we have also seen quite impressive advances in robotics over the past year. It turns out that if you give robots advanced brains, then they are suddenly able to operate much better than they could before. So, I think that we collectively are perhaps underestimating the power of robotics a little bit right now. I expect that within the next few years, robotics will continue to advance very quickly. If we do end up with machines that can perform both cognitive and physical work, that is going to have really significant impacts on the labor market.
How are we going to react to that? That partly depends on how we view the value of work. If work is something that primarily provides disutility, then you could say, well, if machines can do everything, then let the machines do it, and we will be happier. On the other hand, maybe we think that the nonmonetary benefits of work are very important. Work provides us with structure, meaning, and social connections, and if we were to lose that, then people could become very unhappy. I think there is some truth to both of these perspectives, and my view is more nuanced than these two extremes. Economists would say there is certainly some disutility to labor, otherwise we wouldn't have that in all of our economic models. There are certainly also some benefits to labor, some "positive amenities." The interesting and difficult economic questions are: Which of these amenities from work are private amenities, and would we rationally internalize them?
Let's take a stark case: Suppose we suddenly have robots that can do everything that humans can do for a penny a day. If the only benefit of labor was an amenity called "meaning" and we rationally internalize this amenity and care a lot about it, we would still be willing to work for a penny a day, or essentially to volunteer. In fact, if you look at time-use surveys today, a lot of people, especially retirees, spend a significant amount of time volunteering, and they are very happy about it. So, in a world where the primary amenities from work are private, you might not need any public intervention, because people can decide on their own if they want to voluntarily continue working. On the other hand, if there are externalities associated with the amenities from work, such as social connections that only happen if more than one person shows up at work, or if people are not quite rational about internalizing the private amenities of work, then that would justify public policy intervention to encourage people to work.
EF: What should policymakers be doing right now to prepare for these potential scenarios?
Korinek: I think it's really important to acknowledge how much uncertainty we are facing about these technological advances. On the one hand, we should take seriously the predictions coming out of the leading AI labs. We should not discount it if the CEO of one of the leading labs is worried about 20 percent unemployment within the next one to five years. At the same time, of course, we don't know if this is going to materialize. In the face of this significant uncertainty, what I propose is scenario planning. I think we should have a plan in case the more radical AI scenarios materialize, and scenario planning allows us to prepare such a plan. Once we go down the road a little bit further and we find out how advancements in AI have actually proceeded, then we can activate the right plan.
EF: How can economists help with that planning?
Korinek: If something along the lines of transformative AI materializes, then every sector of society is going to be changed. I should note that transformative AI doesn't necessarily mean AGI. AGI would be an example of transformative AI, but it is also possible that we could have lots of powerful AI systems that are not AGI but are still collectively transformative. In either case, if we get AI systems that transform society at the same scale as the Industrial Revolution, the economy is going to be a big part of that transformation. Economists are well positioned to provide insights into how our economy might be reshaped.
One thing to consider is that we may have to redesign our systems of taxation. Right now, roughly two-thirds of all income derives from labor, and probably more than two-thirds of all tax revenue comes from taxing that labor income. If the value of labor suddenly falls dramatically because of transformative AI, then we're going to have to tax differently. I prepared a paper for an NBER meeting on public finance in the age of AI in September where my co-author and I argue that if labor becomes a less important part of the economy, we may want to switch to more consumption taxation. And then if human consumption becomes a less important part of the economy, we may ultimately have to switch to taxing the capital behind the AI systems themselves.
EF: You mentioned earlier how becoming a parent was part of what sparked your interest in these questions. Given the possibility that AGI could arrive very soon — within the next five years, according to some experts in the field — what changes should society be making to education right now to ensure that younger generations are developing the right human capital for a potentially radically different future?
Korinek: My wife and I don't expect that our children are going to experience the same kind of labor market, or the same kind of world, that we grew up in: a world where you graduate from college around 22 and then enter the labor market. We are highly doubtful that avenue will be available to our children, and we want them to grow up as happy humans and good citizens first, rather than good workers. One of the main roles of our current education system is to train people to be good workers. How should this system adjust to the ongoing advances in AI? Again, there's a lot of uncertainty, so we probably want to be ready for multiple scenarios.
I would certainly say that we want everyone to be fluent in AI, no matter which level of education we're speaking about. I'm currently teaching this to Ph.D. students, but it's also true for undergraduates, high school students, and younger students. We want everybody to know how to use AI because it is such a force multiplier. If you know how to employ AI systems well, you can get a lot more done. You can be a lot more productive. As economists, we generally think that more productivity is good, especially if we use these tools responsibly. I have two kids; they're 8 and 10. Together with another dad, I'm going to teach an AI course at our kids' primary school starting in mid-October because we want them to be exposed to this technology. We want them to understand how it works and how to use it responsibly.
If AI advances very rapidly, it may turn out that we are spending a lot of time right now educating the next generation of proverbial spinners and weavers at the beginning of the Industrial Revolution. If we do get something like AGI within a couple of years, a lot of the human capital that you and I have accumulated over a long time, and the human capital that we are teaching young people right now, may no longer be very valuable. It may become a legacy asset. It wouldn't be the first time in history that something like that happened. If you were working in the Rust Belt, you experienced that a few decades ago. I think the important thing that we as a society may want to do is to ensure that we take care of the losers in this transition. The good thing about technological progress is that, at least in principle, it grows the size of the economic pie, so there should be more for everybody, and there should be enough to take care of anybody who loses out.
EF: Do you think that transformative AI is inevitable?
Korinek: Nothing is inevitable, but right now, it seems like all forces are pointing toward it. As a nation, we are spending resources equivalent to the Apollo project or more on AI. Neural networks can do amazing things, and I think if we continue to pursue this, it's just a question of time. It may take a little longer than the frontier AI labs expect, but my personal best guess is that they're not too wrong about the timelines that they're publicly announcing.
EF: Are you concerned about catastrophic AI outcomes?
Korinek: It's something that I would not rule out, and as a society, I think we absolutely want to spend resources on forestalling that. Last week, when I was at the NBER conference on the economics of transformative AI, Chad Jones of Stanford University presented a paper in which he provided a back-of-the-envelope calculation of how much we should be spending on making sure that we mitigate AI existential risks. His numbers were in excess of 1 percent of GDP annually. When you see those kinds of calculations, it certainly makes you believe that it would be highly appropriate for us to spend more resources on this. We need technical safety research and robust governance frameworks. We need to ensure that AI systems are aligned with human values and societal goals, not just individual or corporate objectives. This requires both technical solutions and governance structures that can handle unprecedented power concentration.
EF: Do you consider yourself an AI optimist or pessimist?
Korinek: I'm by nature an optimist, and I feel like people tend to channel their natural predisposition into their analysis of the opportunities and risks of AI. As an economist, I want to focus on those scenarios where I can have a positive impact. There are some really adverse existential risk scenarios that I will probably not have much impact on, but I hope somebody's thinking about them. Meanwhile, I'm thinking about the scenarios where my economic analysis will hopefully be useful. If we manage the economic transition well with appropriate policies and institutions, we could create unprecedented levels of shared prosperity. If we don't, the disruption could be devastating for most of humanity. The key is taking these possibilities seriously and preparing now. So, I guess I'm neither purely optimistic nor pessimistic — but I believe in preparation.
EF: Where do you see opportunities for economists to do more research in this space? What questions are still underexplored?
Korinek: I think the economics of transformative AI is still vastly underexplored because, up until very recently, almost nobody took this possibility seriously. Now, thanks in part to institutions like the NBER being willing to normalize the discourse around these questions, it is a topic that a growing number of economists are taking seriously. My advice to all economists, no matter their research focus, is to take their expertise and look at what transformative AI or AGI would do to the topic they're studying. So, if I were a labor economist, I would focus on what transformative AI will do to labor. If I were an industrial organization expert, I would study what it might do to market structure, and so on.
On the topic of labor, economists have spent the last 200 years arguing against the lump of labor fallacy — the idea that there is a limited amount of work in the economy and technological advances can reduce the number of jobs available. We've spent so much time fighting a false narrative that it's difficult to pivot when the facts change and technological unemployment becomes a real concern — for reasons that are, obviously, distinct from the lump of labor fallacy. We need to use our tools to model AGI scenarios seriously, inform policy debates with rigorous analysis, and help design new economic institutions for an AI-dominated economy. If the aggressive AI timeline predictions are true, then we may only have a couple of years left to find answers to these difficult questions.
Present Position
Professor, Department of Economics and Darden School of Business, University of Virginia; Faculty Director, Economics of Transformative AI Initiative, University of Virginia
Selected Additional Affiliations
Visiting Fellow, Brookings Institution; Anthropic Economic Advisory Council; Research Fellow and Leader of Research and Policy Network on AI, Center for Economic and Policy Research; Research Associate, National Bureau of Economic Research
Education
Ph.D. (2007), Columbia University; M.A. (2000), University of Vienna