“The Simple Macroeconomics of AI”

That is the new Daron Acemoglu paper, and he is skeptical about AI's overall economic effects.  Here is part of the abstract:

Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.71% increase in total factor productivity over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.55%.

Note he is not suggesting TFP (total factor productivity, a measure of innovation) will go up by 0.71 percentage points (a plausible estimate, in my view); he is saying it will go up 0.71% over a ten-year period, or by about 0.07% annually.  Here is the explanation of method:
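Just to see how small that headline figure is, the cumulative number annualizes as follows (a quick arithmetic check, using only the paper's 0.71% figure):

```python
# Annual growth rate implied by a 0.71% cumulative TFP gain over ten years.
cumulative_gain = 0.0071  # 0.71% over the full decade
years = 10

# Compound annualization: (1 + g)^years = 1 + cumulative_gain
annual_rate = (1 + cumulative_gain) ** (1 / years) - 1

print(f"{annual_rate * 100:.3f}% per year")  # roughly 0.07% annually
```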

I show that when AI’s microeconomic effects are driven by cost savings (equivalently, productivity improvements) at the task level—due to either automation or task complementarities—its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. This equation disciplines any GDP and productivity effects from AI. Despite its simplicity, applying this equation is far from trivial, because there is huge uncertainty about which tasks will be automated or complemented, and what the cost savings will be.
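The Hulten-style aggregation described in that passage is a one-line identity. A minimal sketch, with purely illustrative placeholder numbers (the share and savings figures below are not Acemoglu's estimates):

```python
# Hulten-style aggregation: aggregate TFP gain is approximately the
# GDP share of affected tasks times the average cost savings on those tasks.
# Both inputs below are illustrative placeholders, not the paper's estimates.
affected_task_share = 0.05  # 5% of GDP-weighted tasks touched by AI
avg_cost_savings = 0.14     # 14% average cost reduction on affected tasks

tfp_gain = affected_task_share * avg_cost_savings
print(f"Implied TFP gain: {tfp_gain * 100:.2f}%")  # 0.70% with these inputs
```

The "discipline" the paper claims comes entirely from this multiplication, which is why everything turns on which tasks count as exposed and how large the per-task savings are.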

Mostly I think this piece is wrong, and I think it is wrong for reasons of economics.  It is not that I think the estimate is off, I think the method is misleading altogether.

As with international trade, a lot of the benefits of AI will come from getting rid of the least productive firms from within the distribution.  This factor is never considered.
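The selection channel is easy to illustrate: if competition forces exit of the least productive firms, average productivity rises even when no surviving firm improves at all. A hypothetical sketch (the distribution and exit share are made up for illustration):

```python
import random

random.seed(0)
# Hypothetical lognormal productivity distribution across 10,000 firms.
firms = [random.lognormvariate(0, 0.5) for _ in range(10_000)]

baseline_avg = sum(firms) / len(firms)

# Exit of the bottom 10% of firms; survivors are completely unchanged.
survivors = sorted(firms)[len(firms) // 10:]
selected_avg = sum(survivors) / len(survivors)

print(f"gain from selection alone: {(selected_avg / baseline_avg - 1) * 100:.1f}%")
```

A task-level cost-savings calculation misses this gain entirely, because it holds the set of firms fixed.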

And as with international trade, a lot of the benefits of AI will come from “new goods.”  Since the prices of those new goods were previously infinite (do note the degree of substitutability matters), those gains can be much higher than what we get from incremental productivity improvements.  The very popular Character.ai is already one such new good, not to mention I and many others enjoy playing around with LLMs just about every day.
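The new-goods point can be made concrete with a textbook surplus comparison: a good whose price falls from effectively infinite to some finite price generates the entire area under the demand curve above that price, which can dwarf the surplus from a marginal cost reduction on an existing good. A stylized linear-demand sketch (all parameters hypothetical):

```python
# Linear inverse demand p(q) = a - b*q, with hypothetical parameters.
a, b = 10.0, 1.0
price = 2.0

# New good: previously unavailable (price effectively infinite), now sold at `price`.
q_new = (a - price) / b
surplus_new_good = 0.5 * (a - price) * q_new  # triangle under demand, above price

# Incremental gain on an existing good: a 5% price cut from the same price.
price_cut = 0.05 * price
q_old = (a - price) / b
extra_q = price_cut / b
# Extra surplus: rectangle on existing quantity plus small triangle on new quantity.
surplus_increment = price_cut * q_old + 0.5 * price_cut * extra_q

print(f"new-good surplus: {surplus_new_good:.2f}")
print(f"incremental surplus from 5% price cut: {surplus_increment:.2f}")
```

With these numbers the new good delivers many times the surplus of the incremental improvement, which is exactly why omitting new goods biases the estimate downward.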

By the way, the core model of this paper — see pp.6-7 — postulates only a single good for the economy.  Mention of the contrary case does surface on p.11, and again starting on p.19, where most of the attention is devoted to bad new goods, such as more effective manipulation of consumers.  Note the paper doesn’t offer any empirical argument as to why most new AI goods might be bad for social welfare.

pp.34-35 focus on the possibility of a public goods problem for AI use, similar to what has been suggested for social media.  That discussion seems very far from both current practices with AI and most of the speculation from AI experts.  Do I have to use Midjourney because all of my friends do, and I wish the whole thing didn’t exist?  Or rather do I simply find it to be great fun, as do many people when they create their own songs with AI?  It is dubious to play up the prisoner’s dilemma effects so much, but Acemoglu returns to this point with much force in the conclusion.

Toward the end he writes:

Productivity improvements from new tasks are not incorporated into my estimates. This is for three reasons. First and most parochially, this is much harder to measure and is not included in the types of exposure considered in Eloundou et al. (2023) and Svanberg et al. (2024). Second, and more importantly, I believe it is right not to include these in the likely macroeconomic effects, because these are not the areas receiving attention from the industry at the moment, as also argued in Acemoglu (2021), Acemoglu and Restrepo (2020b) and Acemoglu and Johnson (2023). Rather, areas of priority for the tech industry appear to be around automation and online monetization, such as through search or social media digital ads. Third, and relatedly, more beneficial outcomes may require new institutions, policies and regulations, as also suggested in Acemoglu and Johnson (2023) and Acemoglu et al. (2023).

While many of the points in that paragraph seem outright wrong to me (such as the industry attention point), what he can’t bring himself to say is that the gains from such new tasks will in fact be small.  Because they won’t be.  But whether or not you agree, what is going on in the paper is that the gains from AI measure as small because it is assumed AI will not be doing new things.  I just don’t see why it is worth doing such an exercise.

A more general question is whether this model can predict that TFP moves around as much as it does.  I am pretty sure the answer there is “no,” not anywhere close to that.

On the general approach, I found this sentence (p.4) very odd: “…my framework also clarifies that what is relevant for consumer welfare is TFP, rather than GDP, since the additional investment comes out of consumption.”  I would say what is relevant for consumer welfare is the sum of consumer and producer surpluses, of which TFP is not a sufficient statistic.  This unusual “redefinition of all welfare economics in a single sentence” perhaps follows from how many other gains from trade he has abolished from the system?

And footnote six is odd and also wrong: “For example, if AI models continue to increase their energy requirements, this would contribute to measured GDP, but would not be a beneficial change for welfare.”  Even for dirty energy that might be wrong, not to mention for green energy.  If an innovation induces the market to invest more in a service, the costs of that added investment simply do not scuttle the gains altogether.  And if Acemoglu wants to argue that weird welfare economics is true in his model, that is a good argument against his model, not a good argument that such gains would not count in the real world, which is what this paper is supposed to be about.

Acemoglu explicitly rules out gains from doing better science, as they may not come within the ten-year time frame.  On that one, he is the prisoner of his own assumptions.  If many gains come in say years 10-15, I would just say the paper is misleading, even if his words are defensible in the purely literal sense.

That said, just how much does the “no new science” clause rule out?  In terms of an economic model, how does “new science” differ from “TFP”?  I am not sure, nor are we given clear guidance.  Is better software engineering “new science”?  Maybe so?  Won’t we get a lot of that within ten years?  Don’t we have some of it already?

In sum, I don’t think this paper at all establishes the “small gains point” it is trying to promote in the abstract.

It is perfectly fair to point out that the optimists have not shown large gains, but in this paper the deck is entirely — and unfairly — stacked in the opposite direction.

For the pointer I thank Gabriel.
