Regulating Transformative Technologies

That is a very new paper by Daron Acemoglu and MIT grad student Todd Lensman, here is the abstract:

Transformative technologies like generative artificial intelligence promise to accelerate productivity growth across many sectors, but they also present new risks from potential misuse. We develop a multi-sector technology adoption model to study the optimal regulation of transformative technologies when society can learn about these risks over time. Socially optimal adoption is gradual and convex. If social damages are proportional to the productivity gains from the new technology, a higher growth rate leads to slower optimal adoption. Equilibrium adoption is inefficient when firms do not internalize all social damages, and sector-independent regulation is helpful but generally not sufficient to restore optimality.

Maybe that’s not a great abstract for non-economists, but the paper itself is pretty clear if you read through it.  It is very good to see someone finally start working this out.  Basically they lay out which assumptions might be needed to make a case for slowing down the progress of AI.

The case for accelerationism becomes stronger if you consider:

1. Early adoption of a technology, especially if done selectively, may facilitate learning.

2. The existence of rival nations with “less aligned” values and with less reliable safety procedures.

3. The risk that a centralized regulator might make the technology more rather than less risky.  For instance, regulation might push development of the technology into harder-to-monitor open-source forms, and inefficiently so.

Of course a key goal is to endogenize the risk of serious social damages from AI in a decentralized system, rather than taking that risk as given.  I don’t expect the authors to have done that in this paper; in any case, this literature is now up and running.
