Who are the real AI experts?

That is the topic of my latest Bloomberg column, which is twice as long as usual. Here is one excerpt:

It almost goes without saying that there are different kinds of expertise. National security specialists, for example, confront dangerous risks to America all the time, and they have to develop a synthetic understanding of how to respond. How many of them have resigned from the establishment to become AI Cassandras? I haven’t seen a flood of protests, and these are people who have studied how destructive actions can amplify through a broader social and economic order. Perhaps they are used to the idea that serious risks are always with us.

And here is more:

When it comes to AI, as with many issues, people’s views are usually based on their priors, if only because they have nowhere else to turn. So I will declare mine: decentralized social systems are fairly robust; the world has survived some major technological upheavals in the past; national rivalries will always be with us (thus the need to outrace China); and intellectuals can too easily talk themselves into pending doom.

All of this leads me to the belief that the best way to create safety is by building and addressing problems along the way, sometimes even in a hurried fashion, rather than by having abstract discussions on the internet.

So I am relatively sympathetic to AI progress. I am skeptical of arguments that, if applied consistently, also would have hobbled the development of the printing press or electricity.

I also believe that intelligence is by no means the dominant factor in social affairs, and that it is multidimensional to an extreme. So even very impressive AIs probably will not possess all the requisite skills for destroying or enslaving us. We also tend to anthropomorphize non-sentient entities and to attribute hostile intent where none is present.

Many AI critics, unsurprisingly, don’t share my priors. They see coordination across future AIs as relatively simple; risk-aversion and fragility as paramount; and potentially competing intelligences as dangerous to humans. They deemphasize competition among nations, such as with China, and they have a more positive view of what AI regulation might accomplish. Some are extreme rationalists, valuing the idea of pure intelligence, and thus they see the future of AI as more threatening than I do.

So who exactly are the experts in debating which set of priors is more realistic or useful?

Recommended!
