Overconfidence and AI

Human beings are often more effective when we’re a bit self-effacing. “I think,” “Perhaps,” or “I might be missing something, but…” are fine ways to give our assertions a chance to be considered.

The solar-powered LCD calculator we used in school did no such thing. 6 × 7 is 42, no ifs, ands, or buts.

Part of the magic of Google search was that it was not only cocky, it was often correct. The combination of its confidence and its utility made it feel like a miracle.

Of course, Google was never completely correct. It didn’t find exactly the right page every time; that part was left to us. But the aura of omniscience persisted. In fact, when Google failed, we were supposed to blame evil black-hat SEO hackers, not an imperfect algorithm and a greedy monopolist.

And now, ChatGPT shows up with fully articulated assertions about anything we ask it.

I’m not surprised that one of the biggest criticisms we’re hearing, even from insightful pundits, is that it’s too confident. That it announces, without qualification, that biryani is part of a traditional South Indian tiffin, when it isn’t.

Would it make a difference if every single response began, “I’m just a beta of a program that doesn’t actually understand anything, but human brains jump to the conclusion that I do, so take this with a grain of salt…”?

In fact, that’s our job.

When a simple, convenient bit of data shows up on your computer screen, take it with a grain of salt.

Not all email is spam.

Not all offers are scams.

And not all GPT-3 responses are incorrect.

But it can’t hurt to insert your own preface before you accept what it says as true.

Overconfidence isn’t the AI’s problem; it’s ours. AI will cause plenty of cultural and economic shifts, and our gullibility is one of the things we ought to keep in mind.