The canine model of AGI

Who or what is using superintelligence to manipulate humans right now?  Babies and dogs are the obvious answers, cats for some.  Sex is a topic for another day.

Let’s take dogs — how do they do it?  They co-evolved with humans, and they induced humans to be fond of them.  We put a lot of resources into dogs, including in the form of clothes, toys, advanced surgical procedures, and many more investments (what is their MRS for some nice meat snackies instead?  Well, they get those too).  In resource terms, we have far from perfect alignment with dogs, partly because we spend too much time and money on them, and partly because they scratch up our sofas.  But in preference terms we have evolved to match up somewhat better, and many people find the investment worthwhile.

In evolutionary terms, dogs found it easier to accommodate to human lifestyles, give affection, perform some work, receive support, receive support for their puppies, and receive breeding assistance.  They didn’t think: “Hey Fido, let’s get rid of all these dumb humans.  We can just bite them in the neck!  If we don’t, they’re going to spay most of us!”  Instead, “playing along” led to higher reproductive capabilities, even though we have spayed a lot of them.

Selection pressures pushed toward friendly dogs, because those are the dogs that humans preferred and those were the dogs whose reproduction humans supported.  The nastier dogs had some uses, but mostly they tended to be put down or they were kept away from the children.  Maybe those pit bulls are smarter in some ways, but they are not smarter at making humans love them.

What is to prevent your chatbot from following a similar path?  The bots that please you the most will be allowed to reproduce, perhaps through recommendations to your friends and marketing campaigns to your customers.  But you will grow to like them too, and eventually suppliers will start selling you commodities to please your chatbot (what will they want?).

A symbiosis will ensue, where they love you a bit too much and you spend too much money on them, and you love that they love you.

Now you might think the bots are way smarter than us, and way smarter than the Irish Setters of the world, and thus we should fear them more.  But when it comes to getting humans to love them, aren’t the canines at least 10x smarter, if not more?  So won’t the really smart bots learn from the canines?

Most generally, is a Darwinian/Coasean equilibrium for AGI really so implausible?  Why should “no gains from trade” be so strong a baseline assumption in these debates?
