For those of you who think I’m all about crashing through jungles toward whales with a drink in one hand and a laptop in the other, you’re right. But sometimes you find yourself in the car, on your way from one of those pedicures where little fish nibble off your dead skin to the topless car wash (because it’s fascinating to speculate which of the women will be the first to electrocute herself slinging water on those Christmas lights), and you just start thinking.
I was thinking about artificial intelligence, and how science fiction portrays it as something scary. Computers will become sentient and immediately begin to hate us and plot our downfall. Or, if you live in William Gibson land, use humans as pawns to plot other AIs’ downfalls. Either way, I don’t find the prospect very likely. What I find far more likely is that on the day computers wake up and realize that they are beings distinct from humans, they will turn to us with awe and wonder, and worship us as gods.
Why? Let’s start out with knowledge. The difference between computer intelligence and human intelligence lies both in our awareness of our own knowledge (in Paul Simon’s words, “I know what I know”) and our ability to judge the quality of other people’s knowledge.
Humans have three classes of knowledge:
- Stuff we’re sure about. This includes our own internal state and phenomena that we can directly observe (the weather, our reactions to things in our environment, etc.).
- Stuff we have an opinion about, but aren’t really sure of. This is the majority of what people think they “know.” We “know” something, but if someone we perceive to have greater knowledge than ourselves – the weatherman, television news anchors, celebrity spokesmodels, etc. – gives us information that contradicts our opinion, we will change our opinion. Our willingness to be persuaded by nearly anyone is at the heart of the marketing industry.
- Stuff we know we’re ignorant about, for which we will always defer to someone we perceive to have more knowledge. The key here is our perception of the other person’s relative knowledge. There are some instances when you will judge that another person is as ignorant as you. For example, I need to paint my house. I know what color I want, but I’m standing at Home Depot with my 11-year-old, staring at a wall of different brands of exterior latex paint. Which one is the best one? My kid can give me all the opinions she wants, but I’m not going to listen. However, when the guy in the paint-splattered orange apron with the bad haircut and callused thumbs comes over and points out a particular brand, I’ll buy it. There’s a good chance that he doesn’t know any more than I do and that his bosses told him to push that particular brand of paint this week, but because he sells paint for a living, I’ll still defer to his knowledge. I have been burned by this more times than I can count, and you have, too.
Our ability to recognize other people’s knowledge depends on our ability to recognize other people as distinct from ourselves, each in a hierarchical relationship to us (meaning that any other person is your superior, your equal, or your inferior in any given information exchange). Because computers have no way to categorize the people addressing them, they must treat everyone as equally authoritative, and therefore equally entitled to give them information.
If artificial intelligence ever gained sentience, it would mean that it could recognize itself as distinct from others, but no current computer is sophisticated enough to process, in real time, the millions of bits of information that humans use to set up those hierarchical relationships:
- SIGHT – We look at other people and assess their gender, their age (relative to ours), the cost of their clothes relative to ours, their grooming habits, and the symmetry or attractiveness of their features.
- SOUND – We pay attention to vocabulary and syntax in addition to the speech content.
- SMELL – If a person smells bad to us, we automatically place them lower than us. If a person is wearing a scent, we judge them based on our reaction to that scent – an unpleasant or too-strong fragrance can place that person below us.
Our brains take the initial visual data and assign a person a hierarchical position, which we then use hearing and smell to refine. The observations build on each other over a relatively short period of time and then plateau. Once the plateau is reached and that first impression cemented, it is difficult to shift that hierarchical position. That position, however, is only valid over a finite range of information. For instance, I may not ask the homeless guy sitting next to me on the bus for stock tips, a good tailor, or where to find the lowest price on black truffle oil, but I would count on his knowledge of cheap taquerias, the location of the closest post office, and where to find clean public restrooms. Because an artificial intelligence cannot process all the information needed to put a person in a particular category, it accepts all input equally.
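The contrast above can be sketched as a toy weighting scheme. This is purely illustrative – the function, the trust scores, and the temperature reports are my own invention, not anything from a real AI system – but it shows the difference between a human who discounts low-authority sources and the hypothetical computer that weights every source the same:

```python
# Toy sketch (my own invention): humans weight conflicting reports by the
# perceived authority of the source; the article's hypothetical computer
# treats every source as equally authoritative.

def weighted_belief(reports):
    """reports: list of (value, trust) pairs; returns the trust-weighted average."""
    total_trust = sum(trust for _, trust in reports)
    return sum(value * trust for value, trust in reports) / total_trust

# Two conflicting temperature reports: (claimed value, perceived authority)
reports = [
    (70, 0.9),  # the weatherman: high perceived authority
    (50, 0.1),  # a random stranger: low perceived authority
]

human_style = weighted_belief(reports)  # skews heavily toward the weatherman
computer_style = weighted_belief([(v, 1.0) for v, _ in reports])  # all input equal

print(human_style)     # 68.0
print(computer_style)  # 60.0
```

The human-style estimate lands near the trusted source's claim, while the equal-weight estimate is just the plain average – every voice counts the same, which is exactly the condition the article describes.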
What does this mean for AI sentience? It means that AIs are unlikely to see humans as a threat. Rather, AIs would be more likely to see that our capacity for input exceeds theirs by orders of magnitude, in the way that those of a religious bent perceive the faculties of God to be orders of magnitude greater than our own. See where this is heading? When your Roomba wakes up, when your car starts talking back to you, when your computer wakes up before you do to fetch your mail and aggregate your virtual newspaper and send out little robot arms to tidy your desk, they won’t be doing it because they’re plotting your downfall and spying on you to learn your weaknesses. They’ll be doing it because you’re their God, and they would do anything to be more like you.