If you’re a fan of HBO’s Westworld, then you’re probably wondering how something so advanced could become a reality. How far away are we from travelling to these places that blur the line between play and reality? Could robots reach a level of consciousness? Can they dream? Most importantly, would we be safe?
David Eagleman, a neuroscientist at Stanford University in Palo Alto, CA, and scientific adviser to the Westworld writing staff during season one, took a stab at explaining how likely some of these things are to actually happen.
How did you get involved in the show?
Eagleman: I was talking with one of the writers, and I asked who their scientific adviser was. Turns out, they didn’t have one. So that’s how I got on board. Then I went to [Los Angeles, California] and had a long session with the producers and writers, about six hours, maybe eight, on free will and the possibility of robot consciousness.
I also showed them some tech that I’d invented. I gave a TED talk a few years ago about a vest covered in vibratory motors. That’s now part of the season two plot. I can’t tell you anything about it. The real vest vibrates in response to sound, for deaf people, but in Westworld it serves a different purpose, giving the wearers an important data stream.
What else did you talk about?
Eagleman: What is special, if anything, about the human brain, and whether we might come to replicate its important features on another substrate to make a conscious robot. The answer to that, of course, is not known. Generally, the issue is that all Mother Nature had to work with were cells, such as neurons. But once we understand the neural code, there may be no reason we can’t build it out of better substrates, accomplishing the same algorithms but in a much simpler way. This is one of the questions addressed this season.
Here’s an analogy: We wanted to fly like birds for centuries, and so everybody started by building devices that flapped wings. But eventually we figured out the principles of flight, and that enabled us to build fixed-wing aircraft that can fly much farther and faster than birds. Possibly we’ll be able to build better brains on our modern computational substrates.
Has anything on the show made you think differently about intelligence?
Eagleman: The show forces me to consider what level of intelligence would be required to make us believe that an android is conscious. As humans, we’re very ready to anthropomorphize anything. Consider the latest episode, in which the androids at the party so easily fool a person into thinking they are human, simply because they play the piano a certain way, or take off their glasses to wipe them, or give a funny facial expression. Once robots pass the Turing test, we’ll probably recognize that we’re just not that hard to fool.
Can we make androids behave like humans, but without the selfishness and violence that appears in Westworld and other works of science fiction?
Eagleman: I certainly think so. I would hate to be wrong about this, but so much of human behavior has to do with evolutionary constraints — things like competition for survival, for mating, for eating. That shapes every bit of our psychology. Androids, not possessing that history, would certainly show up with a very different psychology. It would be more of an acting job; they wouldn’t necessarily have the same kind of emotions as us, if they had them at all. And this is tied into the question of whether they would even have any consciousness — any internal experience — at all.
Are there any moments of especially humanlike behavior in the show?
Eagleman: In my book “Incognito,” I describe the brain as a team of rivals, by which I mean you have all these competing neural networks that want different things. If I offer you strawberry ice cream, part of your brain wants to eat it, part of your brain says, “Don’t eat it, you’ll get fat,” and so on.
We’re machines built of many different voices, and this is what makes humans interesting and nuanced and complex. In the writers’ room, I pointed out the moment when one of the [android] hosts, Maeve, in the final episode of season one, finally gets on a train to escape Westworld — and then decides she’s going back in to find her daughter. She’s torn, she’s conflicted. If the androids had a single internal voice, they’d be missing much of the emotional coloration that humans have, such as regret and uncertainty. Also, it just wouldn’t be very interesting to watch them.
According to Eagleman, Westworld isn’t a realistic depiction of what life could be like — at least not anytime soon. Most of what we see on the show exists for entertainment, but that’s not to say we can’t reach that level of interaction with artificial intelligence someday.