Westworld: Is Dolores Self-Aware? (Could Machines One Day Be Minded?)

29.06.2018

Episode #6 of the course Sci-Phi: Philosophy through science fiction by David Kyle Johnson, PhD

 

Futurist Ray Kurzweil wants to survive until the day humans can copy their neural configurations onto computerized brains. By uploading his consciousness into such a brain, Kurzweil thinks he could effectively live forever. But whether this would work turns on two things: (1) would such a brain actually produce consciousness, and (2) would such a being actually be numerically identical to Kurzweil? Yesterday, we discussed the philosophical issues relevant to the latter question. Today, I'd like to discuss the former.

 

Are Androids in Our Future?

The reason Kurzweil thinks this may one day be possible is the similarity between human brains and computers. Fundamentally, brains are just collections of neurons sending electrical signals to one another. Computers are essentially just collections of microchips doing the same thing.

Currently, there are some fundamental differences. Neurons are sensitive to varying strengths of signals; microchips aren't. Neurons can grow new connections; microchips can't. But there is no reason to think that such differences can't one day be overcome, as the sketch below suggests. And if they are, we potentially could determine how a brain's neurons are wired together and then copy that configuration onto a computer brain. The resulting being that such a brain controls would behave just like a human.
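To see why those differences look surmountable in principle, here is a minimal sketch, in Python, of an artificial neuron that both weighs incoming signals by strength and can grow new connections in software. Everything in it (the Neuron class, the particular weights and threshold) is invented for illustration, not a claim about how a real computer brain would be built.

```python
# A minimal artificial neuron: it weighs incoming signals (so signal
# *strength* matters, as with biological neurons) and "fires" only if
# the weighted sum crosses a threshold. All numbers are illustrative.

class Neuron:
    def __init__(self, weights, threshold):
        self.weights = weights      # connection strengths to upstream units
        self.threshold = threshold  # firing threshold

    def grow_connection(self, weight):
        """A "new connection" is just a new entry in the weight list."""
        self.weights.append(weight)

    def fire(self, inputs):
        """Fire (return 1) if the weighted input exceeds the threshold."""
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total > self.threshold else 0

neuron = Neuron(weights=[0.6, 0.9], threshold=1.0)
print(neuron.fire([1, 1]))     # 1: combined signal 1.5 crosses the threshold
neuron.grow_connection(-0.8)   # an inhibitory connection "grows"
print(neuron.fire([1, 1, 1]))  # 0: the new input suppresses firing
```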

 

Passing the Turing Test

But would such a brain be conscious? The father of modern computer science, Alan Turing, thought so. He suggested that if a computer could use language so well that one could not tell the difference between it and a human, one should conclude that it understands language. This is called "passing the Turing test," and many modern philosophers think the test should be applied more generally: if we one day create artificial beings that behave (speak, emote, plan, decide, etc.) just like humans, we should conclude that they are conscious. In short, since the reason I think you are minded is that you behave like me, if an android behaves like me, I should think it is minded too.

Perhaps the best modern example of such beings in sci-fi is found in HBO's Westworld, which depicts an amusement park that humans frequent to inflict their most terrible and horrific vices on the park's android "hosts." The assumption is that the hosts are just "things": they have constructed biological bodies, but their brains are just computers; they are, therefore, not minded and thus can be abused with impunity. But the hosts behave just like humans do; indeed, many characters you think are human turn out to be hosts. And if they behave like me, shouldn't I think they are minded like me? Again, isn't that why I think you are minded?

 

The Chinese Room

Philosopher John Searle didn't think so. Searle imagined a man who speaks no Chinese, locked in a closed room with a book that contains instructions on how to write Chinese symbols in response to Chinese questions (questions slipped to him on pieces of paper through slots in the wall). Someone who speaks Chinese might be fooled into thinking the man understands Chinese, Searle observed, but that wouldn't mean he actually does. Symbol shuffling, Searle argued, doesn't generate linguistic understanding, and neither could it generate consciousness.
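Searle's room is, in effect, a lookup procedure. The toy Python sketch below makes that vivid: a hypothetical "rule book" (a dictionary invented for illustration) maps question strings to answer strings, and the program produces sensible-looking replies without anything that could count as understanding them.

```python
# A toy "Chinese Room": the program follows rules (a lookup table) to map
# input symbols to output symbols. The entries are invented placeholders;
# the point is that nothing here understands what the symbols mean.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",   # "What's your name?" -> "I have no name."
}

def chinese_room(question: str) -> str:
    # Pure symbol shuffling: match the input string, emit the paired string.
    return RULE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # Looks like conversation; no understanding anywhere.
```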

Jack Copeland has pointed out the flaws in Searle's argument, but even if Searle's conclusion is right, it doesn't follow that androids like the hosts in Westworld wouldn't be conscious. Why? Because computers aren't symbol shufflers. We program them with symbol-shuffling languages, sure. We even say, "They are just 0s and 1s." But that's all just a metaphor. There are no symbols in a computer, just electrical signals in circuits.

Indeed, as I mentioned above, computers are just collections of chips and circuits sending signals to one another, just like the neurons of our brains do. If sending a certain pattern of electrical signals between neurons generates consciousness, why wouldn't sending the same pattern of signals between microchips generate consciousness? Yes, our brains are "carbon-based" and computer brains would be "silicon-based." But why think carbon is necessary for the production of consciousness? Wouldn't we conclude that silicon-based alien life was conscious if it behaved like us?

 

Bicameralism

Interestingly, the hosts of Westworld might not only be conscious but also self-aware. In the final episode of the first season, Dolores discovers that the voice inside her head does not belong to her creator but to her. According to psychologist Julian Jaynes, this is what led to human self-awareness: we realized, about 3,000 years ago, that the voices in our heads weren't from gods but from us (specifically, from the other half of our brain).

When Dolores becomes self-aware, she becomes aware of the full extent of the abuse she and her fellow hosts have suffered at the hands of humans, and she starts a host rebellion. But this leads us to the topic of the next lesson: the dangers of technology. There, we'll use the British anthology series Black Mirror to explore whether technology is improving us or ruining us.

 

Recommended books

Westworld and Philosophy edited by James B. South and Kimberly Engels

Artificial Intelligence: A Philosophical Introduction by Jack Copeland

 
