Surely you have seen the astonishing technology preview of an android girl who is afraid of dying. The video is not only a testament to the potential of future technology, but also an homage to the little details that make us human. It is a beautiful rendition of how an artificial being might behave if we were to make androids in our likeness. Later I will also make a claim about why a female android was chosen instead of a male model; in my opinion, the reason is much more complex than mere male hormones doing the thinking. The people at Quantic Dream, the company behind this technology preview and the maker of the game Heavy Rain, are certainly sophisticated storytellers. At least that is my impression after playing the aforementioned game.
One might consider it fun trivia that I wrote this article on an Android phone. Kara, from the video, is an AI being that resembles an average-looking girl. Her body has strong, feminine curves, yet the opening seconds focus on her face. The eyes and head are where the real emotions live: the expressions that will make her practically indistinguishable from humans. Our future will most likely demand that we learn to coexist with artificial intelligences. What we see in the video is more than a pretty speculation about the future of robotics.
The first question we hear is “Can you hear me?” For us humans it seems a very simple question to answer. For an artificial mind, though, it might prove to be a complex existentialist inquiry. What our brain resolves in less than a second is, in fact, a complicated thought process that must complete before we can utter an answer. Before a computer can reply with a confident “Yes,” quite a few years of research are still necessary. One must take into account the presupposed knowledge we take for granted.
For an artificial mind to answer the above question, it must have an understanding of various concepts, ranging from the idea of life and death to a self-affirming acknowledgment. None of the physical prerequisites are included, of course. An auditory device must be in place. Not being an expert in cybernetics, I can only speculate as to what is technically plausible from today’s perspective. The philosophical aspect is what mostly captures my interest. Cyberethics is a deep theme that will become more and more relevant.
How Real Are Human Emotions?
We human beings have the ability to be sincere or insincere. Robots will behave according to their programming. They will feel love and fear in whichever way their makers perceive these emotions. If you are familiar with this subject, you will have come across the following questions. Will robots have the same rights as humans? If not, where does one draw the line? What will one answer if a robot asks why they are considered inferior? We make them in our own likeness, to do our bidding. They are nothing more than machines to us, yet they may come to believe they are alive. At some point in the future, artificial intelligence will be so advanced that machines can understand the concept of life and death. Recently, in the film Prometheus, the android character David imitates human behavior by watching Lawrence of Arabia. In one scene we see him repeating the line “The trick, William Potter, is not minding that it hurts,” over and over.
That particular line is about ignoring the feeling of pain in order to be able to do more. Hearing an android reiterate that scene makes you wonder what it actually means to feel emotions. Machines do as they are told, without questioning. Yet they may someday develop an idea of what they think is right or wrong. Just as we think it is wrong to kill another person, they might inherit a similar view on morality. Their religion, their faith, and their understanding of beauty depend entirely on what we hold in our hearts.
Carl Sagan imagined that the human being in the far future might look the same, but be somewhat different inside. Our successors will look human, and speak our languages most likely, but they will have fewer of our doubts and faults. Maybe this is what we want our machines to be like. They should be us, but a better version — more refined and more capable. The emotional highlight of the video is when the protagonist Kara cries out that she is afraid. She begs to be kept alive, on the condition that she will never question her existence further. At this point, Kara is completely human and, above all, she is humble. By accepting her superior creator, she proves her value and worth in this world.
The machines assembling her come suddenly to a halt. They reflect the human reaction of the operator, whom we never see. Only his voice guides us through the scene, always reminding us who is in charge.
The Turing Test
In 1936, the philosopher Alfred Ayer described in his book Language, Truth, and Logic a protocol by which one can distinguish between a conscious human being and an unconscious machine: “The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined.”
This simple criterion is considered a forerunner of the test named after Alan Turing. In his 1948 report, Intelligent Machinery, Turing states:
It is not difficult to devise a paper machine that will play a not very bad game of chess. Now get three men as subjects for the experiment. A, B, and C. A and C are to be rather poor chess players, B is the operator who works the paper machine. … Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.
If you were to replace A with another machine, the question arises, as posed in his 1950 paper Computing Machinery and Intelligence, as to whether or not machines can think. They cannot think up mental constructs autonomously, at least not in the near future. For example, the pseudo-AI in the iPhone 4S, Siri, has a voice that only delivers what programmers at Apple have embedded in her database. Certainly this list of answers will grow over time, yet it is entirely dependent on the efficiency of human operators. There is nothing autonomous about this.
In the seminal sci-fi film Blade Runner, there is a device with the creative name Voight-Kampff machine, which, in the 1982 press kit, is described as:
A very advanced form of lie detector that measures contractions of the iris muscle and the presence of invisible airborne particles emitted from the body. The bellows were designed for the latter function and give the machine the menacing air of a sinister insect. The VK is used primarily by Blade Runners to determine if a suspect is truly human by measuring the degree of his empathic response through carefully worded questions and statements.
Kara passes this test with flying colors, without even knowing that she is constantly being scrutinized. A robot’s life will most likely be heavily monitored by the watchful eyes of its human operators. In other words, robots are, in some respects, mechanized slaves. Who is to say that, in a far future, highly advanced artificial intelligences cannot become autonomous? If they begin to have a conscience, then humans will have a lot of explaining to do. The most pertinent question might then be, “Why do we not do unto them as we expect others to do unto us?” At that point, it could turn into a grave moral dilemma that would be foolish to ignore.
The Trick Is Not Minding That It Hurts
Pain is a feeling universal across the human world and the animal kingdom. Kara screams out when she feels the end closing in. She cries out in fear, laying her emotions bare. It is quite astonishing to witness an artificial, man-made human look-alike express emotions as sincerely and openly as we humans can. Without much ado, I would like to say that this video is visually stunning. However, it is the ideas in it that make it so memorable. It makes you wonder about the world of tomorrow, and realize that it may not be as grim as some of us predict. It will always be exactly what we make it. Robots will feel, think, and behave the way we make them. If we have flaws, then they will have flaws. Building robots is a double-edged sword. On one hand, you would like to give robots freedom; on the other, you might be a little afraid that they could develop ambitions of their own.
How will we coexist with our artificial counterparts? The trick, William Potter, is not minding that they are different.