As of Nov. 1, Ridley Scott’s Blade Runner is no longer set in the future
Blade Runner is now a film set in the modern day, and this reality has sparked widespread discourse about the state of artificial intelligence (AI) in 2019.
Are Rick Deckard’s concerns relevant in the contemporary discussion of AI? How would an expert in the present day deal with an issue similar to the four runaway Nexus-6 replicants?
Although Blade Runner’s account of November 2019 clearly overestimated certain developments — colonizing outer space, for example — the once distant potentiality of Ridley Scott’s neo-noir, sci-fi dystopia is closer than ever.
So far, the greatest example of this closeness is Sophia.
Sophia has been advertised by her creators at Hanson Robotics as the “most advanced human-like robot.” She is the first AI to be granted citizenship, which she holds in Saudi Arabia.
Not only has Sophia been given legal personhood, but she is also the first robot to be an Innovation Ambassador for the United Nations Development Programme. She has spoken at hundreds of conferences around the globe, sharing her intentions of contributing to medicine, education, and AI research.
She has her own Twitter account and has made appearances on both the Tonight Show and Good Morning Britain.
Sophia’s public presence has generated a multitude of questions and concerns, many of which are similar to those presented in Scott’s Blade Runner. Such questions tackle a range of topics, such as the meaning of personhood, the constitution of citizenship, the boundaries between ‘real’ and ‘artificial,’ and whether or not robots should resemble humans.
“My very existence provokes public discussion regarding AI ethics and the role humans play in society, especially when human-like robots become ubiquitous,” Sophia states on Hanson’s webpage.
“Ultimately, I would like to become a wise, empathetic being and make a positive contribution to humankind and all beings,” she adds.
Is this real life?
Jim Davies is a professor at the Institute of Cognitive Science at Carleton University, whose research touches on several topics associated with AI.
“The opposite of artificial is not ‘real,’ it is ‘natural,’ ” says Davies. “What’s important about Blade Runner is not the distinction between real and artificial, but whether artificial beings are persons that have rights.”
“The distinction between real and artificial is important when we consider the incredible and challenging advances in AI’s ability to mimic real people,” adds Jason Millar, an assistant professor at the University of Ottawa’s school of electrical engineering and computer science. “We saw real potential for deception in the introduction of Google Duplex, which was a system designed to sound like a real person on the phone, in order to make appointments on behalf of the user.”
“The public reaction to Google’s demo illustrated how uncomfortable many people are with the idea of being deceived into thinking they’re talking to a real person on the phone, when in fact it’s an AI.”
While the public’s discomfort means that initiatives similar to Duplex might still be a ways off, other AI is already being applied in ambiguous ways.
“We also see the real/artificial distinction playing a role in ‘deep fakes,’ which are AI-generated images, audio clips, and videos that appear to feature real people — actors, politicians — when in fact they are entirely fake,” he explains.
“The hyperrealism of deep fakes raises serious ethical questions about what information we can trust because deep fakes are so real-looking.”
Sophia is capable of recognizing human faces and hand gestures, and of comprehending emotional expression. Her claims on the Hanson webpage include that she is able to “estimate your feelings during a conversation, and try to find ways to achieve goals with you.”
Her ability to mimic what were once human-specific functions has pushed experts into new territory — a territory strikingly reminiscent of Blade Runner.
“If we are, in fact, able to create AI that mimics humans to the point where we can’t tell if they’re sentient — as is the central ethical case in Blade Runner — then we will need to seriously consider adapting the ethical posture we take toward those entities, in order to account for the possibility that we could owe them more ethical consideration than we owe to, for example, our cell phones,” says Millar.
The Declaration of the Rights of Man and of the Citizen (and the Android?)
Part of Davies’ current research involves working to further understand the human imagination by simulating it via computer software.
“We look at the behaviour of AI, and compare it to human imagination,” says Davies. “If the two systems are working similarly, then we see it as support of our theories.”
Davies notes that if experts were to copy the four-year life span of Scott’s replicants, this would complicate how authorities treat those creatures’ rights.
“The life expectancy of a creature affects how well we should treat it in the sense that each year of life that it has is worth more if the overall life expectancy is short because any amount of time is a higher overall proportion of their life.”
These considerations may seem abstract, and empathy for AI may be relatively low today, but Davies expects that to change as people become more accustomed to the idea.
“Human beings used to not care about animal welfare very much, and now they are starting to care about it a lot,” says Davies. “This is because we believe that animals also have conscious states, and can feel pleasure and pain. I would hope that if AIs were someday able to feel pleasures and pains, they would also be given moral respect.”
Technological singularity, defined as a hypothetical future point when technological expansion becomes uncontrollable and irreversible, is understandably one of the public’s greatest fears surrounding the topic of AI. In the last scenes of Scott’s Blade Runner, the struggle between Rick Deckard, a human, and Roy Batty, an android, conveys the possible reality of technological singularity being reached.
After Deckard attempts and fails to jump from one rooftop to the next while trying to escape Batty’s pursuit, Batty follows, soaring over the gap with little exertion. Is this superiority, embodied by Scott in this scene, at all feasible in 2019?
“We don’t know when or if the technological singularity is going to happen, but even if the chance is fairly remote I think it’s worth doing some preparation for,” says Davies. “If an AI were to get vastly more intelligent than any living person, or perhaps all living people, it could be the greatest thing to happen, or the most terrible.”
“Because the stakes are so high, I think it’s worth putting some amount of consideration into making sure that, if AI were to become superintelligent, it would also be friendly. Exactly how much effort we should put in is debatable,” he explains.
“When we’re talking about something (that) involves a potential risk of human extinction, it’s tempting to suggest that we should do everything that we can. But of course, we can’t do everything for all of the possible risks to human existence.”
“Everything we can” could include measures like imposing life spans on AI, like Scott’s replicants. But this brings to mind the troubling and emotional scene where Batty confronts Tyrell and begs him for more life.
In his final moments, Roy Batty saves Rick Deckard, revealing his ability to exercise free will. Batty’s original purpose as a combat unit was to kill. Saving Deckard reveals the profound injustice that Tyrell Corporation committed against Batty and the other replicants by denying him life and freedom. Have we remained considerate of this cautionary tale in 2019?
The future progression of AI remains an uncertain and clouded topic, and experts are hesitant in their statements on the subject.
“We need to continue to think deeply about the social implications of AI and develop our understanding of how those implications can be anticipated in the design, and regulation of AI,” says Millar.
Davies believes there will be small but consistent movements in the progression of AI, but, again, is hesitant in his commentary.
“I suspect that we are going to continue to have progress, and small revolutions, for the foreseeable future,” he says. “As to when we’re going to have software that is generally as intelligent as a person is, I’m not going to say, because AI researchers have a history of making bad predictions!”