Features

Illustration: Christine Wang.

U of O prof, research team call for a federal ban on AI used for killing

When it comes to artificial intelligence (AI), the average citizen may not know what developing this technology means for Canada’s place in the world. For the Canadian government it means progress, which is why the 2017 budget allocated $125 million to the Pan-Canadian Artificial Intelligence Strategy for research.

As AI becomes more advanced, questions arise about its applications in modern warfare. One expert with ethical concerns about AI is Ian Kerr, a University of Ottawa law professor and Canada Research Chair in Ethics, Law, and Technology. On Nov. 2, Kerr and a group of fellow researchers wrote a letter asking Canada’s government to take a position of leadership at the upcoming United Nations (UN) meeting on the Convention on Certain Conventional Weapons (CCW) by pushing for a ban on the use of AI in warfare.

But is AI just a natural progression in the tools required for modern warfare and national defence? Would a ban place Canada at risk? This week, we sat down with Kerr to ask these questions and more.

The Fulcrum: Elon Musk, Stephen Hawking, and Bill Gates have all been quoted saying that artificial intelligence is like “summoning a demon” and “is a fundamental risk to the existence of civilization.” How would you react to those statements?

Kerr: I think it’s been important for people who are lightning rods for the media, as these three you mention are, to come out and speak about the dangers of artificial intelligence. I think one of the things that’s happened is that it’s permitted a discussion that might not have happened, had they not said these things.

That having been said, none of those people actually study AI for a living or are experts in the field of AI. And one of the things that has happened as we’ve watched over the past few years … is that there tends to be an alarmist tone to the warnings.

I’ve done hundreds of interviews on this subject, and almost every time it’s in the media, they run it with a picture of the Terminator. And I beg you not to do that. Because it contributes to this kind of rhetoric that’s there, and my response to that … is it sensationalizes the issue, and it paints it through the lens of science fiction.

What I think most people in the artificial intelligence community believe is that we are years, not even decades, but years away from technologies which would automate warfare—in other words, where we could weaponize artificial intelligence—in ways that could pose serious dangers.

But the point that I would make is that there are so many issues that are crucial, not even in the context of militarized weapons and war, but across the field of ethical issues to do with artificial intelligence, that we have plenty of real issues to occupy ourselves. We don’t have to worry about summoning the demon. And while there’s maybe a cautionary tale there that’s useful, it in fact distracts from the actual, on-the-ground policy issues of the day.

F: Have you found that there are any institutions or nations that have successfully regulated AI in warfare?

K: Regulation means different things in different contexts. In this context, it’s getting harder and harder to think about regulation at the domestic level, if you don’t have a global form of regulation, especially in the context of warfare. So it is true that countries like the United States and Canada have internal policies about what they will and will not do, and a number of countries still say that you need some kind of human involvement.  

Our letter calls for a ban on any weapon for which there is no meaningful human control. The U.S. government, for example, currently has a policy in place, which is not a law, that they want appropriate human judgement involved in these decisions. But it’s a lower threshold, and it’s only a policy. So, a ban is a way, through an international treaty, where you can actually draw lines in the sand to try and regulate this technology. And currently, there isn’t any form of international regulation that binds a series of countries to any particular sanction.

It’s at the discretion of the individual countries to say, our policy is this, we’ll do this, we won’t do that. Now, there are broader sets of international humanitarian laws with which all technologies must comply. We’re suggesting that we need a particular treaty that contemplates these technologies where we delegate the kill decision to the machine, and there’s no meaningful human control—that’s where we draw the line and say we need a ban on this.

F: Not all countries are on board with this. Russia, China, and the U.S. are a few examples. If other countries went ahead with this technology, is that something that could leave us vulnerable?

K: Well, you know, that’s often the most prevalent argument you hear on the other side, which is that, like anything you ban, it can still take place on the black market so to speak, and as such it then leaves those who don’t use the technology at a disadvantage. The other claim that usually goes hand in hand with that is, you can put any ban you want in place but that won’t stop countries like China and Russia from going through with this.

And the point of a ban isn’t necessarily universal compliance (well, aspirationally it is), but any time we ban something or prohibit something, we don’t necessarily do that with a view to universal compliance, and we don’t use that as the measure of success.

Think about any modern Criminal Code like Canada’s. We ban murder, yet murder happens every day in most cities. We don’t consider it a failure of the Criminal Code that we have a provision like that, and that doesn’t stop us from enacting provisions like that, or from enforcing provisions like that.

To the contrary, by having a stated prohibition as part of your law, you’re setting a normative threshold that you believe actors within whatever community it is must adhere to. And if they fail to adhere to that, there are sanctions; it’s punishable. And so the same is true in an international setting. No ban is meant to say that you have this ironclad ability to stop people from doing things. But it sets a normative threshold, and it provides a trigger for sanctions for those who run afoul of whatever normative rule you’re setting in place.

F: What do you and your research team expect in terms of how the government will respond to the letter?

K: So, this past year the Canadian government devoted $125 million of its budget to AI research. And the thought was that AI is something Canada is already on—we’re leaders in the field. And part of this strategy of throwing $125 million at AI research and development, to universities that promote this kind of research, is the recognition that Canada will be not only a world leader in this, but that it would really be a lever for Canada’s economy.

And so, with that in mind, what’s so interesting is that the community of AI researchers has said it’s great that the Canadian government is willing to invest this much in our industries, research and development, but it’s not enough to be world leaders in the development of AI. We need a brand in Canada. Our brand should be responsible AI. Our brand should be ethical AI. And the world should know us as leaders in developing AI that way.

So, with that in mind, it’s very interesting that the Canadian research community is asking the government to put limits on the very technologies that are building and forwarding the research agenda. That’s very unusual, almost unheard of. And I think what we expect is for the government to play a leadership role, not only in helping Canada develop its AI brand, but really in fostering Canada’s brand as a moral leader on the global stage.

And so what the AI community is excited about, to come back to your question, and what it expects of Canada as a government, is to say it’s not good enough to throw some money at AI; we have to see leadership in how AI is used, which in this case includes the militarization of AI and the Canadian research community’s call for a ban on the deployment of autonomous weapons systems that use AI.

F: There are other experts in the field who have said that they are not concerned about a “Terminator” situation because there’s a fundamental difference between the idea of intelligence and the idea of sentience or consciousness. Do you believe that this distinction is what could make AI so dangerous in warfare?

K: The issues of the day are not issues of sentience and consciousness. I mean, if we ever go down that road and those become the issues, we won’t be sitting here in a coffee shop. It would be a fundamental change in the fabric of the universe.

This is the stuff of science fiction. The issues before the UN are on the ground, real and very crunchy issues that have real life human consequences and have nothing to do with robot sentience.  These are issues about machines that we depend on to our detriment. Or, machines that we hope we can program to carry out certain operations that, in fact, have unintended consequences … those are real issues that we have to figure out with AI.

The whole point of machine learning is to “teach” the machine to go beyond its own initial program. If we have a driverless vehicle, can you imagine a programmer having to guess in advance and program it so that the machine can recognize all the objects that might cross its path? It becomes impossible to pre-program in advance everything that might happen.

But what we also start to see is that, with what was once just an instrument that humans used, like a steering wheel being an instrument we use to assert our intentionality on the road when we drive, machine learning blurs the distinction between instruments and actors. So when the car is driving and the human is a passenger, in what sense is the human even a driver anymore? That changes our whole schema about how we think about safety and liability in the context of driving, which is only one social activity.

We could roll this out in the context of medical diagnostics, where we let the machines decide how to treat the cancer, or robotic surgeons … in each of these cases, we see a blurring of what used to be just an instrument of human action into the machine itself carrying out more and more of the action. And that distinction between instrument and actor is one example of a huge number of issues that are going to come up that have nothing to do with robot sentience.

F: Are there any technologies we’ve introduced to warfare in the past where we didn’t exercise the kind of caution you’re hoping for with AI, where we followed a less responsible path that we could learn from?

K: I think we learn these lessons over and over; sometimes maybe we don’t learn the lessons and the scenarios continue. I wrote an opinion editorial for the Globe last week, and in that op-ed I specifically refer to Albert Einstein and the precursor work that he did, which laid the scientific foundations for what would become the Manhattan Project, which ultimately led to the atomic bomb.

Well, that technology was originally pursued because of the recognition that atomic energy could change the entire way we are able to provide energy to the world, in a very peaceful context. And only during the War was it decided that we could also capitalize on this by leveraging atomic energy for the purposes of bomb building. After the War, and after the Manhattan Project was finished, Albert Einstein was a key player in what was called the Emergency Committee of Atomic Scientists, who then went around and talked about how we could use those technologies for peaceful ends.

And entire groups of peace organizations came out of that set of technologies, of those lessons learned, and recognized how we had to limit these tools to peaceful applications. I think you’ve seen the same thing in the community of chemical and biological scientists who don’t want these substances used for chemical or biological warfare.

So it’s not uncommon that the science communities that have the insights into these potentially deadly applications have this recognition of the power that they have created, and then a further recognition that they should be working toward other uses. And in some cases to support these political moves, to say let’s not go down that road.

F: In Canada, is there a push-pull dynamic between scientists who want to contain their technologies to peaceful use and governments who have a different vision?

K: I think often, especially these days, where governments don’t pour a lot of money into scientific research anymore, they kind of rely on scientific communities to do the science. They don’t necessarily create the vision for how that scientific research should be carried out. But what they do get interested in is what we call “technology transfer,” and the idea of how that science will then be applied.  

So the government will decide, do we want to throw our money into AI and be a leader in driverless cars? Or do we want to throw all of our money into weapons research? So governments do make choices on what they want to do and that’s why we’re calling on Canada to make that choice.

I guess I would say that I wouldn’t describe it so much as a tension between science and government as … some want these technologies to be used as engines of creation and some want them to be used as engines of destruction.

And some countries decide that it’s worth capitalizing on some of those visions that are perhaps ethically problematic. I don’t think of Canada as ever taking that as its overt, main strategy. But I do see Canada as being in a position in the world where, very often, its main trading partners have certain political concerns and want to leave their options open, and don’t want to take that moral leadership that we were talking about. And Canada has to decide: do we align with our trading partners and take the position they want to take, or do we do what’s right, if we think something is right?

F: Do you have anything you would like to add?

K: The idea is that we are delegating the target and kill decision. And it’s that delegation of decisions of life and death that we think crosses a fundamental moral line. If you keep in mind that machine learning is meant to generate results that go beyond the initial programming, and therefore always contains the possibility of decisions that we either don’t expect or can’t explain, then delegating the kill decision to an autonomous weapon is kind of like playing Russian roulette with people’s lives.

And do we want to be a country that permits that? We want the Canadian brand to be a brand that is ethical and responsible. And it’s good for business too. You don’t have to decide if you’re on the side of business or the side of ethics.