#inspiredbystories

A and I – on the ethics of Artificial Intelligence in the military

by Asumpta Lattus

What is artificial intelligence? What are the challenges and opportunities brought by AI? In our new column A and I, Asumpta Lattus delves into the matter by talking to the women moving and shaking the AI sector. This time she interviewed Ilse Verdiesen from Dutch Cyber Command in The Hague.

Artificial intelligence in the military sector has been under criticism for some time and has recently sparked intense debate. The idea that machines known as autonomous weapons – or “killer robots” – could fall into the wrong hands frightens not only ordinary people but also experts in the field. We spoke with Ilse Verdiesen, an expert on the ethics of autonomous weapons, about the benefits and implications of AI in the Dutch military. Verdiesen also speaks about the risks, her concerns, and how best to handle this sensitive technology. The 42-year-old entered the army at 18, now works for Dutch Cyber Command in The Hague, and is a part-time PhD student at Delft University of Technology.

In your work with the army, what are the most recent AI applications?

In the Dutch army we have a Robotics and Autonomous Systems unit. It was established just recently and consists of two parts: the first deals with robotic hardware systems that can be used in a military environment, for example unmanned cargo systems or drones that are used to deliver supplies to our units. The second deals with autonomous systems that use sensors and AI software to create better situational awareness in a military operation. With this unit we are applying unmanned systems to do the dull, dangerous, and dirty tasks.

One of the goals of using AI in the military is to make up for the limitations humans have in creating a peaceful world. What is the difference between having the army’s job done by humans and having it done by a machine?

I think the main difference is that AI is basically a machine – it’s a computer and we program it. And human beings have a lot more to offer than machines can right now. We have intuition and emotions that machines don’t have. We not only have creativity, we can also reflect on the decisions we make and question those decisions. We can use machines to carry out some tasks, or to help us carry out tasks, but humans should be the ones making decisions, especially when they impact people’s lives. That is, for me, the biggest difference between humans and machines.

Rights groups have been campaigning against autonomous systems in the army. But some countries have categorically refused to give in to their demands to stop investing in “killer robots”, as they call them. Can you tell us a little more about what autonomous weapons do?

Well, to put it simply, autonomous weapons are weapon systems equipped with AI. But there are a lot of different ways of defining autonomous weapons, and there is no one definition that everybody agrees upon. I use the definition of ‘a weapon that, without human intervention, selects and engages targets matching predefined criteria, following a human decision to deploy the weapon’. I like this definition because it makes sense from an engineering and military perspective. For me that is very important, because a weapon doesn’t decide on its own to be launched or to engage a target. A human decides that.

Scientists have long warned about the potentially disastrous consequences that could arise when complex algorithms incorporated into autonomous weapons systems can select and engage a target without meaningful human control. Where do ethics come into all of that?

Well, the ethics of AI systems is something we are still trying to develop, at least in the scientific world. There are certain moral values and ethical principles that we hold in the Western world. In Europe, for example, we value privacy a lot. So how would you build systems that adhere to those values? That is a very big question that many people are working on right now. So, if privacy is important, we also have to think about how to develop AI systems that make sure our privacy isn’t violated, and about how the data recorded, or transferred through the system, is kept secure. This also makes us think about regulations. The GDPR in Europe is the next step in regulating AI, or regulating data.

What are the most important ethical steps that scientists and regulators should start thinking about immediately?

I think we need to think about which decisions we would like to hand over to machines, and how humans can still supervise those decisions. There is a lot of talk in the autonomous weapons debate about meaningful human control, which is actually the point that I am studying. The biggest questions are how to remain in control while supervising decisions made by machines, and how to regulate that. How much control do we want to hand over to machines? When is a human consulted once a machine has reached a decision based on an algorithm? I think those are tricky ethical questions right now.

What is the dangerous part of using AI in the army? What are your concerns?

I think the most dangerous part is that we build systems that we do not fully understand. They can become less predictable than we would like them to be, or they could have consequences that we didn’t expect or intend. That’s a risk that I see. In the army we always want systems that we can rely upon and whose effects we know exactly when we use them. We wouldn’t want a black-box AI system that one day starts doing things uncontrollably.

Could you give me an example of what you are talking about, a real example?

It’s really hard to think of a real-life example right now. I am not sure whether we are using it in the Netherlands, but I know face recognition technology exists and is used in China. Face recognition systems are used to automate video processing: instead of human operators, an AI system looks at the videos to see whether people are moving suspiciously or trying to do something undesirable. Currently the technique, if I understand the scientists correctly, is only 80 percent reliable. So that’s where I start questioning the technique: whether it can tell for sure that the person I am seeing is actually the one I am seeing. And I know from research that it is very easy to manipulate images and videos.

When did you get in touch with AI for the first time?

The first time I heard of AI was in 2016, when I was doing my master’s programme at Delft University of Technology in the Netherlands. I went to a summer school on responsible AI, where we had lectures about AI applications in autonomous vehicles, intelligent agents, and the ethical design of AI systems. It was then that my interest in AI was sparked, and I started wondering about the implications of AI in the army and what the benefits and risks of this technology could be.

How do you think AI is going to change your day-to-day life, positively and negatively?

I think AI will start making more and more decisions in our daily lives. I could imagine an AI system assisting me with my daily schedule and learning from my sleep pattern; it would set my alarm clock differently each day so that I get to work at the same time every day. For me, this would be a positive change because it would make my life easier and I wouldn’t have to think too much about my routine. Personally, I would not like it if AI systems started to make important decisions in my life; for example, whether I should get a mortgage or financial support from the bank.

What is the most interesting AI gadget that you would really love to use?

Ohhh that is a good question. I don’t know!

What about a virtual assistant, just to name one?

No, I am very cautious about using technology right now, because it isn’t secure enough. I think I would love to use a Lego Mindstorms robot that is not connected to the Internet. Anything that is not connected to the Internet – I am fine with it.

 
