Should We Leave AI Ethics Talks to Engineers Alone?

by Asumpta Lattus

An Interview with Dr. Dorothea Baur, an expert with many years of international and interdisciplinary experience in ethics

Ethics in AI has been a major topic in the AI industry over the past few months. Both online and offline, experts have been out and about looking for ways to better regulate AI, terrified and alarmed by the possible consequences and risks if this powerful tool falls into the wrong hands or is used mischievously by criminals. Can AI really be governed, and should we leave AI ethics to engineers alone? In this episode of A and I, Asumpta Lattus speaks to Dr. Dorothea Baur about these issues. Dr. Baur is the founder and owner of Baur Consulting, based in Switzerland. She advises various companies and organizations on ethics, with an emphasis on the financial and technology industries.

Why is ethics so important to you?
I am always surprised by how diverse and how confusing my career path comes across to other people because, to me, there is a red thread running through everything I do. It all started with the question of justice, and I would like to think that I was born with this question. Justice and ethics are, of course, closely related. I am always oscillating between adapting to and integrating into the system and criticizing it from the outside.

When did you start to take business ethics seriously?
It all started when I graduated from high school and was working in a factory. I cared a lot about the workers’ well-being. I wanted to know the reality of the people there, and this is where I realized that their job wasn’t that easy. This is where I decided that I needed to understand how this whole system works – how the economy works, and how business works.

What kind of a factory was this?
It was a soup (Maggi) factory. I was cleaning and running some routine tests after my graduation from high school, as an interim job before I went on to university studies. Then, one day, the productivity team came to measure how productive the workers were. The weird thing was that they took me as the benchmark in terms of speed and efficiency. Consequently, they fired another woman who earned her living from the job. I was really shocked. I found it very unjust. This is also what led me to do my PhD on business ethics.

Where does this urge in you come from?
My father was a conventional manager and I kind of challenged him when I was a teenager – about justice, about fairness. I questioned everything about business logic and whether any of it was fair.

Why the abrupt change to AI and ethics?
AI ethics is an extension of classic business ethics. Questions like what responsible tech companies are doing to tackle the subject are the same kinds of questions I dealt with in business ethics while researching and teaching at university for many years. It is very much a continuation of a discourse that we have in business ethics. AI ethics, at the same time, also touches upon environmental ethics. Both are subjects that I have lectured on, done research on, and written about for a long time.

How did you bump into AI then?
It was soon after I got onto Twitter and experienced a lively debate on AI ethics. Some of these great minds invited me into their personal groups, and I found a whole universe opening up to me. And I discovered an intrinsic motivation that I had lacked during my academic career, when I should have had it.

How important is ethics in AI in comparison to business and environmental ethics, subjects that you have always dealt with?
Ethics in AI is very important because it has far-reaching consequences for our future. One of my anchor points in talks about AI ethics is Hans Jonas, who wrote the book “Das Prinzip Verantwortung” (“The Imperative of Responsibility”); in it, he talks about nuclear power, which changes our horizons and our scope of responsibility because there we have to make ethical decisions whose consequences reach into the future, decades or centuries ahead. AI ethics carries some of that with it. What happens when we let machines make decisions? There is a clear difference between that and finance, but when finance uses AI – my favorite potentially toxic combination – things look different. Since AI is an engineering discipline, it mostly tends to be sealed off by engineers who struggle to open their black boxes. For them, everything is logical and has a technical explanation. For me it is very important to build a bridge and a translation between these two disciplines. I consider myself an interdisciplinarian, and some have said I am a trans-disciplinarian.

How would an ideal trans-disciplinarian look in that sense, between AI and other non-engineering disciplines?
I would say different disciplines need to be integrated from the start of AI projects. One has to make sure that organizations have someone with ethics knowledge, as well as lawyers and sociologists, whose role would be to challenge the people building AI. I think the idea of having someone who is allowed to challenge you and ask you difficult questions is very important. It doesn’t have to be someone who deals with AI directly. It is someone who is not satisfied with being told by the engineer that this is just programming logic. Like what we have seen with the Google case, with that ethics board that failed to take off. There we have learned that such integration has to be done smartly.

Talking about the short-lived external advisory board, which was dissolved soon after it was announced: on the one hand, some people praised Google for creating an apparatus to oversee ethics matters; on the other, many people thought the integration wasn’t done in a way that could accomplish the intended task. What is your take?
What struck me is that this failure came from Google, the very company whose corporate engagement for gay marriage I wrote an article about a few years ago. Google even boasted of pursuing a ‘Legalize Love’ campaign worldwide, in countries like Singapore and Poland where, of course, the majority of people are opposed to gay rights. Appointing someone from the Heritage Foundation who is against gay rights to their board constitutes a clear conflict and undermines the credibility of everything they have done so far, so that is one thing. The other thing is: how can you be so awkward as to launch a board without making it clear what powers it has? So, I find it really sad that this action gave more food to all those who instinctively cry “ethics washing” every time someone does ethics.

A few months ago the EU released its ethical guidelines for trustworthy AI. These have been criticized by many people, among them Thomas Metzinger, one of the members of the expert group, who says that you can’t make AI trustworthy because it is a machine. What is your take on that?
I know that every time you add an attribute like “trustworthy” to AI, people feel forced to remind you that AI doesn’t have agency, so you cannot attach any such attribute to it in a meaningful way. Of course AI itself can’t be trustworthy. As for the usefulness of the guidelines as such, I know that people say they are very abstract, but guidelines should be abstract; they cannot give answers to all the questions in AI. They are not, and should not be, the same as hard law. We don’t have hard law for every ethical question in life, and that is for a reason. Most of us agree that it is not OK to cheat on your partner, or to lie to your partner. Do we need a law on that, and how would it be enforceable?

What is ethics to you then?
I come from normative ethics. Normative ethics describes what should be; it is the opposite of descriptive ethics, which describes what is. Accordingly, for me ethics is the discipline that asks how we should live together in a just way and what the meaning of a good life is for an individual. The basic question of ethics is what we should do in order to have a good society and good individual lives.

Will there be ethics in AI?
We need to have normative goals. There are very few inherent necessities that block us from making progress. So there is room for ethics in AI but it depends on what our priorities are. If we prioritize economic growth above everything then there will be less ethics. If we prioritize security above everything there might also be less ethics because there will be more surveillance, which will have an impact on other ethical values. If we, however, prioritize human rights or privacy, there will be more room for ethical considerations – maybe at the expense of economic growth in the short term, but ideally to the benefit of it in the long term.
