Where artificial intelligence gets it wrong, why it affects us and what we can do about it
I am not a specialist in Computer Science, but I have been at home in the IT environment for many years as an Agile coach and organisational developer. I can recommend the book with the above title especially to “non-IT people” like me who want to take a closer look at the cryptic and not easily accessible topic of artificial intelligence and its associated algorithms. In casual language, the author explains what algorithms are, how they work, what difficulties their unrestricted use entails and at what point we can (and must) exert influence on their uncontrolled use. Katharina Zweig is a professor of Computer Science at the Technical University of Kaiserslautern, where she heads the Algorithm Accountability Lab; she is also the founder of the “Socio-informatics” degree programme, which is unique in Germany.
The attentive reader knows that most decisions are made “from the gut” and are only sometimes, usually later, verified with logic and Excel spreadsheets. Since the human way of making decisions is prone to error, especially in complex decision-making situations, resourceful innovators have developed technical solutions for various issues designed to make decisions more objective. The machine is, after all, incorruptible.
Although she has a positive attitude towards her subject, the author, a professor of Computer Science, describes a variety of problems that arise from adaptive algorithms: the opaque recommendation algorithm for Netflix films, for example, or the “decision-making dilemma” of self-driving cars, which can be a matter of life and death, or COMPAS, the software used in some US states to predict re-offending, which not infrequently leads to false convictions or wrong sentences. Or, as HR managers like to tell us, the “right” decision about filling a job vacancy is left to an algorithm, which, however, did not even invite the really suitable applicant for an interview because a certain criterion was not met in their application.
In the first part, Ms Zweig tells us about “tools” with which we can better assess the general decision-making processes behind algorithms.
First tool: the so-called algoscope
The crucial question here is which systems should be examined. In Ms Zweig’s opinion, it is only the systems that
– make decisions about people, or
– make decisions about resources that affect people, or
– make decisions that change people’s opportunities to participate in society.
So only a small fraction of all possible algorithms (p. 25) is concerned, directly or indirectly, with human well-being.
Then she leads us on to the second tool: the quality of a machine’s decisions depends on the following success factors:
– the quality and quantity of the incoming data,
– the basic assumptions about the nature of the issue, and
– what society actually considers to be a “good” decision.
The last point in particular leads to the question of “morality”. For the algorithm to observe a moral standard, we need to make the degree to which a decision corresponds to that standard measurable for the machine. And we must ask: who makes this decision?
The great danger, Zweig emphasises, is that users gradually lose the sense of when their automated support is by no means a self-evident objectification, but is instead full of theoretically fallible human decisions and priorities.
Which brings us to the third tool, which she calls the “long chain of responsibilities”, i.e. who contributes what to the actual decision, what problems can arise when it comes to predicting human behaviour and at what point we can exert influence.
She calls the fourth tool the “measurement of regulatory need”, which is linked to various checking procedures and provides information on how much a machine needs to be monitored.
Example: when a robot sorts out the scrap during the production of screws, its algorithm needs less monitoring than systems that directly or indirectly influence humans.
We readers learn about these tools in detail with comprehensible examples.
In the second part, she describes “The Computer Science Primer”. In it, she explains what an algorithm is, what is meant by big data and data mining and how “computer intelligence” is created from algorithms. Here, too, she formulates things in an explicitly positive way, without concealing anything critical.
The third part, “The Way to a Better Decision”, describes the application of the tools presented in Part 1 to algorithms. It shows at which points in the decision-making process we can influence the development of algorithms. She even goes a step further and demands that we should, indeed must, interfere and exert influence. One example is how judges can be influenced in their sentencing by a criminal’s likely recidivism as determined by an algorithm; in some US states, resistance has led to this software no longer being used.
In her research, the author calls for transparency towards society, stakeholders or certain expert groups about, for example, the level of quality and fairness, the type of input data used, the machine learning method, and the evaluation process and its outcome. Furthermore, she expects transparency about the traceability of processes and results.
Finally, I would like to highlight her “risk matrix”, a model that provides five different levels of regulation. “The classification is based on the potential for harm of an algorithmic decision-making system and the opportunities to challenge and change a decision made by the system.” (p. 234) She uses a variety of examples to support this matrix.
For me, as someone who is IT-savvy but often overwhelmed by the technology, the book reveals the dark side of AI in easy-to-understand language and with cartoon humour (though the quality of the drawings is debatable…). And: that I, as an “analogue” person, am not without influence on the digital future. I feel invited by the book to continue to inform myself, to get involved and to do so in various ways pointed out by Ms Zweig. Most of the questions surrounding AI are about ethical and moral issues, which she again lists in detail on page 285. The author encourages her readers to take a differentiated look at AI and to determine which algorithms need a close and critical look and which can remain as they are.
I can especially recommend this book to those who want to learn more about AI.
Original text: HFI
English translation: BCO
- Title: Ein Algorithmus hat kein Taktgefühl. Copyright: Heyne-Verlag