
Keeping IT Real: Communication strategies for Ethics of AI

Published in Artificial Intelligence by AA

The growing attention to the ethics of AI is moving from the scientific to the public sphere. Principles and solutions for an ethical relationship with AI no longer concern only experts in the field but the broad public, which is increasingly able to grasp the importance of the issue. Yet public communication about it is largely ineffective. On the one hand, there is an extremely technical form of communication that uses specialised terms that are difficult for a lay audience to understand. On the other hand, there is a form of communication that focuses on catchy but unrealistic themes, such as catastrophic scenarios foreshadowing nuclear crises or machine domination over humanity. Both of these forms distance the public from the questions that actually matter.


Psychological distance

In this context, it is interesting to consider the concept of ‘psychological distance’. Yaacov Trope and Nira Liberman built their psychological theory, ‘Construal Level Theory’ (CLT), on the idea that, in order to truly understand a concept, the self always has to deal with different types of distance from it. The greater the distance, the harder it is for the self to understand the concept. The main type of distance highlighted by Trope and Liberman is abstraction. The fewer concrete details there are, the less a person is able to feel the importance of an issue. Applied to the ethics of AI, this principle suggests that, in order for people to feel the importance of an ethical relationship with AI, it is necessary to talk about AI in very concrete terms. This is exactly the opposite of what public communication is doing at the moment. Instead of talking about the physicality of AI, journalists and experts often focus on abstract concepts or hypothetical scenarios, increasing the ‘psychological distance’ of the audience. A clear example of this communication error is the way AI is represented in images. It is very hard to find concrete images of AI hardware, while it is far easier to encounter fictional representations of humanoid robots that are associated with AI but are far removed from the actual technology. This hinders recognition of what an AI actually is, masking the real properties of the object and producing a substitution effect that deceives the subject when establishing a relationship with it.

Environmental factors

If we consider the environmental impact of AI, the problem with representing AI in an abstract way becomes even more evident. Researchers have highlighted that recent generative AI systems (such as ChatGPT) have an enormous environmental impact because of the amount of energy they require: every response is produced by large-scale computation running on power-hungry infrastructure. Yet it is very hard for users to realise that when they use ChatGPT they are actually consuming a lot of energy. That is also because AI is never represented as what it is, i.e. a machine that consumes enormous amounts of energy to process complex data. Moreover, building the infrastructure to train large language models (LLMs) requires many physical artefacts such as CPUs, drives and, most importantly, GPUs. In an age of component and material shortages this creates yet another distance: the extraction of rare-earth elements to produce these technologies perpetuates inequality among people and the exploitation of labour. Representing AI in abstract ways increases the psychological distance from the concreteness of AI, and therefore from its environmental impact and the economic and social consequences it creates.
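To practise the concreteness this article argues for, consider a minimal back-of-envelope sketch in Python. Every figure in it is an illustrative assumption rather than a measurement: roughly 3 Wh per query is an order-of-magnitude estimate sometimes cited for ChatGPT-scale models (around ten times a conventional web search), and the daily query volume and per-household consumption are placeholders chosen only to make the arithmetic tangible.

```python
# Hedged back-of-envelope estimate of generative-AI energy use.
# All figures below are illustrative assumptions, not measurements.

WH_PER_QUERY = 3.0              # assumed energy per generative-AI query (Wh)
QUERIES_PER_DAY = 100_000_000   # assumed daily query volume (placeholder)
KWH_PER_HOUSEHOLD_DAY = 10.0    # assumed daily household consumption (kWh)

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000   # Wh -> kWh
households = daily_kwh / KWH_PER_HOUSEHOLD_DAY

print(f"Daily energy: {daily_kwh:,.0f} kWh "
      f"(roughly the daily use of {households:,.0f} households)")
```

Under these assumptions the script prints about 300,000 kWh per day, comparable to the daily consumption of some 30,000 households. The exact numbers matter less than the exercise itself: turning ‘AI uses a lot of energy’ into a figure a reader can picture is precisely the kind of move that reduces psychological distance.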

Less is more

Besides representations, another factor that increases psychological distance is the sheer number of principles proposed for the ethics of AI. Like abstractness, complexity can be an obstacle to understanding. An AI user or designer who wants to follow the ethical principles suggested by AI ethicists may easily be confused by the high number of principles and sub-principles in this field: they are too abstract, because each must be kept general enough to avoid conflicts with the others; too numerous to decide which applies to which risk, since increasing attention to one automatically decreases attention to the others; and too ramified, thus leading to a single-action bias. For communication to the audience to be effective, it is necessary to have a few clear principles that everybody can easily understand. Scholars understandably want to express the multiple aspects and perspectives of the ethics of AI, but such complexity is an obstacle for non-experts.

Emotional reactions to AI

Following the concept of ‘psychological distance’, it is also crucial to highlight how these systems are constructed and how they are perceived when they reach the public sphere. The algorithms working in the underlying structure of AI systems are trained on data produced by subjects through their online activities. This, however, creates a distance between the subject and their data through externalisation, which occurs when content is placed on the network, collected, combined with similar content, and condensed into the AI training dataset. The invisible architecture of these new systems poses a final distance: the power to create images and texts, even out of nothing, and to reproduce representations of reality in an unprecedented manner.

Exalting AI’s concreteness

Delegating these operations to AI systems makes them appear as independent agents with their own autonomy and control, while in reality AI systems are very concrete. They are built by workers invisible to society: low-paid ‘mechanical turkers’ who sort pieces of information and content into categories to prepare the data for machine-learning algorithms, or who instruct systems to avoid or suppress certain content (e.g. child abuse, pornography, racism). Highlighting the importance of these workers reduces the abstractness of AI, showing how it relies on humans more than we tend to think. It is up to ethicists, sociologists and the public sphere to reduce the psychological distance to AI, keeping it real and concrete, and showing the actual impact it has on society and the environment.


This analysis originates from the conference ‘AI and the Humanities’ held at the University of Rome ‘La Sapienza’ on 20 June 2023 and, in particular, from insights offered by Prof. Guglielmo Tamburrini, Professor of Logic and Philosophy of Science at the University of Naples ‘Federico II’.
