
How AI impacts societies and reflects colonial continuities

Prof. Caesar Atuire from the University of Ghana explains how algorithmic systems reproduce colonial mechanisms – and what can be done to make AI more inclusive.

Prof. Caesar Atuire reflects on AI developments from a philosophical point of view. Image: private

DW Akademie: Artificial intelligence (AI) is reshaping people’s lives all over the world. What are the implications for freedom of expression and public debate?

Caesar Atuire: In real life, we learn how to engage with diversity, especially when the diversity is radical. This is needed for democratic societies. Unfortunately, these spaces are currently threatened. We see great polarization in many democratic societies and an incapacity to speak across divides. I will not say AI has caused this, but AI algorithms are offering a refuge that can make it even more difficult for people to deal with radical value disagreements and engage across the divides.

How is AI changing the way we interact with others?

AI offers us a retreat from engaging. It can push us to become less tolerant of diversity and of difficult engagement, because we can always retreat into that world. When we find people who disagree with us, we can block them, unfriend them, or just swipe in the other direction, and we are good to go. The algorithms will lead us towards people who agree with us or share our opinions.

Many argue that AI systems widen digital divides and reflect colonial continuities. What is your view on that?

The digital era was born in a world that was already entrenched in forms of colonialism. It is simply reproducing those mechanisms. We talk about data mining, for example. Data is taken away from people across the world and is used by a limited set of people to generate solutions that sometimes those people do not even have access to.

There are many other ways of entrenching these forms of colonialism. If we assume certain lifestyle or societal models, design AI tools according to them and then export those tools around the world, we are certainly imposing a certain way of being, knowing and acting on people everywhere.

What exactly does digital colonialism mean in that context?

The digital era reproduces mechanisms of colonialism. What makes colonialism morally unacceptable is that one group of people takes over the social and political agency of another. When they take away that agency, they remove the possibility of self-determination. This can be expressed in various ways, including taking hold of their resources, taking hold of their knowledge, and even suppressing their knowledge and substituting it with other forms of knowledge.

Scholars speak of epistemicide, or the killing of knowledge systems. What role does AI play here?

Here we are borrowing from, and pushing further, the work of Miranda Fricker. She uses the term epistemic injustice to describe the failure to recognize another person in their capacity as a knower. Epistemicide brings these considerations of epistemic injustice together with the idea of killing, as in homicide. One of the main projects behind the colonial agenda around the world was a form of killing other ways of knowing whilst imposing a universal standard of knowing and acting.

AI tools are given a certain set of knowledge based on the data available, which is the standard, scientifically available knowledge, and not everyone is equally represented in that data. But there are other ways of knowing that are also important. So we continue to kill other forms of knowledge and other forms of knowing. Epistemicide already exists; digital tools reinforce it.

How can we achieve epistemic justice?

We need to accept that there is a plurality of ways of understanding in the world. We talk about this concept of pluriversality, which comes from Latin American philosophers who describe a world in which many worlds fit together. We need to respect the way people know, understand, live, and behave so that we avoid hasty generalizations.

And we do not solve this problem simply by collecting more data and feeding it into the AI, which is what we are doing today and which you could call ethical fixing. Because AI learns through data, you create your algorithm and, if it does not work for Black people, you simply find data on Black people and feed it in. What we need to ask is whether there are other hidden assumptions, not stated in the algorithm, that the data is trying to counterbalance.

How can we change AI to make it more inclusive?

It is not going to be an easy task. We can only change AI if there is good governance, and good governance means that states should really up their game in taking up their responsibilities. The duty of the state is not just to protect citizens; it is to help us have a good life.

The big tech companies, especially those based in the United States and now also in China, Korea and elsewhere, are sometimes more powerful than individual states in economic terms. A single state therefore cannot muster enough power to change the rules, which is why I like some of the actions the EU is taking to try to hold some of the tech giants accountable. We need more of those actions at the regional level to rewrite some of the rules of the game. Otherwise, it will be very difficult.

What can civil society and media contribute?

Civil society is very important because, at least in democratic societies, it can influence what governments do. It therefore has a very active role in creating awareness and even pushing for policy change and accountability in these fields.

The same can be said of journalists, because in many societies we talk about journalism as the fourth pillar of democracy. In this sense, journalists can also hold big players or governments to account and point out things that are not acceptable.

Can we hold AI morally responsible?

Yes, I think we can, but not in the way we hold humans responsible. Because if you jail an AI tool, it doesn't mean anything. In the design of AI tools, we should be able to incorporate at least some moral evaluations of the actions or the choices that AI can make. And if the AI fails to do that, it should be curtailed. That would be one way of holding it responsible.

Prof. Caesar Atuire


Caesar Atuire is a philosopher and health ethicist from Ghana who works on the dialogue and overlaps between African and Euro-American philosophy. Atuire is the Ethics Lead for the MSc in International Health and Tropical Medicine, NDM Centre for Global Health Research at the University of Oxford.


Interview: Alexandra Spaeth