D is for denial

Can AI be ethical?

In this series we discuss the pros and cons of AI and the ethical issues that arise from its use.

Today’s subject is: D is for denial and disintegration.

AI and machine learning offer the potential to develop new ideas and possibilities. However, there are two competing scenarios to consider. First, as discussed in earlier blogs, there can be misuse in the form of bias and the incomplete aggregation of data, both of which can harm the data subject. Secondly, the use of AI could improve life for more people than it disadvantages. So, is the equation as simple as: one bad decision is acceptable if there are one hundred good ones? This is an ethical argument, and as long as we are among the hundred, we may feel that some negative consequences are acceptable. But what if we are not?

The Human Rights Act and the UN Universal Declaration of Human Rights both reference our right to personal autonomy and to physical and psychological integrity, in respect of our private and confidential information, alongside the right to freedom of opinion and expression. Both documents use the words ‘all’, ‘everyone’ and ‘no-one’, which suggests that while we might decide to do the best for the majority, we should be trying to do the best for everyone, independent of what we might think of them, their views, or their personal information.

The concept of the ‘benefit of the many’ is sometimes a necessary ideal. Governments, for example, might make laws or spend money in ways that benefit some more than others, but these actions tend to be either-or choices: prioritising the vital interests of the majority where the alternative could be detrimental to large numbers of people, or rationing resources that are limited. Human rights, however, function on a more basic level, giving us, as autonomous human beings, fundamental rights that should be observed by everyone, be it a Government or our next-door neighbour.

Consequently, any automated activity that has the potential for benefit, but which also poses a threat to some, must be seen as ‘in need of further development’ until effects that might be life-changing are eliminated. The UK Data Protection Act says that where a decision significantly affecting a data subject has been made solely by automated processing, that person has the right to request that the decision be reconsidered, or that a new decision be taken that is not based solely on automated processing. If AI tools are going to become the unsupervised arbiters of access to services and of decisions affecting our lives, it is essential that they neither deny individuals autonomy over that access nor deny them recourse to reinstate rights that might have been lost.
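
To make this concrete, here is a minimal sketch in Python of what honouring that right might look like inside a decision system. Everything here is hypothetical: DecisionRecord, request_review, and the workflow are invented for illustration, not taken from the Act or from any real library. The point is simply that a solely automated decision is recorded as such, and a challenge routes it to a human rather than back through the same model.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch: these names are invented for illustration and do not
# come from the Data Protection Act or from any real library.

@dataclass
class DecisionRecord:
    subject_id: str
    outcome: str                    # e.g. "loan_refused"
    solely_automated: bool          # True if no human was involved
    made_at: datetime = field(default_factory=datetime.utcnow)
    review_requested: bool = False

def request_review(decision: DecisionRecord) -> str:
    """Route a contested, solely automated decision to a human reviewer."""
    if not decision.solely_automated:
        return "A human was already involved; use the normal appeals route."
    decision.review_requested = True
    # A real system would enqueue the case for a human case-worker here;
    # the essential point is that the re-decision is not just a re-run
    # of the same model.
    return f"Decision for {decision.subject_id} queued for human review."

# Usage: an automated refusal is challenged and routed to a person.
decision = DecisionRecord(subject_id="subject-42",
                          outcome="loan_refused",
                          solely_automated=True)
print(request_review(decision))
```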

This will require developers to continuously review how their AI tools work, to establish whether individuals are experiencing negative effects, and to make changes accordingly. This is no different from the process humans use when assessing information that might have legal effects for a person: they follow predefined processes, or make decisions as a group, hopefully eliminating unconscious bias. Herein lies a potential issue. Organisations, large and small, have budgets, and continually reworking a tool can become expensive, so the tendency might be to cut corners and deal with issues only when they are highlighted, rather than as they arise. As it currently stands, there is no overarching framework or regulation that requires consistent outcomes; instead there is a multitude of global strategies, guidelines, and visions, for example the OECD AI Principles and the UK Government’s AI strategy and AI Sector Deal, as well as organisations that examine ethical issues, such as the Alan Turing Institute in the UK. As the use of AI grows, so does the need for regulation.
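
What might that continuous review look like in practice? Below is a minimal sketch, assuming a simple log of decisions tagged by group, that compares favourable-outcome rates and flags any group falling below four-fifths of the best rate. The data, group names, and threshold are invented for illustration, and the four-fifths rule is a monitoring heuristic rather than a legal test: a flag is a prompt for human investigation, not proof of bias.

```python
from collections import defaultdict

# Illustrative decision log: (group, favourable_outcome).
# The groups and numbers are invented for this sketch.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def favourable_rates(log):
    """Return the share of favourable outcomes per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, ok in log:
        totals[group] += 1
        favourable[group] += ok
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparate_impact(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` times the best rate.

    The four-fifths rule used here is a monitoring heuristic, not a
    legal standard; flagged groups warrant human review of the tool.
    """
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

rates = favourable_rates(decisions)
print(rates)                          # {'group_a': 0.75, 'group_b': 0.25}
print(flag_disparate_impact(rates))   # ['group_b']
```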

In summary, AI can be a benefit, but it can also be biased, drawing the wrong conclusions and potentially disadvantaging individuals or small groups. Developers and organisations providing these tools must focus on suitability and maintain the highest levels of scrutiny, so that no-one suffers a loss of rights or legal standing because of their products, or because of global crises, such as Covid, which put significant pressure on relatively untested tools.

Next time we will look at personal isolation and the disintegration of social connections resulting from the use of AI, for example automated helpdesks, and the potential of tools like ChatGPT.