
02 Aug, 2022

Responsible AI

You can download our infographic here.

There is an increasing consensus that artificial intelligence (AI) has the potential to profoundly improve our lives, offering transformative opportunities for society and the economy. However, AI systems are often treated as black boxes, which has led many to believe that the risks of greater AI adoption will outweigh the benefits it offers.

In our latest LinkedIn poll, with nearly 300 respondents, only 6% said they would fully trust decisions made by AI, whereas 47% said they would not trust them at all.

The biggest concerns people have when AI is used to support decisions are:

Bias and fairness

Without clearly defined, objective criteria, algorithms trained on historical data may reflect the biases of their developers, or biases implicit in the data itself, opening the door to new forms of discrimination and harm to specific groups of individuals.
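To make this concrete, the sketch below shows one simple way such bias can be surfaced before a model is deployed: comparing outcome rates across groups, often called the demographic parity gap. The data and group labels are synthetic, invented purely for illustration.

```python
# Illustrative sketch of a simple bias check: comparing approval rates across
# groups (the "demographic parity" gap). The decisions below are synthetic.

from collections import defaultdict

# Hypothetical historical decisions as (group, approved) pairs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    if approved:
        approvals[group] += 1

# Approval rate per group, and the gap between the best- and worst-treated groups.
rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

for g, r in sorted(rates.items()):
    print(f"{g}: approval rate {r:.0%}")
print(f"Demographic parity gap: {gap:.0%}")
```

A large gap does not by itself prove unlawful discrimination, but it is exactly the kind of signal that warrants investigation before a system goes live.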

Lack of transparency

People are increasingly aware of the risks AI can pose and are demanding greater transparency into the inner workings of these systems.

Accountability problem

Issues around accountability for algorithmic decision-making may surface when unintended consequences occur. Yet most organisations lack clear guidelines on who should be held accountable for the implications of AI systems throughout their entire lifecycle.

Environmental impact

Consumers, employees and investors are also increasingly concerned about the hidden environmental impact and carbon footprint of AI systems as the world reaches a tipping point for sustainability.
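One reason this footprint stays hidden is that it is rarely measured. The back-of-the-envelope sketch below shows how a rough training-emissions estimate can be built from hardware power draw, training time and grid carbon intensity; every number in it is an assumption chosen for illustration, not a measurement.

```python
# Illustrative back-of-the-envelope estimate of the carbon cost of training a
# model. All values below are assumptions for the example, not real figures.

gpu_count = 8             # assumed number of accelerators
gpu_power_kw = 0.3        # assumed average draw per GPU, in kilowatts
training_hours = 72       # assumed wall-clock training time
pue = 1.5                 # assumed datacentre power usage effectiveness
grid_kgco2_per_kwh = 0.4  # assumed carbon intensity of the local grid

energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
emissions_kg = energy_kwh * grid_kgco2_per_kwh

print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")
```

Even a crude estimate like this makes the footprint visible, which is the first step towards reporting and reducing it.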

Recent research also shows that more than half of consumers believe organisations are not doing enough to ensure better AI outcomes*1, and they expect governments or regulators to take meaningful action to address the issue. Although formal AI regulations have not yet come into effect, some courts are already enforcing judgements for violations of core principles of Responsible AI.

Responsible AI is no longer a nice-to-have option for organisations, but a must-have component of their strategy.

Yet 35% of respondents in our survey said their organisation did not have a plan for ensuring the ethical use of AI, and a further 46% were unsure. Only 19% confidently reported that ethical guidance is in place within their workplace.

Whilst leaders understand the imperative to act, and have developed principles to ensure risk reduction, regulatory compliance and public trust, many are struggling to move beyond principles and realise the ethical use of AI through practical action.

In most organisations, Responsible AI efforts have failed or remained incomplete due to a range of problems. According to our survey, the biggest barriers are poor data literacy skills and a lack of knowledge of applicable laws, each cited by 38% of respondents.

A lack of support from leadership is also a key factor in project failure. As a FICO report on the state of Responsible AI found, more than one-third of board members and executive teams do not have a sound understanding of AI ethics*2.

As AI becomes ever more widely used in today’s business environment, its full potential can only be realised if ethical concerns are mitigated. At xTech, we focus on the six dimensions of Responsible AI through a practical approach: one that recognises the transformative nature of AI and the need for a step change in the way businesses engage with their customers. Above all, consumers can only develop trust where organisations are truly committed to transparency and the elimination of all forms of bias.

If you would like to learn more about the six dimensions of Responsible AI, you can read our whitepaper here: https://bit.ly/3uG595E

Paul Gillingwater (paul.gillingwater@chaucer.com)

Kulvinder Rehal (kulvinder.rehal@chaucer.com)


References

*1 EY, 2021

*2 FICO, The State of Responsible AI, 2021