• Responsible AI

08 Jul, 2022

White Paper: Responsible AI

Every week we see amazing new applications of AI, such as the apparent ability to hold meaningful conversations with simulated avatars, the creation of video fakes of celebrities or political leaders, the generation of stunning images from text descriptions, and chatbots that claim to have human emotions.

AI is also used in areas such as self-driving vehicles, medical devices, industrial and domestic robots, mapping and safety-checking drones, and other autonomous systems.

Figure 1: Leta is a conversational language model based on GPT-3.

Whilst AI development is moving at break-neck pace, it is important to exercise caution. This is especially true when AI is used to support decision making with a real-world impact on the lives of individuals: medical diagnostic and treatment systems, HR applications (screening candidates for hiring or promotion), and applications which affect vulnerable people, children, or society at large.

What’s the problem? 

In a nutshell, AIs are fallible. They can provide false or misleading information, their outputs can be heavily biased, and they can lie convincingly. In the wrong hands, AIs become tools that enable tyranny, with the ability to:

  • shape public opinion
  • affect the outcome of elections
  • damage our mental health
  • trigger emotional responses
  • create addictions

These issues are being taken seriously by governments: a draft EU regulation released in 2021 is already having a major impact on how businesses and governments around the world approach the risks, and in particular the possible harms, caused by incorrect or illegal use of AI.

xTech can help organisations conduct a Conformity Assessment under the EU regulations.

Figure 2: A brain riding a rocket-ship heading towards the moon.

It’s not just the harms caused by biased or poorly curated data and models that should be considered in the context of responsible AI; there are also ESG (Environmental, Social and Governance) issues. Every data centre has a carbon footprint related to the energy it consumes and the heat it generates, which in turn has a measurable impact on our global environment. Paradoxically, the very systems we rely on to navigate the difficult options ahead of us could also be one of the causes of the problem. How should we navigate this tricky situation?
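As a rough illustration of the point about data-centre footprints, operational emissions can be estimated from IT energy use, the facility's power usage effectiveness (PUE) and the local grid's carbon intensity. The figures below are assumptions for illustration only, not measured data:

```python
def datacentre_co2_kg(it_energy_kwh: float, pue: float,
                      grid_kg_co2_per_kwh: float) -> float:
    """Rough estimate of operational CO2e emissions for a data-centre workload.

    it_energy_kwh       -- energy drawn by the IT equipment itself
    pue                 -- power usage effectiveness (total facility energy / IT energy)
    grid_kg_co2_per_kwh -- carbon intensity of the local electricity grid
    """
    total_energy_kwh = it_energy_kwh * pue  # includes cooling and other overheads
    return total_energy_kwh * grid_kg_co2_per_kwh

# Illustrative figures only: 10,000 kWh of IT load, PUE of 1.5,
# grid intensity of 0.2 kg CO2e per kWh
print(datacentre_co2_kg(10_000, 1.5, 0.2))  # 3000.0 kg CO2e
```

Even this back-of-envelope sum shows why model training and inference workloads belong in an organisation's ESG accounting.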

It is about pragmatism: moving from philosophy to actions that will make a difference. xTech’s Responsible AI approach is practical, ensuring engagement with all required stakeholders. It does not depend solely on AI tools, but views fundamental organisational change as a requirement for building the exemplary culture, commitment and processes that support responsible AI behaviour.

We start with an AI audit, which captures relevant and timely information about an enterprise's mission and vision for responsible AI and the AI systems it operates, and links this to our six dimensions of Responsible AI: (i) risk, compliance and ethics, (ii) data protection, (iii) data science, (iv) people & education, (v) data governance and (vi) business functions.

Figure 3: The Six Dimensions of Responsible AI

Our offering focusses on the practicality of Responsible AI: the key actions that will make a difference to the business across the six dimensions above. It helps organisations pass the point of inflection from the concept and design phases to implementation. It brings together functional teams with expertise in financial services, life sciences and the UK government sector, including policing. We have experts in change management, learning and education, data science, data protection and data governance. Examples of where we can help include:

1. Risk, compliance & ethics: Define standardised approaches to documenting AI systems, covering relevant risks and impacts as part of an AI impact assessment, with a separate algorithmic impact assessment where this is considered necessary.

2. Data protection: Review DPIAs and organisational processes, adapt data protection structures to responsible AI requirements, and roll out recommendations based on key findings.

3. Data science: Work with data scientists to adopt bias detection and fairness tools they can use to understand the potential bias, and the harms it may cause, in their use of AI.
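As a minimal sketch of the kind of check such fairness tools perform, the snippet below computes a demographic parity ratio: the lowest positive-prediction rate across groups divided by the highest. The metric is standard; the screening data here is hypothetical, and a common rule of thumb flags ratios below 0.8:

```python
from collections import defaultdict

def demographic_parity_ratio(predictions, groups):
    """Ratio of the lowest to the highest positive-prediction rate
    across groups; 1.0 means identical selection rates."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# Hypothetical screening outcomes: 1 = candidate shortlisted
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_ratio(preds, groups))  # group B shortlisted at a third of group A's rate
```

In practice a data science team would reach for an established library rather than hand-rolled metrics, but the underlying comparison of group-level rates is the same.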

4. People & education: Train those who work with or interact with AI systems beyond the data science community, and ensure that ESG and responsible AI principles are built into the data governance process and become business as usual.

5. Data governance: Bring data governance and AI governance under the same umbrella; develop frameworks for human intervention in AI decision-making; develop, publish and monitor KPIs for AI governance; and roll out business AI stewards or implement an AI ROPA (record of processing activities) under governance.

6. Business engagement: Identify use cases for responsible AI, incorporate responsible AI findings into policies, and promote stewards who work with the data science, governance and risk communities to help embed responsible AI practices in everyday jobs.

xTech is ready to discuss your responsible AI needs and would be delighted to create a pilot to help you avoid future fines and build trust with customers. To learn more, please contact:

Paul Gillingwater (paul.gillingwater@chaucer.com)

Kulvinder Rehal (kulvinder.rehal@chaucer.com)