21 Apr, 2021
Responsible Data: about AI and its new risks, and the draft EU law to regulate them
Do you remember when Uncle Ben told Peter Parker that “with great power comes great responsibility”?
This excellent advice applies to AI too. AI can transform organisations into superheroes in the same way that the radioactive spider turned Peter into Spider-Man. But if AI brings those great powers, it also brings potential dangers to the rights and freedoms of individuals. To respond to these dangers, the European Union has just announced a draft regulation targeting some of the issues around the dark side of AI-driven technical innovation. In a nutshell, it says that:
- We should restrict abusive AI manipulation of people and society as a whole, especially in politics and civil society;
- We should have the right to know when we are dealing with AI, whether it be in the form of a chatbot, AI-created music, deep fakes or faceless corporations making automated decisions that have a meaningful impact on our lives;
- We should prevent AIs from being used for mass surveillance, resource allocation or social profiling for the purpose of societal control.

We also know that an AI can only be as good as the data it is fed, and will not bring tangible results without good-quality data behind it. The better the data, the better the AI becomes. Having the right approach to data therefore underpins any work on responsible AI. This brings Data Ethics to the forefront of the agenda and, as we see it, of this draft regulation.
Data ethics is about having the right data practices to support the organisation’s goals and to meet its social responsibilities. Data ethics applies to all stages of the data lifecycle:
- When collecting data, storing it and sharing it;
- When creating information from that data;
- When acting on and exploiting that information;
- When recording, processing, deleting and cataloguing that information.
It is about Trust, Transparency and Explainability. It is not a luxury; it is a necessity.
The challenge for organisations and businesses is to navigate and comply with the new regulations that will govern AI and Data Ethics. Their approach should be to establish a comprehensive Data Ethics transformation, embracing the positive aspects of AI while remaining focused on a people-centric view that acknowledges the human needs of all stakeholders.

From our experience, the journey is not straightforward.
More and more organisations have become aware of data-related risks and acknowledge the need to embark on Data Ethics development work. Consensus at executive level is often achieved. They may have looked at appointing a Chief Data Ethics Officer (CDEO) and an ethics committee, and begun to publish codes of conduct and terms of reference. Too often, however, "off-the-peg" frameworks and approaches are used to set out only basic measures.
We often see organisations stalling after this initiation phase as they battle with executing the mandate. Even avowedly data-centric organisations struggle to fully embed Data Ethics frameworks, leaving Data Ethics as a "fringe" activity. Moving from initiation to execution is the point at which a significant change in effort and focus is required. This is when the transformation programme needs to expand and accelerate in all dimensions.
In practice, we see the Data Ethics journey as no different from any other transformation exercise. It should be treated as a strategic goal, focussing on all aspects of the change: leadership behaviour and engagement, culture, learning and skills, tools, policies and process change (including data governance), and messaging and communication. The aim is for all levels of the organisation to gain confidence, feel ownership and become self-sufficient.
During this journey, organisations need to:
- ensure that data ethics is at the core of their values;
- commit to an enterprise-wide transformation journey, delivered in an agile, sprint-like manner;
- look at data ethics as a long-term ambition, not as ticks in a set of boxes;
- treat data ethics with the same attention as they would other important business assets;
- make it real for everyone in the organisation and the community they serve;
- get the data quality right: if the data cannot be trusted, the ethics around it will not be taken seriously.
Chaucer brings the right set of expertise, built on delivering large-scale transformation programmes for global organisations for over 30 years, combined with our approach to building the right Data Ethics programme: from assessing data ethics maturity and identifying the value of data in the brand, to building the data ethics framework and getting Data Ethics business-as-usual up and running.
Our successes include helping major corporations design their ethics programmes, helping non-profits and charities (targeting vaccine distribution in Africa, and helping sick children receive life-fulfilling wishes) shape their data ethics codes of conduct, and supporting clients on Data Protection, Cyber Security, Governance and everything data.
To find out more, please contact Elodie de Fontenay (elodie.defontenay@chaucer.com), Will Sillar (will.sillar@chaucer.com) or Paul Gillingwater (paul.gillingwater@chaucer.com).