Responsible AI

09 Aug, 2022

Struggle for the Soul of the Internet: How AI can help

We are living in a time when the influence of the Internet is stronger than ever and is dramatically increasing.

Social media is the battleground where various factions and forces are arrayed in their global ongoing efforts to influence public opinion, shape the choices of voters, and increasingly, use artificial intelligence (AI) to enhance their power, profit and prestige.

Caught up in these battles are the public, and especially children and vulnerable persons. Most people have no idea what AI looks like, or that many of their interactions with government or large corporations are mediated by an AI decision-making system. While children and vulnerable persons are the most susceptible to the extremely clever technological, psychological and brain-chemistry manipulation arising from AI innovation, it affects everybody.

Fortunately, activists, data scientists, politicians and human rights advocates have started to develop a body of thought which exposes these abuses and proposes ways to remedy the harms caused by unbridled use of AI. This new discipline is known as Responsible AI: the active application of ethical principles to AI systems, encouraging explainability, sustainability and transparency, and ensuring that any bias is identified and minimised before an AI tool or algorithm is put into production.
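The bias-identification step mentioned above can be made concrete with a toy pre-production check. The sketch below computes one simple fairness metric, the demographic parity difference, i.e. the gap in favourable-outcome rates between two groups. Everything here is an illustrative assumption rather than a prescribed standard: the metric choice, the made-up loan decisions, and the 0.2 review threshold are all hypothetical.

```python
# A minimal sketch of one Responsible AI practice: checking a simple
# fairness metric before a model is put into production.
# All data and thresholds below are illustrative assumptions.

def demographic_parity_difference(outcomes, groups):
    """Absolute difference in favourable-outcome rates between groups A and B.

    outcomes: list of 0/1 model decisions (1 = favourable outcome)
    groups:   list of group labels ("A" or "B"), aligned with outcomes
    """
    rate = {}
    for label in ("A", "B"):
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rate[label] = sum(decisions) / len(decisions)
    return abs(rate["A"] - rate["B"])

# Hypothetical loan-approval decisions for applicants from two groups.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"Demographic parity difference: {gap:.2f}")

# A pre-production gate: flag the model for human review if the gap
# exceeds an (assumed) policy threshold of 0.2.
if gap > 0.2:
    print("Bias review required before deployment")
```

In a real project this kind of check would sit alongside many other metrics and a documented review process; the point of the sketch is only that "identify bias before production" can be an automated, testable gate rather than an aspiration.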

This blog gives context on some of the harms currently faced by the public with some suggestions of actionable countermeasures and mitigations.

The following table lists some of the key ethical issues under discussion:

In conclusion, we are spending more and more of our lives online, and our personal data is being collected by one or more of the large social media businesses. Many actions are being taken across the world to make things better.

  • In July 2019, the UK became the first country in the world to try to bring in age-verification for online pornography. A new law, the Digital Economy Act 2017, partly to be enforced by the British Board of Film Classification (BBFC), required that porn sites check the age of users or risk facing sanctions. Strict measures were put in place to protect users’ data and privacy, with the BBFC creating an Age Verification Certificate (AVC). ISPs were asked to block sites that didn’t use effective age verification. This never really worked, as the proposed measures ran into legal, technical and practical difficulties. However, it allowed the focus to shift away from merely filtering porn to a wider remit intended to protect children from a variety of online harms, with all websites (social media, porn and others) responsible for self-policing their systems.
  • In February 2020, the European Commission published the “White Paper on AI – A European Approach to Excellence and Trust.” The goal is to build an “ecosystem of trust” by creating a legal framework for trustworthy AI. Following this consultation, in April 2021 the Commission released a draft Regulation calling for “ethical principles” to be applied to AI, and for the outlawing of uses of AI that contradict those principles. This is expected to be passed into law as the EU AI Act, probably later in 2022.

Benefits of Responsible AI

In reality, the online battles mirror debates which have been running for many years, for example the freedom of the press versus the right to individual privacy. At its core, the struggle reflects how we want our society to behave. The choices we make should be informed by, and consistent with, the data ethics guidelines within which we operate.

It’s up to all of us, acting collectively rather than as individuals, to make the choices that will shape how we are protected from the worst abuses of AI systems, and to avoid giving up our freedom and privacy in return for convenience and lower costs. AI-enabled games can be as addictive as nicotine; moralising won’t help, but persistent education over generations may be needed to wean ourselves off digital addiction and enable us to make positive choices that protect us from the more serious harms, such as becoming online prey.

Responsible AI dictates how large companies and governments should behave when deploying online artificial intelligence systems, and gives the relevant enforcement agencies “teeth” to use when acting on violations of responsible AI laws.

Paul Gillingwater (paul.gillingwater@chaucer.com)

Paul Gillingwater MBA, CISSP, CISM, RHCE

Associate Partner

GDPR, ISO27001, PCI/DSS, GRC, DPA18

Paul is Head of the IT Security and Data Privacy Team and Registered DPO at xTech. He has worked for more than 30 years as a cyber security specialist, advising businesses on their governance, regulatory and compliance requirements. More recently he has advised on data protection and is a passionate advocate of online privacy rights education.