17 Jun, 2021
Chaucer and BIP xTech's point of view on the new European Union Artificial Intelligence Act
“If our era is the next industrial revolution, as many claim, Artificial Intelligence is surely one of the driving forces.”
– Fei-Fei Li, Sequoia Capital Professor of Computer Science at Stanford University

Artificial Intelligence (AI) is, in broad terms, intelligence demonstrated by machines. Today AI is widely adopted by organisations from start-ups to small and large enterprises, in both the private and public sectors, with some countries, notably the UK, more advanced than others. The AI software market has matured fast, with cloud operators as the leaders thanks to their “open source” based approach. In parallel, even as the market consolidates, a multitude of start-ups have emerged and built vertical applications. Market growth is unprecedented, and the main factor slowing it down is the shortage of AI talent, which countries are trying to remedy through the education system and academia. In terms of maturity, large organisations started their Data & AI journey years ago but are still mostly at the “experimental phase”, in development as well as in application and usage. AI systems are primarily focussed either on “revenue boosting”, such as customer clustering, cross- and up-selling, and churn prevention, or on “business and operational efficiencies”, such as optimising costs and manpower and preventing anomalies and faults across the entire estate.
The reality is that AI comes with responsibilities. Some organisations (primarily telcos and financial services firms) had started to address AI-related risk topics internally before this regulation was proposed. However, good behaviour needs to happen across all organisations and industries.
In this context the EU aims to set up a common regulatory framework for AI in Europe. The Proposal for a “Regulation Of The European Parliament And Of The Council - Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts”, alias the European AI Act, was published on April 21st, 2021 as a response. It was suggested by President von der Leyen in her political guidelines, following a white paper on the subject. This proposal is the result of an extensive and long period of consultation with major stakeholders including academics, businesses, social partners, non-governmental organisations, Member States and citizens. Below are the key milestones in the journey:
- 2018: The High-Level Expert Group on AI (HLEG), composed of 52 well-known experts, was tasked with advising the Commission on the implementation of the Commission’s Strategy on Artificial Intelligence
- April 2019: Key requirements set out in the HLEG ethics guidelines for Trustworthy AI
- February 2020: Launch of an online public consultation to collect views and opinions on the White Paper that addresses the problems posed by the development and use of AI
- July 2020: The Assessment List for Trustworthy Artificial Intelligence (ALTAI), which made the requirements of the HLEG ethics guidelines for Trustworthy AI operational through a piloting process
Still in its infancy, this new proposal is met with mixed expectations and concerns that it may slow down AI adoption. From a user’s perspective, however, it has been long awaited and seems to be welcome. Uncontrolled use of AI has been shown to lead to substantial breaches of fundamental human rights, biased outcomes and even health and safety problems. It is important to note that the text is still at the proposal stage; a debate and an approval process must follow, which might last until 2022 before it becomes a regulation. Even after its publication in the Official Journal of the European Union, its implementation is likely to be gradual, with full application after 24 months.
The purpose of this article is to provide a summarised perspective on the EU AI Act proposal as it stands today, looking at the proposed rules on AI and high-risk AI systems, data transparency, and the approval and conformity processes.
A summary of the European AI Act
The AI Regulation identifies the most troublesome Artificial Intelligence practices based on their potential risks and impacts on society. It defines a set of requirements for both providers and users of such systems. It distinguishes between:
- Prohibited Artificial Intelligence Practices. Different AI systems fall within this category, all with potentially significant impact on individuals. Examples include AI systems deploying subliminal techniques beyond a person’s consciousness in order to materially distort their behaviour, or AI used by public authorities to evaluate the trustworthiness of individuals, potentially leading to unfavourable treatment. The AI Regulation considers only a few exceptions in which such systems may be used.
- High-Risk AI Systems. The Regulation classifies an AI system as high-risk if the following two conditions are both fulfilled:
a. the AI system is intended to be used as a safety component of a product, or is itself a product
b. the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment
Additionally, the proposal lists the specific areas of use in which AI systems are deemed high-risk (see “What is a high-risk AI system?” below).

Considering specifically high-risk AI systems, the proposal outlines a series of requirements that these practices shall comply with:
- A risk management framework to identify the “known and foreseeable” risks associated with the AI system, and to estimate and evaluate the risks arising from possible misuse, using well-defined risk management measures
- If the system uses models trained with data, the training, validation and test data shall meet specific quality criteria, and the data sets shall be subject to data governance and management practices throughout their development
- Logging capabilities ensuring traceability and monitoring of the entire AI system (see the sketch after this list)
- Transparency requirements by providing detailed instructions for their appropriate use (e.g. specifying the contact details of its provider, the characteristics of the AI system, the expected lifetime etc.)
- Human oversight requirements
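To make the logging requirement more concrete, below is a minimal sketch, in Python, of how a provider might automatically record prediction events for later traceability. The schema and every name in it (log_prediction, model_id and so on) are our own assumptions: the proposal mandates logging capabilities but prescribes no particular format or tooling.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Illustrative structured audit logger for an AI system. The field
# names are assumptions of ours; the proposal requires traceability
# but does not define a logging schema.
logger = logging.getLogger("ai_system_audit")
logging.basicConfig(level=logging.INFO)

def log_prediction(model_id: str, model_version: str,
                   input_features: dict, output: dict) -> str:
    """Record one inference event with enough context to trace it later."""
    event = {
        "event_id": str(uuid.uuid4()),              # unique id for traceability
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,             # ties the output to a model build
        "input_features": input_features,
        "output": output,
    }
    logger.info(json.dumps(event))
    return event["event_id"]

# Hypothetical example: logging a single credit-scoring decision.
event_id = log_prediction(
    model_id="credit-scoring",
    model_version="1.4.2",
    input_features={"income": 42000, "tenure_months": 18},
    output={"score": 0.73, "decision": "refer_to_human"},
)
```

In practice such events would be written to durable, tamper-evident storage and retained for as long as the regulation requires, so that any individual decision can be reconstructed on request.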
It is the responsibility of providers of high-risk AI systems to ensure compliance and demonstrate conformity to the national competent authorities. The same obligations apply to manufacturers of products that include high-risk AI systems. Any distributor, importer, user or other third party will be subject to the same obligations when releasing or modifying a high-risk AI system. Finally, the proposal defines obligations for users of high-risk AI systems, who must follow the mandatory instructions provided and monitor the system on that basis.
What is a high-risk AI system?
High-risk AI systems are those intended to be used:
1. For the ‘real-time’ and ‘post’ remote biometric identification of individuals.
2. As safety components in the management and operation of infrastructure such as road traffic, the supply of water, gas, heating and electricity.
3. To determine access to, or to assign individuals to, educational and vocational training institutions, and to assess participants in tests commonly required for admission to educational institutions.
4. To recruit or select natural persons, to make decisions on promotion and termination of work-related contractual relationships, to allocate tasks, and to monitor and evaluate the performance and behaviour of persons in such relationships.
5. By public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, or their creditworthiness or credit score.
6. To dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
7. By law enforcement authorities for making individual risk assessments of offending or reoffending; for polygraphs and similar tools, or for detecting the emotional state of a natural person; for detecting deep fakes; for evaluating the reliability of evidence in criminal proceedings; for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of individuals; for profiling of natural persons; and for crime analytics using large data sets from different sources or in different formats in order to identify unknown patterns or discover hidden relationships in the data.
8. By competent public authorities for migration, asylum and border control, such as polygraphs or similar tools to detect the emotional state of a person; to assess a risk, including a security risk, a risk of irregular immigration or a health risk; to verify the authenticity of travel documents and supporting documentation; and to examine applications for asylum, visas and residence permits and associated complaints.
9. To assist a judicial authority in researching and interpreting facts.
Key considerations outlined for high-risk AI System
AI systems that create a high risk to the health and safety or fundamental rights of natural persons are permitted on the European market subject to compliance with certain mandatory requirements (applicable to their design and development), assessed against harmonised technical standards before the system is placed on the market or put into service. The proposal sets out specific legal requirements for high-risk AI systems in relation to:
- data quality and data governance: to ensure relevance, accuracy and true representation, and to prevent and correct bias in the data sets (a minimal check is sketched after this list).
- documentation and record keeping: the technical documentation should provide national competent authorities and notified bodies with all the information necessary to assess the compliance of the AI system. High-risk AI systems should also ensure a level of traceability enabling the automatic recording of events (‘logs’) while the systems are operating.
- transparency and provision of guidelines to users: High-risk AI systems should be designed and developed to enable users to interpret the system’s output and use it appropriately. They should come with detailed instructions on their use, capabilities and limitations, the required human interventions, the expected lifetime and the necessary maintenance.
- human oversight: High-risk AI systems should be effectively overseen by natural persons during the period in which the system is in use. Those charged with oversight should be equipped to fully understand the capacities and limitations of the system, correctly interpret its results and decide how to act in any particular situation. Additionally, for biometric identification, verification by at least two natural persons is required before any action or decision is taken based on results from the system.
- robustness, accuracy and security: High-risk AI systems should achieve an appropriate level of accuracy, robustness and cybersecurity and be resilient to attempts by unauthorised third parties to alter their use or performance.
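As an illustration of the data quality and bias point in the list above, here is a minimal sketch, in Python, of a representation check over a training data set. The records, the attribute and the 20% threshold are hypothetical assumptions of ours; the proposal requires relevant, representative data sets but does not define any specific test.

```python
from collections import Counter

def representation_report(records: list[dict], attribute: str) -> dict:
    """Share of each group in the data set for one attribute of interest."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

def flag_under_representation(shares: dict, threshold: float = 0.20) -> list:
    """Groups whose share falls below an (assumed) internal policy threshold."""
    return [group for group, share in shares.items() if share < threshold]

# Hypothetical training records for an employment-screening system.
training_data = [
    {"age_band": "18-30"}, {"age_band": "18-30"}, {"age_band": "31-50"},
    {"age_band": "31-50"}, {"age_band": "31-50"}, {"age_band": "51+"},
]
shares = representation_report(training_data, "age_band")
print(shares)                             # {'18-30': 0.33..., '31-50': 0.5, '51+': 0.16...}
print(flag_under_representation(shares))  # ['51+'] -> review or re-sample this group
```

A real data governance practice would extend this to label quality, outcome bias across groups and drift over time; the point here is only that the requirement translates into routine, automatable checks.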
The risk management framework for these high-risk AI systems should be a continuous and iterative process throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It should cover:
- The known and foreseeable risks associated with each high-risk AI system,
- The risks that may emerge when the high-risk AI system is used and
- Other possible arising risks highlighted by the post-market monitoring system.
As part of the risk framework, the proposal imposes the adoption of risk management measures, such as mitigation and control measures or provision of adequate information. It recommends that testing procedures be implemented to identify the most appropriate risk management measures.
The paper mainly refers to providers of AI systems, but importers, distributors and users of high-risk AI systems are also subject to a number of compliance obligations, as noted above.
For now, providers of non-high-risk AI systems are not required to comply with these requirements, but they are encouraged to create codes of conduct fostering their voluntary application, in order to drive a larger uptake of trustworthy artificial intelligence in the Union.
A strong focus on transparency
One of the main objectives of the regulation is to define the degree of transparency required for AI systems in terms of interaction with their users, individuals, and authorities. The regulator states in the proposal that “transparency obligations are meant to be limited only to the minimum necessary information and must be carried out in respect of relevant legislation and the right to protection of intellectual property”.
High-risk AI systems must guarantee transparency and the provision of information, ensuring that users can interpret the system’s output and use it appropriately, and their providers must register the systems in a public EU database. Even though only high-risk AI systems must abide by these rules, the requirements can serve as a code of conduct or guidelines for all AI systems. In all cases, AI systems that interact with natural persons or generate content, regardless of whether they qualify as high-risk or not, must avoid the risk of impersonation or deception. As such:
- Individuals should be notified when they are exposed to an emotion recognition system or a biometric categorisation system
- Users should be made aware of any image, audio or video content that has been artificially generated or manipulated (“deep fakes”).
- Note that these rules do not apply when the use is authorised for legitimate purposes (e.g. law enforcement).
Due to the complexity of AI systems and the risks to fundamental rights and of discrimination, high-risk AI systems must be designed and developed following the requirements mentioned in the previous section, with transparency at their core and full documentation on the provider, the systems’ characteristics and their limitations, including:
- a description of the intended purpose
- the expected level of performance, robustness, and cyber security
- any known or foreseeable circumstance which may lead to risks to the health and safety or fundamental rights
- specifications for the input data and training methodology (when appropriate)
- human oversight measures and technical measures to facilitate the interpretation of the outputs
- the expected lifetime and any necessary maintenance and care measures.
Besides providing the correct documentation, providers of high-risk AI systems must also register their system in an EU database accessible to the public. The goal is to increase public transparency and strengthen supervision by the competent authorities. The required data includes the fields below (a sketch of such a record follows the list):
- Information on the provider, the AI system trade name and any reference allowing identification and traceability of the system
- A description of the intended purpose of the AI system
- The status of the AI system (on the market, no longer placed on the market, recalled)
- The conformity certificate issued by the notified body and the name or identification number of that notified body
- Member States in which the AI system is to or has been placed on the market, put into service or made available
- Digital instructions for use (unless the system is used in certain fields).
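To visualise what such a database entry might look like, below is a minimal sketch in Python that mirrors the fields listed above. The class, field names and example values are assumptions of ours; the proposal specifies the content of the entry, not a schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class MarketStatus(Enum):
    ON_THE_MARKET = "on the market"
    NO_LONGER_PLACED = "no longer placed on the market"
    RECALLED = "recalled"

@dataclass
class EUDatabaseEntry:
    provider_name: str
    trade_name: str
    system_reference: str                    # allows identification and traceability
    intended_purpose: str
    status: MarketStatus
    certificate_id: Optional[str]            # issued by the notified body
    notified_body_id: Optional[str]
    member_states: list[str] = field(default_factory=list)
    instructions_url: Optional[str] = None   # digital instructions for use

# Hypothetical entry for an illustrative high-risk system.
entry = EUDatabaseEntry(
    provider_name="Example AI Ltd",
    trade_name="RiskScore Pro",
    system_reference="EX-AI-2021-001",
    intended_purpose="Creditworthiness evaluation of natural persons",
    status=MarketStatus.ON_THE_MARKET,
    certificate_id="NB-1234-CERT-5678",
    notified_body_id="NB-1234",
    member_states=["IT", "FR", "DE"],
)
```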
Views on Approval & Conformity for high-risk AI systems
To achieve compliance and create the right level of confidence in the adoption of AI solutions, the proposal brings forward a formal approval and conformity process at country level.

Approval process and conformity
For high-risk AI systems, the process will require the production of “Technical Documentation” and “Instructions for Use”. Specifically:
- Technical Documentation: should prove that the high-risk AI system complies with the requirements and should provide the competent national authorities and notified bodies with the information they need to assess the conformity of the AI system. The Technical Documentation shall contain information on the general aspects of the AI system, its development, monitoring, control and operational processes, and the risk management system.
- Instructions for Use: Each high-risk AI system should be accompanied by instructions for use containing generic vendor information, characteristics, capabilities and performance limits of the high-risk AI system, human oversight measures, expected lifetime of the high-risk AI system, and any necessary maintenance and support measures.
Providers of high-risk AI systems will also be required to:
- Establish a quality management framework in the form of policies, procedures and instructions covering various aspects, including a strategy for regulatory compliance, techniques, procedures and actions for AI system control and monitoring, procedures related to major incident reporting, and management of communications with appropriate authorities.
- Undergo the conformity assessment prior to going to market or being put into service.
- Following successful conformity assessment by notified bodies, a certificate will be issued for a maximum period of five years.
A high-risk AI system may also be required to demonstrate compliance with the requirements for high-risk AI systems listed in the proposal. If so, the provider should draw up a declaration of conformity, take responsibility for it, and keep it at the disposal of the relevant national authorities for 10 years after the high-risk AI system has been placed on the market or put into service.
Finally, prior to placing a high-risk AI system on the market or putting it into service, the supplier or authorised representative should register the system in the EU database.
Post-market monitoring and incident management
Post-market monitoring includes all activities carried out by providers of AI systems in a real-life setting to proactively collect and review experience gained from the use of their AI systems. Post-market monitoring ensures:
- Proper monitoring of the performance of an AI system and
- Corrective actions taken in a timely manner.
The monitoring shall be proportionate to the nature of the AI technologies and the risks of the high-risk AI system. Providers of high-risk AI systems should report any serious incident (or malfunctioning) no later than 15 days after they become aware of it. After the incident, the market surveillance authority should inform the national public authorities; guidance on how to do so will be released 12 months after the regulation is published.
The aim of post-market monitoring is to ensure that, once the AI system has been put on the market, public authorities have the authority and resources to intervene in case AI systems generate unexpected risks. The market surveillance authorities should be granted full access to the training, validation and testing data sets used by the provider, and to the source code.
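As an illustration of what a provider-side post-market check could look like, here is a minimal sketch in Python that flags a performance degradation and computes the incident reporting deadline. The accuracy tolerance and all names are assumptions of ours; only the 15-day reporting window comes from the proposal.

```python
from datetime import date, timedelta

REPORTING_DEADLINE_DAYS = 15  # serious incidents: report within 15 days (proposal)

def degradation_detected(live_accuracy: float,
                         baseline_accuracy: float,
                         tolerance: float = 0.05) -> bool:
    """Flag when live accuracy falls below baseline minus an assumed tolerance."""
    return live_accuracy < baseline_accuracy - tolerance

def incident_report_due(aware_on: date) -> date:
    """Latest date by which a serious incident must be reported."""
    return aware_on + timedelta(days=REPORTING_DEADLINE_DAYS)

# Hypothetical example: weekly live accuracy slipped from a 0.91 baseline to 0.83.
if degradation_detected(live_accuracy=0.83, baseline_accuracy=0.91):
    print("Degradation detected: trigger the corrective-action workflow")
    print("Serious incidents to be reported by:", incident_report_due(date(2021, 6, 17)))
```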
Penalties
As with GDPR, penalties for non-compliance with the AI Act will be severe and steep. The proposal suggests that infringements be subject to administrative fines (the “whichever is higher” arithmetic is sketched after this list) of:
- up to 30 million euros or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year (whichever is higher) in the case of non-compliance with the prohibition of the artificial intelligence practices or non-compliance of the AI system with the data and data governance requirements
- up to 20 million euros or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year (whichever is higher) in the case of non-compliance with any requirements or obligations under the Act other than those mentioned above
- up to 10 million euros or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year (whichever is higher) in the case of the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
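The “whichever is higher” rule means that the effective cap scales with company size. Below is a minimal sketch of the arithmetic in Python; the turnover figure is hypothetical.

```python
def fine_cap_eur(fixed_cap_eur: float, turnover_pct: float,
                 annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the fixed cap or the percentage of total
    worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover,
# infringing the prohibited-practices tier (EUR 30m or 6%):
print(fine_cap_eur(30_000_000, 0.06, 2_000_000_000))  # 120000000.0 -> EUR 120m cap
```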
The amount of the administrative fine would take into consideration the gravity of the infringement, any previous violations and the size and market share of the operator. The proposal also provides that EU Member States must set out effective, proportionate, and dissuasive penalties applicable to infringements of the Act and should take all measures necessary to ensure that they are properly and effectively implemented in their jurisdiction. The European Data Protection Supervisor may also impose administrative fines on Union institutions, agencies and bodies falling within the scope of the Regulation.
Notifying authorities and notified bodies
A notified body is an organisation designated by an EU country to assess the conformity of certain products before they are placed on the market. These bodies carry out tasks related to conformity assessment, the process of demonstrating whether specified requirements relating to a product, service or system have been fulfilled. A conformity assessment can include inspection and examination of a product, its design, and the manufacturing environment and processes associated with it. The notified bodies for Artificial Intelligence will verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures.

Our perspective
This is a long-awaited proposal. Some receive it with cynicism and criticism, seeing it as a burden. For others it is incomplete, focussing on individual rights such as health and safety rather than attempting to regulate the adoption of AI for all purposes.
From a provider/company’s perspective - If one had concerns before its publication that the regulation would put a stop to personalisation use cases, autonomous AI or black-box algorithms like text/voice classification, this is not the case; it just sets boundaries. If one had concerns that “informed consent” would be required for each algorithm users are subject to, again this is not the case: the regulation only requires this for high-risk use cases. Over time, however, we can imagine this extending to all or a great part of AI use cases.
From users’ perspective - User groups, human rights activists, ethical bodies and researchers have raised concerns about AI and the fact that its use was not regulated to be “ethically fair” to all users. In the proposal document it is clearly stated that the Commission aimed at introducing a “horizontal legislative instrument that follows a proportionate risk-based approach associated with social impacts and fundamental rights of individuals” and that the proposal “is limited to regulating the minimum requirements necessary to address AI-related risks without hindering technological development and without unduly increasing the cost of bringing AI solutions to market”. From a user perspective, it may not go far enough, yet.
Our opinion is that this proposal is a move in the right direction that will help organisations accelerate the use of AI for the good of the communities they serve. None of the required actions listed in it are new; rather, it reinforces best practices for the delivery and consumption of data and AI projects:
- Data quality, data governance and data management: to ensure relevance, accuracy, true representation, and bias correction in the data sets.
- Importance of risk management framework and monitoring.
- Documentation whether it is technical for developers and notifications or instructions for the consumer of the outcome.
- Record keeping: traceability is key.
- Transparency and provision of guidelines to users and relevant bodies.
- Human in the loop. Humans make decisions!
- Robustness, accuracy, resilience and security.
- Responsibility: AI power comes with responsibilities.
Our view is that this act will raise awareness and force focus on the risks associated with the use of AI, not only in organisations but also among the population as users, consumers or citizens. We feel that recipients of the output of AI systems will demand to know more about the use of AI and expect ethically fair treatment from organisations. The proposed regulation should become normality.
Last but not least, this proposal makes total business sense. The most valuable business commodity is trust, as Richard Branson said. To sustain trust, we feel that organisations will see the creation of their own “AI codes of conduct”, committing them to good AI behaviour, as a strategic business decision to compete, continue serving their community and grow.
Bip Group and AI Regulation
Business Integration Partners (Bip) is one of the fastest growing global consultancies, with cumulative growth of ~50% over the last 4 years. Our 3,500 consultants, based in 20 offices in 12 countries are passionate about delivering the best for our clients, are value centric and help organisations transform to be truly digital, keeping people at the core.
Bip has been deeply active in AI since 2013 and, together with Chaucer - part of Bip Group since 2020 - hosts the largest AI Professional Competence Centre in Italy and Southern Europe, named Bip xTech. Delivering more than 100 data projects per year, Bip xTech has collaborated with clients across 12 countries worldwide, supporting large-scale organisations and SMEs alike on their AI adoption path, defining AI strategy, platforms, organisation and processes, and facilitating the ramp-up of data-driven organisations. This network was strengthened through the acquisition of Chaucer in the UK.
Over the last two years, Bip xTech and Chaucer have helped clients define their AI risk frameworks and AI ethics codes of conduct. We are adapting our framework to reflect the EU AI Act proposal and to support our clients in preparing their journey towards compliance with this proposed regulation.
Please click here to get in touch with our experts and ask for more information.