
EU Artificial Intelligence Act - where are we now?

It's coming up to a year since the European Commission published its proposal for the Artificial Intelligence Act (the AI Act/AI Regulation). 

The public consultation received over 300 responses, including from industry stakeholders, NGOs, academics and others, indicating significant interest in the proposed AI Regulation.

Let me provide you with a short overview of the proposed EU AI Regulation and of the current developments in the legislative process.

Please note that the AI Act draft is still under legislative review and the following proposals may change.

What is an Artificial Intelligence System?

An artificial intelligence system is software that is developed with one or more of the following techniques and approaches:

  • machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

  • logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; or

  • statistical approaches, Bayesian estimation, search and optimization methods.

It can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
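
To illustrate how broad this definition is, consider the sketch below (purely illustrative and not part of the Act): a few lines of ordinary machine learning code already constitute a system developed with a listed technique that, for a human-defined objective, generates predictions influencing a downstream decision. The data and feature names are invented for this example.

```python
# Purely illustrative sketch: a trivial supervised model already matches the
# proposed definition - it is developed with a machine learning technique and,
# for a human-defined objective (flagging transactions for review), generates
# predictions that influence a downstream decision. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [transaction_amount, hour_of_day]; 1 = flag for review
X = [[20, 10], [15, 14], [900, 3], [1200, 2]]
y = [0, 0, 1, 1]

model = LogisticRegression().fit(X, y)
print(model.predict([[1000, 4]]))  # e.g. [1] -> the transaction gets flagged
```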

The proposed definition is broad and appears to be technology-neutral and future-proof. In addition, the EU can later modify and update the definition by adding to or changing the techniques and approaches listed.

One of the latest compromise texts proposes to include an explicit reference indicating that any such system should be capable of determining how to achieve a given set of human-defined objectives by learning, reasoning or modelling.

Extended Application

The AI Act will apply to:

  1. Providers that first place an AI system on the market in the EU (whether for payment or free of charge) or put an AI system into service in the EU (putting into service means making it available for use by a “user” or for the provider’s own use), regardless of whether the providers are located inside or outside of the EU;
  2. Users of AI systems located in the EU;
  3. Providers and users located outside the EU, if the output produced by the system is used within the EU.

Artificial Intelligence system categories

The AI Act proposes a risk-based approach and divides AI systems into three main categories:

  1. Unacceptable-risk AI systems, which include:
  • Subliminal, manipulative, or exploitative systems that cause harm.

  • Real-time, remote biometric identification systems used in public spaces for law enforcement.

  • All forms of social scoring, such as AI or technology that evaluates an individual’s trustworthiness based on social behavior or predicted personality traits.

  2. High-risk AI systems (listed in Annex III to the AI Act), which include i.a.:
  • Biometric identification and categorisation of natural persons – i.e. AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons.

  • Management and operation of critical infrastructure – i.a. AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.

  • Education and vocational training – i.a. AI systems intended to be used for:

    • the purpose of determining access or assigning natural persons to educational and vocational training institutions;
    • the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.
  • Employment, employee management and access to self-employment – i.a. AI systems intended to be used for recruitment or the selection of natural persons, notably for advertising vacancies, screening or filtering applications and evaluating candidates in the course of interviews or tests. 

  • Access to and enjoyment of essential private services and public services and benefits – i.a. AI systems intended to be used to evaluate the creditworthiness of natural persons or to establish their credit score or to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including firefighters and emergency medical aid.

  • Law enforcement – i.a. AI systems intended to be used by law enforcement authorities to carry out individual risk assessments of natural persons in order to assess the risk of a natural person offending or reoffending, or the risk for potential victims of criminal offences.

  • Migration, asylum and border control management - i.a. AI systems intended to be used by competent public authorities to assess risk, including a security risk or a risk of irregular immigration.

  • Administration of justice and democratic processes - i.a. AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

  3. Limited and minimal-risk AI systems include many of the AI applications currently used throughout the business world, such as AI chatbots and AI-powered inventory management.

The lists of unacceptable-risk and high-risk systems are to be updated through regular assessments conducted by the competent authorities.

It is also worth mentioning that, under a recent legislative proposal, the high-risk list has been updated to include, amongst other things, AI systems intended to be used to control, or as safety components of, digital infrastructure, and AI systems intended to be used to control fuel emissions and pollution.

The use of unacceptable-risk Artificial Intelligence systems is banned

Certain systems in the limited/minimal-risk category are subject to transparency obligations. 

The proposed AI Act focuses mainly on high-risk AI systems, which will not be strictly prohibited, but will be subject to strict compliance obligations, as well as technical and monitoring obligations. 

Key obligations for high-risk AI systems:

  • Risk management system
    Providers must establish and document a continuous risk management system. The risk management system must ensure that identified risks are eliminated or reduced to the greatest extent possible through adequate design and development.

  • High-quality data sets
    The AI systems must be trained, validated and tested on “high-quality” data sets that are relevant, representative, free of errors and complete.

  • Technical documentation
    The provider is obliged to create and keep up to date technical documentation that demonstrates to regulators the system’s conformity with the AI Act.

  • Transparency
    Users must be able to sufficiently understand how a high-risk AI system works to enable them to interpret and use its output. 

  • Quality management system and logs
    The provider must implement a quality management system, including technical standards and a regulatory compliance strategy, and design automatic logging capabilities (see the logging sketch after this list).

  • The supervision of AI systems
    AI systems must be designed in such a way that they can be effectively overseen by competent natural persons. These persons should fully understand the capacities and limitations of the AI system and be able to monitor its operation.

  • Robustness, accuracy, and cybersecurity 
    AI systems must be designed and developed with an appropriate level of accuracy and resilience against errors and attempts by unauthorized third parties to alter the system. 

  • Conformity assessment
    The provider must perform a conformity assessment of a high-risk AI system to demonstrate its conformity with the requirements of the AI Act before the AI system may be used or introduced into the EU market (for example, by way of self-assessment). In addition, CE marking must be visibly affixed.

  • Artificial Intelligence system EU Register 
    AI systems must be registered in a publicly accessible EU-wide database established by the European Commission. 

  • Monitoring
    Providers must implement post-market monitoring to evaluate the continuous compliance of the AI system by collecting and analysing performance data. Providers are also required to inform national authorities about serious incidents or the malfunctioning of the AI system.
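
The AI Act does not prescribe any concrete format for automatic logging or post-market monitoring. Purely as an illustration of how a provider might approach the logging and monitoring obligations above, here is a minimal Python sketch; the record fields and the log_prediction helper are hypothetical and not taken from the Regulation.

```python
# A minimal, hypothetical sketch of automatic logging that could feed
# post-market monitoring. The AI Act does not mandate this format.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_audit")
logging.basicConfig(filename="predictions.log", level=logging.INFO)

def log_prediction(system_id: str, model_version: str,
                   input_summary: dict, output: dict) -> None:
    """Append one structured record per prediction, so that performance can
    later be analysed and serious incidents reported to national authorities."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_summary": input_summary,  # avoid logging raw personal data
        "output": output,
    }
    logger.info(json.dumps(record))

# Example call with invented values
log_prediction("credit-scoring-demo", "1.3.0",
               {"n_features": 12}, {"score": 0.81, "decision": "refer"})
```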

Where are we now? Next steps 

In mid-January 2022, the French Presidency's latest compromise text on the Artificial Intelligence Act (AIA) emerged. It proposed changes to the articles on risk management systems, data management, technical documentation, record keeping, transparency and provision of information to users, human oversight, accuracy, robustness and cybersecurity.

The most important changes:

High-risk systems

The requirements that high-risk systems must comply with under the AI Act have been significantly modified. It is proposed that compliance with the AI Act should also take into account the generally acknowledged state of the art, as reflected in relevant harmonised standards or common specifications.

Risk management

With regard to risk management systems, the compromise text clarifies which elements should be taken into consideration when creating such systems, especially in the context of identifying AI-specific risks. The risks identified within such systems should be limited to those that can be mitigated or eliminated through the design and development of the high-risk system or through the provision of appropriate technical documentation.

Data governance

It has been proposed that, for the development of high-risk AI systems which do not use techniques involving the training of models, the requirements shall apply only to testing data sets. The reason for this is that training, validation and testing data sets can never be completely free of errors.

It has also been proposed that AI systems apply data minimization (as laid down in the GDPR), i.e. limit the collection of personal data to what is strictly necessary, throughout the lifecycle of the AI system. This means that personal data used for these purposes will need to be limited to what is necessary for the purposes of the processing.
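
Purely as an illustration of what data minimization could look like in practice in an AI training pipeline (the column names and the choice of which columns are "strictly necessary" are invented for this example), direct identifiers can be dropped before the data enters the AI system's lifecycle:

```python
# Hypothetical sketch of data minimization before model training.
import pandas as pd

raw = pd.DataFrame({
    "full_name": ["A. Nowak", "B. Kowalska"],
    "email":     ["a@example.com", "b@example.com"],
    "age":       [34, 52],
    "income":    [5200, 7400],
    "defaulted": [0, 1],
})

# Keep only the features strictly necessary for the modelling purpose;
# direct identifiers never enter the training pipeline.
NECESSARY_COLUMNS = ["age", "income", "defaulted"]
training_data = raw[NECESSARY_COLUMNS]
print(training_data)
```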

Technical documentation

The proposal includes changes intended to provide more flexibility for SMEs and start-ups with regard to compliance with the technical documentation requirements.

Transparency

High-risk systems should be created in a way that ensures compliance with the requirements of the Regulation, but that also allows users to understand how such an artificial intelligence system works. It is proposed to clarify the scope of the information that should be included in the manual accompanying the system.

The legislative process is still ongoing and we can expect further proposals for change.

In order for it to become legally binding, the AI Act must go through the EU’s ordinary legislative procedure, which requires the consideration and approval of the proposed Regulation by the Council and the European Parliament. Once adopted, the AI Act will come into force twenty days after it is published in the Official Journal. However, the draft proposes a transition period of 24 months before the law will apply.

The AI Act will be directly applicable in all EU countries and will not require implementation into local laws of member states.

In the spirit of the draft AI Act, on 6th October 2021 the EU Parliament adopted a non-binding resolution concerning the use of artificial intelligence by the police and judicial authorities in criminal matters. In that resolution, the Parliament called for, amongst other things, a ban on the use of facial recognition technology for law enforcement purposes where it leads to mass surveillance in publicly accessible spaces.

