An ecosystem of excellence and an ecosystem of trust in artificial intelligence are the two pillars of the EU Regulation we have proposed.
The proposed provisions express and fulfill the principles underlying the broader strategic action that Europe is pursuing towards a balanced development of the digital society.
The proposed Regulation on Artificial Intelligence therefore contains a system of rules that will be deliberated and adopted in the coming years by the European Parliament and the Council according to the democratic procedures provided for in the EU Treaties.
The provisions foresee extending the CE marking system to artificial intelligence systems considered high risk, i.e. systems that may jeopardize citizens' fundamental rights, such as non-discrimination, or the high standards of health and safety in place in Europe.
The challenge of dovetailing EU rules with existing ones
One of the challenges is precisely to dovetail the AI rules well with existing regulations. The first thing that meets the eye in our text is the list of high-risk applications and the bans on biometric recognition (with exceptions); but one aspect that has not been fully grasped is how the regulation relates to rules already in place on products and services, such as the rules on the safety of medical devices and industrial robots.
It has taken several months to define an answer to this problem. It should be noted that there are two main regulatory systems for product safety in Europe: an “old regime” system, where the EU regulatory act contains very detailed, self-contained provisions (see motor vehicle or aviation regulations), and a “new regime” system, medical devices for example, where the EU regulatory act sets out essential principles of more general scope and leaves room for standards or technical solutions adopted by the manufacturer.
A CE mark indicates that a product is safe. Since artificial intelligence is applied in so many products subject to different sectoral regulations, our response was to adopt a differentiated approach, aligned as closely as possible with the existing regimes.
How the new rules apply
Therefore, where artificial intelligence is an essential component of a product’s safety, the new rules apply directly with respect to “new regime” products.
On the other hand, with respect to “old regime” products, the new rules will only apply following the Commission’s adoption of revisions to sectoral regulations.
The regulation also dialogues with the existing rules on fundamental rights. The AI Regulation does not stand alone, so it is incorrect to say that the Commission's text does not prohibit mass surveillance, which is in any case already prohibited by the constitutions of many countries and by the GDPR. In the final text, compared with the draft circulated before adoption, we have better clarified the role of artificial intelligence in a surveillance system: AI is used to identify specific persons who are the object of a law-enforcement search.
Facial recognition, balancing security and privacy
The real-time use of remote biometric identification (such as facial recognition) by police in public areas is prohibited unless the police use it for major crimes; social scoring by public authorities is prohibited as well.
When it is allowed, facial recognition is always considered a high-risk application, and as such must comply with CE marking requirements.
For real-time biometric identification, we have provided exceptions, but with very stringent requirements (e.g. in relation to crimes and dangers of high severity); national legislators may choose to restrict them further. Here we have struck a balance between those who wanted to use AI more extensively for security and those (some civil rights groups) who wanted to ban it altogether.
On this point, I would only add that more attention should be paid to how little is known about the video surveillance technology already used at airports: who produces it and where the data is kept.
Dialogue with future rules: liability
Speaking of future rules, a specific instrument on AI liability is likely to follow.
There are two distinct issues here. The first, addressed by the Regulation, is how to put an AI-enabled car on the road (what obligations and criteria to follow); the second, left to the future liability rules, is what happens if something goes wrong, and how and by whom it should be insured.
Dialogue with rules of other countries
Another aspect of the regulation's dialogue concerns the rules developed by other countries. We are the first to take a 360-degree approach to the subject and hope to inspire others. I can already see growing awareness around the world of the need for human-centered technology supported by rules.
Canada has an algorithm transparency law; New Zealand and Australia are moving in this direction.
We have an informal dialogue with the US that we hope to formalize soon. Their approach is not dissimilar from ours: it is based on risk, on the need for rules to mitigate it, and on standards.
It will be appropriate to converge on global rules. A first attempt is the French-Canadian G7 initiative, the Global Partnership on Artificial Intelligence (GPAI), which works like the climate conferences (COP).
No penalization of the market. On the contrary
Ours is after all a reasonable, balanced and proportionate approach to innovation. We do not want to penalize it and thus hinder the benefits that can come to society. We only prohibit applications that are incompatible with a civil, democratic society.
For the rest, less than 10% of current AI applications are regulated: those that impact fundamental rights, safety, and health, for which it is right that there be criteria to follow and third-party control. And even for this area there is no profusion of rules, just five requirements to be met.
It should also be noted that the regulation provides a sandbox for innovative startups. And we have several measures on track, totaling about 1 billion a year in funding, to support companies that want to innovate with AI. All of this, always from the perspective that the regulation should be considered (and understood) as one piece of an ecosystem of actions and measures.
Finally, we believe that these rules, far from hindering innovation and the European market, actually foster it. Because they support the building of trust in digital technologies, and therefore greater demand. And because a startup that follows these principles will gain an international competitive advantage.
We have provided a list of high-risk applications precisely to limit developers' uncertainty and not penalize their work. We know that these areas may change between now and when the Regulation is implemented, and that discussions will likely lead to modifications of the list in the current Annex III.
The Regulation establishes long-term principles: next steps
But we wrote the Regulation to act like a slow-release medicine, so that it would not easily become outdated.
In fact, it sets general principles and dovetails with more detailed elements that will emerge in the coming months and years.
Indeed, standards will have to follow that help meet the obligations in the Regulation, making compliance easier for AI developers.