The Luxembourg Chamber of Commerce and its Enterprise Europe Network organised a second Ready4AI conference cycle on the theme “Artificial Intelligence: Legal Aspects and Practical Issues for Businesses”, held on Tuesday 23 January 2024. Dr Juha Heikkilä, Adviser for Artificial Intelligence at the European Commission’s Directorate-General for Communications Networks, Content and Technology, provided insights into the future European AI strategy.
The conference cycle aimed to raise companies’ awareness of the essential legal aspects of integrating artificial intelligence tools into their organisations, and to address the practical issues, opportunities and challenges they face in this context. Experts discussed the legal requirements inherent in developing and using artificial intelligence tools in a business. During a round table discussion, participants heard professional testimonies on concrete applications and practical challenges of AI.
Juha Heikkilä gave a brief overview of the forthcoming European regulation, the “AI Act”, took stock of its adoption process and drew the audience’s attention to the main aspects of this European legislation, reportedly a world first in its scope.
The legislation was proposed on 21 April 2021, and the European Parliament and the Council reached an agreement on it on 8 December 2023; the final text has not yet been published. Juha Heikkilä noted that the European Commission is aware that AI is a “useful” technology with much “potential for good” for both business and private endeavours. He stressed, however, that difficulties arise in the area of “private rights” and that the use of AI can bring potential risks. The goal of the AI Act is to “maximise benefits and minimise the risks”.
AI has certain special characteristics: it is highly complex, opaque, can be unpredictable and autonomous, and can handle vast amounts of data. These characteristics can bring risks in terms of safety, fundamental rights, enforcement, legal uncertainty, distrust and fragmentation. The aim of the regulation is to prevent fragmentation, remain innovation-friendly, and ensure that AI cannot be developed and placed on the EU market if it does not respect these rules.
Juha Heikkilä went on to explain the risk-based approach, illustrated as a pyramid: at the top are “unacceptable” uses of AI (such as social scoring and untargeted scraping), followed by “high-risk” uses (such as recruitment and medical devices), which are permitted provided they comply with the AI requirements and undergo an ex ante conformity assessment. Lower-risk activities, such as those in education and vocational training or the production of deep fakes, merely require “transparency”. He added that general-purpose AI (GPAI) models such as ChatGPT must provide sufficient transparency about their training data, with additional obligations for GPAI models posing systemic risks, notably those trained with a high cumulative amount of compute.
Agreement was also reached that certain AI systems which do not influence human decisions or which perform purely preparatory tasks are not considered high-risk (for example, using the technology to sort CVs from cover letters).
On the timeline, he added that the prohibitions will become applicable six months after the regulation enters into force (likely in April); the rules on GPAI governance will take effect after twelve months; the obligations for some high-risk applications will apply after 24 months, and those for other risk categories after 36 months.
Discussing innovation and excellence, he stressed that the EU intends to accelerate investment in AI and act on its AI strategies, and that key actions are under way to create enabling conditions for AI development and uptake in the EU. He added that the EU is pursuing “excellence from lab to market” and that ensuring AI technologies work for people is a main concern. The European Commission seeks coalitions and aims to engage in multilateral cooperation with a growing number of countries, notably Canada, the US, India, Japan, Korea and Singapore. He highlighted as a particular achievement that the G7 leaders managed to agree on the Guiding Principles and Code of Conduct for Advanced AI, implemented in the autumn, and that they did so in about two months.
Juha Heikkilä emphasised: “Yes we do regulation, we think it’s necessary and we want to do AI the right way, but we very much promote innovation. […] We want to guide global innovation, set guardrails on AI and develop the global governance on AI.”
Juha Heikkilä recalled that, about a year ago, the European Commission was criticised for emphasising regulation, but added that the landscape is not fixed: it is in constant change. Although Europe was unfavourably compared to the US and China and told it should be “creating and not banning”, he noted that since the rise of generative AI there is a growing awareness of the need for regulation, with multiple countries rethinking whether or not they should regulate AI.