A European Approach to Artificial Intelligence (AI)
Authors: Elisabeth Eleftheriades, Evangelia Makri, KG Law Firm
On 19 February 2020, the European Commission published a White Paper on Artificial Intelligence proposing a common European Approach to AI that included policy means to boost investments in research and innovation, enhance the development of skills and support the uptake of AI by SMEs, as well as proposals for key elements of a future regulatory framework. The White Paper underlines the importance of AI and was available for comments and proposals through open public consultation until 14 June 2020.[1]
EU regulatory activity in AI evokes its activity on the GDPR: the intention behind the EU’s AI strategy is to “set the framework for the world”, an aim which can be expected to affect compliance activity within and outside the EU.
The White Paper is the EU’s starting point towards a common approach to AI, given that, according to recent surveys[2], AI initiatives in Europe have until now remained fragmented and investment in AI has been nowhere near the scale of that in the United States or China. Europe attracted only 11 per cent of global venture capital and corporate funding in 2016, with 50 per cent of total funds devoted to US companies and the balance going to Asia (mostly China). That share was about the same in 2018. Only four European companies are in the top 100 global AI start-ups.
Europe develops and diffuses AI according to its current assets and digital position relative to the rest of the world. According to the same research, AI could add some €2.7 trillion, or 20 per cent, to Europe’s combined economic output by 2030, corresponding to 1.4 per cent compound annual growth.
The following summarizes the key action points and takeaways presented in the White Paper at its current drafting stage.
1. AI is important for the EU
AI is identified as one of the most important applications of the data economy. Europe has the capacity to develop an AI ecosystem that benefits the whole European society and economy. Beyond European confines, Europe can also become a global leader in innovation in the data economy and its applications, guided by European fundamental values.
Europe’s biggest challenge in fully seizing the potential that AI offers is to avoid fragmentation of the single market through divergent national initiatives, which risk endangering legal certainty, weakening citizens’ trust and preventing the emergence of a dynamic European industry.
Commission President Ursula von der Leyen announced in her political guidelines a coordinated European approach to the human and ethical implications of AI. Accordingly, the Commission supports a regulatory- and investment-oriented approach to AI with the twin objectives of promoting the uptake of AI and addressing the risks associated with this new technology.
2. In relation to AI, the European Commission is introducing:
- A policy framework towards an “ecosystem of excellence”, setting out measures to align efforts at European, national and regional level, combining the private and public sectors;
- Key elements of a future regulatory framework, with the aim of creating a unique ecosystem of trust.
3. Creating an Ecosystem of Excellence
Already in December 2018, the Commission presented a coordinated plan that proposes 70 joint actions for closer and more efficient cooperation between the European Commission (EC) and the Member States. The key ideas of this plan are set out below:
- Working with the Member States – Dedicating Funds: The EC’s objective is to attract over EUR 20 billion of total annual investment in AI in the EU over the next decade. To stimulate private and public investment, the EU will make available resources from the Digital Europe Programme and Horizon Europe, as well as from the European Structural and Investment Funds, to address the needs of less-developed regions and rural areas.
- Focusing the efforts of the research and innovation community – Creation of Excellence and Testing Centres: Europe has identified the need for a ‘lighthouse’ centre of research, innovation and expertise that would coordinate internal efforts and be a world reference of excellence in AI, both in terms of human capital and financial resources. To this end, the EC will facilitate the creation of excellence and testing centres that can combine European, national and private investments, possibly through a new legal instrument. The EC has proposed an ‘ambitious and dedicated amount’ to enhance world-reference testing centres in Europe under the Digital Europe Programme, supported where applicable by research and innovation actions of Horizon Europe as part of the Multiannual Financial Framework for 2021 to 2027.
- A focus on skills – Updated Digital Education Plan: to address skills shortages and make better use of data and AI-based technologies by enhancing education and training systems.
- Focus on SMEs through InvestEU & Digital Innovation Hubs in each Member State: SMEs and start-ups will need access to funds in order to innovate using AI. The EC plans to build on the access to finance already established through the pilot investment fund on AI and blockchain under InvestEU. The EC also plans to ensure that at least one digital innovation hub per Member State will focus on AI.
- Partnership with the private sector in the context of Horizon Europe: The EC will introduce a new public-private partnership in AI, robotics and data to combine efforts, ensure coordination of research and innovation in AI.
- Promotion of the adoption of AI by the Public Sector – Adopt AI: It is crucial that public hospitals, administrations, financial supervisors and other public-interest bodies quickly begin to develop products and services that rely on AI in their activities. The healthcare and transportation sectors are identified as “mature for large-scale deployment”.
- Securing access to data and computing infrastructures – FAIR Principles: The enormous amount of new data to be generated presents a new opportunity for Europe to position itself at the forefront of the data and AI transformation, through responsible data management and compliance of data with the FAIR principles (Findable, Accessible, Interoperable and Reusable), as per the EU’s data strategy.
4. Creating an Ecosystem of Trust – A future regulatory Framework
In the present absence of a common European regulatory framework to protect all European citizens and help create a frictionless internal market, the EC is aiming at the creation of a framework that minimizes risks of potential harm: in particular, risks to fundamental rights (including personal data and privacy protection), risks to safety, and risks to the effective functioning of the liability regime.
A. Possible adjustments to the existing EU regulatory framework relating to AI: An existing body of EU product safety and liability legislation, including sector-specific rules, further complemented by national legislation, is relevant and potentially applicable to a number of emerging AI applications. The EC is of the opinion that the legislative framework could be improved to address the following concerns:
- Effective application and enforcement of existing EU and national legislation;
- Limitations of the scope of existing EU legislation;
- Changing functionality of AI systems;
- Uncertainty as regards the allocation of responsibilities between different economic operators in the supply chain;
- Changes to the concept of safety.
B. Scope of a future EU regulatory framework – A risk-based approach: a risk-based approach should be followed. The new regulatory framework will need to strike a balance between flexibility, so as not to create a disproportionate burden, especially for SMEs, and criteria that differentiate between AI applications according to whether or not they are high risk. For an application to be considered high risk, it should meet two cumulative criteria:
- The AI application is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur.
- The AI application in the sector in question is, in addition, used in such a manner that significant risks are likely to arise (e.g. healthcare, transport, remote biometric identification).
Mandatory legal requirements imposed for high-risk activities could concern training data, data and record-keeping, robustness and accuracy, and human oversight. Remote biometric identification would have its own set of specific requirements, since the gathering and use of biometric data (e.g. facial recognition in public places) carries specific risks for fundamental rights. In accordance with the current EU data protection rules and the Charter of Fundamental Rights, AI can only be used for remote biometric identification purposes where such use is duly justified, proportionate and subject to adequate safeguards.
Furthermore, regarding the addressees of these requirements, it is the Commission’s view that each obligation should be addressed to the economic actors best placed to address any potential risk. As regards geographic scope, these requirements should apply to all economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU.
To ensure that AI is trustworthy, secure and respectful of European values and rules, the applicable legal requirements need to be complied with in practice and effectively enforced, both by competent national and European authorities and by affected parties. Moreover, an objective prior conformity assessment would be necessary to verify that the mandatory requirements applicable to high-risk applications are complied with.
The prior conformity assessment could include procedures for testing, inspection or certification. These conformity assessments for high-risk AI applications should be part of the conformity assessment mechanisms that already exist for a large number of products being placed on the EU’s internal market.
For AI applications that do not qualify as ‘high-risk’ and that are therefore not subject to the mandatory requirements discussed above, the EC suggests the option, in addition to applicable legislation, to establish a voluntary labelling scheme.
C. Artificial Intelligence and COVID-19
The practical interface between AI and Europe was also highlighted on 13 March 2020, when the EC called on all start-ups and SMEs with innovative AI technologies that could help tackle the COVID-19 outbreak to apply for the next phase of funding from the European Innovation Council.
In the case of COVID-19, according to a European Parliamentary Research Service document[3], AI applications such as the use of facial recognition to track people not wearing masks in public, or AI-based fever detection systems, as well as the processing of data collected on digital platforms and mobile networks to track a person’s recent movements, have contributed to draconian enforcement of restraining measures for the confinement of the outbreak for unspecified durations.
It is also argued that AI’s capacity to quickly search large databases and process vast amounts of medical data could drastically accelerate the development of a drug that can fight COVID-19 but also raises questions about the criteria used for the selection of the relevant data sets and possible algorithmic bias. The majority of public health systems certainly lack the capacity to collect the data needed to train algorithms that would be reflective of the needs of local populations, take local practice patterns into account and ensure equity and fairness.
However, given the absence of a comprehensive human rights framework that would underpin effective outbreak surveillance at the international level, it is argued that the management of the risks associated with infectious diseases is likely to remain an ongoing challenge for global health governance.
According to the same EPRS document, the massive use of AI tracking and surveillance tools in the context of the COVID-19 outbreak, combined with the current fragmentation in the ethical governance of AI, could pave the way for wider and more permanent use of these surveillance technologies, leading to a situation known as ‘mission creep’. Coordinated action on inclusive risk assessment and strict interpretation of public health legal exemptions, such as that envisaged in Article 9 of the General Data Protection Regulation, will, therefore, be key to ensuring the responsible use of this disruptive technology during public health emergencies.
A new regulatory framework for AI[4] could prevent the establishment of new forms of automated social control through AI and ensure that the benefits of AI are fully realised while the risks associated with this technology are mitigated.
[1] White Paper, available at: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
[2] McKinsey Global Institute, 2019: https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-europes-gap-in-digital-and-ai#
[3] European Parliamentary Research Service (2020), https://www.europarl.europa.eu/RegData/etudes/ATAG/2020/641538/EPRS_ATA(2020)641538_EN.pdf
[4] Suggested material for further information: Competition Policy for the Digital Era Report, Trustworthy AI: Joining efforts for strategic leadership and societal prosperity, Communication “Artificial Intelligence for Europe”.