Understanding the EU’s Artificial Intelligence Act: Key Insights
August 2024
Anisa Tomic, Partner, and Zerina Karahmet, Associate
Maric & Co Law Firm
On August 1, 2024, the European Union’s Artificial Intelligence Act (AI Act) came into force, marking a significant milestone in the regulation of AI technologies. This pioneering regulatory framework is designed to ensure the safe and ethical deployment of AI across the EU, balancing innovation with fundamental rights and safety. At Marić & Co. d.o.o., we recognize the importance of understanding and complying with these regulations as AI continues to revolutionize the way we live and work, from personalized recommendations on streaming services to advanced medical diagnostics. However, these advancements also come with significant ethical, legal, and societal concerns. The AI Act addresses these challenges, ensuring that AI technologies are developed and deployed in a manner that respects individuals’ fundamental rights. In this article, we explore the key provisions of the AI Act, examine its prohibitions on certain AI practices, and consider the implications for Bosnia and Herzegovina, with examples to illustrate its impact.
Objective and Scope
The AI Act seeks to enhance the functioning of the internal market by promoting AI that is human-centric and trustworthy, while safeguarding health, safety, and fundamental rights, including democracy and the rule of law. It establishes harmonized rules for placing AI systems on the market, prohibits certain AI practices, prescribes specific requirements for high-risk AI systems, sets transparency rules for certain AI systems, includes market surveillance and enforcement mechanisms, and supports measures for innovation, particularly for SMEs and startups.
Personal Data Protection
AI systems often handle vast amounts of personal data, raising significant privacy concerns. The AI Act emphasizes compliance with existing EU data protection laws, including the General Data Protection Regulation (GDPR). Key requirements include data minimization (AI systems must use only the data necessary for their purpose) and data accuracy, to prevent erroneous outcomes. For instance, an AI system used in hiring should only process relevant job application data, ensuring that outdated or irrelevant information does not bias the decision-making process.
Users must be informed when they interact with AI systems. For example, chatbots on customer service platforms should disclose that they are automated, and providers should clearly explain how decisions that affect users (such as credit scoring) are made. Individuals also have rights to access, rectify, and erase their data: if an AI system inaccurately flags a person’s loan application as high-risk, that person must be able to challenge and correct the decision.
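The data-minimization principle described above can be sketched in code. The following Python example is purely illustrative: the field names and the notion of a fixed allow-list are our own simplifying assumptions, not requirements drawn from the AI Act or the GDPR.

```python
# Hypothetical sketch of data minimization in a hiring tool: only fields
# relevant to the job assessment are retained before any automated scoring.
# Field names are invented for illustration.

JOB_RELEVANT_FIELDS = {"name", "skills", "experience_years", "education"}

def minimize_applicant_data(raw_record: dict) -> dict:
    """Keep only job-relevant fields, dropping attributes that could
    introduce bias or exceed what the processing purpose requires."""
    return {k: v for k, v in raw_record.items() if k in JOB_RELEVANT_FIELDS}

applicant = {
    "name": "A. Candidate",
    "skills": ["Python", "SQL"],
    "experience_years": 4,
    "education": "BSc",
    "marital_status": "single",    # irrelevant to the role -> removed
    "date_of_birth": "1990-01-01", # irrelevant to the role -> removed
}

print(minimize_applicant_data(applicant))
```

In practice, which fields count as "relevant" is a legal and factual question for each deployment; the allow-list here merely illustrates the principle that irrelevant attributes should never reach the decision-making model.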
Prohibited AI Practices
The AI Act strictly prohibits certain AI practices deemed unacceptable due to their potential to violate fundamental rights. These include:
- Subliminal Techniques: AI systems must not use subliminal techniques to manipulate users’ behavior in ways that can harm them. For example, an AI-driven advertisement subtly encouraging excessive spending or unhealthy eating habits would be banned.
- Exploitation of Vulnerabilities: AI systems cannot exploit vulnerabilities of specific groups (like children or disabled individuals). An AI toy collecting data on children’s behavior without parental consent would be prohibited.
- Social Scoring by Governments: The use of AI for general social scoring by public authorities is banned. This prevents discriminatory practices based on an individual’s behavior or socio-economic status.
- Real-time Remote Biometric Identification in Public Spaces: This is generally prohibited except for specific public security purposes, and only with robust safeguards. Imagine a constant surveillance system using facial recognition to track individuals in a city – such a system would face stringent restrictions.
High-Risk AI Systems
Certain AI applications are classified as high-risk and must meet stringent requirements:
- Risk Management and Mitigation: High-risk systems, such as those used in healthcare, transport, or law enforcement, must undergo rigorous risk assessments and implement measures to mitigate identified risks. An AI diagnostic tool in a hospital must ensure high accuracy and reliability to avoid misdiagnoses.
- Data Governance: High-risk AI systems must ensure high-quality data sets, continuous monitoring, and human oversight to maintain compliance. For instance, an AI system used in autonomous driving must be trained on high-quality, representative data and remain subject to ongoing monitoring and human oversight.
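The Act’s risk-based approach described above can be summarized as a simple lookup. The mapping below is a non-authoritative sketch: the use-case labels and their tier assignments are our own simplified assumptions for illustration, not legal classifications under the Act.

```python
# Illustrative (non-authoritative) mapping of example use cases to the
# AI Act's risk tiers. Tier assignments are simplified assumptions,
# not legal determinations.

RISK_TIERS = {
    "government_social_scoring": "prohibited",   # banned outright
    "subliminal_manipulation":   "prohibited",
    "medical_diagnosis":         "high-risk",    # strict requirements apply
    "recruitment_screening":     "high-risk",
    "customer_service_chatbot":  "limited-risk", # transparency duties
    "spam_filter":               "minimal-risk", # largely unregulated
}

OBLIGATIONS = {
    "prohibited":   "may not be placed on the EU market",
    "high-risk":    "risk management, data governance, human oversight",
    "limited-risk": "must disclose AI interaction to users",
    "minimal-risk": "no specific AI Act obligations",
}

def obligations(use_case: str) -> str:
    """Return the (simplified) obligations for an example use case."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return OBLIGATIONS.get(tier, "assess against the Act's criteria")

print(obligations("medical_diagnosis"))
```

A real classification exercise must be made against the Act’s own definitions and annexes, case by case; the point of the sketch is only that obligations scale with the assessed risk tier.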
Implications for Bosnia and Herzegovina & Conclusion
As part of the broader European regulatory environment, Bosnia and Herzegovina will need to align its national laws with the AI Act. This alignment ensures that AI systems used within its borders comply with EU standards, promoting safe and ethical AI use. At Marić & Co. d.o.o., we are committed to assisting clients in staying informed and preparing for these regulatory changes, ensuring they are well-positioned to comply with emerging AI frameworks.
The EU’s Artificial Intelligence Act sets a landmark precedent in regulating AI technologies, emphasizing the protection of personal data and prohibiting harmful AI practices. Legal professionals and businesses must stay informed and compliant with these regulations to foster trustworthy AI innovations. By ensuring transparency, accountability, and respect for fundamental rights, the AI Act aims to build a future where AI benefits society while safeguarding individual freedoms.
Most of the Act’s provisions will become applicable on August 2, 2026, two years after its entry into force. However, some requirements, mainly those related to prohibited AI practices, will apply as of February 2, 2025.