The European Parliament has approved the bloc’s landmark rules for artificial intelligence, known as the EU AI Act, clearing a major hurdle for the first formal regulation of AI in the West to become law.
The AI Act sets out a risk-based approach to regulating AI: the greater the risk a system poses to human rights and fundamental freedoms, the stricter the requirements it must meet. The highest-risk category includes systems used for social scoring, real-time biometric identification, and decision-making that could have a significant impact on individuals.
The AI Act also includes provisions for ensuring that AI systems are transparent, accountable, and fair. Developers of AI systems will be required to provide users with information about how the systems work and to take steps to mitigate any potential risks.
The approval of the AI Act is a major victory for those who have been concerned about the potential risks of AI. It sends a strong signal to the tech industry that Europe is serious about regulating AI in a way that protects human rights and fundamental freedoms.
The AI Act is not without its critics, however. Some argue that the rules are too complex and burdensome, while others believe that they do not go far enough to protect against the potential risks of AI.
Despite these criticisms, the approval of the AI Act is a significant milestone in the regulation of AI. It is likely to have a major impact on the way that AI is developed and used in Europe, and it could serve as a model for other jurisdictions around the world.
What does the AI Act mean for businesses?
The AI Act will have a significant impact on businesses that develop, use, or sell AI systems in Europe. The most significant requirements of the AI Act include:
- Risk assessment: Businesses will be required to conduct a risk assessment for any AI system they develop, use, or sell. The assessment will need to identify the potential risks the system poses and establish whether the system is necessary and proportionate.
- Transparency: Businesses will be required to provide users with information about how their AI systems work. This information will need to be clear, concise, and easy to understand.
- Accountability: Businesses will be required to be accountable for the way that their AI systems are used. This means that they will need to be able to demonstrate that they have taken steps to mitigate any potential risks posed by their systems.
- Fairness: Businesses will be required to ensure that their AI systems are fair. This means that they will need to ensure that their systems do not discriminate against individuals on the basis of protected characteristics such as race, religion, or gender.
Businesses that fail to comply with the requirements of the AI Act could face significant fines: the European Commission has said it is prepared to issue fines of up to €20 million or 4% of global annual turnover, whichever is greater.
The AI Act is a complex piece of legislation, and businesses will need to take steps to ensure that they are compliant. However, the AI Act also provides businesses with an opportunity to demonstrate their commitment to responsible AI development and use.
Technology/Digital Assets Desk