The European Union’s AI Act: What is It and What You Should Know

Natalie S. Lesnick | April 2024

On March 6, 2024, the European Union solidified its position as a global leader in AI legislation by giving final approval to its December 2023 provisional agreement on the Artificial Intelligence Act (AI Act). The legislation is expected to take effect between May and July 2024, with various enforcement provisions phasing in afterward. The AI Act is meant to provide artificial intelligence (AI) developers and others with clear requirements and obligations for using AI while reducing administrative and financial burdens for smaller organizations.

Designed with consumer safety in mind, the AI Act uses a “risk-based approach” to evaluate AI products and services; essentially, the riskier the AI, the more scrutiny it faces before it can be made available for use. Many AI systems, including spam filters and AI-enabled video games, are considered minimal or no-risk, and companies offering them can choose to follow voluntary requirements and codes of conduct to minimize the systems’ risks. However, higher-risk uses of AI, such as medical devices, safety components of products (e.g., AI applications in robot-assisted surgery), or biometric identification systems, face stringent requirements before being allowed on the market. These requirements include:

  • Adequate risk assessment and mitigation systems;
  • High-quality datasets going into the system to minimize risks and discriminatory outcomes;
  • Logging activity to ensure results are traceable;
  • Detailed documentation of all information necessary on the system and its purpose;
  • Clear and adequate information to the deployer;
  • Appropriate human oversight measures; and
  • High-level robustness, security, and accuracy.

Developers of generative AI will also need to assess and mitigate the risks for their products.

U.S.-based companies and providers may ask, “Why does this matter to me?” For organizations that do business in the European Union, the answer is easy: you will need to comply with the Act. For others, it is still important to pay attention. In October 2023, the Biden Administration issued an Executive Order on AI as a first step in addressing advancements in this area. Additionally, legislatures in several states are already taking steps to regulate AI, whether that involves discrimination (which remains a complex problem for the technology), deepfakes, or chatbots like ChatGPT. While many proposals have a way to go before being finalized, the AI Act serves as a framework for AI governance that other states and countries may follow, including the United States. Organizations thinking about employing AI technologies, or those already doing so, should consider monitoring and auditing controls, risk assessments, security measures, and similar safeguards to ensure they are prepared for AI governance requirements moving forward.

This blog was written by Natalie Lesnick J.D., CHC, CHPC. For more information on this topic, contact her at [email protected].

You can also keep up-to-date with Strategic Management Services by following us on LinkedIn.

About the Author

Natalie Lesnick is a Consultant at Strategic Management Services, LLC. Ms. Lesnick has expertise in assessing provider compliance with the federal healthcare program rules and federal healthcare laws, including HIPAA and the Affordable Care Act.

Subscribe to blog