Four Years After Introduction, European Union Enacts AI Act
Last Updated: August 2, 2024, by Dawn Allcot | Original Article by CNBC.com
Always a few steps ahead of the U.S. on user privacy and transparency in technology and the internet, the EU recently became the first governing body to put a comprehensive AI law, the AI Act, into effect.
The AI Act was proposed four years ago as a way to govern the development, use, and application of AI. It is the first “comprehensive and harmonized regulatory framework for AI across the EU,” wrote CNBC. In May, EU member states, lawmakers, and the European Commission gave final approval to the law. On August 1, 2024, the EU AI Act went into effect as the first law of its kind in the world.
What It Means
The legislation will regulate AI based on the risks that specific technologies and platforms pose. Requirements for high-risk systems will include:
Adequate risk assessment and mitigation systems
High-quality training datasets to minimize bias
Routine activity logs
Detailed documentation shared with regulators to assess compliance
High-risk systems include autonomous vehicles, medical devices, lending algorithms, educational scoring, and biometric security systems. In other words, the areas Google has previously referred to as “Your Money or Your Life”: crucial topics that affect people’s health, finances, or livelihoods.
The legislation could significantly affect U.S.-based tech giants, including Amazon, Apple, Microsoft, Meta, and Tesla, since these corporations are behind the world’s most advanced AI systems. Because the rules will apply to any organization that operates in the EU, they will affect the way certain AI systems are rolled out and regulated across the world. And, according to experts who spoke with CNBC.com, the law could also affect non-tech companies that deploy AI.
“It is likely to impact many businesses, especially those developing AI systems but also those deploying or merely using them in certain circumstances,” Tanguy Van Overstraeten, head of law firm Linklaters’ technology, media and telecommunications practice in Brussels, told CNBC.com.
Business owners in the U.S. should be prepared for similar legislation that could place stricter regulations on SMBs and enterprise organizations, even those that operate strictly in the U.S.
What’s Next?
Will the current legislation, along with anything the U.S. might enact to follow suit, stifle innovation?
We think it will place healthy limits on high-risk applications and create needed accountability. It might even encourage more people to experiment with AI, knowing there are safeguards in place, much as you may be more comfortable visiting a website with strong security and data privacy protections.
Stay tuned to TopAITools.com to see how these regulations develop and affect U.S. tech companies.