Artificial intelligence. Finally, it’s here. But with its arrival comes a tidal wave of questions: how do we harness its power responsibly? How do we navigate its risks? The world watched with bated breath as the EU took a bold step, reaching a provisional agreement on the AI Act on December 8, 2023. This was the big one, the game-changer. Everyone knew: the rules of the game had just shifted. Below, five AI industry executives share what they see happening now that agreement on the Act has been reached.
The EU AI Act: A Glimpse
The EU AI Act is a landmark regulation aiming to make AI in Europe safe, trustworthy, and respectful of fundamental rights. It’s the world’s first comprehensive AI law, and here’s a quick rundown:
What it does:
- Classifies AI based on risk: High-risk systems (think facial recognition or credit scoring) face stricter rules, while low-risk ones have lighter requirements.
- Prohibits certain harmful AI: This includes social scoring, manipulative AI targeting minors, and subliminal advertising.
- Mandates transparency and explainability: Developers must be able to explain how their AI works and ensure human oversight.
- Protects against bias and discrimination: AI systems must be fair and non-discriminatory, with systems used in critical sectors like healthcare or law enforcement facing extra scrutiny.
Timeline:
- December 2023: Provisional agreement reached between Parliament and Council.
- 2024-2025: Expected formal adoption and entry into force.
- 2025-2027: Transition period for businesses to comply.
Why it matters:
- Boosting innovation and trust in AI: By setting clear rules, the Act aims to foster responsible AI development and build public trust.
- Protecting citizens and promoting ethical AI: Safeguards against harmful applications and biases ensure ethical and human-centric AI.
- Global leadership in AI governance: The Act could set a global standard for regulating AI, influencing other countries’ policies.
Here are some resources:
- European Parliament summary: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- Council of the European Union press release: https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/
Julie Myers Wood, CEO at Guidepost Solutions
“Amid the excitement about AI progress and the potential to transform many industries, the EU recently passed the first comprehensive AI legislation. The Act provides a risk-based framework and rules, including requiring certain reviews on large foundational models and outlawing certain types of AI uses, such as social scoring. Despite the fanfare, many provisions of the Act won’t go into effect for two years, leaving plenty of time for AI advances to further leap ahead of the regulators. The Act’s lead-in time is beneficial in giving companies time to start working through compliance frameworks, but it unfortunately also gives other jurisdictions time to designate competing enforcement regimes and requirements. We recommend that all companies think through what obligations they will have under the Act and what compliance looks like for them, and start the process well in advance.”
David Lasky, Co-CEO & Managing Director at ScaleNorth Advisors
“The EU’s AI Act, currently in the process of formal adoption, is poised to have a substantial impact on the development and use of AI not only within the EU but globally. On the positive side, the Act seeks to enhance trust and transparency in AI by making systems more explainable, addressing concerns related to bias and discrimination. It also emphasizes human control and oversight for high-risk AI systems, ensuring the implementation of risk management measures to prevent harm and upholding fundamental rights such as privacy and non-discrimination.
Moreover, the Act is anticipated to stimulate innovation in trustworthy AI by establishing clear standards, potentially setting a global precedent for ethical AI development. However, some foresee negative impacts, arguing that the Act’s strict requirements might stifle innovation and pose challenges, particularly for smaller companies in the AI market. Compliance burdens and potential protectionism concerns, given the Act’s extraterritorial scope, are also highlighted as potential drawbacks.
Uncertainties remain, including the effectiveness of enforcement and the specific impact on sectors like healthcare and law enforcement. While the EU’s leadership in tech law, exemplified by regulations like GDPR, suggests a commitment to responsible innovation, the ongoing debate surrounding the effectiveness of such regulations underscores the delicate balance needed between regulation and fostering innovation. In essence, the EU’s AI Act represents a significant stride toward ethical AI regulation, with the potential for positive impacts on trust, transparency, and human control over AI systems, albeit with acknowledged challenges and uncertainties.”
Nazmul Hasan, Founder & CIO at AI Buster
“The EU AI Law is indeed a groundbreaking achievement in global technology legislation, characterized by its strict regulations and robust enforcement mechanisms. The establishment of an AI Office and Council, complemented by an Advisory Board of independent experts, underscores the EU’s commitment to closely monitor and regulate the rapidly evolving AI field. This framework marks a pivotal shift from voluntary guidelines to mandatory regulations, emphasizing the need for compliance and promoting standardized testing practices in AI. With its enactment, the law is poised to profoundly influence AI governance models across both public and private sectors, potentially setting a new global standard for AI regulation.
Navigating this new legal landscape, businesses are faced with a complex environment of risk categorization, particularly for high-risk AI systems which are vital for areas like infrastructure and fundamental rights. These systems are subject to rigorous requirements including security, transparency, and human oversight. The law’s expansive reach, applying to any entity using AI systems that impact EU citizens, extends beyond the EU, necessitating a proactive approach by companies worldwide to ensure compliance. The law’s emphasis on fundamental rights, through mandatory legal impact assessments and restrictions on certain AI applications, reflects a deep-seated commitment to harmonize technical progress with ethical considerations.
Positioned to potentially set a global benchmark in AI regulation, similar to the impact of the GDPR on data protection, the EU’s AI Law holds a strategic advantage. It not only formalizes comprehensive rules around AI but also influences international technology standards. Its evolving legal requirements, designed to keep pace with technological advances, ensure the law’s relevance in the face of rapid tech evolution. This landmark legislation goes beyond mere regulation of AI technology; it places significant emphasis on the protection of civil rights, setting a precedent for future AI-related laws on a global scale.”
Rafał Pisz, CEO at QuantUp
“Insurance and banking are among the sectors that are considered high-risk when it comes to implementing AI. But anyone who works in these industries knows that you don’t need the AI Act to know that the implementation of AI in the financial sector is already in line with the strict recommendations of local financial regulators, such as the Polish Financial Supervision Authority. So the question is, do we really need a new regulatory document, and why couldn’t we use the existing ones to update them?
The AI Act is too contradictory in its wording. It requires both general-purpose AI systems and AI models to be transparent, but especially in the case of large models, it won’t be possible to find out why a model makes a certain decision. That requirement could be a dead letter.
The AI Act is also dominated by prohibitions, obligations, and sanctions. It’s hard to find anything in it about encouragement and real support.
To become more competitive thanks to AI, small and medium-sized enterprises don’t need so-called regulatory sandboxes and real-world test environments set up by national authorities for developing and training innovative AI before it goes to market. In this respect, the AI Act is worthless to them.
And let’s face it, there is no gain without pain. If you want to innovate, you have to be willing to take risks. You can’t have your cake and eat it too. And if you want to innovate, you need the opportunity to do so, not a new set of rules. Unfortunately, the AI Act doesn’t provide that.
In fact, the EU AI Act is the world’s first regulation on AI. But I don’t think it can inspire the rest of the world, or even become a global benchmark.
The AI Act is based on Western values such as fundamental human rights, democracy, and the rule of law. Can 5.5% of the world’s population (the people living in the EU) set a standard for the rest of the world when that world is so different?
According to the World Justice Project’s 2022 data, only 40% of countries are grounded in the rule of law and democracy. Even within this group, the quality of those values varies widely.
The world is different if we take into account, for example, the position of a person as an individual or as part of a collective community. Moreover, the interpretation of democracy is different in Europe than in Asia or South America.
Since the AI Act should be transferable to other parts of the world, it should be based on things that are common to all humanity, something universal. That’s why, unfortunately, the AI Act will be of marginal importance in the global game of AI regulation.”
Alexis Porter, Privacy Researcher at BigID
“There’s a new sheriff in town: the EU’s AI Act, one of the world’s first comprehensive attempts to govern the use of AI. Enforcement won’t kick in until 2025, but IT leaders are already preparing so they don’t fall behind. To figure out where the AI Act will take us, we only need to look at GDPR.
When the EU introduced GDPR five years ago, the world underwent a seismic shift as organizations scrambled to update their privacy notices and ensure transparency in how they govern their data. Since then, GDPR has accrued over $4 billion in fines.
While there’s still a while before the new AI regulations kick in, there are steps organization leaders can take today to ensure that they are adopting AI responsibly, including embracing a forward-thinking data governance strategy and engaging with AI developers and security experts.”