EU AI Act by vestbee.com
13 February 2024

Katarzyna Groszkowska

Editor, Vestbee

EU AI Act: all you need to know before the legislation comes into force in 2024

The growth of artificial intelligence in recent years has been unprecedented. OpenAI's release of ChatGPT in November 2022 significantly accelerated the development of this technology, marking a new era in the AI boom. Last year, McKinsey conducted a global survey among companies across different industries, revealing that one-third of respondents regularly use generative AI tools in their business processes, with more than 40% planning to increase their investment in AI. The European VC and startup ecosystem is witnessing the same trend: more and more startups are creating new solutions based on AI, while funds unanimously point to further expansion of the AI market as a key trend.

The dynamic rise of the AI sector is not without its challenges, many of them centering on legislative and regulatory aspects. The European Union is currently leading the way toward more comprehensive and responsible use of AI systems. After months of negotiations, on February 2, EU member countries unanimously reached a deal on the bloc's AI Act, the world's first comprehensive regulation on AI.

Vestbee gathered all the information about this law, as well as the key issues and compliance obligations it brings.

What is the EU AI Act about? 

The EU AI Act is the world's first comprehensive legislative framework on artificial intelligence. It aims to regulate AI's deployment and use within the EU's member states.

Not all AI systems will be affected equally by the new legislation. The EU has proposed a tiered, risk-based framework built around AI use cases. This means the primary issue addressed by the Act is not the technology behind a product, but rather how it is or could be used, and what risk it can pose. This compliance framework assigns different requirements to each category, as follows:

  • Unacceptable risk is prohibited. Prohibited AI systems are characterized by uses that pose unacceptable risks to the safety, security, and fundamental rights of people. These prohibitions encompass a range of applications, including AI systems that can circumvent the users’ free will in a manner likely to cause harm. 
  • High-risk AI systems are to be regulated. They fall into two main groups: those serving as safety components or products subject to existing safety standards, such as medical devices, and those used for specific sensitive purposes such as biometrics, critical infrastructure, access to essential services, law enforcement, justice administration, education and employment. 
  • Limited-risk AI systems are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, as with chatbots, or viewing AI-generated content, as with deepfakes.
  • Minimal-risk AI is unregulated; this covers the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters. However, this is changing with generative AI.
  • General Purpose AI systems face obligations akin to high-risk AI systems due to their breadth of potential use cases. 
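The tiered framework above can be thought of as a lookup from a system's intended use to its compliance obligation. The following is a minimal illustrative sketch only; the tier names, example use cases, and the `obligation_for` function are paraphrased simplifications for this article, not anything defined in the Act itself.

```python
# Toy model of the AI Act's risk tiers: the obligation depends on the
# use case, not on the underlying technology. Examples are paraphrased
# from the categories described above; this mapping is illustrative.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["subliminal manipulation"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["medical devices", "biometrics", "law enforcement"],
        "obligation": "regulated (conformity requirements)",
    },
    "limited": {
        "examples": ["chatbots", "deepfakes"],
        "obligation": "transparency (disclose AI to end-users)",
    },
    "minimal": {
        "examples": ["video games", "spam filters"],
        "obligation": "unregulated",
    },
    "general_purpose": {
        "examples": ["foundation models"],
        "obligation": "obligations akin to high-risk systems",
    },
}

def obligation_for(use_case: str) -> str:
    """Return the sketched obligation for a known example use case."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["obligation"]
    # Unlisted uses would still be classified by risk, not technology.
    return "classify by intended use"
```

For instance, `obligation_for("spam filters")` returns `"unregulated"`, while `obligation_for("biometrics")` lands in the regulated high-risk tier, mirroring how the same underlying model can face very different requirements depending on deployment.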

Why is the AI Act needed, and who does it apply to?

According to the European Commission, there are four key objectives behind the legislation. 

Primarily, it strives to ensure the safety of AI systems entering the EU market, aligning them with existing EU law to safeguard consumers. It will also provide legal certainty to foster investment and innovation in AI, and its human-centric approach will guarantee that AI policies enhance governance mechanisms for fundamental human rights and safety requirements. Ultimately, the Act aims to create a unified market for safe AI applications, preventing market fragmentation and positioning the EU as a leader in responsible AI adoption.

To put it simply, this new AI package will set clear requirements for developers, deployers, and users of AI systems, impacting various entities involved in the AI value chain within the 27 EU member states. Moreover, its scope is extraterritorial: all providers of AI systems will be affected, regardless of where they are located, as long as they place their products on the EU internal market.

What is more, the law also takes into account the needs of small and medium-sized enterprises and startups developing AI solutions, and seeks to reduce their administrative and financial burdens.

What about startups and their outlook on this new legislation? 

Sifted interviewed founders and investors, showing that the AI Act has stirred mixed feelings among them, especially regarding the potential hindrance it may pose to the growth of the AI industry in Europe. VCs in particular highlight that the new regulations could reduce the competitiveness of European AI startups, with many pondering relocation outside the EU, for example to the US, where they could develop without the same level of regulatory burden.

“Startups will go to the US, they’ll develop in the US, and then they’ll come back to Europe as developed companies, unicorns, that’ll be able to afford lawyers and lobbyists. Our European companies won’t blossom, because no one will have enough money to hire enough lawyers,” Piotr Mieczkowski, managing director of Digital Poland, commented to Sifted.

Some of the major worries expressed by startup founders include unnecessary bureaucracy that could impede the growth of smaller startups in the AI space. Startups will now have to consider compliance and legal aspects from the beginning of their projects, rather than addressing them only when scaling up or seeking investors. This legal overhead will disproportionately affect early-stage companies, as established ones have far more resources to stay agile and adapt.

The regulation also raises other issues that startups will have to address, including an obligation of transparency around training data: companies building general-purpose models will have to provide detailed public summaries of the data used to train them. This opens a whole new discussion about intellectual property and sharing sensitive information with competitors.

However, as Sifted reports, many voices in the market also take a more optimistic stance, pointing out that regulatory sandboxes and guidance will be offered to smaller companies. The Act, they say, will ultimately strengthen trust in AI in Europe, providing clarity for both developers and users.

What’s next?

Following a long period of discussions and negotiations, on December 8, 2023, the EU Commission, Council, and Parliament reached a provisional political agreement on the key issues regarding the new framework. The text of the law was finalized and published on January 24, 2024, and it is expected to be adopted by both Parliament and Council to become official EU law in the second half of 2024.

On February 2, EU countries unanimously agreed on the bloc's Artificial Intelligence Act, overcoming concerns from countries like Germany and France that the regulations could stifle innovation. The deal includes the creation of the EU's Artificial Intelligence Office to enforce the Act, and the text now awaits formal approval from the European Parliament, with a plenary vote expected in April.

