As artificial intelligence continues to boom, so do efforts to regulate and contain its growth within legal frameworks. The EU’s Artificial Intelligence Act (EU AI Act), the world’s first attempt at comprehensive AI regulation, has moved from lawmaking to rollout. On August 2, 2025, new obligations for general-purpose AI models entered into application, following the release of the Commission-endorsed Code of Practice for general-purpose AI.
But Europe’s AI ecosystem is restless. From unicorns like Synthesia to fast-scaling challengers like Mistral, founders warn that the law risks choking competitiveness and driving talent abroad. Open letters signed by dozens of startup leaders and investors urge Brussels to "stop the clock", citing unclear rules and delayed guidance that have left startups unprepared.
To calm tensions, Brussels is inviting industry experts into the drafting process. But the question remains: will the AI Act balance regulation with innovation, or push talent elsewhere?
Why regulate AI
AI’s spread has been fast and consequential. Systems that generate text, classify faces, or suggest medical triage are already woven into services and products. However, alongside value creation come well-publicised harms: biased decision-making, hidden surveillance, and automated manipulation of public discourse. The EU’s policy answer is a comprehensive regulation intended to protect rights, boost trust, and harmonise rules across the single market, a strategy Brussels frames as both ethical and strategic (akin to a GDPR for AI).
Public sentiment gives the Commission room to act: opinion polls across major EU states show broad support for stronger AI governance, especially where privacy and misinformation are concerned. Regulators argue that clarity and trust will, in the medium term, encourage adoption and investment. But for many founders, this is a debate about tempo: regulation that’s too complex, too fast, or unevenly implemented can raise costs, slow product roadmaps, and tilt incentives away from Europe.
In short, the disagreement is not over whether to regulate, but over how. The Act’s GPAI obligations were outlined in 2024, but detailed, practical guidance arrived late in the implementation cycle: the Code of Practice was published only in July 2025, just weeks before those obligations became applicable. For founders who must plan hiring, data acquisition, and technical roadmaps months in advance, that gap has real consequences.
The AI Act is also intended to have geopolitical reach beyond Europe. By setting the standard at home, Brussels hopes other jurisdictions will follow suit, encouraging the ethical and sustainable development of AI technologies.
The global AI landscape, however, is moving in other directions. The US predominantly relies on industry-driven guidelines and rapid product iteration to set norms; China couples rapid deployment with state coordination. Critics say Europe risks becoming the place where startups face heavier compliance costs and greater legal uncertainty, just as capital and talent flow toward more permissive markets.
The tension point between lawmakers and industry
The AI ecosystem’s discontent boiled over in mid-2025. Over thirty founders and venture investors signed an open letter published on Sifted, arguing that the Act risks "creating a fragmented, unpredictable regulatory environment that will undermine innovation, discourage investment, and ultimately leave Europe behind." The letter, drafted by entrepreneurs including Johannes Schildt, Anton Osika, and Fredrik Hjelm, urged policymakers to “stop the clock” and hold off on bringing certain obligations into force until practical compliance tools were in place.
A second, broader push came from the EU AI Champions Initiative, which asked the Commission for a two-year "clock-stop" on enforcement to allow for simplification and wider stakeholder work. The ask was bigger, but the message was the same: European startups need more time and greater regulatory clarity.
The EU AI Act: milestones and enforcement
Since entering into force on August 1, 2024, the AI Act has been moving steadily through its phases, and some of its most sweeping provisions are already binding. As of February 2, 2025, AI practices deemed "unacceptable", such as real-time biometric identification in public spaces, manipulative systems, social scoring, and emotion recognition in workplaces and schools, are banned outright.
The next milestone was August 2, 2025, when general-purpose AI (GPAI) obligations took effect. Since that date, providers must meet requirements for documentation and transparency, disclose their copyright and training data practices, and manage systemic risks. Models already on the market by that date receive a grace period until August 2, 2027.
Looking further ahead, August 2, 2026, will bring the rules for high-risk AI systems, such as those used in hiring, healthcare, and credit scoring. These carry the Act’s most stringent obligations: conformity assessments, risk management, human oversight, and registration in the EU’s database of high-risk systems.
The penalty regime reflects the seriousness of non-compliance. Violations of the prohibited practices can draw fines of up to €35 million or 7% of global annual turnover, whichever is higher. Other infringements, including breaches of certain GPAI obligations by model providers, carry fines of up to €15 million or 3%. Supplying incorrect or misleading information to authorities can cost up to €7.5 million or 1%. The law recognises smaller firms and startups: for SMEs, the lower of the two amounts in each tier applies.
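To make the tiering concrete, here is a minimal sketch in Python that computes a fine ceiling from the figures above, assuming the higher-of-the-two rule for larger firms and the lower-of-the-two rule for SMEs described in the Act’s penalty provisions. The function name, tier labels, and `is_sme` flag are illustrative shorthand, not terms from the Act.

```python
# Illustrative sketch of the AI Act's fine ceilings. Tier labels are
# informal shorthand; amounts are (fixed cap in EUR, share of global turnover).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),  # banned AI practices
    "other_obligations":    (15_000_000, 0.03),  # incl. certain GPAI breaches
    "misleading_info":      (7_500_000,  0.01),  # incorrect info to authorities
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the fine ceiling in euros for a given violation tier."""
    fixed_cap, turnover_pct = FINE_TIERS[tier]
    turnover_cap = turnover_pct * global_turnover_eur
    # Larger companies face the higher of the two caps; SMEs the lower.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Example: a large provider with EUR 2bn global turnover violating a
# prohibition faces a ceiling of max(EUR 35m, 7% of EUR 2bn) = EUR 140m.
print(f"{max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```

The point of the scaling is visible in the example: for a large provider, the turnover-based cap quickly dwarfs the fixed amount, while a startup with modest revenue would be bounded by the smaller of the two figures.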
Code of Practice updates & EU’s stakeholder invitation
Ever since the Act’s adoption, one of its most contested elements has been the Code of Practice, a voluntary, practical toolkit meant to help providers demonstrate compliance with Articles 53 and 55. On July 10, 2025, the European Commission published its text in three chapters: Transparency, Copyright, and Safety & Security. But timing became a flashpoint, emphasised in both open letters: the Code was originally expected in May, and the delay left providers scrambling to prepare for the August obligations.
In response to ongoing concerns, Brussels has opened the next drafting phase of the Code of Practice, this time covering generative AI, and is inviting stakeholders into the process. Interested parties, including providers, deployers, civil society, and academic experts, can register their interest by October 2, 2025.
Regulatory uncertainties with high stakes
Regulating AI is crucial to creating an environment where the technology actually serves humans rather than exploiting questionable datasets and entrenching bias. These systems, once confined to the tech world, now influence not only markets and businesses but also fundamental rights, public trust, and societal norms. The EU AI Act is a historic attempt to build a framework that protects citizens while giving companies legal clarity. Yet the challenge is immense: drafting rules strict enough to prevent harm but flexible enough to allow innovation.
For Europe to remain competitive in the global AI race, policymakers must get this balance right. Failure could mean innovation moves elsewhere, leaving Europe with rules on paper but little real-world impact.