10 Key Changes in the EU AI Act: What Enterprises Need to Know About Postponed Deadlines
The European Union's AI Act, initially set with strict compliance deadlines, has been revised under a significant provisional agreement between member states and the European Parliament. The deal aims to ease the burden on businesses by extending timelines, clarifying rules, and narrowing high-risk classifications. Here are the 10 most important updates you need to understand.
1. High-Risk AI Systems Get Extended Deadlines
Under the provisional agreement, the original compliance date of August 2, 2026, for high-risk AI systems has been pushed back. Stand-alone high-risk AI systems now have until December 2, 2027, to comply, while AI used in products covered by existing EU sectoral safety regulations, such as medical devices or machinery, must meet requirements by August 2, 2028. This gives enterprises roughly 16 to 24 additional months to adapt their processes and systems.

2. Overlapping Rules for AI in Machinery Are Removed
The agreement eliminates duplicative regulatory requirements for AI embedded in machinery products. Instead of complying with both the AI Act and sector-specific machinery rules, these AI systems will now follow only the sectoral safety regulations. However, safeguards ensure that equivalent levels of health and safety protection are maintained, preventing any regulatory gaps.
3. Stricter Definition of a 'Safety Component'
Previously, any AI feature that could be considered a safety component faced automatic high-risk classification. The provisional deal narrows this definition: AI that merely assists users or enhances performance is not automatically high-risk, unless its failure directly creates health or safety hazards. This change reduces the compliance burden for many applications, such as productivity tools in industrial settings.
4. Mechanism to Resolve Overlaps with Sectoral Laws
For sectors like medical devices, toys, lifts, machinery, and watercraft, the co-legislators agreed on a mechanism to clarify when the AI Act applies and when existing sectoral legislation takes precedence. This avoids confusion and double compliance, giving manufacturers who must navigate both sets of rules a single, clear pathway.
5. Deadlines for AI Regulatory Sandboxes Extended by One Year
Member states now have until August 2, 2027, to establish AI regulatory sandboxes—controlled environments for testing innovative AI systems under regulatory supervision. This is a one-year extension from the original deadline, giving national authorities more time to develop frameworks that support experimentation without compromising safety or compliance.
6. Earlier Watermarking Obligations for AI-Generated Content
While most deadlines are being pushed back, watermarking requirements for AI-generated content are moving forward. The obligation to label synthetic content (e.g., deepfakes or AI-generated text) will apply from December 2, 2026—two months earlier than the European Commission initially proposed. This reflects the urgency of addressing disinformation and transparency in the age of generative AI.
7. Mid-Size Firms Gain Same Exemptions as Small Companies
Previously, only small and medium-sized enterprises (SMEs) benefited from certain exemptions and reduced obligations. The provisional deal extends these advantages to small mid-cap companies, meaning firms with up to around 500 employees now have more breathing room. This change recognizes that mid-size firms often face similar resource constraints as smaller ones when adapting to complex regulations.

8. Central Supervision by the EU AI Office
General-purpose AI systems—those capable of performing many different tasks, such as large language models—will be supervised centrally by the newly established EU AI Office. This ensures consistent oversight across the bloc for powerful, versatile AI. Meanwhile, national authorities retain responsibility in specific domains: law enforcement, border management, the judiciary, and financial services. This division aims to balance efficiency with local expertise.
9. Reduction in Recurring Administrative Costs for Companies
Cyprus's Deputy Minister for European Affairs, Marilena Raouna, highlighted that the agreement significantly supports businesses by reducing recurring administrative costs. The extended deadlines and simplified rules mean fewer compliance checks and less paperwork, especially for companies that must report on high-risk AI systems. This cost relief is intended to foster innovation without sacrificing safety.
10. Formal Adoption Still Pending: Original Deadline Applies Until Then
Despite the provisional agreement, the changes are not yet law. Both the European Parliament and the Council must formally adopt the deal before it takes effect, a step the co-legislators aim to complete before August 2, 2026. Until formal adoption, the original 2026 deadline remains in force, so enterprises should continue preparing for the earlier date to avoid last-minute compliance gaps.
In conclusion, the provisional agreement represents a pragmatic shift in the EU's AI regulation landscape. By extending timelines, narrowing high-risk definitions, and reducing administrative burdens, EU lawmakers aim to give businesses the flexibility needed to innovate responsibly. Companies should monitor the formal adoption process closely and adjust their compliance roadmaps accordingly. For a deeper dive into the specific timelines, see our breakdown of high-risk deadlines or the section on watermarking obligations.