The European Union is going through a pivotal moment in its digital regulation policy. After adopting the world's first regulation on artificial intelligence in March 2024, Brussels is now considering making it considerably more flexible. This U-turn comes in a tense geopolitical context, marked by the return of Donald Trump to the American presidency and his administration's repeated criticism of what it describes as European “excessive regulation.”
The European regulation on AI, which has been entering into force in stages since August 2024, classifies artificial intelligence systems according to four levels of risk: unacceptable, high, limited, and minimal. High-risk systems, used in sensitive areas such as critical infrastructure, education, or law enforcement, are subject to strict obligations including human oversight, technical documentation, and a risk management system.
However, this groundbreaking regulation is the subject of increasing criticism. At the beginning of 2025, more than 40 leaders of major European companies, including ASML, Philips, Siemens and Mistral AI, called for a “two-year stop” before the core obligations come into force. Their main argument: these rules would stifle innovation in a sector where Europe is already significantly behind the United States and China.
On 19 November 2025, European Commissioner Henna Virkkunen, in charge of technological sovereignty, presented the Digital Omnibus package, described as an “administrative simplification” initiative. This package of measures is officially aimed at reducing regulatory complexity without compromising protection standards. The stated objective is ambitious: to reduce the administrative burden by at least 25% for all businesses and by 35% for SMEs.
According to internal documents seen by Reuters, several easing measures are being considered. Businesses could be exempt from the requirement to register their AI systems in a European database when these systems are used only for restricted or procedural tasks. More significantly, the text provides for a grace period: national authorities would only be able to impose sanctions from 2 August 2027.
The requirement to label AI-generated content, intended to combat deepfakes and disinformation, would also be subject to a transitional period. These adjustments come in a context where delays in the availability of technical standards have complicated businesses' compliance efforts.
The Digital Omnibus is not limited to relaxing the AI Act. It also includes substantial changes to the General Data Protection Regulation (GDPR), which has been a pillar of privacy protection in Europe since 2018. In particular, these changes aim to facilitate the training of AI models by European companies.
Among the proposed changes is a more restrictive redefinition of the concept of “personal data.” Information would no longer be considered as such if the company collecting it is not in a position to directly identify the person concerned. This change could exclude a lot of pseudonymized data from the full scope of the GDPR.
The text also proposes to allow the processing of personal data for AI training based on the “legitimate interest” of companies, a more flexible justification than the explicit consent currently required in many cases. With regard to sensitive data (ethnicity, political opinions, health, sexual orientation), reinforced protection would only apply to data that “directly” reveals these characteristics, excluding data that would reveal them only by inference.
Finally, the European Commission plans to remove cookie consent banners for certain uses considered to be low-risk, thus responding to what it describes as “consent fatigue” of Internet users. Consent preferences could be automatically transmitted by browsers and operating systems once technical standards are defined.
These proposals have met with strong opposition from associations defending digital rights and privacy. The Austrian activist Max Schrems, an emblematic figure in these fights, denounces a “gift to technological giants” which would constitute “an enormous regression for the privacy of Europeans, ten years after the adoption of the GDPR”.
The Austrian association noyb is worried about the “potential threat that the project represents for the fundamental rights of Europeans”. For these organizations, the Commission is using administrative simplification as cover to weaken essential protections under pressure from technology lobbies.
Michael O'Flaherty, Council of Europe Commissioner for Human Rights, also warned at the Lisbon Web Summit: “Let us be careful not to remove the essential protective elements of these laws. Don't throw the baby out with the bathwater.” His statement highlights the risk of simplification turning into deregulation.
At the European political level, the reactions are mixed. Social Democrats have promised to oppose any delay in the AI law, while centrists intend to remain firm against changes that would compromise privacy. Conversely, Germany, although traditionally committed to data protection, has pushed for some of these changes, especially concerning the GDPR. France calls for “targeted changes”, excluding any complete “reopening” of the regulation.
The Digital Omnibus initiative reveals the growing tension between two visions of European digital technology. On the one hand, proponents of competitiveness highlight Europe's technological lag behind the United States and China. The report by former President of the European Central Bank Mario Draghi, published in September 2024, pointed to the stagnation of European productivity, attributed in part to excessive regulation. Europe is investing heavily in AI but is struggling to transform these investments into global technological champions.
On the other hand, defenders of the European regulatory model recall that the GDPR and the AI Act constitute the foundation of a “human-centered” digital approach that respects fundamental rights. These texts have influenced global legislation and position Europe as a normative leader, even if it cannot compete technologically with American and Chinese giants.
Commissioner Henna Virkkunen says the amendments aim to “reduce red tape, duplication and complex rules” without compromising “high standards of fairness and security online.” She insists that the Commission remains “very committed to the main principles” of the AI law. Thomas Regnier, a Commission spokesperson for digital affairs, insisted that “the objective is not to lower the high standards of confidentiality that we guarantee to our citizens.”
Beyond legal considerations, the Digital Omnibus is part of a broader strategy of digital sovereignty. In April 2025, the Commission presented an AI action plan focused on the creation of “AI gigafactories” and aimed at tripling the capacity of European data centres over the next five to seven years. These massive investments require a regulatory framework that is attractive to businesses.
Tech companies, both European and American, have intensified their lobbying efforts. Apple, Meta Platforms, and other giants could save billions in compliance costs if these relaxations were adopted. The initial costs of complying with the AI Act were estimated between 6,000 and 7,000 euros for an average high-risk AI system, an amount considered prohibitive by many SMEs.
According to some sources, 64% of European AI startups are considering relocating their activities because of regulatory constraints. These figures fuel the discourse that Europe is depriving itself of its talents and innovations in favor of less regulated but more dynamic ecosystems.
The Digital Omnibus package still needs to go through several stages before it is finally adopted. After being presented by the Commission on 19 November 2025, the text will be submitted to the College of Commissioners for formal approval. It will then have to be debated and amended by the European Parliament and the Council of the European Union, representing the Member States.
This legislative process could take several months or even more than a year. The debates are set to be heated, as the positions diverge between Member States, political groups and stakeholders. The Commission hopes to finalize the entire package before the end of the current term of office, but this ambition could run up against the technical complexity of the subjects and political opposition.
The coming months will therefore be decisive for the future of European digital regulation. The Digital Omnibus could either become a balanced governance model, reconciling innovation and the protection of fundamental rights, or mark a significant setback in European regulatory ambition. The outcome of this debate will have repercussions far beyond the borders of the Union, potentially influencing global regulatory approaches to artificial intelligence.
The AI Act is the first regulation in the world specifically dedicated to the supervision of artificial intelligence, adopted by the European Union in March 2024. It classifies AI systems according to four levels of risk (unacceptable, high, limited, minimal) and imposes obligations proportionate to these risks. Systems that present unacceptable risks, such as social scoring or real-time biometric recognition in public spaces, are prohibited. High-risk systems must meet strict requirements for transparency, human oversight, and risk management. The regulation entered into force in August 2024, with full application expected by 2027.
The General Data Protection Regulation (GDPR) is the fundamental text for the protection of privacy in Europe, applicable since May 2018. It establishes extensive rights for citizens regarding their personal data: right of access, correction, deletion, portability and opposition. The GDPR requires companies to obtain the explicit consent of users to process their data, to limit the collection to what is strictly necessary, and to ensure the security of the information. With fines of up to 4% of global turnover, it has influenced data protection laws around the world and is a pillar of the European digital model.
Legitimate interest is one of the six legal bases allowing the processing of personal data according to the GDPR. Unlike consent, which requires explicit user action, legitimate interest allows a company to process data if it can demonstrate that such processing is necessary for its activities and that its interests do not disproportionately affect the rights and freedoms of the persons concerned. The Digital Omnibus proposes to expand the use of this legal basis for training AI models, which concerns privacy advocates because it would reduce users' control over their data.
The Digital Omnibus is a legislative simplification package presented by the European Commission on November 19, 2025. It aims to harmonize and alleviate several digital regulations adopted in recent years, including the AI Act, the GDPR, the ePrivacy Directive, and cybersecurity rules. The official objective is to reduce administrative burden by at least 25% for all businesses and 35% for SMEs, while maintaining high standards of protection. The package is part of a wider strategy of competitiveness against the United States and China, but raises controversy about its real impact on fundamental rights.