
Brussels’ Quiet Power Grab

How a “Voluntary” EU AI Code Turned a Small Office into a Global Regulator



Amid American cries of regulatory overreach, here’s the story almost nobody is watching: the most consequential AI rulemaking of 2025 did not come from a parliament or a court. It came from a PDF, a template, and a sign-up sheet run by the European AI Office.


On July 10, the European Commission published a General-Purpose AI (GPAI) Code of Practice, a voluntary document that tells foundation-model providers exactly how to meet the EU AI Act’s new transparency, copyright, and (for the biggest models) safety obligations. Two weeks later, the Commission confirmed the core GPAI rules kick in August 2, 2025 (with legacy models given until August 2, 2027), and that the Code is an “adequate voluntary tool” to prove compliance. Translation: sign the code, follow the forms, and you’re on the right side of EU law.

Then came the masterstroke: on July 24, Brussels released a mandatory public template, a structured form that requires model providers to summarise the training data behind their systems. In one move, the AI Office turned the industry’s most ferocious trade secret into a standardised disclosure, pre-formatted for regulators, journalists, and rights-holders. Even law firms advising IP owners are already telling clients to mine those summaries for infringement leads. This is “soft law” with hard edges.

Meanwhile, the AI Office, a directorate-level team created by Commission decision in early 2024, has parlayed this “voluntary + template” combo into real leverage. It set the computational yardsticks that now shape the market: most GPAI models are defined (for these purposes) at ≥10^23 FLOP, while the heaviest “systemic-risk” class starts at ≥10^25 FLOP with extra safety duties. And crucially, while the rules begin applying now, Commission-led enforcement doesn’t bite until August 2, 2026, which gives the Office a year to socialize norms, iterate the forms, and make compliance by code feel inevitable.
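Those bright-line compute thresholds amount to a two-cut classifier. A minimal sketch, assuming the ≥10^23 and ≥10^25 FLOP lines from the Commission’s guidance; the tier labels and function below are illustrative, not official terminology:

```python
# Illustrative sketch of the AI Act's compute-based tiers.
# The two thresholds come from the Commission's GPAI guidance;
# the tier names and this function are hypothetical illustrations.

GPAI_THRESHOLD = 1e23            # training compute (FLOP) at which a model is presumed GPAI
SYSTEMIC_RISK_THRESHOLD = 1e25   # cutoff for the "systemic risk" tier with extra safety duties

def classify_model(training_flop: float) -> str:
    """Map a model's total training compute to its (illustrative) regulatory tier."""
    if training_flop >= SYSTEMIC_RISK_THRESHOLD:
        return "GPAI with systemic risk"   # safety chapter applies on top of everything else
    if training_flop >= GPAI_THRESHOLD:
        return "GPAI"                      # transparency and copyright duties apply
    return "below GPAI presumption"

# A frontier-scale run lands in the heaviest tier; a mid-size model
# is plain GPAI; a small model falls below the presumption entirely.
print(classify_model(5e25))  # GPAI with systemic risk
print(classify_model(3e23))  # GPAI
print(classify_model(1e22))  # below GPAI presumption
```

The simplicity is the point: a single number decides a model’s tier, which is exactly why the "metric capture" worry below has teeth.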


If you think that’s just clerical, consider who lined up to sign. The Commission’s signatory list includes OpenAI, Anthropic, Google, Microsoft, Amazon, Mistral, and more; even xAI opted into the safety chapter. When the world’s largest labs agree to a Brussels-written playbook, that playbook becomes a global standard—even outside the EU. And the Commission pointedly refused industry calls for a delay, brushing off lobbying from both U.S. giants and European blue-chips and sticking to the 2025–2026 timetable. That’s regulatory statecraft worthy of a spy novel—quiet, procedural, and devastatingly effective.


Hot take: This is not just tech policy; it’s a constitutional moment for administrative power. By elevating a code, a template, and a timeline memo, Brussels has invented a new mode of governance: compliance-by-document, exported at scale through market access. Legislators passed the broad AI Act, yes—but the AI Office now writes the de facto rulebook that companies must actually follow, complete with model-documentation forms and training-data summaries that will shape litigation, research, and product design. In effect, the EU has created a shadow standards body inside the executive, with instant uptake because every major lab needs to keep selling in Europe. That’s democratic delegation if you’re generous; it’s rulemaking by administrative accretion if you’re not. 


Why it matters (beyond AI nerdom):

  • Rights-holder politics flip overnight. With public training-data summaries, collective management organisations and publishers gain a roadmap for enforcement. Expect targeted claims, not fishing expeditions. The template is a discovery device disguised as transparency. 

  • Open-source risk becomes a policy choke point. The guidelines gesture at transparency-based carve-outs, but once public summaries exist, smaller/open projects face asymmetric legal exposure compared to incumbents with armies of licensing lawyers. That chills innovation at the edges. (Inference from the Commission’s template + legal commentary.) 

  • Metric capture is real. FLOP thresholds are simple to administer—and dangerously sticky. They privilege compute-rich players while under-capturing misuse pathways that don’t scale with FLOPs (e.g., fine-tuned smaller models). Once embedded in guidance and forms, these numbers become policy gravity wells. 

  • Extra-territoriality by queueing theory. Because Commission-led enforcement doesn’t begin until August 2026, early signatories gain smoother passage now, creating a soft lock-in. Firms that refuse the code will still need an alternative compliance story—expensive, uncertain, and slower. That’s how a “voluntary” scheme becomes mandatory in practice. 

High stakes, real penalties. When the binding parts arrive, non-compliance risks fines of up to €35m or 7% of global annual revenue, whichever is higher, under the AI Act. That’s GDPR-scale bite backing the Office’s velvet-glove phase. 



Policy corrections:

  • Sunset the template unless Parliament re-authorises it. If these disclosures are going to drive the next decade of IP and safety politics, the democratic branch—not just the Office—should periodically revisit scope, granularity, and exemptions. 


  • Replace blunt FLOP gates with layered tests that track capabilities and deployment context, not just compute spent training. Preserve simple proxies for speed, but force regular recalibration to avoid entrenching incumbents. 


  • Codify a small-lab/open-model safe harbour: if you meet transparency duties and publish eval results, you get procedural shields (e.g., cure periods before fines). Keep the experimental fringe alive; most AI breakthroughs start there. (Policy proposal; consistent with the Commission’s stated transparency focus.) 


The headline, if you’re outside Brussels: the AI Office just showed how to govern powerful technology without passing another law—standardise documentation, set bright-line thresholds, and make “voluntary” pathways the least-cost route to legality. Whether you cheer or worry, don’t underestimate the politics: Europe has rediscovered how to wield bureaucratic soft power—and the world’s biggest AI labs are already playing by its rules.


- Michael Matteson, Economics, Trade, Cybersecurity Member Researcher @ ISYPO

(vetted by ISYO Exec and ETC Heads)

 
 
 
