Executive Summary
Challenge: General-purpose AI (GPAI) models--including large language models, multimodal systems, and foundation models--face dedicated compliance obligations under EU AI Act Chapter V (Articles 51-55). Unlike high-risk AI system requirements in Chapter III, GPAI obligations apply to model providers regardless of downstream use cases. The GPAI Code of Practice, finalized July 10, 2025, establishes implementation standards across transparency, copyright, and safety with an enforcement grace period ending August 2, 2026.
Systemic Risk Dimension: GPAI models with systemic risk--those exceeding the 10^25 FLOPs computational threshold or designated by the AI Office--face additional obligations including adversarial testing, incident reporting, and cybersecurity measures. An estimated 5-15 companies worldwide currently qualify, with 28 signatories (including Amazon, Anthropic, Google, IBM, Microsoft, Mistral, and OpenAI) committed to Code of Practice implementation.
Resource: GPAISafeguards.com provides compliance frameworks for GPAI providers navigating Chapter V obligations, Code of Practice implementation, and systemic risk governance. Part of a complete portfolio spanning governance (SafeguardsAI.com), adversarial testing (AdversarialTesting.com), LLM-specific compliance (LLMSafeguards.com), foundation model governance (ModelSafeguards.com), and frontier AI/AGI safeguards (AgiSafeguards.com).
For: GPAI model providers, foundation model developers, AI safety teams, compliance officers at organizations deploying or distributing general-purpose AI models in or into the EU market.
GPAI Regulatory Framework: Chapter V Obligations
28 Signatories
GPAI Code of Practice -- Finalized July 10, 2025
The GPAI Code of Practice was developed through four iterative drafts (November 2024, December 2024, March 2025, final July 2025) with participation from major GPAI providers. Three chapters cover Transparency, Copyright, and Safety & Security. Enforcement grace period runs until August 2, 2026, after which the AI Office assumes supervisory authority.
GPAI Two-Tier Obligation Structure
Tier 1: All GPAI Models (Articles 52-53)
Transparency obligations: Technical documentation, training data summaries (sufficiently detailed for copyright compliance), downstream provider information
Copyright compliance: EU Copyright Directive Article 4(3) text and data mining opt-out respect, training data identification policies
Applies to: Every GPAI model placed on the EU market (open-source models without systemic risk have reduced transparency obligations)
Tier 2: Systemic Risk Models (Articles 51, 55)
Additional obligations: Model evaluation and adversarial testing, serious incident tracking and reporting, cybersecurity protections, energy consumption documentation
Threshold: Automatic designation at 10^25 FLOPs cumulative training compute, or AI Office designation based on capability assessment
Applies to: An estimated 5-15 companies worldwide, including frontier model developers
Code of Practice Role: Compliance with the GPAI Code of Practice creates a presumption of conformity with Chapter V obligations until harmonized standards are published. This makes the Code the de facto compliance pathway for GPAI providers through at least Q4 2026.
GPAI Compliance Framework Components
Chapter 1: Transparency
Documentation Requirements
Technical documentation covering model architecture, training methodology, evaluation results, and known limitations
Downstream Information
Sufficient information for deployers to understand capabilities, limitations, and appropriate use conditions
Synthetic Content Marking
Machine-readable identification of AI-generated content, including deepfake detection capabilities
Chapter 2: Copyright
TDM Opt-Out Compliance
Respect EU Copyright Directive Article 4(3) text and data mining reservations from rights holders
Training Data Summaries
"Sufficiently detailed" summaries enabling rights holders to identify potential use of their works
Industry Participation
Notable: Meta refused to sign the Code of Practice, while 28 other providers committed to copyright provisions
Chapter 3: Safety
Systemic Risk Models
Model evaluation protocols, adversarial testing programs, and incident reporting procedures for designated models
Cybersecurity Measures
Protection against model theft, unauthorized access, and adversarial manipulation of systemic risk models
Selective Adoption
xAI signed the Safety chapter only, demonstrating modular commitment to Code provisions
Strategic Value: GPAI compliance vocabulary bridges foundation model development and regulatory requirements. "Safeguards" appears 40+ times across the EU AI Act while "guardrails" appears zero times in binding provisions--the statutory language gap that defines this portfolio's value.
GPAI Compliance Guides & Analysis
In-depth analysis of GPAI obligations, Code of Practice implementation, and systemic risk governance
Article 53 Adversarial Testing:
GPAI Systemic Risk Requirements
GPAI models with systemic risk face explicit adversarial testing obligations under Article 53. Framework for designing testing programs that satisfy regulatory requirements while identifying model vulnerabilities.
Explore Testing Frameworks
LLM-Specific GPAI Compliance:
Chapter V for Language Models
Large language models represent the largest category of GPAI models. Targeted compliance guidance for transparency documentation, output marking, and systemic risk evaluation specific to LLMs.
View LLM Compliance
Foundation Model Governance:
Pre-Deployment Safeguards
Model providers must implement safeguards before placing GPAI models on the EU market. Comprehensive pre-deployment governance covering evaluation, documentation, and risk assessment.
Access Model Governance
Two-Layer AI Governance
Architecture Framework
Understanding the complementary relationship between governance layer ("safeguards" = regulatory compliance) and implementation layer ("controls" = technical mechanisms). ISO 42001 as the bridge.
Access Framework
GPAI Compliance Ecosystem
Framework demonstration: The following ecosystem overview illustrates GPAI compliance requirements, Code of Practice obligations, and the relationship between transparency, copyright, and safety chapters across the provider landscape.
GPAI Code of Practice: Signatory Landscape
The finalized Code of Practice (July 10, 2025) represents the primary compliance pathway for GPAI providers until harmonized standards are published. As of the European Commission's page update on February 2, 2026, the 28 signatories are confirmed frozen -- no new organizations have joined since August 2025 publication (8 months of stagnation). Understanding signatory commitments reveals the competitive compliance landscape:
| Category |
Providers |
Commitment |
| Full Signatories | Amazon, Anthropic, Google, IBM, Microsoft, Mistral, OpenAI, and 21 others | All three chapters (Transparency, Copyright, Safety) |
| Partial Signatories | xAI | Safety chapter only |
| Non-Signatories | Meta (Joel Kaplan: "legal uncertainties" beyond AI Act scope) | Refused to sign; Commission states non-signatories "may face increased regulatory oversight" |
| Not Participating | Chinese GPAI providers (Baidu, Alibaba, ByteDance, DeepSeek) | No engagement with Code process |
Signatory Taskforce: The first constitutive meeting took place January 30, 2026, chaired by the AI Office. The Taskforce adopted its Vademecum (rules of procedure and member list) by consensus and held initial discussions on open-source AI matters. Its mandate covers coherent Code application, input on AI Office guidance, technology developments, and third-party stakeholder engagement -- modeled after the Permanent Taskforce under the DSA Code of Conduct on Disinformation.
Chapter V Article Structure
Article 51: Classification
Focus: Defining GPAI models and systemic risk designation
- GPAI model definition and scope
- Systemic risk automatic threshold (10^25 FLOPs)
- AI Office designation authority
- Capability-based assessment criteria
Article 52: Transparency
Focus: Documentation and information requirements for all GPAI
- Technical documentation standards
- Training data summary requirements
- Downstream provider information
- Synthetic content identification
Article 53: Systemic Risk Obligations
Focus: Additional requirements for high-capability models
- Model evaluation and testing protocols
- Adversarial testing programs
- Serious incident tracking and reporting
- Cybersecurity and model protection
Related: AdversarialTesting.com provides detailed Article 53 testing frameworks
Articles 54-55: Governance
Focus: AI Office oversight and enforcement mechanisms
- AI Office supervisory authority
- Code of Practice development process
- EU SEND platform for model documentation, systemic risk notifications, and serious incident reports
- Scientific Panel of independent experts with "qualified alert" authority (even during grace period)
- Enforcement grace period (until Aug 2, 2026)
Systemic Risk: Identification and Obligations
GPAI models are classified as presenting systemic risk through two pathways: automatic designation when cumulative training compute exceeds 10^25 floating point operations, or AI Office designation based on capability assessment. Current estimates suggest 5-15 companies worldwide meet or will meet these thresholds.
| Obligation |
Requirement |
Timeline |
| Model Evaluation | Standardized evaluation protocols assessing model capabilities, limitations, and potential for misuse | Ongoing (pre- and post-deployment) |
| Adversarial Testing | Red-teaming and stress-testing programs identifying vulnerabilities and harmful outputs | Pre-deployment + periodic |
| Incident Reporting | Tracking and reporting serious incidents to the AI Office without undue delay | Real-time monitoring required |
| Cybersecurity | Protection of model weights, training infrastructure, and inference systems against unauthorized access | Continuous |
| Energy Documentation | Reporting training and inference energy consumption for systemic risk models | Annual reporting |
| Penalty Exposure | Up to EUR 15M or 3% of global turnover for GPAI violations; EUR 35M or 7% for prohibited AI practices | Post August 2, 2026 |
GPAI Compliance Readiness Assessment
Evaluate your organization's preparedness for EU AI Act Chapter V obligations. This assessment covers GPAI-specific requirements including Code of Practice alignment, transparency documentation, copyright compliance, and systemic risk preparedness.
About This Resource
GPAI Safeguards provides comprehensive compliance guidance for general-purpose AI model providers navigating EU AI Act Chapter V obligations. The resource emphasizes the two-layer architecture where governance layer ("safeguards" = regulatory compliance) sits above implementation layer ("controls/guardrails" = technical mechanisms), with the GPAI Code of Practice serving as the primary compliance pathway until harmonized standards are published.
Complete Portfolio Framework: Complementary Vocabulary Tracks
Strategic Positioning: This portfolio provides comprehensive EU AI Act statutory terminology coverage across complementary domains, addressing different organizational functions and regulatory pathways. Veeam's Q4 2025 acquisition of Securiti AI for $1.725B -- the largest AI governance acquisition ever -- and F5's September 2025 acquisition of CalypsoAI for $180M cash (4x funding) validate enterprise AI governance valuations. Half of the top four AI governance vendors changed ownership in a single quarter.
| Domain |
Statutory Focus |
EU AI Act Mentions |
Target Audience |
| SafeguardsAI.com | Fundamental rights protection | 40+ mentions | CCOs, Board, compliance teams |
| ModelSafeguards.com | Foundation model governance | GPAI Articles 51-55 | Foundation model developers |
| MLSafeguards.com | ML-specific safeguards | Technical ML compliance | ML engineers, data scientists |
| HumanOversight.com | Operational deployment (Article 14) | 47 mentions | Deployers, operations teams |
| MitigationAI.com | Technical implementation (Article 9) | 15-20 mentions | Providers, CTOs, engineering teams |
| AdversarialTesting.com | Intentional attack validation (Article 53) | Explicit GPAI requirement | GPAI providers, AI safety teams |
| RisksAI.com + DeRiskingAI.com | Risk identification and analysis (Article 9.2) | Article 9.2 + ISO A.12.1 | Risk management, financial services |
| LLMSafeguards.com | LLM/GPAI-specific compliance | Articles 51-55 | Foundation model developers |
| AgiSafeguards.com + AGIalign.com | Article 53 systemic risk + AGI alignment | Advanced system governance | AI labs, research organizations |
| CertifiedML.com | Pre-market conformity assessment | Article 43 (47 mentions) | Certification bodies, model providers |
| HiresAI.com | HR AI/Employment (Annex III high-risk) | Annex III Section 4 | HR tech vendors, enterprise HR |
| HealthcareAISafeguards.com | Healthcare AI (HIPAA vertical) | HIPAA + EU AI Act | Healthcare organizations, MedTech |
| HighRiskAISystems.com | Article 6 High-Risk classification | 100+ mentions | High-risk AI providers |
Why Complementary Layers Matter: Organizations need different terminology for different functions. Vendors sell "guardrails" products (technical implementation) that provide "safeguards" benefits (regulatory compliance)--these are complementary layers, not competing terminologies.
Portfolio Value: Complete statutory terminology alignment across 156 domains + 11 USPTO trademark applications = Category-defining regulatory compliance vocabulary for AI governance.
Note: This strategic resource demonstrates GPAI compliance positioning within the AI governance framework. Content provided for evaluation purposes--implementation direction determined by resource owner. Not affiliated with specific GPAI providers or the AI Office. Code of Practice references reflect finalized provisions as of July 2025.