EU AI Act Chapter V Compliance Resource

GPAI Safeguards

General-Purpose AI Compliance, Systemic Risk Governance & Code of Practice Implementation

Vendor-neutral analysis, compliance frameworks, and assessment tools for GPAI providers subject to EU AI Act Chapter V obligations

EU AI Act Chapter V | GPAI Code of Practice | Systemic Risk Evaluation | Articles 51-55 | AI Office Oversight

Strategic Safeguards Portfolio

11 USPTO Trademark Applications | 156-Domain Portfolio

USPTO Trademark Applications Filed

SAFEGUARDS AI 99452898
AI SAFEGUARDS 99528930
MODEL SAFEGUARDS 99511725
ML SAFEGUARDS 99544226
LLM SAFEGUARDS 99462229
AGI SAFEGUARDS 99462240
GPAI SAFEGUARDS 99541759
MITIGATION AI 99503318
HIRES AI 99528939
HEALTHCARE AI SAFEGUARDS 99521639
HUMAN OVERSIGHT 99503437

156-Domain Portfolio -- 30 Lead Domains

Executive Summary

Challenge: General-purpose AI (GPAI) models--including large language models, multimodal systems, and foundation models--face dedicated compliance obligations under EU AI Act Chapter V (Articles 51-55). Unlike high-risk AI system requirements in Chapter III, GPAI obligations apply to model providers regardless of downstream use cases. The GPAI Code of Practice, finalized July 10, 2025, establishes implementation standards across transparency, copyright, and safety with an enforcement grace period ending August 2, 2026.

Systemic Risk Dimension: GPAI models with systemic risk--those exceeding the 10^25 FLOPs computational threshold or designated by the AI Office--face additional obligations including adversarial testing, incident reporting, and cybersecurity measures. An estimated 5-15 companies worldwide currently meet these criteria. Separately, 28 providers (including Amazon, Anthropic, Google, IBM, Microsoft, Mistral, and OpenAI) have signed the Code of Practice and committed to its implementation.

Resource: GPAISafeguards.com provides compliance frameworks for GPAI providers navigating Chapter V obligations, Code of Practice implementation, and systemic risk governance. Part of a complete portfolio spanning governance (SafeguardsAI.com), adversarial testing (AdversarialTesting.com), LLM-specific compliance (LLMSafeguards.com), foundation model governance (ModelSafeguards.com), and frontier AI/AGI safeguards (AgiSafeguards.com).

For: GPAI model providers, foundation model developers, AI safety teams, compliance officers at organizations deploying or distributing general-purpose AI models in or into the EU market.

GPAI Regulatory Framework: Chapter V Obligations

GPAI Code of Practice -- Finalized July 10, 2025 (28 Signatories)

The GPAI Code of Practice was developed through four iterative drafts (November 2024, December 2024, March 2025, final July 2025) with participation from major GPAI providers. Three chapters cover Transparency, Copyright, and Safety & Security. Enforcement grace period runs until August 2, 2026, after which the AI Office assumes supervisory authority.

GPAI Two-Tier Obligation Structure

Tier 1: All GPAI Models (Article 53)

Transparency obligations: Technical documentation, training data summaries (sufficiently detailed for copyright compliance), downstream provider information

Copyright compliance: EU Copyright Directive Article 4(3) text and data mining opt-out respect, training data identification policies

Applies to: Every GPAI model placed on the EU market (open-source models without systemic risk have reduced transparency obligations)

Tier 2: Systemic Risk Models (Articles 51, 55)

Additional obligations: Model evaluation and adversarial testing, serious incident tracking and reporting, cybersecurity protections, energy consumption documentation

Threshold: Automatic designation at 10^25 FLOPs cumulative training compute, or AI Office designation based on capability assessment (a rough compute estimate is sketched after this list)

Applies to: An estimated 5-15 companies worldwide, including frontier model developers
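
To see where the 10^25 FLOPs threshold sits relative to a given training run, compute can be estimated from model size and token count. A minimal sketch using the common 6 x parameters x tokens approximation for dense transformers -- a community heuristic, not a calculation method prescribed by the Act or the AI Office; the model figures are hypothetical:

```python
# Rough check of cumulative training compute against the Article 51
# threshold. The 6*N*D estimate is a community heuristic for dense
# transformers, not an officially mandated calculation method.

ARTICLE_51_THRESHOLD_FLOPS = 1e25  # automatic systemic-risk designation

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical frontier-scale run: 1.8e12 parameters, 1.5e13 tokens
flops = estimate_training_flops(1.8e12, 1.5e13)
print(f"Estimated compute: {flops:.2e} FLOPs")   # ~1.62e26
print(f"Above Article 51 threshold: {flops >= ARTICLE_51_THRESHOLD_FLOPS}")
```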

Code of Practice Role: Adherence to the GPAI Code of Practice gives providers a recognized means of demonstrating compliance with Chapter V obligations until harmonized standards are published. This makes the Code the de facto compliance pathway for GPAI providers through at least Q4 2026.

GPAI Compliance Framework Components

Chapter 1: Transparency

Documentation Requirements

Technical documentation covering model architecture, training methodology, evaluation results, and known limitations

Downstream Information

Sufficient information for deployers to understand capabilities, limitations, and appropriate use conditions

Synthetic Content Marking

Machine-readable identification of AI-generated content, including disclosure of deepfakes
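
As an illustration of what machine-readable marking can look like at the application layer, here is a minimal sketch that wraps generated text in a provenance record. The field names are illustrative assumptions, not a mandated schema; production systems would more likely emit an established format such as a C2PA manifest:

```python
# Minimal provenance-marking sketch. Field names are illustrative only;
# they do not reflect a schema mandated by the AI Act or the Code of Practice.
import json
from datetime import datetime, timezone

def mark_synthetic(content: str, model_id: str) -> str:
    """Attach a machine-readable 'AI-generated' provenance record."""
    record = {
        "content": content,
        "provenance": {
            "ai_generated": True,          # machine-readable disclosure flag
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record, indent=2)

print(mark_synthetic("Example model output.", "example-gpai-v1"))
```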

Chapter 2: Copyright

TDM Opt-Out Compliance

Respect EU Copyright Directive Article 4(3) text and data mining reservations from rights holders
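
A pre-ingestion check might combine robots.txt with the W3C TDM Reservation Protocol's site-wide /.well-known/tdmrep.json file. A hedged sketch under those assumptions; a production pipeline would also need to honor per-page metadata, HTTP headers, and other reservation channels:

```python
# Sketch of a pre-ingestion opt-out check: robots.txt plus the W3C TDM
# Reservation Protocol (tdmrep). Not exhaustive -- opt-outs can also be
# expressed via HTTP headers and in-page metadata.
import json
import urllib.request
import urllib.robotparser
from urllib.parse import urlsplit

def robots_allows(url: str, user_agent: str = "example-tdm-crawler") -> bool:
    """Check the origin's robots.txt for crawl permission."""
    parts = urlsplit(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, url)

def tdm_reserved(origin: str) -> bool:
    """Look for a site-wide TDM reservation at /.well-known/tdmrep.json."""
    try:
        with urllib.request.urlopen(f"{origin}/.well-known/tdmrep.json") as resp:
            policies = json.load(resp)
        return any(entry.get("tdm-reservation") == 1 for entry in policies)
    except Exception:
        return False  # no machine-readable site-wide reservation found

url = "https://example.com/article"
if robots_allows(url) and not tdm_reserved("https://example.com"):
    print("No machine-readable opt-out detected; further rights checks apply")
else:
    print("Rights reservation detected; exclude from TDM corpus")
```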

Training Data Summaries

"Sufficiently detailed" summaries enabling rights holders to identify potential use of their works

Industry Participation

Notable: Meta refused to sign the Code of Practice, while 28 other providers committed to its copyright provisions

Chapter 3: Safety

Systemic Risk Models

Model evaluation protocols, adversarial testing programs, and incident reporting procedures for designated models

Cybersecurity Measures

Protection against model theft, unauthorized access, and adversarial manipulation of systemic risk models

Selective Adoption

xAI signed only the Safety & Security chapter, illustrating that providers can adopt the Code chapter by chapter

Strategic Value: GPAI compliance vocabulary bridges foundation model development and regulatory requirements. "Safeguards" appears 40+ times across the EU AI Act while "guardrails" appears zero times in binding provisions--the statutory language gap that defines this portfolio's value.

GPAI Compliance Ecosystem

Framework demonstration: The following ecosystem overview illustrates GPAI compliance requirements, Code of Practice obligations, and the relationship between transparency, copyright, and safety chapters across the provider landscape.

GPAI Code of Practice: Signatory Landscape

The finalized Code of Practice (July 10, 2025) represents the primary compliance pathway for GPAI providers until harmonized standards are published. As of the European Commission's page update on February 2, 2026, the signatory list remains unchanged at 28 -- no new organizations have joined in the six months since the August 2025 publication. Understanding signatory commitments reveals the competitive compliance landscape:

Category | Providers | Commitment
Full Signatories | Amazon, Anthropic, Google, IBM, Microsoft, Mistral, OpenAI, and 21 others | All three chapters (Transparency, Copyright, Safety)
Partial Signatories | xAI | Safety chapter only
Non-Signatories | Meta (Joel Kaplan: "legal uncertainties" beyond AI Act scope) | Refused to sign; Commission states non-signatories "may face increased regulatory oversight"
Not Participating | Chinese GPAI providers (Baidu, Alibaba, ByteDance, DeepSeek) | No engagement with Code process

Signatory Taskforce: The first constitutive meeting took place January 30, 2026, chaired by the AI Office. The Taskforce adopted its Vademecum (rules of procedure and member list) by consensus and held initial discussions on open-source AI matters. Its mandate covers coherent Code application, input on AI Office guidance, technology developments, and third-party stakeholder engagement -- modeled after the Permanent Taskforce under the DSA Code of Conduct on Disinformation.

Chapter V Article Structure

Article 51: Classification

Focus: Defining GPAI models and systemic risk designation

  • GPAI model definition and scope
  • Systemic risk automatic threshold (10^25 FLOPs)
  • AI Office designation authority
  • Capability-based assessment criteria

Article 53: Transparency

Focus: Documentation and information requirements for all GPAI

  • Technical documentation standards
  • Training data summary requirements
  • Downstream provider information
  • Synthetic content identification

Article 55: Systemic Risk Obligations

Focus: Additional requirements for high-capability models

  • Model evaluation and testing protocols
  • Adversarial testing programs
  • Serious incident tracking and reporting
  • Cybersecurity and model protection

Related: AdversarialTesting.com provides detailed Article 55 testing frameworks

Articles 52, 54 & 56: Procedure and Governance

Focus: Classification procedure, authorized representation, and AI Office oversight and enforcement mechanisms

  • AI Office supervisory authority
  • Code of Practice development process
  • EU SEND platform for model documentation, systemic risk notifications, and serious incident reports
  • Scientific Panel of independent experts with "qualified alert" authority (even during grace period)
  • Enforcement grace period (until Aug 2, 2026)

Systemic Risk: Identification and Obligations

GPAI models are classified as presenting systemic risk through two pathways: automatic designation when cumulative training compute exceeds 10^25 floating point operations, or AI Office designation based on capability assessment. Current estimates suggest 5-15 companies worldwide meet or will meet these thresholds.
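
In compliance tooling, the two pathways reduce to a simple disjunction, sketched below with illustrative field names and hypothetical models:

```python
# The two Article 51 classification pathways as a simple disjunction.
# Model names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class GpaiModel:
    name: str
    cumulative_training_flops: float
    ai_office_designated: bool = False  # capability-based designation

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def has_systemic_risk(model: GpaiModel) -> bool:
    """Either pathway triggers Tier 2 (Article 55) obligations."""
    return (model.cumulative_training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
            or model.ai_office_designated)

models = [
    GpaiModel("hypothetical-frontier", 3e25),                            # over threshold
    GpaiModel("hypothetical-capable", 8e24, ai_office_designated=True),  # designated
    GpaiModel("hypothetical-small", 5e23),                               # out of scope
]
for m in models:
    print(f"{m.name}: systemic risk = {has_systemic_risk(m)}")
```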

Obligation | Requirement | Timeline
Model Evaluation | Standardized evaluation protocols assessing model capabilities, limitations, and potential for misuse | Ongoing (pre- and post-deployment)
Adversarial Testing | Red-teaming and stress-testing programs identifying vulnerabilities and harmful outputs | Pre-deployment + periodic
Incident Reporting | Tracking and reporting serious incidents to the AI Office without undue delay | Real-time monitoring required
Cybersecurity | Protection of model weights, training infrastructure, and inference systems against unauthorized access | Continuous
Energy Documentation | Reporting training and inference energy consumption for systemic risk models | Annual reporting
Penalty Exposure | Up to EUR 15M or 3% of global turnover for GPAI violations; EUR 35M or 7% for prohibited AI practices | Post August 2, 2026
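
For the incident-reporting row above, a provider-side tracking record might capture at minimum the fields below. The schema is an illustrative assumption; official AI Office reporting templates take precedence:

```python
# Illustrative serious-incident record for provider-side tracking.
# Field set is an assumption for demonstration, not an official template.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SeriousIncident:
    incident_id: str
    model_id: str
    description: str
    detected_at: datetime
    harm_category: str                   # e.g. "safety", "fundamental-rights"
    corrective_measures: list[str] = field(default_factory=list)
    reported_to_ai_office: bool = False  # report without undue delay

incident = SeriousIncident(
    incident_id="INC-0001",
    model_id="example-gpai-v1",
    description="Output facilitated circumvention of safety mitigations.",
    detected_at=datetime.now(timezone.utc),
    harm_category="safety",
)
incident.corrective_measures.append("Deployed updated refusal policy")
incident.reported_to_ai_office = True
print(incident)
```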

GPAI Compliance Readiness Assessment

Evaluate your organization's preparedness for EU AI Act Chapter V obligations. This assessment covers GPAI-specific requirements including Code of Practice alignment, transparency documentation, copyright compliance, and systemic risk preparedness.
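
A minimal sketch of how such a checklist can be scored; the questions and equal weighting are illustrative assumptions, not the assessment's actual methodology:

```python
# Toy readiness scorer over a Chapter V checklist. Items and weighting
# are illustrative; they do not reproduce the assessment's methodology.
CHECKLIST = [
    "Technical documentation maintained (Article 53 / Annex XI)",
    "Sufficiently detailed training data summary published",
    "TDM opt-out (copyright) policy implemented",
    "Downstream provider information package available",
    "Systemic-risk applicability assessed (10^25 FLOPs / designation)",
    "Adversarial testing and incident reporting in place (if Tier 2)",
]

def readiness_score(answers: dict[str, bool]) -> float:
    """Percentage of checklist items answered 'yes'."""
    return 100.0 * sum(answers[item] for item in CHECKLIST) / len(CHECKLIST)

answers = {item: False for item in CHECKLIST}
answers["Sufficiently detailed training data summary published"] = True
answers["TDM opt-out (copyright) policy implemented"] = True
print(f"Readiness: {readiness_score(answers):.0f}%")
```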


About This Resource

GPAI Safeguards provides comprehensive compliance guidance for general-purpose AI model providers navigating EU AI Act Chapter V obligations. The resource emphasizes the two-layer architecture where governance layer ("safeguards" = regulatory compliance) sits above implementation layer ("controls/guardrails" = technical mechanisms), with the GPAI Code of Practice serving as the primary compliance pathway until harmonized standards are published.

Complete Portfolio Framework: Complementary Vocabulary Tracks

Strategic Positioning: This portfolio provides comprehensive EU AI Act statutory terminology coverage across complementary domains, addressing different organizational functions and regulatory pathways. Veeam's Q4 2025 acquisition of Securiti AI for $1.725B -- the largest AI governance acquisition to date -- and F5's September 2025 acquisition of CalypsoAI for $180M in cash (roughly four times its venture funding) validate enterprise AI governance valuations. Two of the top four AI governance vendors changed ownership in a single quarter.

Domain | Statutory Focus | EU AI Act Mentions | Target Audience
SafeguardsAI.com | Fundamental rights protection | 40+ mentions | CCOs, Board, compliance teams
ModelSafeguards.com | Foundation model governance | GPAI Articles 51-55 | Foundation model developers
MLSafeguards.com | ML-specific safeguards | Technical ML compliance | ML engineers, data scientists
HumanOversight.com | Operational deployment (Article 14) | 47 mentions | Deployers, operations teams
MitigationAI.com | Technical implementation (Article 9) | 15-20 mentions | Providers, CTOs, engineering teams
AdversarialTesting.com | Intentional attack validation (Article 55) | Explicit GPAI requirement | GPAI providers, AI safety teams
RisksAI.com + DeRiskingAI.com | Risk identification and analysis (Article 9.2) | Article 9.2 + ISO A.12.1 | Risk management, financial services
LLMSafeguards.com | LLM/GPAI-specific compliance | Articles 51-55 | Foundation model developers
AgiSafeguards.com + AGIalign.com | Article 55 systemic risk + AGI alignment | Advanced system governance | AI labs, research organizations
CertifiedML.com | Pre-market conformity assessment | Article 43 (47 mentions) | Certification bodies, model providers
HiresAI.com | HR AI/Employment (Annex III high-risk) | Annex III Section 4 | HR tech vendors, enterprise HR
HealthcareAISafeguards.com | Healthcare AI (HIPAA vertical) | HIPAA + EU AI Act | Healthcare organizations, MedTech
HighRiskAISystems.com | Article 6 High-Risk classification | 100+ mentions | High-risk AI providers

Why Complementary Layers Matter: Organizations need different terminology for different functions. Vendors sell "guardrails" products (technical implementation) that provide "safeguards" benefits (regulatory compliance)--these are complementary layers, not competing terminologies.

Portfolio Value: Complete statutory terminology alignment across 156 domains + 11 USPTO trademark applications = Category-defining regulatory compliance vocabulary for AI governance.

Note: This strategic resource demonstrates GPAI compliance positioning within the AI governance framework. Content provided for evaluation purposes--implementation direction determined by resource owner. Not affiliated with specific GPAI providers or the AI Office. Code of Practice references reflect finalized provisions as of July 2025.