EU AI Act High-Risk System Classification: Immediate Identification Requirements for Corporate Legal and HR Operations
Intro
The EU AI Act establishes a risk-based regulatory framework requiring immediate identification of high-risk AI systems. For corporate legal and HR operations, this specifically targets automated or semi-automated systems used in employment, worker management, and access to self-employment. Systems processing employee data through CRM platforms like Salesforce with integrated AI components for screening, evaluation, or decision support fall under Annex III high-risk classification. Identification is not optional: it triggers mandatory conformity assessment, technical documentation, human oversight, and accuracy requirements, with phased enforcement beginning in 2025 and most high-risk obligations applying from August 2026.
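As a rough first-pass illustration (not a legal test), the initial screening question can be reduced to a simple check: does the system produce automated outputs about workers or candidates that feed employment-related decisions? The Python sketch below uses hypothetical field names and purpose labels; the actual classification decision belongs with counsel.

```python
from dataclasses import dataclass

# Hypothetical, simplified screening check against the Annex III
# "employment, workers management and access to self-employment" category.
# Field names and criteria are illustrative, not a legal test.

EMPLOYMENT_PURPOSES = {
    "recruitment_screening",
    "candidate_ranking",
    "performance_evaluation",
    "promotion_decision",
    "task_allocation",
    "termination_decision",
}

@dataclass
class SystemProfile:
    name: str
    processes_personal_data: bool   # data about identifiable workers or candidates
    automated_output: bool          # scores, rankings, recommendations, decisions
    purposes: set[str]              # business purposes the output feeds into

def is_candidate_high_risk(profile: SystemProfile) -> bool:
    """Flag a system for formal legal review under the Annex III employment category."""
    return (
        profile.processes_personal_data
        and profile.automated_output
        and bool(profile.purposes & EMPLOYMENT_PURPOSES)
    )

# Example: a CRM scoring model used to rank job applicants
crm_scoring = SystemProfile(
    name="CRM applicant scoring",
    processes_personal_data=True,
    automated_output=True,
    purposes={"candidate_ranking"},
)
print(is_candidate_high_risk(crm_scoring))  # True -> escalate to legal review
```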
Why this matters
Failure to properly identify high-risk systems creates immediate legal exposure. Unclassified systems operating without required conformity assessments face enforcement actions including orders to withdraw systems from the market, corrective measures, and administrative fines of up to EUR 15 million or 3% of worldwide annual turnover for breaches of high-risk obligations. For global enterprises, this creates market access risk across EU/EEA jurisdictions. Operationally, retroactive classification requires system redesign, documentation creation, and validation processes that disrupt business continuity. Commercially, non-compliance can trigger contract violations with EU-based clients and partners, while public enforcement actions damage brand reputation in regulated markets.
Where this usually breaks
Identification failures typically occur in three areas: CRM workflow automation that uses scoring algorithms for candidate screening or performance evaluation; third-party AI services integrated via API connections to HR platforms; and legacy systems where AI components were added incrementally without governance tracking. Specific failure points include Salesforce Einstein scoring models for recruitment, Workday predictive analytics for promotion pathways, SAP SuccessFactors talent intelligence modules, and custom-built scoring engines integrated through middleware. Systems using natural language processing for CV analysis or sentiment analysis of employee feedback often lack proper classification despite meeting high-risk criteria.
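One way to surface these hidden components is a lightweight scan over whatever integration inventory already exists (middleware configs, admin exports). The sketch below assumes a hypothetical JSON manifest format and keyword lists; it is a triage aid for finding candidates to review, not a classification.

```python
import json
from pathlib import Path

# Hypothetical manifest format: one JSON file per platform, listing integrations
# exported from middleware or admin configs. Structure is illustrative only.
#
# Example entry:
# {"platform": "Salesforce", "component": "Einstein Lead Scoring",
#  "data_domains": ["candidate"], "function": "ranking"}

FLAGGED_FUNCTIONS = {"scoring", "ranking", "screening", "sentiment", "prediction"}
FLAGGED_DOMAINS = {"candidate", "employee", "worker"}

def scan_manifests(manifest_dir: str) -> list[dict]:
    """Return integration entries that combine worker/candidate data with ML-style functions."""
    flagged = []
    for path in Path(manifest_dir).glob("*.json"):
        for entry in json.loads(path.read_text()):
            domains = set(entry.get("data_domains", []))
            if entry.get("function") in FLAGGED_FUNCTIONS and domains & FLAGGED_DOMAINS:
                flagged.append(entry)
    return flagged

if __name__ == "__main__":
    for hit in scan_manifests("./integration_manifests"):
        print(f"Review: {hit['platform']} / {hit['component']} ({hit['function']})")
```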
Common failure patterns
Four primary failure patterns emerge: 1) treating AI as 'configuration' rather than a regulated system component, particularly in no-code/low-code CRM environments; 2) AI functionality distributed across microservices without centralized governance mapping; 3) assuming human-in-the-loop designs exempt systems from classification even though automated filtering or ranking still occurs upstream (see the sketch below); 4) overlooking data preprocessing pipelines that use ML for normalization or feature engineering as part of decision systems. Technical debt compounds these issues: systems developed before 2022 often lack the model cards, version control, or performance documentation required for classification assessment.
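To make pattern 3 concrete, here is a minimal illustration, with hypothetical scores and thresholds, of an automated pre-filter that removes candidates before any human sees them; the human review downstream does not undo that automated decision step.

```python
# Minimal sketch of failure pattern 3: a "human reviews everything" claim that
# hides an automated pre-filter. The scoring function and cutoff are hypothetical.

def model_score(candidate: dict) -> float:
    # Stand-in for an ML scoring model (e.g., a CV-parsing classifier).
    return candidate["score"]

def shortlist_for_human_review(candidates: list[dict], cutoff: float = 0.6) -> list[dict]:
    # Candidates below the cutoff never reach a human: this automated filtering
    # step is itself the kind of screening Annex III is concerned with,
    # regardless of the downstream human review.
    return [c for c in candidates if model_score(c) >= cutoff]

pool = [
    {"name": "A", "score": 0.82},
    {"name": "B", "score": 0.41},   # silently rejected by the pre-filter
    {"name": "C", "score": 0.67},
]
print([c["name"] for c in shortlist_for_human_review(pool)])  # ['A', 'C']
```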
Remediation direction
Implement a systematic inventory using the EU AI Act's Annex III criteria as the filtering mechanism. For Salesforce environments, audit all automation rules, Einstein predictions, and third-party AppExchange integrations. Map data flows to identify where employee data triggers automated scoring or ranking. Technical implementation should include: 1) a registry of all AI components with versioning and deployment metadata (sketched below); 2) data lineage documentation showing training data sources and preprocessing; 3) performance metrics against fairness and accuracy benchmarks; 4) human oversight mechanisms with audit trails. For API integrations, require vendors to provide conformity assessment documentation or an EU declaration of conformity. Prioritize systems affecting recruitment, promotion, or contract termination decisions.
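A minimal sketch of what such a registry record might look like, assuming a hypothetical schema; the actual fields should be agreed between legal and engineering and mapped to the Act's documentation requirements.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative registry record for an AI component inventory. Field names are
# assumptions about what a minimal evidence package needs, not a mandated schema.

@dataclass
class AIComponentRecord:
    component_id: str
    name: str
    platform: str                     # e.g. "Salesforce", "Workday", "custom middleware"
    vendor: str
    model_version: str
    deployed_since: date
    training_data_sources: list[str]  # data lineage: where training data came from
    preprocessing_steps: list[str]    # ML-based normalization / feature engineering
    decisions_supported: list[str]    # e.g. "recruitment", "promotion", "termination"
    human_oversight: str              # description of the oversight mechanism
    conformity_evidence: list[str] = field(default_factory=list)  # vendor docs, declarations
    annex_iii_candidate: bool = False

registry: dict[str, AIComponentRecord] = {}

def register(record: AIComponentRecord) -> None:
    registry[record.component_id] = record

register(AIComponentRecord(
    component_id="crm-scoring-001",
    name="Einstein scoring used for applicant ranking",
    platform="Salesforce",
    vendor="Salesforce",
    model_version="2024.2",
    deployed_since=date(2023, 5, 1),
    training_data_sources=["historical applicant records"],
    preprocessing_steps=["text normalization of CV fields"],
    decisions_supported=["recruitment"],
    human_oversight="recruiter reviews top-ranked candidates only",
    annex_iii_candidate=True,
))
```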
Operational considerations
Classification requires cross-functional coordination between legal, HR, and engineering teams. Establish clear ownership: legal determines applicability, engineering provides technical documentation, HR validates business process mapping. Resource allocation must account for documentation creation, system modification for human oversight interfaces, and ongoing monitoring requirements. Technical debt remediation for legacy systems may require 6-12 months of lead time. Budget for third-party conformity assessment where internal expertise is insufficient. Implement continuous monitoring for system changes that alter risk classification, particularly when adding new data sources or modifying scoring thresholds (see the sketch below). Maintain evidence packages for regulatory inspection, including model cards, testing protocols, and oversight logs.
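For the change-monitoring point, a simple diff against the last reviewed configuration can flag re-classification triggers. The snapshot fields and trigger list below are assumptions for illustration.

```python
# Sketch of a change-monitoring check: compare the current configuration of a
# registered component with its last reviewed snapshot and flag changes that
# should trigger re-classification. Snapshot structure is hypothetical.

RECLASSIFICATION_TRIGGERS = ("data_sources", "scoring_threshold", "decision_types")

def reclassification_needed(previous: dict, current: dict) -> list[str]:
    """Return the trigger fields that changed since the last documented review."""
    return [key for key in RECLASSIFICATION_TRIGGERS if previous.get(key) != current.get(key)]

last_review = {
    "data_sources": ["ATS applications"],
    "scoring_threshold": 0.6,
    "decision_types": ["recruitment"],
}
today = {
    "data_sources": ["ATS applications", "profile enrichment feed"],  # new data source
    "scoring_threshold": 0.5,                                         # threshold lowered
    "decision_types": ["recruitment"],
}

changed = reclassification_needed(last_review, today)
if changed:
    print("Re-run classification assessment; changed fields:", changed)
```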