AI Act Takes Full Effect in August 2026: High-Risk AI in the SME Sector
On 2 August 2026, compliance becomes mandatory: from that date onward, all obligations under the EU AI Act for high-risk AI systems apply in full. This affects every company using AI for personnel decisions, creditworthiness assessments, or quality control in production. A conformity assessment takes three to six months, so any organisation that has not yet begun inventorying its AI systems will almost certainly miss the deadline. And the training obligation under Article 4 has already been in force since February 2025.
The Key Takeaways
- High-risk obligations begin on 2 August 2026: All requirements for high-risk AI systems enter into full force. Affected use cases include HR screening, credit scoring, application filtering, and biometric systems (EU AI Act, Annex III).
- AI competence obligation is already active: Since 2 February 2025, companies must ensure employees who deploy, develop, or supervise AI systems receive adequate AI training (Art. 4 AI Act).
- Fines up to €35 million: Or 7% of global annual turnover for prohibited AI practices such as social scoring or workplace emotion recognition (Art. 99 AI Act).
- KI-MIG adopted by the German Cabinet: The AI Measures and Innovation Act (KI-MIG) establishes Germany’s national legal framework for implementing the AI Act (February 2026).
- Conformity assessment takes 3-6 months: Risk assessments, documentation, testing procedures, and quality management systems must be established. Start no later than March 2026.
The Timeline: What Applies When
The EU AI Act's obligations take effect in stages. Some are already active; the most critical deadlines fall within the coming months. For businesses, the sequence matters – each stage builds upon the previous one.
Since 2 February 2025, Article 4 has applied: the obligation to ensure sufficient AI competence among all staff who deploy, develop, or supervise AI systems. This applies not only to IT departments but also to HR teams using AI-powered recruiting or finance departments working with automated scoring systems.
Important for companies already tracking DORA, NIS2, and MiCA: The AI Act adds another layer. While regulatory requirements overlap partially – for instance, in risk management – they are not identical. An AI system in financial services must comply simultaneously with both DORA and the AI Act. Companies that have already built a DORA compliance framework can reuse parts of it for the AI Act – especially for risk documentation and governance structures.
On 2 August 2025, transparency obligations for general-purpose AI models (GPT, Claude, Gemini) entered into force. On 2 August 2026 comes the largest step: full application of rules for high-risk AI systems. As of that date, companies deploying such systems must be able to demonstrate a complete conformity assessment.
High-Risk AI: Which Systems Are Affected?
The AI Act defines eight domains in Annex III where AI systems are classified as high-risk. For SMEs, three areas are especially relevant.
The following are not considered high-risk: AI-powered spam filters, automated translation tools, chatbots without decision-making authority, e-commerce recommendation engines, and most marketing automation tools. These systems are subject only to general transparency obligations and the training requirement under Article 4. This is crucial to understand – many SMEs mistakenly assume all AI use falls under strict high-risk rules. It does not.
The grey zone concerns AI systems that do not make autonomous decisions but do provide decision support. An AI system that automatically rejects loan applications is clearly high-risk. One that merely pre-sorts applications for human review may not be. In such cases, a legal case-by-case assessment is advisable.
Employment and Personnel Management: AI systems that screen job applications, evaluate candidates, prepare promotion decisions, or automate performance reviews. Companies using an applicant tracking system (ATS) with AI-based scoring are highly likely operating a high-risk system. The same applies to AI-powered workforce analytics assessing employee productivity.
Creditworthiness and Scoring: AI systems evaluating creditworthiness, preparing lending decisions, or calculating insurance premiums. FinTech firms and banks are obviously affected – but so too are industrial companies using AI for supplier evaluation or debtor management.
Safety Components: AI systems functioning as safety components in machinery, vehicles, or medical devices. For manufacturing SMEs using AI for quality control or predictive maintenance, this is highly relevant – because the AI here directly impacts product safety.
Even organisations using only ChatGPT, automated translation tools, or AI-powered recruiting software fall under the regulation.
– Paraphrased from IHK Schleswig-Holstein, AI Act Guide
What the Conformity Assessment Specifically Requires
Companies deploying high-risk AI systems (“deployers”) must meet several obligations from August 2026 onward. The scope depends on whether the company merely uses the system – or developed it itself.
Deployers face these core obligations:
First, risk management: Risks posed by the AI system must be identified, assessed, and documented – including bias risks, discrimination potential, and impacts on the rights of affected individuals.
Second, technical documentation: Providers must supply this; deployers must understand and retain it.
Third, human oversight: Mechanisms must ensure humans can monitor and intervene in AI-driven decisions.
Fourth, transparency toward affected persons: Individuals subject to AI decisions – e.g., job applicants in recruitment – must be informed accordingly.
Providers (developers) of high-risk AI face stricter requirements: They must conduct a full conformity assessment, affix the CE marking, and register the system in the EU database. The assessment covers data quality, test protocols, robustness, and cybersecurity. According to industry experts, this process takes three to six months.
Article 4: The Overlooked Training Obligation
While high-risk obligations begin in August 2026, Article 4 has applied since February 2025. It obliges all providers and deployers of AI systems to ensure their staff possess sufficient AI competence (the official English text of the Act speaks of "AI literacy"). Though the wording sounds vague, it carries concrete consequences.
Article 4 prescribes no fixed curriculum. But it does require training to be appropriate to the context of AI use and the role of the individual involved. An HR manager using an AI-powered candidate tool needs different knowledge than an administrative assistant using ChatGPT for email templates.
There is currently no direct fine for failing to deliver AI competence training. Yet liability risk remains real: If an organisation demonstrably provided no training – and an AI system causes harm due to incorrect use by an employee – that lack of training becomes evidence of negligence.
Pragmatic implementation: Companies should document which AI systems are in use, who works with them, and what training has been delivered. An internal AI register and training records suffice as a starting point. Haufe Academy and the IHKs (Germany's Chambers of Industry and Commerce) already offer certified training formats tailored specifically to Article 4.
KI-MIG: What Germany’s Implementation Law Covers
In February 2026, the German Cabinet adopted the AI Measures and Innovation Act (KI-MIG) as the national implementing law for the EU AI Act. The law regulates market surveillance and designates competent authorities.
For SMEs, the KI-MIG includes an important relief measure: SMEs and startups may benefit from simplified forms of technical documentation. The aim is to keep administrative effort manageable without lowering safety standards. The precise design of these simplifications will be specified by the responsible market surveillance authority.
These facilitations affect, among other things, the scope of required technical documentation and testing requirements. SMEs must still meet identical safety standards – but may rely on less burdensome evidentiary procedures.
Market surveillance officially begins on 2 August 2026. From that date, national authorities are empowered to conduct inspections and impose sanctions. In practice, authorities are expected to adopt an advisory approach during the first months before resorting to fines. Still, no company should rely on this grace period.
Fines: What’s Really at Stake
The AI Act employs a three-tier sanction regime. The maximum penalty – €35 million or 7% of global annual turnover – applies to prohibited AI practices, such as social scoring or manipulative AI systems. Violations of high-risk obligations carry fines of up to €15 million or 3% of turnover. Providing false or misleading information to authorities incurs penalties of up to €7.5 million or 1% of turnover.
The AI Act provides for proportionate application for SMEs. Fines scale according to turnover and severity of the violation. An SME with €20 million in annual turnover therefore risks up to €600,000 for high-risk violations. Not existential – but painful.
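The arithmetic behind this sanction regime can be sketched in a few lines. The following is an illustrative calculation based on the figures cited above (up to €35M/7%, €15M/3%, €7.5M/1%, with the lower of the two values applying to SMEs) – a simplification for orientation, not legal advice:

```python
def fine_cap_eur(annual_turnover_eur: float, tier: str, is_sme: bool = False) -> float:
    """Illustrative upper bound on AI Act fines per the three tiers of Art. 99.

    For SMEs the lower of the absolute cap and the turnover-based amount
    applies; for other companies, the higher of the two.
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),  # e.g. social scoring
        "high_risk": (15_000_000, 0.03),   # violations of high-risk obligations
        "false_info": (7_500_000, 0.01),   # misleading information to authorities
    }
    absolute_cap, pct = tiers[tier]
    turnover_based = annual_turnover_eur * pct
    return min(absolute_cap, turnover_based) if is_sme else max(absolute_cap, turnover_based)

# The SME from the text: EUR 20M turnover, high-risk violation
print(fine_cap_eur(20_000_000, "high_risk", is_sme=True))  # 600000.0
```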
Compliance costs are also significant: Industry experts estimate conformity costs for high-risk AI at 10-20% of total AI investment. In practice, this translates to annual additional expenses in the mid-five-figure range – primarily for documentation, risk assessment, and governance structures.
Five Steps to August 2026
Time remaining is short – but sufficient, if companies act now. Here are five steps to lay the foundation for AI Act compliance.
1. Create an AI inventory. Which AI systems are deployed across your company? This includes not only internally developed models but also purchased solutions: ATS systems with AI scoring, customer service chatbots, predictive maintenance in production, automated invoice auditing. Many companies underestimate how many AI touchpoints they actually have.
Example: A mechanical engineering firm with 200 employees uses three AI applications. First, a customer service chatbot based on ChatGPT. This is not high-risk – but requires Article 4 training for the support team. Second, a predictive maintenance system calculating failure probabilities for machine components. This could qualify as a safety component – and thus fall under high-risk, requiring detailed scrutiny. Third, an AI-powered applicant management tool that pre-sorts CVs. A clear high-risk case – full conformity assessment required.
2. Conduct risk classification. For each system in your inventory, determine: Does it fall under one of the eight high-risk categories listed in Annex III? Most AI applications used by SMEs are not high-risk. But those that are demand substantial documentation effort.
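Steps 1 and 2 can be combined in a minimal internal AI register. The sketch below is illustrative only: the domain labels, the `grey_zone` flag, and the classification logic are simplifications for bookkeeping purposes, not the legal test under Annex III – grey-zone systems still need the case-by-case legal assessment described above.

```python
from dataclasses import dataclass

# Annex III areas named in this article (illustrative subset, not the full list)
HIGH_RISK_DOMAINS = {"employment", "credit_scoring", "safety_component", "biometrics"}

@dataclass
class AISystem:
    name: str
    domain: str           # where the system is used in the company
    grey_zone: bool = False  # e.g. decision support only, or unclear safety role

    @property
    def risk_class(self) -> str:
        if self.domain not in HIGH_RISK_DOMAINS:
            return "minimal_risk"    # only Art. 4 training applies
        if self.grey_zone:
            return "review_needed"   # case-by-case legal assessment advisable
        return "high_risk"           # full conformity assessment required

# The three systems from the mechanical-engineering example above
register = [
    AISystem("Support chatbot (ChatGPT)", "customer_service"),
    AISystem("Predictive maintenance", "safety_component", grey_zone=True),
    AISystem("Applicant pre-screening (ATS)", "employment"),
]
for s in register:
    print(f"{s.name}: {s.risk_class}")
```

Even a table like this, maintained in a spreadsheet, is a defensible starting point – what matters is that the inventory exists and each entry has a documented classification.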
3. Catch up on Article 4 training. If no documented AI training has taken place yet: start now. A half-day workshop for affected teams serves as a solid entry point. Crucially, document who was trained, when, and on which AI systems.
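The documentation required in step 3 can be cross-checked against the inventory from step 1: every employee who works with an AI system should have a matching training record. A minimal sketch (all names, systems, and dates are invented examples):

```python
from datetime import date

# Who works with which AI system (from the inventory, step 1)
users_by_system = {
    "ATS scoring": {"J. Example", "M. Sample"},
    "Support chatbot": {"A. Demo"},
}

# Documented Article 4 trainings: (employee, system, date of training)
training_log = [
    ("J. Example", "ATS scoring", date(2026, 3, 10)),
    ("A. Demo", "Support chatbot", date(2026, 3, 12)),
]

def training_gaps(users_by_system, training_log):
    """Return (employee, system) pairs that lack a documented training."""
    trained = {(emp, system) for emp, system, _ in training_log}
    return sorted((emp, system)
                  for system, users in users_by_system.items()
                  for emp in users
                  if (emp, system) not in trained)

print(training_gaps(users_by_system, training_log))  # [('M. Sample', 'ATS scoring')]
```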
4. Clarify provider requirements. For high-risk systems sourced from external vendors: request technical documentation, the EU declaration of conformity, and CE marking from the provider. If the vendor cannot supply these, treat it as a red flag.
5. Define responsibilities. Who in your company is accountable for AI compliance? In larger SMEs, this could be a dedicated AI officer; in smaller ones, the data protection officer or legal department. What matters is that one person owns the responsibility.
In parallel: Do not treat the AI Act as an isolated compliance project – but as part of comprehensive AI governance. Companies building robust AI governance now gain more than legal compliance: better risk management and greater trust from customers and business partners. The investment pays off – especially as AI adoption grows across most SMEs.
Conclusion
The EU AI Act is the world’s first comprehensive AI regulation. For SMEs, it is not cause for panic – but a clear call to action. The Article 4 training obligation is already in effect. High-risk obligations arrive in five months. Any company that has not yet inventoried its AI systems should start immediately.
The good news: Most AI applications used by SMEs do not fall under the high-risk category. ChatGPT in marketing, translation tools, or basic automation are generally unproblematic. But any company using AI for personnel decisions, scoring, or safety-critical processes must act. Five months may sound like plenty – but three to six months for a conformity assessment is tight.
Frequently Asked Questions
Does the AI Act apply to small businesses?
Yes. The AI Act applies regardless of company size to all organisations developing, supplying, or deploying AI systems. However, the law does provide SMEs with simplified documentation requirements and proportionate fines.
Is ChatGPT a high-risk AI system?
ChatGPT itself is not high-risk – it is a general-purpose AI model. It becomes high-risk only when integrated into a high-risk application scenario – for example, as the engine behind automated candidate screening in recruitment.
How much does AI Act compliance cost SMEs?
Industry experts estimate compliance costs for high-risk AI at 10-20% of total AI investment. In practice, this means annual additional expenses in the mid-five-figure range – mainly for documentation, risk assessment, and governance.
What happens if I fail to meet the Article 4 training obligation?
There is currently no direct fine for missing AI competence training. However, civil liability risks remain: If inadequate training leads to harm caused by faulty AI use, the company may be held liable for negligence.
How do I determine whether my AI system is high-risk?
Review Annex III of the AI Act: It lists eight domains, including personnel management, creditworthiness, law enforcement, and critical infrastructure. If your AI system operates in one of these areas and significantly influences key decisions, it is likely high-risk. IHKs offer free initial consultations.
What is the KI-MIG?
The AI Measures and Innovation Act is Germany’s national implementation law for the EU AI Act. Adopted by the Federal Cabinet in February 2026, it governs market surveillance, names responsible authorities, and specifies sanctions for the German market.
Header Image Source: Adam B. / Pexels

