Team analyzing data on a screen
April 25, 2026

AI Data Maturity in the Midmarket 2026: Five Must-Dos Before Your First Production Agent

DEEP ANALYSIS · AI IN MIDSIZE BUSINESS
9 min. read

On April 7, 2026, Gartner published survey results from 782 IT leaders: 72 percent of AI projects either fail outright or fall short of what was promised. Combined with Bitkom’s AI study from February 2026 (41 percent active AI users in Germany), a picture emerges that we encounter regularly in our midsize-company engagements: the Copilot and agent platforms are in place – but the data they are supposed to work with is not. Here are the five prerequisites that need to be cleanly addressed before the first production agent goes live.

Key Takeaways (as of 24 April 2026):
  • Gartner reports a 72 percent failure rate for enterprise AI projects in its April 7, 2026 survey; Bitkom's February study confirms that 41 percent of German companies are active AI users, more than double the 2025 figure.
  • According to Gartner’s detailed analysis, seven out of ten failures trace back to poor data quality and missing data governance – not model problems.
  • The five prerequisites: master data hygiene, role and permission documentation, process transparency, audit trail, and a clearly defined use-case scope per agent.
  • Midsize companies that complete these prerequisites before deploying their first Copilot or agent achieve time-to-value in 8 to 12 weeks rather than 9 to 14 months, according to our project reviews.
  • The ERP's role, though cited most frequently, is central but not exclusive: CRM, DMS, and HR systems require the same level of data maturity, or the agent will fail at its weakest data source.

Why the 72 Percent Failure Rate Is Not a Model Problem

What is AI data maturity? AI data maturity describes the state in which the datasets required for an AI use case are complete, current, consistent, rights-compliant, and machine-readable. The term comes from Gartner’s “AI-Ready Data” framework and covers five dimensions: content quality, governance, technical integration, contextualization, and legal permissibility. Midsize companies that skip any one of these dimensions fail systematically at production deployment.

The starkness of Gartner's April 7, 2026 figure is surprising, but the finding itself is not. Since 2024, it has become clear that model quality from the major LLM providers is no longer the bottleneck. What triggers project abandonment is almost always the same chain of events: the use case is clearly defined, the agent prototype performs convincingly in the demo environment, then it gets connected to production data – and the responses become contradictory, incomplete, or legally risky. The project manager spends two months in talks with the platform vendor before both sides realize the problem is not the model, but five ERP fields maintained differently across three branch offices.

The takeaway is not that midsize companies should avoid deploying agents. The takeaway is that the groundwork ahead of the first production agent deserves its own project phase – one that is routinely underestimated in typical budget planning. In our engagements over the past three quarters, we set aside four to six weeks for this preparation work alone, before any model reaches a business unit.

What the Bitkom Figures Tell Us

The Bitkom AI study from February 2026 (604 companies surveyed, all with at least 20 employees) documents a doubling of active AI adoption compared to 2025 – but it also reveals a gap. Companies with more than 250 employees report 58 percent active AI use, while those with 20 to 99 employees remain at 32 percent. The gap is not technical; it is operational. Smaller midsize companies often lack dedicated data engineering roles and structured master data management – and that gap becomes a liability the moment the first agent goes live.

The Five Homework Assignments in Detail

  • 72% of AI projects fail to deliver their promised value, according to Gartner's April 7, 2026 survey of 782 IT executives.
  • 60% of all AI projects lacking AI-ready data will be discontinued by the end of 2026, Gartner projects; data quality is the most frequently cited reason.
  • 41% of German companies are actively using AI (Bitkom study, February 2026), more than double the 2025 figure of 17 percent.

Assignment 1: Master Data Hygiene

Master data is the underestimated core of every agent rollout. Customer, article, supplier, and employee data must be checked against four criteria before the first production agent goes live: unique identifiers across system boundaries, consistent key fields without duplicates, complete mandatory fields per use case, and a documented maintenance process. Anyone feeding an agent whatever the ERP happened to export will get answers where three variants of the same customer are treated as three separate entities.

A quick pragmatism test: take your top 20 customers and count the spelling variants across your ERP, CRM, and ticketing system. Anything above three variants means the agent will either hallucinate or give up. Cleaning this up rarely takes more than two weeks, but it prevents 80 percent of the frustrating conversations with the business side down the road.
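
For teams that want to run this test mechanically rather than by eyeballing exports, here is a minimal sketch in Python. It assumes each system can export customer names to a CSV; the file names, the "customer_name" column, the placeholder top-20 entries, and the similarity threshold are all illustrative assumptions, not a fixed recipe.

```python
# Sketch: count spelling variants of top customers across system exports.
# File names, column name, and threshold are assumptions, not a standard.
import csv
from difflib import SequenceMatcher

EXPORTS = ["erp_customers.csv", "crm_customers.csv", "tickets_customers.csv"]
SIMILARITY_THRESHOLD = 0.85  # tune against a handful of known duplicates

def load_names(path: str) -> list[str]:
    with open(path, newline="", encoding="utf-8") as f:
        return [row["customer_name"].strip() for row in csv.DictReader(f)]

def collect_variants(canonical: str, names: list[str]) -> set[str]:
    """Every spelling that is 'close enough' to the canonical name."""
    return {
        name for name in names
        if SequenceMatcher(None, canonical.lower(), name.lower()).ratio()
        >= SIMILARITY_THRESHOLD
    }

all_names = [n for path in EXPORTS for n in load_names(path)]

# In practice the top 20 come from your revenue ranking; placeholders here.
TOP_20 = ["Example Customer GmbH", "Muster AG", "ACME Logistics SE"]

for customer in TOP_20:
    variants = collect_variants(customer, all_names)
    flag = "  <-- clean up before go-live" if len(variants) > 3 else ""
    print(f"{customer}: {len(variants)} spelling variant(s){flag}")
```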

Assignment 2: Roles and Permissions Documentation

Agents act. They don't just read; they write, send, book, and document. That means the agent may do whatever the human it acts for is allowed to do, and never anything that human is not permitted to do. In practice, this requires a clean roles-and-permissions matrix that is ideally mapped identically in Entra ID, the ERP authorization concept, and the line-of-business applications. Mid-sized companies with historically grown permission structures need a consolidation pass before any agent starts creating bookings or sending emails.
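
This subset rule can be checked mechanically before go-live. A minimal sketch, assuming permissions can be flattened into simple action strings per role; the role names, actions, and agent-to-role mapping below are illustrative stand-ins for what would in practice be derived from Entra ID groups and the ERP authorization concept.

```python
# Sketch: verify that no agent holds permissions beyond its human reference
# role. Roles, actions, and mappings are illustrative assumptions.
HUMAN_PERMISSIONS: dict[str, set[str]] = {
    "accounts_payable_clerk": {"invoice.read", "invoice.book", "email.send"},
    "warehouse_staff": {"stock.read", "stock.adjust"},
}

AGENT_PERMISSIONS: dict[str, set[str]] = {
    "ap_agent": {"invoice.read", "invoice.book"},  # subset of its human role
    "warehouse_agent": {"stock.read", "stock.adjust", "order.create"},
}

AGENT_RUNS_AS = {
    "ap_agent": "accounts_payable_clerk",
    "warehouse_agent": "warehouse_staff",
}

def excess_permissions(agent: str) -> set[str]:
    """Actions the agent holds that its human reference role does not."""
    return AGENT_PERMISSIONS[agent] - HUMAN_PERMISSIONS[AGENT_RUNS_AS[agent]]

for agent in AGENT_PERMISSIONS:
    excess = excess_permissions(agent)
    if excess:
        print(f"BLOCK {agent}: exceeds its human role by {sorted(excess)}")
    else:
        print(f"OK    {agent}: within its human reference role")
```

Run against the example data, the check blocks warehouse_agent, which holds order.create without a human counterpart; exactly the kind of gap that should surface in a review, not in production.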

The interesting leadership decisions in an AI project aren’t the ones that appear in the quarterly report. They’re the ones someone makes in week three, when it first becomes clear that the agent is taking the permissions matrix more seriously than the people who wrote it.

Assignment 3: Process Transparency

Agents don't understand implicit processes. Everything stored in the head of a specialist with 15 years of experience must be made explicit before go-live. That doesn't mean full ISO-grade process documentation, but it is more than a flowchart on a PowerPoint slide. In our engagements, we use a lightweight format: a process narrative (half a page, written in plain language), decision points with criteria, and exception rules. If you can't write the use case down in this form, you haven't found the right use case for your first agent.
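
One way to keep this format honest is to capture it as structured data rather than a slide, so the same document briefs both the business side and the agent integration. The sketch below is one possible rendering in Python; the field names and the example process are assumptions, not a standard.

```python
# Sketch: the lightweight process format (narrative, decision points,
# exceptions) as structured data. Field names are a working convention.
from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    question: str   # what the process has to decide
    criterion: str  # the explicit rule a specialist would apply
    on_true: str
    on_false: str

@dataclass
class ProcessNarrative:
    name: str
    narrative: str  # half a page of plain language
    decisions: list[DecisionPoint] = field(default_factory=list)
    exceptions: list[str] = field(default_factory=list)

invoice_triage = ProcessNarrative(
    name="Incoming invoice triage",
    narrative=(
        "Invoices arrive by email, are matched against open purchase "
        "orders, and are either booked automatically or routed to a clerk."
    ),
    decisions=[
        DecisionPoint(
            question="Does the invoice match an open purchase order?",
            criterion="PO number present, amount within 2% of the order value",
            on_true="book automatically",
            on_false="route to the accounts payable clerk",
        )
    ],
    exceptions=["Invoices above 10,000 euros always require human sign-off."],
)
```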

Assignment 4: Audit Trail and Traceability

Once an agent goes live, the question that seemed secondary during the demo becomes critical: who decided what, and based on which data? Agents need an audit trail that links input, data sources used, model version, and output in a way that lets an auditor reconstruct any decision six months later. For mid-sized companies, this means a protocol layer between the platform (Copilot Studio, LangChain, Azure AI Foundry) and the business application, one that records at least five fields per agent run (timestamp, input, data sources used, model version, output) and retains them for at least six months.
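
Here is what the simplest version of such a protocol layer can look like, sketched as a neutral Python wrapper that writes JSON Lines. Nothing below is a Copilot Studio, LangChain, or Azure AI Foundry API; the field set mirrors the five fields named above plus a run id, and the retention job is deliberately left out.

```python
# Sketch: a neutral audit-trail wrapper around every agent run.
# Storage format (JSON Lines) and field names are assumptions.
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "agent_audit.jsonl"

def log_agent_run(user_input: str, data_sources: list[str],
                  model_version: str, output: str) -> str:
    """Persist one agent run; returns the run id for cross-referencing."""
    record = {
        "run_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": user_input,
        "data_sources": data_sources,  # e.g. ERP tables, DMS document ids
        "model_version": model_version,
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["run_id"]

run_id = log_agent_run(
    user_input="What is the open balance for customer 4711?",
    data_sources=["erp.accounts_receivable", "crm.account:4711"],
    model_version="model-2026-04",
    output="Open balance: 12,430.50 EUR across two invoices.",
)
print(f"Logged agent run {run_id}")
```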

Assignment 5: Use Case Framework

The first agent rollout is not a platform decision — it’s a use case decision. A clear framework defines three things: what the agent is specifically supposed to accomplish, which boundaries are absolute no-go areas (for example, no autonomous customer communication without a human-in-the-loop), and which metrics will determine after twelve weeks whether the project continues or stops. Anyone who doesn’t put this framework in writing before the project kicks off will end up debating it in an escalation meeting later.
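
Such a framework fits in a few lines of configuration that live in version control rather than a slide deck. A minimal sketch in Python; every name and value in it is an illustrative assumption.

```python
# Sketch: use-case framework as reviewable data. All values illustrative.
USE_CASE_FRAMEWORK = {
    "name": "Supplier invoice triage agent",
    "goal": "Pre-sort incoming invoices and draft booking proposals",
    "no_go": [
        "no autonomous customer communication without human-in-the-loop",
        "no write access to payroll or HR data",
    ],
    "review_after_weeks": 12,
    "continue_if": {
        "auto_triage_rate": ">= 0.60",       # share pre-sorted correctly
        "clerk_correction_rate": "<= 0.10",  # share the clerk rejects
    },
}
print(f"Stop/continue review after {USE_CASE_FRAMEWORK['review_after_weeks']} weeks")
```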

Assignments Compared: Mid-Sized Companies vs. Large Enterprises

The same five assignments look fundamentally different in a mid-sized company versus a large corporation. Enterprises come with dedicated teams, budgets, and existing governance structures, but must untangle historically grown legacy landscapes. Mid-sized companies start with a more manageable IT environment, but rarely have a role dedicated full-time to data quality. Both starting points represent an opportunity — they just require a different sequence.

Master Data Hygiene
  • Mid-sized company (typical): Manually clean top-20 entities, two weeks of effort.
  • Large enterprise (typical): Master data program, six to twelve months, multiple FTEs.

Roles Documentation
  • Mid-sized company (typical): Entra ID as the leading system, three to four weeks of consolidation.
  • Large enterprise (typical): IAM refactoring across SAP, Salesforce, and ServiceNow.

Process Transparency
  • Mid-sized company (typical): Narratives from business-side interviews, five to ten days.
  • Large enterprise (typical): Enterprise architecture and BPM suites, often already in place.

Audit Trail
  • Mid-sized company (typical): Log layer at platform level, supported by a lightweight SIEM.
  • Large enterprise (typical): Integration into existing SOC and compliance management.

Use Case Framework
  • Mid-sized company (typical): One use case, clearly scoped, management sign-off in a single meeting.
  • Large enterprise (typical): Portfolio of five to ten use cases, steering committee required.

The takeaway from this table isn’t “mid-sized companies have it easier” — it’s “mid-sized companies move faster when the roles are right.” An enterprise AI project comes with more stakeholders, more legacy, and more policies, but also with dedicated resources. Mid-sized companies execute faster, but only when management actively shapes the use case framework rather than simply nodding it through.

The Platform Landscape: Copilot, Gemini, Claude, Llama

The Bitkom study identifies Microsoft Copilot as the leading platform in the German market with a 28 percent share, followed by Google Gemini at 22 percent. Llama (7 percent), Claude (2 percent), and Amazon Q (2 percent) are rarely serious options for mid-sized companies in the DACH region, typically because support structures or compliance mappings are missing. For the typical Mittelstand company, this means the platform decision is seldom Copilot versus Claude — it’s Copilot Studio versus Power Platform versus Azure AI Foundry, and the real question is less about models than about integration depth and licensing.

An honest take on the platform debate: if you’re already living inside the Microsoft ecosystem, the question practically answers itself. If you’re operating in a heterogeneous environment, the smarter move is to pilot two or three concrete use cases across two platforms and measure the results against the five prerequisites. Whichever platform makes working through those prerequisites least painful wins the contract.

What Really Changes After the Prerequisite Phase

The most consistent observation across our client engagements: mid-sized companies that complete all five prerequisites before deploying their first agent reach time-to-value in eight to twelve weeks. Those that launch a platform pilot and tackle the prerequisites in parallel typically take nine to fourteen months — and frequently quit before the finish line. The prerequisites are not the most exciting project phase, but they are the one that ultimately determines ROI.

There’s a second effect we see in nearly every engagement: once the prerequisites are cleanly worked through, the internal conversation shifts. Instead of arguing about models, prompts, and platforms, teams suddenly start discussing use-case portfolios, scaling, and governance. That’s the moment AI moves from being an IT project to a business project. CEOs and managing directors who experience this transition consistently report that they approach the AI conversation inside their organizations differently from that point on — less technology-driven, more business-focused.

Three Warning Signs That Predict the Next Failure

Looking at it from the other direction: how does leadership recognize that an AI project is heading toward one of those 72-percent dead ends? In reviews, we watch for three signals. First, the steering committee is primarily debating platform features rather than use-case outcomes. Second, the business unit has yet to submit a written process description, even though the prototype is already running. Third, the data quality issue has been labeled a “legacy problem” and outsourced rather than acknowledged as a core prerequisite. When two of these signals appear simultaneously, the probability of the project being abandoned within the next three months rises significantly.

Frequently Asked Questions

How long does a realistic homework phase take for a mid-sized company?

For a company with 50 to 250 employees that has clearly defined its first use-case family, expect four to six weeks for the homework phase before the first agent prototype goes live. Organizations tackling a broader scope — multiple use cases in parallel — should realistically budget ten to twelve weeks.

Do I need a data lake, or is the ERP enough?

For a first agent, a cleanly connected ERP plus a limited number of integrated specialist systems is sufficient in most mid-market scenarios. A data lake pays off once you’re running three to four parallel use cases, or when data needs to be consolidated from more than five systems. Before that point, the data lake is usually what delays a use case by six months.

How do I handle the EU AI Act?

Most mid-market agents fall into the “limited risk” or “minimal risk” category, meaning they require transparency obligations and internal documentation, but no elaborate conformity assessment. What matters is documenting the classification for each use case in writing before go-live, and structuring the audit trail (Assignment 4) so it holds up to scrutiny.

What role should management play in an agent project?

Management owns three decisions: the use-case scope, the escalation thresholds for human-in-the-loop, and the willingness to invest in the homework phase. Everything else can be delegated — these three cannot. Organizations that delegate them typically end up in a change freeze, because the business units come to see agent decisions as IT-imposed mandates.

How do I know when an assignment is genuinely done?

Each of the five assignments has a concrete acceptance test:
  • Master data hygiene: the top 20 entities run through without duplicates.
  • Role documentation: the agent cannot do anything the human is not permitted to do.
  • Process transparency: the business unit describes the use case in its own words, and IT confirms the description matches the implementation.
  • Audit trail: a test query can be fully reconstructed 24 hours later.
  • Use-case scope: the metrics are fixed in writing and signed off by management.

What does the homework phase cost?

For a typical mid-sized company working on a first use-case family, the homework phase runs between 25,000 and 60,000 euros. That is considerably less than the cost of a failed agent project after six months, which in our engagements has ranged from 80,000 to 200,000 euros. Even a conservative ROI calculation therefore comes out, more often than not, unambiguously in favor of the homework phase.


Title image source: Pexels / Yan Krukau (px:7691673)


A magazine by evernine media GmbH