Apr 2026

AI adoption strategy: why centralise, decentralise, and wait-and-see all fail

Centralising, decentralising, and waiting on AI all fail for the same reason: organisations treat adoption as a deployment problem rather than an organisational learning challenge.

The three-body problem of AI adoption

Global corporate AI investment reached $252.3 billion in 2024. Only 6% of firms report significant earnings impact. That gap represents hundreds of billions in stalled initiatives, abandoned pilots, and transformation programmes that transformed nothing.

Most organisations default to one of three strategies when deciding how to adopt AI: centralise it under a single team, decentralise it to every department, or wait until the technology matures. Each feels rational in isolation. Senior leadership can point to sound logic behind whichever path they've chosen, and board presentations make any of the three look like a plan.

All three produce the same outcome: wasted budget, organisational frustration, and a widening gap between what AI can do and what the organisation gets from it.

The strategy question itself is framed wrong. "Centralise, decentralise, or wait" treats AI adoption as a deployment decision, something you do to an organisation. McClure and Gerdau's 2026 synthesis of 19 large-scale studies covering nearly 10,000 organisational leaders confirms what practitioners have observed for years: AI project failure is an organisational learning problem, not a technology deficit. Choosing where to put the AI team misses the point entirely.

The centralised trap

Control masquerading as progress

When a CEO decides AI matters, the instinct is to hire a head of AI, build a Centre of Excellence (CoE), and funnel everything through one team. This maps neatly onto how organisations handled previous technology waves. Data warehousing, cloud migration, digital transformation: all followed a similar playbook. One team, one budget, one set of standards.

The appeal is obvious. Governance stays clean. Tooling stays consistent. Risk gets managed through a single point of control. Leadership can point to the AI team on an org chart and say, with confidence, that AI is being handled.

Where centralisation breaks down

A team of fifteen, or even fifty, cannot understand the operational nuances of every business unit in a large organisation. The supply chain team's forecasting challenges share little DNA with the customer service team's ticket classification problem or the legal department's contract analysis workflow. Each requires different data, different evaluation criteria, different definitions of success.

What happens in practice: business units submit requests to the CoE. The CoE triages, prioritises, and queues. Innovation gets scheduled. By the time the AI team gets to a business unit's problem, the context has shifted, the sponsor has moved on, or the window of opportunity has closed.

McKinsey's research on building AI-powered organisations found this pattern consistently: the CoE becomes a permissions desk. It exists to approve or deny rather than to accelerate. The people closest to the business problems sit furthest from the technical tools, separated by layers of intake forms and prioritisation frameworks.

The talent paradox

Centralised AI teams attract a specific profile: people who understand AI broadly but lack deep domain expertise in any single business function. They can build a model, but they cannot tell you why the logistics team's demand signal behaves differently in Q4 or why the underwriting team's risk thresholds changed after a regulatory update last March.

Meanwhile, domain experts with twenty years of operational knowledge stay in their silos. They understand the problems worth solving but lack access to the tools, training, or mandate to solve them with AI.

The result is a widening gap between technical possibility and business application. The AI team builds technically sound solutions to the wrong problems. The domain teams know the right problems but cannot articulate them in a format the AI team can act on. The SIO (Siloed-Integrated-Orchestrated) progression model developed by McClure and Gerdau identifies this as the "siloed" stage, where AI capability exists in the organisation but remains disconnected from the operational context that would make it useful.

The decentralised mirage

Democracy sounds good on a slide deck

The opposite impulse is to let every team experiment. Give each department a budget, access to AI tooling, and the autonomy to figure out what works. This appeals to organisations that value speed, entrepreneurialism, and distributed ownership. It also appeals to leadership teams that don't want to make hard prioritisation decisions.

The pitch is seductive: empower the edges, let domain experts drive adoption, avoid the bottleneck of a central team. A thousand flowers blooming.

What happens when everyone gets a budget

Five teams buy five different AI platforms. The marketing team builds a content generation pipeline on one vendor's API. The operations team builds a forecasting system on another. Finance experiments with a third. No shared data infrastructure connects them. No common evaluation framework measures their results.

Procurement becomes chaos. Each vendor relationship is negotiated independently, often at worse terms than a consolidated agreement would achieve. Shadow AI projects proliferate, built by well-meaning teams who lack the expertise to assess what they've created. Duplicate efforts multiply. Two teams in different offices spend six months solving functionally identical problems with incompatible approaches, neither aware of the other's work.

The Adaptive Responsible AI Governance (ARGO) framework, developed through collaboration between Stanford researchers and a multinational enterprise, documented this pattern across multiple business units. Their assessment revealed "complex interplay between group-level guidance and local interpretation" and "regional and functional variation in implementation approaches" that created inconsistent outcomes across the organisation. Four failure patterns emerged consistently: tensions between central guidance and local interpretation, difficulty translating abstract principles into operational practices, wide variation in how different regions and functions implemented the same guidelines, and inconsistent accountability for risk oversight.

Risk without a safety net

Decentralised adoption creates compliance exposure that only surfaces during an audit or an incident. Individual teams lack the expertise to evaluate model bias, data privacy implications, or the downstream effects of automated decisions. Without shared standards for model evaluation, each team invents its own definition of "good enough."

An AI system making lending recommendations needs different scrutiny than one suggesting blog post topics. Decentralised teams often apply the same level of rigour (usually insufficient) to both, or they apply rigour to the wrong dimension: obsessing over model accuracy while ignoring fairness, or optimising for speed while neglecting explainability.
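To make that concrete, a shared standard can encode which evaluation dimensions each risk tier must clear before release, so a lending model cannot ship on accuracy alone while a blog-topic suggester drowns in fairness audits. A minimal sketch in Python (the tier names, dimension sets, and `release_gaps` helper are illustrative assumptions, not a standard):

```python
from dataclasses import dataclass

# Illustrative tiers and requirements; a real scheme would follow your
# regulatory context (e.g. the EU AI Act's risk categories).
REQUIRED_EVALUATIONS = {
    "high":   {"accuracy", "fairness", "explainability", "privacy"},
    "medium": {"accuracy", "fairness"},
    "low":    {"accuracy"},
}

@dataclass
class UseCase:
    name: str
    risk_tier: str        # "high" | "medium" | "low"
    completed_evals: set  # dimensions the team has actually evaluated

def release_gaps(use_case: UseCase) -> set:
    """Return the evaluation dimensions still missing for this tier."""
    return REQUIRED_EVALUATIONS[use_case.risk_tier] - use_case.completed_evals

lending = UseCase("loan-recommendations", "high", {"accuracy"})
blog = UseCase("blog-topic-suggestions", "low", {"accuracy"})

print(release_gaps(lending))  # fairness, explainability, privacy still missing
print(release_gaps(blog))     # set(): clear to ship at this tier
```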

The wait-and-see delusion

Patience as a strategy

Some leaders look at the 6% success rate and conclude that waiting is the smart play. The technology is changing fast. Regulatory frameworks remain unsettled. ROI cases are thin. Why invest heavily in something that might look completely different in eighteen months?

This maps onto a "fast follower" strategy that worked for most previous information technologies. Let the early adopters make expensive mistakes, learn from their failures, and adopt proven approaches once the dust settles.

The cost of standing still

AI doesn't follow the same adoption curve as enterprise software. With traditional technology, a late adopter could purchase a mature product, hire experienced implementers, and catch up within a deployment cycle. AI capability is different. Organisations that started eighteen months earlier haven't just deployed tools. They've built institutional knowledge about what works in their specific context, trained their workforce to collaborate with AI systems, developed evaluation frameworks tuned to their domain, and created feedback loops that compound learning over time.

Mahidhar and Davenport's research on AI adoption timing argues that the fast-follower strategy fails specifically because AI capabilities build on themselves. Each successful implementation creates data, institutional knowledge, and organisational muscle that accelerates the next one. Late entrants don't just face a technology gap. They face a capability gap that money alone cannot close.

When "ready" never arrives

The goalposts keep moving. Waiting for the "right" model gives way to waiting for the "right" regulation, which gives way to waiting for the "right" use case. Organisations in permanent waiting mode don't eventually adopt well. They adopt in a panic when a competitor's AI-driven product threatens their market position, when a board member asks uncomfortable questions, or when a regulatory deadline forces action.

Panic adoption is worse than any of the three strategies. It combines the worst elements of all of them: hasty centralisation without the right talent, rushed decentralisation without governance, and compressed timelines that preclude the learning the organisation never invested in.

The shared failure mode

Three strategies, three different mechanics, one identical outcome. The common thread is treating AI adoption as a technology deployment problem. Centralisation optimises for control. Decentralisation optimises for speed. Waiting optimises for risk avoidance. Each handles one variable while ignoring the others.

Research from Israeli and Ascarza at Harvard Business School makes the diagnosis precise: many AI initiatives fail to scale because organisations lack "the organizational scaffolding to bridge technical potential and business impact." Technology enables progress, but without aligned incentives, redesigned decision processes, and a workforce equipped to collaborate with AI systems, even technically excellent pilots never become durable capabilities.

Eatough, Ferrazzi, and colleagues documented a similar pattern in their 2026 analysis of industry adoption data: 88% of companies report regular AI use, yet performance gains plateau because employees "experiment with new tools but don't integrate them deeply into how work gets done."

The problem is not which strategy you pick. The problem is that all three strategies treat AI as something to be installed rather than something to be learned.

What works instead: adaptive AI integration

Thin centre, thick edges

The organisations getting this right have converged on a structural pattern that borrows from both centralisation and decentralisation while avoiding the pathologies of each.

A small central team (rarely more than five to eight people in a mid-sized enterprise) owns three things: guardrails, shared infrastructure, and evaluation frameworks. They define what responsible AI use looks like. They maintain common tooling, data pipelines, and model registries that any team can build on. They set evaluation standards so results are comparable across initiatives. They do not own execution.

Domain teams own their own AI initiatives, operating within the central guardrails. A supply chain team runs its own demand forecasting pilots. A customer service team builds its own ticket classification system. Each team applies AI to problems they understand intimately, using infrastructure they don't have to build from scratch.

The line between what gets centralised and what doesn't follows a simple heuristic: centralise what creates leverage across teams (infrastructure, governance, evaluation), decentralise what requires domain knowledge to get right (problem identification, solution design, success criteria). The ARGO framework describes this as three interdependent layers: shared foundation standards, central advisory resources, and contextual local implementation. The key word is interdependent. The centre enables the edges. The edges inform the centre.
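In code terms, the division of labour looks something like this: the centre publishes the interface every initiative must satisfy so results stay comparable, and domain teams implement it however their context demands. A minimal sketch (the class names and single-metric criteria are illustrative assumptions):

```python
from abc import ABC, abstractmethod

class Pilot(ABC):
    """Central contract: every initiative, whoever runs it, must be
    evaluable in a comparable way. The centre owns this interface;
    domain teams own everything beneath it."""

    @abstractmethod
    def success_criteria(self) -> dict:
        """Measurable targets, defined before the pilot starts."""

    @abstractmethod
    def evaluate(self) -> dict:
        """Results in the same units as the criteria."""

class DemandForecastPilot(Pilot):
    """Owned by the supply chain team, not the central AI team."""

    def success_criteria(self) -> dict:
        return {"forecast_error_reduction_pct": 10.0}

    def evaluate(self) -> dict:
        # In practice: compare model output against a holdout period.
        return {"forecast_error_reduction_pct": 12.4}

pilot = DemandForecastPilot()
results, targets = pilot.evaluate(), pilot.success_criteria()
print("criteria met:", all(results[k] >= v for k, v in targets.items()))
```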

Structured experimentation over strategy decks

Eighteen-month transformation programmes fail because the technology, the organisation, and the competitive environment all change faster than the programme can adapt. The alternative is structured experimentation: time-boxed pilots with predefined success criteria, clear kill conditions, and a mechanism for scaling winners.

Ninety-day cycles work for most organisations. Long enough to build something real, short enough to fail cheaply. Each pilot starts with a specific business problem (not "explore AI for marketing"), defines what success looks like in measurable terms, and commits to a decision at the end: scale, iterate, or stop.

The scaling mechanism matters. Winners don't just get more budget. They get promoted onto the shared infrastructure layer so other teams can learn from and build on the approach. Failures get documented with equal rigour. The insight from a failed pilot is often more valuable than the output of a successful one.
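The end-of-cycle decision can be made mechanical enough that it actually happens at day ninety rather than drifting into a fourth month. A sketch of one possible decision rule (the thresholds and `PilotReview` fields are assumptions, not doctrine):

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    SCALE = "scale"      # promote onto the shared infrastructure layer
    ITERATE = "iterate"  # one more cycle, with revised criteria
    STOP = "stop"        # document the learning, release the budget

@dataclass
class PilotReview:
    target: float             # the predefined success criterion
    achieved: float           # the measured result at day 90
    kill_condition_hit: bool  # e.g. source data proved unusable

def decide(review: PilotReview) -> Decision:
    if review.kill_condition_hit:
        return Decision.STOP
    if review.achieved >= review.target:
        return Decision.SCALE
    if review.achieved >= 0.5 * review.target:  # close enough for one retry
        return Decision.ITERATE
    return Decision.STOP

print(decide(PilotReview(target=10.0, achieved=12.4, kill_condition_hit=False)))
# Decision.SCALE
```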

Building the feedback loop

The McClure and Gerdau research identifies five pillars of AI capability: Culture and Leadership, Human Capital and Operations, Data Architecture, Systems Infrastructure, and Governance and Regulatory Compliance. Organisations progress through stages (siloed, integrated, orchestrated) across all five simultaneously. You cannot be orchestrated on infrastructure while remaining siloed on culture.
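That constraint has a simple formal reading: an organisation's effective stage is the minimum across the five pillars, so the weakest pillar sets the ceiling. A sketch (pillar names follow the research; the numeric scoring is an illustrative assumption):

```python
from enum import IntEnum

class Stage(IntEnum):
    SILOED = 1
    INTEGRATED = 2
    ORCHESTRATED = 3

def effective_stage(assessment: dict) -> Stage:
    """The weakest pillar determines the organisation's effective stage."""
    return min(assessment.values())

assessment = {
    "culture_and_leadership": Stage.SILOED,        # the common blind spot
    "human_capital_and_operations": Stage.INTEGRATED,
    "data_architecture": Stage.INTEGRATED,
    "systems_infrastructure": Stage.ORCHESTRATED,  # where the money went
    "governance_and_compliance": Stage.INTEGRATED,
}

print(effective_stage(assessment).name)  # SILOED
```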

Cross-team learning is what moves organisations between stages. The mechanism needs to be lightweight enough that teams participate willingly and specific enough that captured knowledge is actionable. What worked, what failed, what surprised people. A monthly thirty-minute showcase where teams present results (including negative results) achieves more than a quarterly AI strategy review.

The goal is making institutional learning a byproduct of doing the work rather than a separate initiative with its own programme manager and steering committee. When a supply chain team discovers that their forecasting model degrades in specific seasonal patterns, that finding should reach the finance team building revenue projections within days, not months.
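The machinery for this can be almost nothing: a shared log of findings, each tagged with the teams it affects. A sketch (the `Finding` structure and routing-by-team-name are illustrative assumptions):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    """One entry in an organisation-wide learning log, captured as a
    byproduct of the work rather than a separate reporting exercise."""
    logged_on: date
    team: str
    pilot: str
    insight: str
    relevant_to: list  # teams that should see this within days

LOG: list = []

def record(finding: Finding) -> None:
    LOG.append(finding)

def findings_for(team: str) -> list:
    return [f for f in LOG if team in f.relevant_to]

record(Finding(
    logged_on=date(2026, 4, 2),
    team="supply-chain",
    pilot="demand-forecasting",
    insight="Forecast accuracy degrades ~15% during Q4 promotional periods",
    relevant_to=["finance", "merchandising"],
))

for f in findings_for("finance"):
    print(f"{f.team}: {f.insight}")
```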

The capability stack most organisations miss

Technical foundations that determine outcomes

Most AI failures aren't caused by choosing the wrong model. They're caused by insufficient foundations that make every project harder than it needs to be.

Data readiness sits at the base. This doesn't mean having a data lake. It means having clean, documented, accessible data with clear ownership and known quality characteristics. Organisations that skip this step find every AI project begins with three months of data wrangling before any modelling starts.
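What "clean, documented, accessible, with clear ownership" means can be pinned down as a lightweight dataset contract that every AI project checks before modelling starts. A sketch (the field names and completeness threshold are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class DatasetContract:
    """A minimal record of data readiness: ownership, documentation,
    and measured (not assumed) quality characteristics."""
    name: str
    owner: str                     # a named team, not "the data lake"
    description: str               # what one record represents
    refresh_cadence: str           # e.g. "daily", "monthly"
    known_issues: list = field(default_factory=list)
    completeness_pct: float = 0.0  # measured against expected volumes

    def ready_for_modelling(self) -> bool:
        # Threshold is an assumption; tune it per use case.
        return bool(self.owner and self.description) and self.completeness_pct >= 95.0

orders = DatasetContract(
    name="order_history",
    owner="supply-chain-data-team",
    description="One row per confirmed customer order since 2019",
    refresh_cadence="daily",
    known_issues=["returns recorded with a three-day lag"],
    completeness_pct=98.2,
)
print(orders.ready_for_modelling())  # True
```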

Integration architecture determines whether AI outputs reach the systems where decisions happen. A brilliant recommendation engine is worthless if its output can't flow into the CRM, the ERP, or the workflow tool where the human acts on it.

Evaluation infrastructure (the ability to measure whether an AI system is performing as expected in production, not just in testing) separates organisations that learn from organisations that guess.
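A production evaluation check can start very small, assuming you log predictions and eventually receive ground truth to score them against. A sketch (the metric, threshold, and function name are assumptions):

```python
def production_health(test_accuracy: float,
                      production_accuracy: float,
                      tolerated_drop: float = 0.05) -> str:
    """Compare live performance against the pre-deployment baseline."""
    drop = test_accuracy - production_accuracy
    if drop > tolerated_drop:
        return f"DEGRADED: accuracy fell {drop:.1%} below the test baseline"
    return "OK"

print(production_health(test_accuracy=0.91, production_accuracy=0.83))
# DEGRADED: accuracy fell 8.0% below the test baseline
```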

Organisational foundations that matter more

The Harvard Business School research makes this point with uncomfortable clarity: technology isn't the biggest challenge. Culture is.

Decision rights must be explicit. Who decides which AI projects get funded? Who decides when a pilot gets killed? Who decides whether a model is safe to deploy in production? Ambiguity in decision rights produces either paralysis (nobody decides) or chaos (everybody decides).

Incentive alignment shapes behaviour more than strategy documents. If business unit leaders are measured on quarterly revenue and AI projects take two quarters to show results, those projects will be deprioritised regardless of their strategic importance. Aligning incentives means adjusting how success is measured during the transition period.

Psychological safety for experimentation is non-negotiable. If a team that runs a failed AI pilot faces budget cuts or career consequences, every other team in the organisation learns to avoid experimentation. The organisations that build AI capability fastest are the ones where failure in a structured experiment carries no stigma, while failure to experiment does.

AI literacy is the final piece. This doesn't mean teaching everyone to code or understand transformer architectures. It means building enough shared vocabulary that business leaders can have productive conversations with technical teams, that domain experts can identify problems worth solving with AI, and that everyone can critically evaluate AI outputs rather than treating them as infallible.

Getting started without picking a lane

For organisations stuck between the three default strategies, the first step is honest assessment. Where are you on the SIO progression model across each of the five pillars? Most organisations overestimate their readiness in the areas they've invested in (typically infrastructure) and underestimate their gaps in the areas they've ignored (typically culture, governance, and human capital).

The second step is identifying two or three high-signal pilots. High-signal means the problem is well-defined, the data exists, the business impact is measurable, and a domain expert is willing to co-own the initiative with a technical counterpart. Avoid the temptation to pick the most impressive use case. Pick the one most likely to produce a clear result in ninety days.

The third step is building the minimum viable governance layer. This isn't a hundred-page AI policy document. It's a one-page set of guardrails covering data privacy, model evaluation, and escalation procedures. Expand it as you learn. Resist the impulse to over-engineer governance before you have anything to govern.
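Expressed as data rather than a policy document, a minimum viable governance layer might look like the sketch below. Every key, contact, and threshold here is an illustrative assumption; the point is the size, not the contents:

```python
GUARDRAILS = {
    "data_privacy": {
        "approved_data_classifications": ["public", "internal"],
    },
    "model_evaluation": {
        "require_predefined_success_criteria": True,
    },
    "escalation": {
        "incident_contact": "ai-governance@example.com",  # hypothetical
        "human_review_required_for": ["lending", "hiring", "health"],
    },
}

def guardrail_violations(pilot: dict) -> list:
    """Return a list of violations; an empty list means clear to proceed."""
    violations = []
    if pilot["data_classification"] not in GUARDRAILS["data_privacy"]["approved_data_classifications"]:
        violations.append("uses data outside approved classifications")
    if not pilot["has_success_criteria"]:
        violations.append("no predefined success criteria")
    if (pilot["domain"] in GUARDRAILS["escalation"]["human_review_required_for"]
            and not pilot["human_in_loop"]):
        violations.append("high-stakes domain without human review")
    return violations

print(guardrail_violations({
    "data_classification": "internal",
    "has_success_criteria": True,
    "domain": "lending",
    "human_in_loop": True,
}))  # []
```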

The first ninety days should produce two things: a completed pilot with documented results (positive or negative) and a working governance framework tested against a real project. Everything else (vendor selection, platform architecture, talent strategy, operating model design) can wait until you have evidence from your own organisation about what works.

What to ignore until later: comprehensive AI strategy documents, organisation-wide training programmes, multi-year transformation roadmaps, vendor bake-offs comparing fifteen platforms. These activities feel productive. They aren't. They're sophisticated forms of delay that substitute planning for learning.

The organisations that build durable AI capability share a common trait: they started before they felt ready, learned from what happened, and built organisational muscle through repetition. There is no shortcut to that process, and there is no strategy document that substitutes for it.

If you're ready to build AI capability that goes beyond pilots and exploits the full potential of what's possible today, get in touch.


From stalled pilots to compound capability

If your AI investment is producing technically sound work that doesn't translate into business impact, the gap is almost certainly in organisational scaffolding: decision rights, incentive alignment, and the cross-team learning loops that centralised and decentralised approaches both miss.

Agathon works with leadership teams at the point where strategy needs to become structure: governance frameworks, 90-day pilot programmes, and the operating model design that moves organisations from siloed to orchestrated across all five capability pillars.

  • Book a working session if you're ready to design your minimum viable governance layer, scope your first high-signal pilots, and build the feedback mechanisms that turn experimentation into institutional capability.
