In a market where every target claims proprietary AI, investors face a critical challenge: distinguishing genuine technical capability from well-marketed wrappers around commodity models.
The stakes are significant.
Traditional due diligence wasn't built for this. Legal and financial reviews can't tell you whether the AI is real.

Dr Colin Kelly
I've been on every side of AI investment decisions: building AI products, leading research teams, advising boards, and assessing technical claims for major deals.
This isn't theoretical assessment. I know what good AI looks like because I've shipped it. I know what breaks at scale because I've fixed it. I know what genuine defensibility looks like because I've built moats and watched others crumble.
Six critical dimensions of AI investment assessment
Is this genuine AI or a thin wrapper? Automated code analysis plus human review of architecture, model choices, and implementation quality.
Will this advantage persist? Assessment of data assets, proprietary training, switching costs, and vulnerability to foundation model commoditisation.
What breaks at 10x? Review of infrastructure, cost dynamics, and technical debt that compounds with growth.
Can they execute the roadmap? Evaluation of technical leadership depth, key person dependencies, and realistic delivery capacity.
Where's the risk? Assessment of data provenance, privacy compliance, and responsible AI practices.
Does the tech serve the business? Gap analysis between technical capability and commercial claims.
PhD in Natural Language Processing, University of Cambridge: Original research in extracting knowledge from unstructured text
Head of Applied AI Research, AI Defence Platform: Built and led the team delivering production AI capabilities
AI Value Assessment Experience, IBM: Identified €50m+ in AI-driven benefits across telco, financial services, and automotive sectors
Founder, Committee for Responsible AI: Developed ethical risk frameworks now used in production systems
Fractional CTO: Currently guiding technical architecture for AI product builds
Invited Speaker, Cambridge MSt in AI Ethics & Society: Trusted voice on AI capability and governance
"I don't just assess AI. I've built it, deployed it, governed it, and advised boards on it. That's the difference." -- Colin Kelly, PhD (Cambridge), Agathon Founder
Standalone Technical Assessment Report including:
Clear investment recommendation framing
Is the AI genuine and defensible?
Severity ratings and mitigation options
What breaks, when, and what it costs to fix
Can they deliver what they promise?
How does their tech compare to alternatives?
Issues requiring founder/CTO clarification
Optional add-on: Integration with your legal and financial DD workstreams
Every report is tailored to the engagement: no templated box-ticking.
Typical engagement: 2 weeks | Confidential | NDA-protected
Every week you spend uncertain about technical claims is a week your competitors might be moving faster.
Or download our free checklist to get started
Typically within one week of engagement confirmation.
Preferred but not required. We can deliver a meaningful assessment from documentation, demos, and technical interviews alone, flagging any gaps as risks. Reluctance to share technical details is itself a finding.
Always. All engagements are fully confidential.
12 critical questions every investor should ask before committing to an AI investment
Get instant access to our comprehensive checklist: 12 critical questions investors should ask before any AI investment.