How we turned fragmented AI experimentation into shared frameworks, tested playbooks, and a concrete product roadmap over two months.
Over a two-month AI Leadership Advisory engagement, a fragmented team experimenting with ChatGPT individually became a team with shared frameworks, tested playbooks, and a concrete roadmap for AI-enabled products and workflows.
Role: AI Leadership Advisory, working directly with senior leadership over two months. Designed and delivered the full programme: process mapping, team training, strategic innovation, and implementation roadmapping.

2 months
Engagement duration
3
Half-day workshops delivered
Full team
Team members trained
3
Product concepts validated
The firm's leadership could see what was coming. AI was going to reshape strategic communications, and soon. Pitch writing, proposal generation, media targeting, stakeholder analysis: the work the team did every day was the kind of knowledge work that language models were getting good at.
The team had already started experimenting. People were using ChatGPT for drafts, research summaries, and brainstorming. But approaches varied wildly across the firm, with no shared understanding of what worked, no way to tell good output from mediocre output, and no strategy for where AI could make a material difference.
The commercial pressure was real. If the firm didn't build AI into its craft systematically, a competitor or a software company would package that craft as a product and undercut it.
Most AI training for professional services firms stays generic: what a language model is, how to write a prompt, a handful of use cases. Teams walk away with general awareness but no ability to apply it to their actual work.
We mapped the firm's real workflows before designing anything: how pitches actually get written, how proposals move from brief to submission, where the bottlenecks sit, and what institutional knowledge lives in people's heads rather than in systems. The training was built around their processes, their pain points, and their client scenarios.
So the team wasn't learning abstract prompt engineering. They were learning how to make their specific pitch-writing process faster, stress-test their own proposals, and judge whether an AI output met their professional standards.
We delivered a three-phase programme, each phase built around a half-day workshop with the full team, plus playback sessions with leadership afterwards.
Process improvement. We ran interviews across the team, documented existing workflows for pitch writing, proposal development, and research synthesis, and identified where AI could deliver efficiency gains versus where it would add complexity without value. The output was a set of practical playbooks the team could use immediately.
LLM literacy and evaluation. A hands-on workshop covering how language models actually work, prompt engineering techniques applied to the firm's real use cases, and how to evaluate AI output. We built evaluation frameworks specific to communications work: criteria for accuracy, tone, and professional standards that the team could apply consistently. They also learned to build and configure custom GPTs tailored to their workflows.
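To make that concrete, here is a minimal sketch of what a rubric-based evaluation framework of this kind can look like once written down as code. The criteria, weights, and threshold below are illustrative placeholders, not the firm's actual rubric.

```python
# Illustrative sketch only: criteria, weights, and the threshold are
# hypothetical, not the firm's actual evaluation rubric.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    weight: float  # relative importance; weights sum to 1.0
    check: str     # what a reviewer looks for


RUBRIC = [
    Criterion("accuracy", 0.4, "Claims verifiable; no invented facts, names, or quotes"),
    Criterion("tone", 0.3, "Matches the client's voice and the channel's register"),
    Criterion("standards", 0.3, "House style, structure, and compliance requirements met"),
]


def score_output(ratings: dict[str, int]) -> float:
    """Combine per-criterion reviewer ratings (1-5) into one weighted score."""
    return round(sum(c.weight * ratings[c.name] for c in RUBRIC), 2)


# Example: a reviewer rates an AI-drafted pitch paragraph.
ratings = {"accuracy": 4, "tone": 5, "standards": 3}
print(score_output(ratings))  # 4.0 -- above a notional 3.5 'usable with edits' bar
```

The value isn't the arithmetic; it's that every reviewer applies the same named criteria, which is what made judgements consistent across the team.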
Innovation and strategic roadmap. We facilitated an ideation process that produced three validated product concepts, each an AI-enabled service offering that could differentiate the firm in its market. We assessed technical feasibility, evaluated build-versus-buy options, and delivered a three-track implementation roadmap covering immediate workflow automation, product development, and ongoing optimisation.
A dedicated prompt engineering follow-up session covered advanced techniques (reverse chain-of-thought reasoning, step-back prompting, deep research workflows) with live application to the firm's scenarios. The team worked through real pitch-writing and proposal tasks using the techniques in the room.
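As a flavour of what that looked like in the room, here is a hypothetical illustration of the step-back pattern applied to a pitch-writing task: ask the model the general question first, then feed its answer back in with the specific brief. The `ask` helper and both prompts are invented for illustration, not taken from the firm's materials.

```python
def ask(prompt: str) -> str:
    """Stand-in for whatever chat-completion call the team uses (hypothetical)."""
    return "<model response>"


task = ("Draft a one-paragraph media pitch for a fintech client's "
        "Series B funding announcement.")

# Step back: elicit the general principles before asking for the draft.
principles = ask(
    "Before drafting anything: what makes a media pitch for a funding "
    "announcement compelling to a business journalist? List the principles."
)

# Then ground the specific task in those principles.
draft = ask(f"Apply these principles:\n{principles}\n\nTask: {task}")
print(draft)
```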
Throughout, we worked directly with senior leadership: fortnightly check-ins, playback sessions after every workshop to align on findings and next steps, and a programme that adapted as new priorities emerged. When leadership raised a strategic question about a potential technology partnership mid-engagement, we pivoted part of the work into a competitive analysis to inform that decision.
Every deliverable was designed to be usable by the team independently after the engagement ended, with capability transfer as the explicit goal rather than ongoing dependency.
The full team, from junior account executives to senior leadership, gained practical prompt engineering and LLM evaluation skills. A shared prompt library replaced the fragmented individual experimentation they'd started with.
We produced a detailed pitch-writing automation roadmap for the firm's highest-volume workflow, with architecture, data requirements, and success metrics defined. Three AI-enabled product concepts were validated for technical feasibility and market differentiation. A three-track implementation roadmap laid out the sequencing: immediate automation, product development, and ongoing optimisation, with phased investment requirements.
The evaluation frameworks for assessing AI output against professional standards proved particularly durable. The team continues to use them as the baseline for judging quality across new tools and workflows.

How we took a ghostwriting firm from copy-pasting into ChatGPT to a patent-pending AI product in beta with enterprise users.
20 months
Engagement duration
5
Team members recruited
85%
AI rendering speedup

How we helped a large communications division turn AI momentum into an implementation plan — in six weeks.
6 weeks
Engagement duration
11
Stakeholder interviews
2
Technical blueprints delivered
Every engagement starts with a conversation about what you're trying to achieve and whether I'm the right person to help.
Get in Touch