How we took a ghostwriting firm from copy-pasting into ChatGPT to a patent-pending AI product in beta with enterprise users.
From zero technical capability to a patent-pending, category-defining AI product in beta with enterprise users. Fractional CTO engagement spanning 20 months and counting.
Role: Fractional CTO owning product requirements, technical architecture, team recruitment, product roadmap, and development cadence from day one.

20 months
Engagement duration
5 hires
Team recruited
85%
Reduction in rendering time
4 novel
Patent innovations
Twenty months ago, the founder of a successful executive ghostwriting firm was running his AI workflow by copy-pasting transcripts, tone references, and outlines into ChatGPT, one conversation at a time. He had no technical team, no product, no prototype, and no roadmap. He knew AI could transform his industry but was, in his own words, "struggling to get started."
Today, his firm has SecondDraft, a patent-pending AI word processor in active beta, built on a novel interaction paradigm that doesn't exist anywhere else in the market. A nimble, AI-enabled technical team, recruited and led by Agathon, ships product on a structured cadence, and the firm is now onboarding both professional writers and enterprise users.
Most AI writing tools start from the technology: take a language model, wrap it in a text box, and let users prompt their way to a draft. We started from the opposite direction. The founder had spent years refining a multi-step editorial process: establishing tone through reference materials, layering in research and content, then structuring an argument that a human ghostwriter would execute. It was this writing process that informed the product's architecture.
Every technical decision was driven by the question: how does a professional ghostwriter actually think about this? The result is a product that encodes genuine editorial process rather than generic text generation. We implemented a prompt management system that allows the firm's non-technical writing experts to iterate on AI behaviour directly, without filing engineering tickets. Domain expertise flows right into the product without bottlenecks.
A novel interaction model. The product's core innovation is a dual-pane "translation" interface: writers work freely in a working draft on one side, and the AI produces a polished, publication-ready version on the other. Think Google Translate, but for editorial quality. Around this, we designed three distinct levels of AI interaction: precision annotations on selected text, threaded conversations on specific sections for structure and tone, and a global document chat for whole-piece coherence. A structured interview step captures the writer's intent (audience, format, voice) before any generation begins. Content locking at paragraph and portion level gives writers granular control over what the AI can and cannot touch. Nothing like this interaction model exists in any competing product.
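The content-locking idea can be illustrated with a minimal sketch. This is not SecondDraft's implementation; the segment structure and `rewrite_fn` are hypothetical stand-ins showing how locked spans can be guaranteed to pass through an AI rewrite untouched:

```python
from dataclasses import dataclass


@dataclass
class Segment:
    """A span of the working draft; locked spans are off-limits to the AI."""
    text: str
    locked: bool = False


def rewrite_unlocked(segments: list[Segment], rewrite_fn) -> list[Segment]:
    # Locked text is passed through verbatim rather than being sent to the
    # model, so the writer's protected wording can never be altered.
    return [
        seg if seg.locked else Segment(rewrite_fn(seg.text), locked=False)
        for seg in segments
    ]
```

Because the lock is enforced outside the model call, the guarantee holds regardless of how the generation behaves.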
An editorial intelligence layer. A RAG-based document system lets users feed in reference materials — research, prior publications, style guides — which are processed into searchable knowledge blocks and retrieved during the writing process. A multi-stage NLP pipeline, built on frontier models from OpenAI and Anthropic, handles the rendering. The product's moat deepens with use: the more editorial expertise is encoded, the harder it becomes to replicate.
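The chunk-and-retrieve step at the heart of a RAG system can be sketched in a few lines. This toy version uses word overlap in place of a real embedding model, and all names are illustrative rather than the product's actual API:

```python
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    """Toy bag-of-words vector standing in for a real embedding model."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def make_blocks(document: str, block_size: int = 40) -> list[str]:
    """Split a reference document into fixed-size knowledge blocks."""
    words = document.split()
    return [" ".join(words[i:i + block_size])
            for i in range(0, len(words), block_size)]


def retrieve(query: str, blocks: list[str], k: int = 3) -> list[str]:
    """Return the k blocks most similar to the writer's current passage."""
    q = embed(query)
    return sorted(blocks, key=lambda b: cosine(q, embed(b)), reverse=True)[:k]
```

A production system would swap the bag-of-words vectors for learned embeddings and a vector index, but the shape of the pipeline, chunk, embed, rank, retrieve, is the same.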
Performance that enables the paradigm. The dual-pane experience only works if it feels like a real-time writing partner, not a slow generation tool. The initial rendering pipeline took 10-12 seconds per pass. Through intelligent caching at paragraph boundaries, optimised token allocation, and streaming responses, we brought that down to 1-3 seconds, an 85% reduction.
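Caching at paragraph boundaries can be sketched as follows; this is a simplified illustration under the assumption that rendering is deterministic per paragraph, with `render_fn` standing in for the model call:

```python
import hashlib


def fingerprint(paragraph: str) -> str:
    return hashlib.sha256(paragraph.encode("utf-8")).hexdigest()


class ParagraphCache:
    """Cache rendered output keyed by paragraph hash, so paragraphs the
    writer hasn't touched skip the expensive model call on the next pass."""

    def __init__(self, render_fn):
        self.render_fn = render_fn  # stand-in for an LLM rendering call
        self.cache: dict[str, str] = {}
        self.calls = 0  # how many times the model was actually invoked

    def render_document(self, draft: str) -> str:
        rendered = []
        for para in draft.split("\n\n"):
            key = fingerprint(para)
            if key not in self.cache:
                self.calls += 1
                self.cache[key] = self.render_fn(para)
            rendered.append(self.cache[key])
        return "\n\n".join(rendered)
```

On a typical editing pass only one or two paragraphs change, so most of the document comes straight from cache; combined with streaming the fresh paragraphs back as they generate, this is the kind of structure that turns a 10-second pass into a 1-3 second one.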
Defensible IP. We led the technical analysis for a provisional patent filing covering four novel innovations: the content-locking mechanism coupled with LLM-based generation, the rendering pipeline architecture, the structured interview step combined with document rendering, and the command system for pinpointed AI control within the editor.
We wrote the product requirements document, oversaw the technical architecture, and recruited the entire technical team from scratch: five hires across ML engineering, full-stack development, frontend engineering, and UX design. For the frontend role alone, we designed and administered technical assessments for thirteen candidates, including paid take-home evaluations. We owned the product roadmap, made the architectural decisions, and ran the development cadence from day one.
Beyond the build, we contributed to investor materials, led competitive analysis, defined the ICP, and helped design a structured beta testing programme with retention metrics at days 7, 30, and 90. Working directly alongside the client, we co-delivered an AI writing workshop for a major enterprise client's content team. Fractional model, full executive ownership.
SecondDraft is in active beta, validating product-market fit through structured user feedback and retention metrics. The next phase deepens the AI's editorial intelligence through the proprietary writing library, hardens the security and compliance posture for enterprise deployment, and scales the team for commercial launch. The long-term vision is category-defining: not another AI writing assistant bolted onto existing workflows, but the first word processor built from the ground up with AI at its foundation, with Agathon leading the technical build.

How we helped a large communications division turn AI momentum into an implementation plan — in six weeks.
6 weeks
Engagement duration
11
Stakeholder interviews
2 delivered
Technical blueprints

How we turned fragmented AI experimentation into shared frameworks, tested playbooks, and a concrete product roadmap over two months.
2 months
Engagement duration
3 half-day
Workshops delivered
Full team
Team members trained
Every engagement starts with a conversation about what you're trying to achieve and whether we're the right fit to help.
Get in Touch