Technical and strategic perspectives on generative AI systems beyond text: image synthesis, code generation, multi-modal models, and how organisations can identify genuine use cases amid considerable hype.
15 articles

AI-enhanced scenario planning: techniques for modern boardrooms
Modern boardrooms are squandering AI's potential in scenario planning by digitising outdated methods rather than implementing sophisticated systems that explore true possibility spaces through causal inference, complex adaptive modelling, and counterfactual testing.

Self-improving systems: the AI architecture pattern everyone talks about, nobody builds
Despite the hype, truly self-improving AI systems remain theoretical due to fundamental technical and organisational barriers, with today's "self-improving" implementations being merely constrained optimisation within predetermined parameters.

Richard Sutton's bitter lesson explains why your AI solution feels shallow
Sutton's bitter lesson reveals that most AI implementations feel shallow because they prioritise domain expertise over computational scale, leaving roughly 80% of potential untapped.

Top AI consulting companies for 2025: the rise of boutique technical excellence
In 2025, boutique AI consulting firms are outpacing traditional giants by offering tailored, innovative solutions that meet specific client needs, reshaping the consulting landscape.

Contextual chunking strategies that improve RAG performance
In the evolving AI landscape, mastering contextual chunking is essential for optimising Retrieval-Augmented Generation (RAG) performance.

Understanding how QLoRA works
QLoRA revolutionises the fine-tuning of large language models by combining quantisation and low-rank adaptation to significantly reduce memory usage while preserving performance, making advanced AI accessible to a broader range of users.

Diffusion models: a simple explainer
Diffusion models revolutionise generative AI by generating high-quality images, videos, and molecules through a dual process of noise addition and reconstruction, while raising significant ethical and computational challenges.

An executive’s guide to AI agents
AI agents are transformative software entities that enhance operational efficiency and decision-making in businesses by autonomously performing tasks and leveraging advanced technologies like generative AI.

Process reward models: a simple explainer
Process reward models (PRMs) train AI by providing feedback at each step of a task, enhancing understanding and problem-solving abilities.

Should I build my own large language model (LLM)?
Organisations considering building their own large language models (LLMs) should weigh the benefits of control and specialisation against challenges like high computational needs and expertise requirements.

Can you reason with LLMs?
Research from Apple reveals that large language models struggle with genuine mathematical reasoning and perform inconsistently on complex maths problems.

Understanding large language models: a group discussion analogy
By visualising the transformer as a dynamic conversation between human participants, we can grasp the core principles behind this influential neural network architecture.

What is multi-modal AI?
Multi-modal AI represents the evolution from single-stream processing to systems that integrate multiple information types (text, images, audio) simultaneously—mimicking human cognition and unlocking transformative capabilities most organisations fail to fully exploit.

ChatGPT’s impact on the enterprise
Potential impacts and use cases of ChatGPT in the enterprise.

Recent trends in NLP
Examining some recent trends in NLP and AI, including transformer-based models, transfer learning, multi-modal AI, and conversational AI.