Analysis of large language model capabilities, limitations, and practical applications. We examine prompt engineering, fine-tuning approaches, retrieval augmentation, and how to integrate LLMs into products that create genuine value.
17 articles

The key metrics to measure the ROI of your LLM deployments
Most organisations measure LLM success using traditional software metrics whilst sitting on transformational cognitive infrastructure they barely understand how to evaluate properly.

Building a Secure LLMOps Pipeline: From Development to Production
Most organisations treat LLM security like traditional DevOps while ignoring novel attack vectors through model weights, training data, and prompt injection that conventional tools cannot detect.

Why tokenisation matters: CharGPT vs ChatGPT
GPT-4 struggles to count letters in "CharGPT" versus "ChatGPT" because tokenisation, the process of breaking text into processable units, fundamentally shapes what AI models can perceive. This reveals why some companies' AI implementations fail at the architectural level rather than the reasoning level.
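The letter-counting failure described above comes down to the model receiving token IDs rather than characters. A minimal sketch, using a hypothetical toy vocabulary and greedy longest-match splitting (real BPE tokenisers differ in detail, but the principle is the same):

```python
# Illustrative toy vocabulary; the splits and IDs below are hypothetical,
# not GPT-4's actual tokenisation.
toy_vocab = {"Chat": 1001, "Char": 3090, "G": 52, "PT": 2044}

def toy_tokenize(text, vocab):
    """Greedy longest-match tokenisation over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

print(toy_tokenize("ChatGPT", toy_vocab))  # ['Chat', 'G', 'PT']
print(toy_tokenize("CharGPT", toy_vocab))  # ['Char', 'G', 'PT']
```

Because the model sees integer IDs such as `[1001, 52, 2044]` rather than the raw string, counting occurrences of a single letter requires reasoning across token boundaries it never directly observes.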

Cost-effective LLM implementation: when to fine-tune and when to prompt
Most companies are burning money on LLM implementations by defaulting to expensive fine-tuning when sophisticated prompting could achieve comparable results at a fraction of the cost and complexity.

Beyond the hype: creating measurable ROI with LLM implementations
Despite their transformative potential, Large Language Models (LLMs) require robust evaluation and strategic implementation to deliver real value rather than becoming a costly gamble.

Understanding how QLoRA works
QLoRA revolutionises the fine-tuning of large language models by combining quantisation and low-rank adaptation to significantly reduce memory usage while preserving performance, making advanced AI accessible to a broader range of users.

Small LLMs — why they matter
Small language models (SLMs), characterised by their efficiency and versatility, are emerging as pivotal tools for language processing, offering significant advantages in resource optimisation and accessibility, while challenging the dominance of larger models.

Enterprise knowledge graphs as RAG foundations: implementation lessons
Enterprise knowledge graphs, enhanced by retrieval-augmented generation, are essential for transforming data silos into interconnected knowledge ecosystems, but their success hinges on data quality, scalability, security, and user-centric design.

Demystifying LoRA
Low-Rank Adaptation (LoRA) revolutionises the fine-tuning of large language models by enabling efficient model adaptation with minimal computational resources, while raising important ethical considerations.
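The efficiency claim can be made concrete with a toy NumPy sketch (the dimensions are hypothetical and far smaller than a real LLM layer): rather than updating a frozen d×d weight matrix, LoRA trains two small low-rank factors and adds their product into the forward pass.

```python
import numpy as np

d, r = 1024, 8  # hypothetical layer width and LoRA rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection; zero init
                                         # leaves the model unchanged at start

def lora_forward(x, alpha=16):
    """Forward pass with the LoRA update: W x + (alpha / r) * B (A x)."""
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = d * d          # parameters touched by full fine-tuning
lora_params = d * r + r * d  # parameters LoRA actually trains
print(f"full fine-tune: {full_params:,} params")
print(f"LoRA (r={r}):   {lora_params:,} params")
```

At these toy dimensions LoRA trains roughly 1.6% of the layer's parameters; the gap widens with larger layers, which is what makes adaptation feasible on modest hardware.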

Skills taxonomy for modern AI teams: beyond traditional data science
A modern AI skills taxonomy is essential for building versatile teams that go beyond traditional data science to include advanced technical, interdisciplinary, and ethical competencies for future innovation.

Your private LLM: deploying LLMs locally and offline using Ollama
Ollama enables local deployment of Large Language Models (LLMs), offering enhanced privacy, control, and efficiency for organisations seeking to harness the power of LLMs while maintaining oversight of their operational environment.

Unlocking business potential with AI agents
AI agents are intelligent systems that autonomously handle tasks, enhancing efficiency and reducing costs.

Process reward models: a simple explainer
Process reward models (PRMs) train AI by providing feedback at each step of a task, enhancing understanding and problem-solving abilities.
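A toy illustration of the step-level idea (the worked steps and the stub verifier below are entirely hypothetical): an outcome-only reward inspects the final line, while a process reward scores every step, so it can localise exactly where a chain of reasoning went wrong.

```python
# Hypothetical three-step "solution" with an error in the middle.
solution_steps = [
    "12 + 30 = 42",   # correct step
    "42 * 2 = 85",    # arithmetic error
    "85 - 5 = 80",    # internally consistent with the wrong step above
]

def check_step(step):
    """Stub step-verifier: re-evaluates the arithmetic in 'expr = result'."""
    expr, expected = step.split(" = ")
    return float(eval(expr) == int(expected))

# Process reward: one score per step, pinpointing the broken link.
process_rewards = [check_step(s) for s in solution_steps]
print(process_rewards)  # [1.0, 0.0, 1.0]

# Outcome-only reward: checks just the last line, scores 1.0 here,
# and so misses the error entirely.
outcome_reward = process_rewards[-1]
```

A real PRM replaces `check_step` with a learned model, but the training signal has the same shape: per-step feedback rather than a single end-of-task score.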

Should I build my own large language model (LLM)?
Organisations considering building their own large language models (LLMs) should weigh the benefits of control and specialisation against challenges such as high computational demands and expertise requirements.

Understanding large language models: a group discussion analogy
By visualising the transformer as a dynamic conversation between human participants, we can grasp the core principles behind this influential neural network architecture.

Adversarial models: what are they and when should you use them?
A brief explanation of adversarial models and some potential use cases for them.

ChatGPT’s impact on the enterprise
Potential impacts of, and use cases for, ChatGPT in the enterprise.