Large Language Models

Analysis of large language model capabilities, limitations, and practical applications. We examine prompt engineering, fine-tuning approaches, retrieval augmentation, and how to integrate LLMs into products that create genuine value.

17 articles

LLMs
AI Strategy
AI Consulting

The key metrics to measure the ROI of your LLM deployments

Most organisations measure LLM success using traditional software metrics, whilst sitting on transformational cognitive infrastructure they barely know how to evaluate properly.

LLMs
AI Strategy
Machine Learning

Building a Secure LLMOps Pipeline: From Development to Production

Most organisations treat LLM security like traditional DevOps while ignoring novel attack vectors through model weights, training data, and prompt injection that conventional tools cannot detect.

AI Strategy
Machine Learning
LLMs

Why tokenisation matters: CharGPT vs ChatGPT

GPT-4 struggles to count letters in "CharGPT" versus "ChatGPT" because tokenisation (the process of breaking text into processable units) fundamentally shapes what AI models can perceive. This reveals why some companies' AI implementations fail at the architectural level rather than the reasoning level.

LLMs
AI Strategy
Machine Learning

Cost-effective LLM implementation: when to fine-tune and when to prompt

Most companies are burning money on LLM implementations by defaulting to expensive fine-tuning when sophisticated prompting could achieve comparable results at a fraction of the cost and complexity.

LLMs
Responsible AI
AI Strategy

Beyond the hype: creating measurable ROI with LLM implementations

Despite their transformative potential, Large Language Models (LLMs) require robust evaluation and strategic implementation to ensure they deliver real value rather than becoming a costly gamble.

LLMs
Machine Learning
Generative AI

Understanding how QLoRA works

QLoRA revolutionises the fine-tuning of large language models by combining quantisation and low-rank adaptation to significantly reduce memory usage while preserving performance, making advanced AI accessible to a broader range of users.

LLMs
Responsible AI
NLP

Small LLMs — why they matter

Small language models (SLMs), characterised by their efficiency and versatility, are emerging as pivotal tools for language processing. They offer significant advantages in resource optimisation and accessibility, challenging the dominance of larger models.

Knowledge Graphs
RAG
LLMs

Enterprise knowledge graphs as RAG foundations: implementation lessons

Enterprise knowledge graphs, enhanced by retrieval-augmented generation, are essential for transforming data silos into interconnected knowledge ecosystems, but their success hinges on data quality, scalability, security, and user-centric design.

LLMs
Machine Learning
NLP

Demystifying LoRA

Low-Rank Adaptation (LoRA) revolutionises the fine-tuning of large language models by enabling efficient model adaptation with minimal computational resources, while raising important ethical considerations.

Fractional CTO
Responsible AI
LLMs

Skills taxonomy for modern AI teams: beyond traditional data science

A modern AI skills taxonomy is essential for building versatile teams that go beyond traditional data science to include advanced technical, interdisciplinary, and ethical competencies for future innovation.

LLMs
AI Consulting

Your private LLM: deploying LLMs locally and offline using Ollama

Ollama enables local deployment of Large Language Models (LLMs), offering enhanced privacy, control, and efficiency for organisations seeking to harness the power of LLMs while maintaining oversight of their operational environment.

AI Strategy
LLMs
AI Agents

Unlocking business potential with AI agents

AI agents are intelligent systems that autonomously handle tasks, enhancing efficiency and reducing costs.

Generative AI
LLMs
Machine Learning

Process reward models: a simple explainer

Process reward models (PRMs) train AI by providing feedback at each step of a task, enhancing understanding and problem-solving abilities.

Generative AI
AI Strategy
LLMs

Should I build my own large language model (LLM)?

Organisations considering building their own large language model (LLM) should weigh the benefits of control and specialisation against challenges such as high computational demands and expertise requirements.

Generative AI
Machine Learning
LLMs

Understanding large language models: a group discussion analogy

By visualising the transformer as a dynamic conversation between human participants, we can grasp the core principles behind this influential neural network architecture.

Machine Learning
LLMs

Adversarial models: what are they and when should you use them?

A brief explanation of adversarial models and some potential use cases for them.

Generative AI
AI Consulting
LLMs

ChatGPT’s impact on the enterprise

Potential impacts of, and use cases for, ChatGPT in the enterprise.
