Fine-tuning is the practice of further training an existing AI model on your own data so it handles a specific task, tone, or domain better. It is pricier than RAG, but essential when you need consistent output and a distinctive style.
Fine-tuning is the practice of continuing the training of an already existing AI model on your own dataset. You do not start from scratch, but from a pretrained model that already understands language or images, and teach it more about your specific task, tone, or domain. The model adjusts its weights based on the new examples.
Think of it as hiring an experienced copywriter and then onboarding them to your brand voice. The basic skills are already there; what they pick up is the way you do things. That is a lot faster than training someone who has never written a sentence.
Fine-tuning is most often talked about for LLMs, but it applies equally to image, speech, and classification models. The technique is a form of transfer learning: moving knowledge from a broad pretraining phase into a narrower use.
Typical situations where fine-tuning pays off:
Consistent tone and style. When customer communication has to sound the same every time, you teach that through hundreds of examples of approved copy.
Fixed output structure. If you need output in a strict JSON schema or a specific template, a fine-tuned model is far more reliable than a prompt instruction alone.
Jargon and domain knowledge. Legal language, medical terminology, or internal product codes that a general model does not know well.
Cost optimisation at scale. A smaller fine-tuned model can match a larger general model on repetitive tasks at a fraction of the cost per call.
Sensitive or restricted data. When you can host an open source model yourself and fine-tune it on data that is not allowed to leave the building.
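Whatever the reason, the raw material is always the same: curated input-output pairs. As a minimal sketch, here is training data in the chat-style JSONL format that several hosted fine-tuning APIs accept; the brand name and example copy are invented for illustration.

```python
import json

# A few hand-written examples in chat-style JSONL. The "Acme" brand voice
# and the messages themselves are made up; the structure is what matters.
examples = [
    {"messages": [
        {"role": "system", "content": "You write in the Acme brand voice: short, warm, no jargon."},
        {"role": "user", "content": "Draft a delivery-delay apology."},
        {"role": "assistant", "content": "Sorry, your order is running a day late. It ships tomorrow."},
    ]},
    {"messages": [
        {"role": "system", "content": "You write in the Acme brand voice: short, warm, no jargon."},
        {"role": "user", "content": "Announce our new opening hours."},
        {"role": "assistant", "content": "Good news: we are now open until 7 pm on weekdays."},
    ]},
]

# One JSON object per line, the usual shape of a fine-tuning upload.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")

# Quick sanity check: every line parses and ends with an assistant turn.
with open("train.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
assert all(row["messages"][-1]["role"] == "assistant" for row in rows)
print(f"{len(rows)} training examples written")
```

In practice you would have hundreds of such pairs, not two, and you would hold some back as a validation set.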
The most common approaches:
Full fine-tuning
All of the model's weights are updated. This gives the best quality, but it needs a lot of GPU memory and time. For large models the cost quickly becomes unaffordable for individual teams.
LoRA (Low-Rank Adaptation)
Instead of updating every weight, LoRA freezes the base model and trains a pair of small low-rank matrices alongside it. You get around 90 percent of the quality at roughly 1 percent of the training cost. The default choice for most business use cases since 2023.
QLoRA
LoRA combined with quantisation of the base model. Lets you fine-tune models of 70 billion parameters on a single GPU with 48 GB of memory. The breakthrough that made fine-tuning accessible.
Instruction tuning
A specific style where you teach a model to follow instructions better, often combined with reinforcement learning from human feedback (RLHF). That is how ChatGPT and Claude were originally tuned.
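The cost gap behind LoRA is easy to make concrete with back-of-the-envelope arithmetic. The layer size and rank below are illustrative, not taken from any particular model.

```python
# Trainable parameters for one 4096x4096 weight matrix:
# full fine-tuning vs LoRA (illustrative numbers).
d = 4096        # hidden size of the layer
r = 8           # LoRA rank: W stays frozen, we train B (d x r) and A (r x d)

full_params = d * d          # every weight updated
lora_params = 2 * d * r      # only the two low-rank factors

print(f"full fine-tuning: {full_params:,} trainable weights")
print(f"LoRA (rank {r}):   {lora_params:,} trainable weights")
print(f"ratio:            {lora_params / full_params:.2%}")
```

For this layer the LoRA update is 65,536 weights against 16.8 million, under half a percent, which is where the "1 percent of the cost" rule of thumb comes from once optimiser state and gradients are counted in.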
The three techniques are often mixed up, but they solve different problems.
Prompt engineering adjusts the instruction you pass to the model. Cheap, fast, no training. First choice when you want to start small.
RAG adds knowledge by fetching relevant documents from an index and feeding them into the prompt. Ideal when the knowledge changes frequently or when you need citations.
Fine-tuning changes behaviour. You teach the model a style, a tone, a fixed output format, or a specific task. Ideal for consistent output across many calls.
In practice you often combine all three: prompt engineering for the overall instruction, RAG for the current facts, fine-tuning for tone and format. For most business cases you can go a long way without ever fine-tuning.
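The combination can be sketched in a few lines. Everything here is a stand-in: `retrieve` represents your search index, `call_model` your (possibly fine-tuned) model endpoint; neither is a real API.

```python
def answer(question: str, retrieve, call_model) -> str:
    """Sketch of combining the three techniques. `retrieve` and
    `call_model` are hypothetical stand-ins, not a real library."""
    # Prompt engineering: the overall instruction.
    system = "Answer in two sentences and cite your sources."
    # RAG: fetch current facts and feed them into the prompt.
    docs = retrieve(question, top_k=3)
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    # Fine-tuning: the model itself has learned tone and output format,
    # so the prompt no longer has to spell those out.
    prompt = f"{system}\n\nSources:\n{context}\n\nQuestion: {question}"
    return call_model(prompt)

# Stubbed usage, just to show the wiring:
fake_retrieve = lambda q, top_k: ["Doc about opening hours", "Doc about returns"][:top_k]
fake_model = lambda prompt: f"(model reply to {len(prompt)} chars of prompt)"
print(answer("When are you open?", fake_retrieve, fake_model))
```

The point of the wiring: each technique owns one concern, so you can swap the index, the prompt, or the model independently.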
Common pitfalls to plan for:
Bad training data
Fine-tuning amplifies the patterns in your examples, mistakes included. Invest in a carefully curated dataset with clear input-output pairs and enough diversity.
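Even simple automated checks catch a surprising share of dataset problems before you pay for a training run. A minimal sketch, assuming pairs of (input, output) strings; the thresholds are illustrative, not standard values.

```python
def dataset_warnings(pairs, max_dup_share=0.05):
    """Flag two cheap problems before training: exact duplicate pairs
    and near-empty outputs. Thresholds are illustrative only."""
    warnings = []
    dup_count = len(pairs) - len({(inp, out) for inp, out in pairs})
    if dup_count / len(pairs) > max_dup_share:
        warnings.append(f"{dup_count} duplicate pairs")
    short = sum(1 for _, out in pairs if len(out.split()) < 3)
    if short:
        warnings.append(f"{short} outputs shorter than 3 words")
    return warnings

pairs = [("q1", "a proper answer here"),
         ("q1", "a proper answer here"),   # exact duplicate
         ("q2", "ok")]                     # near-empty output
print(dataset_warnings(pairs))
```

Real curation goes further (near-duplicates, label balance, topic diversity), but the habit of gating the training run on checks like these is the point.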
Catastrophic forgetting
Tune too aggressively and the model loses general skills it used to have. Test on a broader set than your own use case before releasing.
Cost of repeated training
Every time the base model gets a new version you have to decide whether to re-run your fine-tune. Budget for that as a recurring cost, not a one-off.
Sensitive data baked into weights
Data you feed the model during training ends up in the weights and is hard to remove. Think up front about personal data, copyright, and what happens when someone asks for their data to be deleted.
The European AI Act places specific obligations on anyone who fine-tunes a model for high-risk applications. You then count as a provider and must document which data was used, how quality was tested, and how bias is monitored. GDPR also remains in scope: personal data in the training set needs a legal basis and, in some cases, a DPIA.