
Model drift

Model drift happens when an AI model gets worse at predicting because data or real-world patterns shift over time. Regular retraining and continuous monitoring keep predictions accurate and useful.

What is model drift?

Model drift means that an AI model loses accuracy over time because the world it was trained on keeps changing. A model learns from historical data; once today's reality stops matching that data, the predictions start to slip.

There are two main flavours: data drift and concept drift.

Data drift

Data drift happens when the input data itself changes.
The variables stay the same, but their distribution or values shift.
The model expects one pattern and gets handed another.

Example: imagine a model that flags spam emails. It was trained on older messages where spam often contained words like "lottery" or "win money". A year later, spammers switch to fresh wording and emoji to hide the same intent. The vocabulary of the inbox has changed, the inputs no longer look like the training data, and the model starts missing real spam.

Common causes:

  • New data collection methods or systems

  • Changes in user behaviour

  • Seasonal effects or external factors

  • Technical errors in upstream data sources
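The shift in the spam example can be made measurable by comparing the word frequencies of incoming mail against those of the training set. A minimal sketch, assuming we use total variation distance over token frequencies; the two tiny corpora and the 0.3 alert threshold are made-up illustrations, not values from any real system:

```python
from collections import Counter

def token_distribution(emails):
    """Relative frequency of each token across a list of emails."""
    counts = Counter(token for email in emails for token in email.lower().split())
    total = sum(counts.values())
    return {token: n / total for token, n in counts.items()}

def total_variation_distance(p, q):
    """0.0 means identical distributions, 1.0 means no overlap at all."""
    tokens = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in tokens)

# Hypothetical corpora: training-era spam vs. a window of fresh incoming mail.
training_emails = ["win money in the lottery", "claim your lottery prize", "win big money now"]
recent_emails = ["exclusive drop claim it now", "your reward awaits you", "exclusive reward drop"]

drift_score = total_variation_distance(
    token_distribution(training_emails), token_distribution(recent_emails)
)
if drift_score > 0.3:  # made-up alert threshold
    print(f"data drift suspected: distance {drift_score:.2f}")
```

The inputs have changed even though the task has not: the same check works for any feature whose distribution you can estimate, not just text.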

Concept drift

Concept drift happens when the relationship between input and output changes.
The data on paper looks the same, but its meaning has shifted.

Example: a model predicts whether someone will drive or take public transport, based on weather data. It learned that on rainy days most people take the car. A few years later, electric bikes and proper rain gear have become common, and remote work is part of the weekly routine. It still rains just as often, but the meaning of rain for someone's transport choice has shifted. The relationship between input (weather) and behaviour (transport mode) has moved.

Common causes:

  • New market trends or habits

  • Changes in policy or regulation

  • Unexpected events that reshape behaviour
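The transport example can be simulated in a few lines. A minimal sketch, assuming a frozen rule ("rain means car") scored against two hypothetical sets of observations, one from the training era and one from a few years later; the weather inputs are distributed identically in both, only the behaviour they map to has changed:

```python
def predict(weather):
    """Frozen rule learned from historical data: rain -> car, otherwise bike."""
    return "car" if weather == "rain" else "bike"

def accuracy(observations):
    """Fraction of (weather, actual_choice) pairs the frozen rule gets right."""
    return sum(predict(weather) == actual for weather, actual in observations) / len(observations)

# Same input distribution in both eras: half the days are rainy.
training_era = [("rain", "car"), ("rain", "car"), ("sun", "bike"), ("sun", "bike")]
# Years later it still rains just as often, but rain no longer implies the car.
current_era = [("rain", "bike"), ("rain", "car"), ("sun", "bike"), ("sun", "bike")]

print(f"training-era accuracy: {accuracy(training_era):.2f}")  # 1.00
print(f"current-era accuracy:  {accuracy(current_era):.2f}")   # 0.75
```

Note that a data-drift check on the inputs alone would see nothing wrong here, which is why concept drift usually has to be caught by tracking accuracy against real outcomes.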

Keeping a model fresh

Model drift is a natural phenomenon. The world changes, so a good model has to move with it.

Regular retraining and continuous monitoring are what keep predictions relevant. Track model accuracy over time, compare incoming data against the training distribution, and set alerts when either starts to slip.
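The monitoring-and-alerting loop described above can be sketched with a rolling accuracy window. The window size and the 0.85 threshold are illustrative assumptions that a real deployment would tune:

```python
from collections import deque

class AccuracyMonitor:
    """Tracks rolling accuracy over recent predictions and flags when it slips."""

    def __init__(self, window_size=100, threshold=0.85):
        self.outcomes = deque(maxlen=window_size)  # True/False per prediction
        self.threshold = threshold

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    @property
    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def drifting(self):
        # Only judge once the window is full, to avoid noisy early alerts.
        window_full = len(self.outcomes) == self.outcomes.maxlen
        return window_full and self.accuracy < self.threshold

monitor = AccuracyMonitor(window_size=4, threshold=0.85)
for pred, actual in [("spam", "spam"), ("ham", "ham"), ("ham", "spam"), ("ham", "spam")]:
    monitor.record(pred, actual)
print(f"rolling accuracy {monitor.accuracy:.2f}, alert: {monitor.drifting()}")
```

An alert from a monitor like this is the trigger for the retraining step: collect fresh labelled data, retrain, and redeploy before the slip becomes a failure.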

Last Updated: April 18, 2026