Short definition:
Model drift is the gradual decline of an AI model’s accuracy or usefulness over time, usually because the real world has changed but the model hasn’t.
In Plain Terms
AI models are trained on data — but that data represents the past. Over time, customer behavior, language, regulations, or market trends can shift.
When that happens, the AI’s predictions or answers might no longer reflect reality. This gradual mismatch is called model drift.
It’s not that the model is broken — it’s just outdated or out of sync with the current environment.
Real-World Analogy
It’s like using last year’s weather forecast to plan this week’s trip.
Even if it was accurate once, it no longer reflects what’s happening now — and relying on it might lead to bad decisions.
Why It Matters for Business
- Hidden risk to quality
An AI model might silently get worse at recommendations, fraud detection, or customer responses — damaging outcomes without anyone noticing.
- Requires monitoring and retraining
LLMOps or MLOps systems help detect drift early so models can be refreshed or fine-tuned (see the sketch after this list).
- Impacts regulated industries
In finance, healthcare, or legal fields, model drift can introduce compliance or safety issues if not addressed.
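To make "detecting drift early" concrete, here is a minimal sketch of one common approach: comparing the distribution of a feature in live traffic against the distribution the model was trained on, using a two-sample Kolmogorov-Smirnov test. The feature values, sample sizes, and 0.05 threshold below are illustrative assumptions for demonstration, not the API of any particular MLOps tool.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live feature's distribution differs significantly
    from the baseline distribution the model was trained on."""
    result = ks_2samp(baseline, live)
    return result.pvalue < alpha  # a low p-value suggests the distributions differ

# Illustrative data: order values at training time vs. after a market shift
baseline_orders = np.random.normal(loc=100, scale=15, size=5000)
live_orders = np.random.normal(loc=130, scale=25, size=5000)

if detect_drift(baseline_orders, live_orders):
    print("Drift detected: investigate inputs and consider retraining.")
```

In production, checks like this typically run on a schedule across many features, with alerts feeding into the retraining workflow rather than a simple print statement.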
Real Use Case
A retail company uses AI to forecast demand. But after a major supply-chain shift and an inflation spike, its model consistently underpredicts sales.
This signals model drift, and the team retrains the model with newer data to fix it.
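A minimal sketch of that fix, assuming a pandas DataFrame of daily sales: the forecaster is simply refit on a recent window of data so post-shift patterns outweigh the stale history. The column names and 180-day window are hypothetical; a real retraining pipeline would add validation, backtesting, and deployment steps.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def retrain_on_recent_data(sales: pd.DataFrame, window_days: int = 180) -> LinearRegression:
    """Refit the forecaster on only the most recent data so that post-shift
    demand patterns outweigh the stale, pre-shift history."""
    cutoff = sales["date"].max() - pd.Timedelta(days=window_days)
    recent = sales[sales["date"] >= cutoff]
    model = LinearRegression()
    # Hypothetical feature columns; a real pipeline would use its own schema
    model.fit(recent[["price", "promo", "week_of_year"]], recent["units_sold"])
    return model
```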
Related Concepts
- LLMOps / MLOps (The operational practices that help detect and manage drift)
- Retraining (The process of updating the model with new data to fix drift)
- Feedback Loops (Ongoing performance data helps catch drift early)
- Model Monitoring (Tracks outputs over time to flag degradation)
- Data Distribution Shift (The technical term for the root cause of drift — when the input data changes)