Why can't I see the reason for a specific workout in my plan?

By Dr. Sean Radford, 30th August 2025

That's an excellent question, and we understand the desire to know the specific "why" behind each workout. Our advanced AI is built on deep learning, and that architecture makes it inherently difficult to provide a simple, step-by-step reason for its decisions.

These powerful systems are often referred to as 'black boxes' because their internal decision-making processes are not easily understood by humans. This is not a flaw, but a direct consequence of the very complexity that makes them so effective.

Here are the key reasons for this complexity, based on established principles in AI:

Immense Scale and Non-Linearity

Our AI analyses your entire running history, along with numerous other data points. This information is processed through a highly complex mathematical model with a vast number of interconnected parameters. The relationships it learns are non-linear, meaning there is no simple, straight path from an input (like your last run) to an output (your next workout) that a human can easily trace.
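
To make this concrete, here is a deliberately tiny sketch in Python. The weights, inputs, and two-layer structure are all invented for illustration; they are nothing like our production model, which has many more parameters and layers:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))    # hypothetical first-layer weights
W2 = rng.normal(size=(1, 4))    # hypothetical second-layer weights

def predict(x):
    """Tiny two-layer network: W2 @ relu(W1 @ x)."""
    hidden = np.maximum(0.0, W1 @ x)   # ReLU: the non-linear step
    return (W2 @ hidden).item()

# Illustrative inputs, e.g. normalised pace, heart rate, fatigue:
x = np.array([0.8, 0.5, 0.2])
print(predict(x))

# The ReLU switches hidden units on and off depending on the input,
# so there is no single closed-form path from x to the output that a
# person can read off; real models repeat this across millions of
# parameters and many layers.
```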

Abstract Feature Learning

Unlike a human coach, who works from generalised and often anecdotal rules, our AI learns deep, abstract patterns from your data. These learned 'features' or 'representations' often lack a direct, human-understandable meaning. For instance, the AI might identify a highly predictive mathematical pattern in your heart-rate recovery across hundreds of runs. That pattern exists only as a set of learned numbers inside the model; it has no simple, real-world name or single explanation.
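
As a simplified illustration (the encoder weights and recovery data below are entirely made up), a learned feature is just a vector of numbers the model produces, and none of its entries carries a human-readable label:

```python
import numpy as np

rng = np.random.default_rng(1)
encoder = rng.normal(size=(8, 200))   # hypothetical learned encoder weights

def learned_features(recovery_history):
    """Compress 200 runs of heart-rate-recovery data into 8 abstract features."""
    return encoder @ recovery_history

history = rng.normal(loc=30.0, scale=5.0, size=200)  # invented recovery values
print(learned_features(history))   # eight predictive but nameless numbers
```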

Distributed Knowledge

A decision is never based on one single factor. Instead, information is processed collectively across the entire system. The reason for scheduling a specific workout is the result of a highly complex, collective computation across thousands of factors, not a single 'if-then' rule we can point to, and not a calculation any human could carry out by hand.
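
Here is a toy sketch of that idea, again with invented numbers: when a score is spread across thousands of small contributions, removing any single factor barely changes the outcome, so no one factor is ever "the reason":

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(scale=0.01, size=5000)   # hypothetical learned weights
factors = rng.normal(size=5000)               # hypothetical input factors

print(f"full score: {weights @ factors:.4f}")

for i in range(3):                 # zero out a few factors, one at a time
    ablated = factors.copy()
    ablated[i] = 0.0
    print(f"without factor {i}: {weights @ ablated:.4f}")

# Each ablation moves the score only marginally; the decision lives in
# the collective computation, not in any single if-then rule.
```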

The Trade-Off: Performance vs. Interpretability

In the field of AI, there is an inherent trade-off between a model's predictive accuracy and its interpretability. Simpler models, like decision trees, are easy to understand but are often less powerful and less personalised. Our primary goal is to provide the most effective and adaptive training plan possible, and the current complex model architecture is what allows us to achieve this high level of performance.
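
The sketch below illustrates this trade-off on synthetic data using scikit-learn (purely illustrative; it is not our model or your data). A shallow decision tree can be printed as readable if-then rules, while a neural network exposes only its weight matrices:

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

# Toy regression problem standing in for "running history -> workout".
X, y = make_regression(n_samples=200, n_features=4, random_state=0)

# Interpretable model: a shallow tree we can print as if-then rules.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))

# Flexible model: an MLP whose 'knowledge' is only weight matrices.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(X, y)
print([w.shape for w in mlp.coefs_])   # e.g. [(4, 32), (32, 32), (32, 1)]
```

The tree's rules are easy to follow but coarse; the network fits the data far more flexibly, yet its "reasoning" is distributed across those weights rather than stated as rules. That is the trade-off we have made in favour of performance.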

While we can't show a simple "reason" today, this complexity is what powers the highly personalised plan you receive. We are actively following the field of Explainable AI (XAI) and will continue to evaluate new techniques as they become available.