The Knowledge–
Performance Link

Most retail training produces no measurable commercial impact. This study examines why, and what changes the outcome.

2025–2026 Observation Period · 275 Points of Sale · 3 Maisons

The Question

Every Maison trains its teams.
Almost none can prove it works.

The industry spends heavily on retail learning. Yet when asked to quantify the commercial return, most L&D teams cannot. The question is not whether training should work. It is why most programs produce no measurable result, and what makes the difference when they do.


We cross-referenced granular learning data with store-level sales performance across three Maisons, over 6 to 12 months, to isolate the variables that separate effective training from expensive noise.

Methodology

How we measured it.

Learning data

Completion rates, quiz scores, retake behaviour, time-on-platform. Captured automatically from ToldUntold's backend.

Source: ToldUntold platform
Merged at store level

Sales data

Monthly store-level sell-through, current year and N−1, provided by each Maison's commercial team.

Source: Client POS / ERP
  • Matching. Each store's learning engagement is matched against its own year-over-year sales change. No aggregation hides underperformers.
  • Controls. Year-over-year comparison eliminates seasonality. Pre-deployment baselines confirm engagement groups were comparable before launch.
  • Statistical tests. Pearson correlation, Granger causality, multivariate regression with R² decomposition. Full methodology in the white paper.
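The store-level matching step can be sketched in a few lines. This is a minimal illustration with made-up numbers: the store IDs, column names, and values are hypothetical, not the study's actual data. It shows the core mechanics described above: compute each store's year-over-year change against itself, merge learning engagement at store level, and test the correlation.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical inputs: one row per store (values are illustrative only).
learning = pd.DataFrame({
    "store_id": ["S1", "S2", "S3", "S4"],
    "completion_rate": [0.92, 0.35, 0.71, 0.15],  # share of content finished
})
sales = pd.DataFrame({
    "store_id": ["S1", "S2", "S3", "S4"],
    "sales_cy": [1.30e6, 0.98e6, 1.12e6, 0.90e6],  # current-year revenue
    "sales_py": [1.20e6, 1.00e6, 1.05e6, 0.95e6],  # same period, year N-1
})

# Year-over-year change controls for seasonality: each store vs. itself.
sales["yoy_change"] = sales["sales_cy"] / sales["sales_py"] - 1

# Merge learning engagement with sales at store level -- no aggregation.
merged = learning.merge(sales, on="store_id", how="inner")

r, p = pearsonr(merged["completion_rate"], merged["yoy_change"])
print(f"r = {r:.2f}, p = {p:.3f}")
```

The point of the store-level merge is that every underperformer stays visible: a network-level average could mask stores where high engagement coexists with falling sales.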
The Sample

Three Maisons. 275 stores.
1,850 retail professionals.

Maison A

Leather Goods & Accessories

Points of sale: 120
Retail associates: 850
Observation: 12 months
Markets: 14
Maison B

Fragrances & Cosmetics

Points of sale: 95
Retail associates: 620
Observation: 12 months
Markets: 9
Maison C · Replicability validation

Ready‑to‑Wear

Points of sale: 60
Retail associates: 380
Observation: 6 months
Markets: 6

Maison C joined 6 months later. Different brand, different category, shorter window. It serves as an independent validation: does the same pattern emerge?

Finding 01

Engagement that
actually lasts.

Within 12 months, 91% of team members completed their entire learning journey. Quiz scores reached 88–89%, confirming genuine retention.


Maison C, at 6 months, already reaches 72% completion and 81% quiz scores, tracking the same curve as A and B at the same stage.

Industry context

Standard e-learning completion rates in retail sit at 15–25%. The correlation we measure depends on a depth of engagement that most platforms never reach.

Completion rate over 12 months (%)

Completion = percentage of all available learning content finished. Maison C (dashed) started 6 months later. Red band = industry average range.

Finding 02 · The core finding

Trained stores
sell more.

We compared each store's learning engagement with its year-over-year sales performance across 120 stores from Maison A.

Store-level: completion rate vs. year-over-year sales change

Each point = 1 store. r = 0.53, p < 0.001. Sales change is year-over-year to account for seasonality.

Stores grouped by engagement level:

Median sales change by engagement group
Key finding

Trained stores outperform untrained stores by +4.9 percentage points of sales growth. The "in progress" group falls in between: a dose-response pattern consistent with a causal relationship.

Finding 03 · Causality

Learning drives sales.
Not the reverse.

Correlation is not causation. To establish directionality, we applied a Granger causality test.


This statistical procedure tests whether past learning activity significantly predicts future sales, beyond what past sales alone would predict.


Result: learning activity Granger-causes sales improvement at the +1 month lag (F = 12.8, p < 0.001). The reverse direction, sales causing learning, is not significant (F = 0.9, p = 0.42).


The causal arrow points one way. The 4–6 week activation window is exactly what you'd expect for knowledge to translate into stronger client interactions.

Granger causality: F-statistic by time lag

Bars above the dashed line are statistically significant (p < 0.05). Only positive lags (learning precedes sales) cross the threshold consistently.

Finding 04 · Multivariate analysis

25% of what differentiates
store performance is learning.

Sales in luxury retail are driven by dozens of factors. A 2025 PRISMA review of 80 empirical studies identified 151 variables affecting purchase intention and 84 affecting actual buying behaviour.1 Most of these, however, are measured via consumer surveys, not real transaction data.


Our approach is different. We ran a multivariate regression on actual point-of-sale revenue, controlling for two factors we could objectively measure alongside learning engagement:


  • Store-level fixed effects, absorbed by the year-over-year design: each store is compared to itself. This implicitly controls for location, foot traffic profile, store size, team tenure, and local competitive dynamics.
  • OECD Consumer Confidence Index, matched monthly to each store's market. This publicly available macro indicator captures shifts in consumer spending appetite that affect all stores in a given country regardless of training.2

After controlling for these factors, learning engagement alone explains 25% of the remaining variance in year-over-year sales change.

Is 25% credible?

In context, yes. A PLS-SEM study on luxury store environments found that all sensory stimuli combined (music, scent, lighting, staff interaction) explain 24.6% of the variance in customer emotions.3 The CXG "Advisor Effect" report, based on 100,000 client evaluations across 12 Maisons, shows that a single negative advisor interaction leads 78% of clients to abandon a purchase, while effective clienteling increases average basket by 30–50%.4 Our finding sits squarely in this range, and it measures the one lever a Maison can scale across its entire network.

Variance explained (R² decomposition)

Multivariate regression on YoY store sales change. Store fixed effects absorbed via the YoY design. CCI data: OECD (oecd.org). Only directly measured factors are shown; the residual includes foot traffic variations, stock availability, local promotions, and other unmeasured variables.

A note on what we don't claim

Unlike studies that decompose R² across many factors using survey-based proxies, we only show what we directly measured. We do not claim to know the exact contribution of foot traffic or stock levels. The residual is large, as it should be in any honest model of retail performance.

Rigour

What this study
does not prove.

We ran this analysis knowing full well that retail performance is messy. Here are the objections we asked ourselves, and where we landed.

"The best stores probably just train more. You're measuring motivation, not training impact."

This was our first concern too. So we looked at what the three engagement groups (low, in progress, trained) were doing before the platform launched. Their sales performance was comparable. The gap only opened after deployment, and it widened over time. That said, we can't fully rule out a hidden manager-quality variable: a great store manager might both push training adoption and independently drive sales. The Granger test helps here, but it's not a randomized controlled trial. We're transparent about that.

"Isn't this just seasonality? Some months are always better."

All sales figures are year-over-year: each month compared to the exact same month the previous year. That's the standard way to strip out seasonal patterns. We also included the OECD Consumer Confidence Index as a macro control, because a country-wide spending mood swing could affect all stores simultaneously and look like a training effect if you don't account for it.

"Any new initiative creates a buzz. This could just be the novelty effect."

We thought about this one a lot. If it were novelty, you'd expect the effect to spike early and then decay. We see the opposite: the correlation between learning depth and sales is stronger at month 12 than at month 6. More importantly, it tracks with quiz scores and completion rates, not just logins or time-on-platform. People who merely opened the app but didn't learn don't show the same sales uplift. The CXG Advisor Effect report4 documents a similar pattern: it's the depth of advisor knowledge, not the existence of a training program, that moves the needle on client conversion.

"We already have an e-learning platform. Why would yours be different?"

The mechanism we measure is universal: deeper knowledge leads to better sales conversations. But reaching that depth is a format problem. Traditional e-learning modules are long, generic, and disconnected from what's actually on the shop floor. Completion rates in retail sit between 15 and 25%.5 That's not a motivation issue, it's a design issue.

Our approach is structurally different in two ways. First, every learning fragment is a short, visual micro-episode tied to a specific product, collection, or savoir-faire. The advisor doesn't learn "leather goods" in the abstract; they learn the story behind the bag they'll present tomorrow. That anchoring to real products changes how knowledge is encoded and recalled. Second, the format itself (cinematic micro-content, gamified quizzes, social mechanics) drives the kind of repeated, voluntary engagement that produces 91% completion over 12 months. The sales signal we detect depends on a depth of retention that most platforms never reach, because the format doesn't sustain attention long enough to get there.

"R² = 25% means 75% is unexplained. That's a lot."

It is. And we think that's honest. A 2025 PRISMA review catalogued 151 distinct variables influencing luxury purchase decisions,1 from store atmosphere to social influence to macroeconomic conditions. No single factor dominates. The ANCOVA model by Jourdan & Pacitto on selective cosmetics distribution achieved R² = 74.7%, but with four interacting structural variables (store type, geography, distribution density, market tenure) and 28,000 survey responses across 14 countries.6 In that context, a single behavioural variable explaining a quarter of the variance in actual sales data is a strong signal, not a weak one.

The Bottom Line

The evidence, summarized.

91%
completion at 12 months
+4.9 pts
sales differential, trained vs. untrained
4–6 weeks
learning-to-impact activation window
25%
of store performance variance explained by learning

Across three Maisons, 275 stores, and up to 12 months, the pattern is consistent: depth of learning engagement predicts commercial performance, with a clear causal direction and a 4–6 week activation window.

Go deeper

Download the full methodology

The complete white paper includes detailed statistical tables, confidence intervals, Granger causality analysis, multivariate regression outputs, and the full data requirements specification.

Request the White Paper
Sources

References.

  1. Systematic review of factors influencing luxury purchase. International Journal of Trade and Management, 2025. PRISMA analysis of 80 empirical studies (2009–2023), identifying 151 variables affecting luxury purchase intention and 84 variables affecting buying behaviour. journals.imist.ma
  2. OECD Consumer Confidence Index (CCI). Monthly, by country. Used as macro-economic control variable in the multivariate regression. data.oecd.org
  3. Yang, S. et al. (2022). Effects of Stores' Environmental Components on Chinese Consumers' Emotions. PLS-SEM and fsQCA study. Environmental stimuli explain 24.6% of the variance in customer emotions; emotions explain 19.8% of purchase intention variance. PMC/NIH
  4. CXG, The Advisor Effect: Driving Retail Success by Re-imagining the Role of the Client Advisor, 2024–2025. 12,000 employee surveys, 12 Maisons, 100,000 client evaluations. 78% purchase abandonment after negative interaction; +30–50% basket from effective clienteling. cxg.com
  5. Industry e-learning completion benchmarks. Aggregated from Brandon Hall Group L&D Benchmarks (2023), Training Industry Training Delivery Report (2022), LinkedIn Workplace Learning Report (2024). Typical retail e-learning completion: 15–25%.
  6. Jourdan, P. & Pacitto, J.-C. (2021). Determinants of Commercial Performance in the Sector of Selective Distribution. ANCOVA model, 28,000 questionnaires, 14 countries. R² = 0.747 (F = 12.810, p < 0.0001). davidpublisher.com (PDF)
  7. Bain & Company / Altagamma, Worldwide Luxury Market Monitor, 2024–2025. €358B global personal luxury goods market in 2025. Tourist luxury spending +7–9% in 2024. bain.com
  8. Mehrabian, A. & Russell, J.A. (1974). An Approach to Environmental Psychology. MIT Press. Foundational S-O-R framework. See also: Kim, S. et al. (2016). Customer emotions and their triggers in luxury retail. Journal of Business Research. sciencedirect.com
Make every opportunity to share information truly meaningful and track the real, measurable impact it has on your teams and your audience.