A new model for measuring return on investment

The discussion of return on investment (ROI) in training is a common stumbling block. We know that the workforce needs to learn continually, but calculating the ROI of human performance is difficult.
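For reference, the classic ROI arithmetic itself is simple; the hard part is attributing a dollar benefit to a learning program. A minimal sketch (the figures below are invented for illustration):

```python
# Classic ROI formula: net benefit divided by cost.
# The dollar amounts here are hypothetical, for illustration only.

def roi(benefit, cost):
    """Return ROI as a fraction: (benefit - cost) / cost."""
    return (benefit - cost) / cost

# e.g., a program costing $50,000 that is credited with $65,000 in benefit:
print(f"{roi(65_000, 50_000):.0%}")  # 30%
```

The formula is trivial; the webinar's point is that the `benefit` input for human performance is what resists measurement.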

Hebert-Maccaro, K. (2018) The ROI Dilemma: Measuring Results of Your Learning Programs [ATD webinar], May 1. Archive available to ATD members at http://webcasts.td.org/webinar/2701.


Hebert-Maccaro begins with an overview of two common models used to determine ROI in corporate learning and development: Allen’s model and Kirkpatrick’s model. They are similar, and a common criticism of both is that measuring at each of their levels is too complicated.

Allen’s model: This is from Mark Allen’s 2002 book, The Corporate University Handbook.

  1. Participant satisfaction
  2. Cognitive knowledge acquired
  3. Technical skill acquired
  4. Attitude and perception change
  5. Individual behavior change
  6. Individual behavior change regarding application of knowledge
  7. Critical mass change
  8. Culture change

Kirkpatrick’s model: This was published by Donald Kirkpatrick in 1959, though its unattributed origins trace back to Raymond Katzell.

  1. Reaction
    This is the “smile sheet”. It doesn’t measure what you really want to know.
  2. Learning
    This is usually a set of multiple choice questions asked directly after training, but retention drops significantly over time.
  3. Behavior
    This is assessed by observation; for example, a month after training, a manager may check whether the person can be observed applying the new learning.
  4. Results
    This asks if the training had its intended effect, but the problem is that it’s difficult to measure this for large-scale initiatives.

Unmentioned in the webinar, but often measured, is the simple completion or compliance record. This is increasingly irrelevant given the just-in-time way people now learn.

New framework: Look at correlation instead of causation

One main problem with these models is the issue of causation. It is hard to prove that a training initiative had a direct effect. Instead, Hebert-Maccaro argues that we should look for correlation between learning and metrics such as promotions, bonuses, engagement scores, adoption rates, etc. This is her proposed framework:

Correlation-Based Learning Metric Framework (Hebert-Maccaro, 2018)
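In practice, the correlation approach amounts to comparing a learning signal against a business metric across employees. A minimal sketch of that comparison, using a Pearson correlation coefficient and invented sample data (the metric names and numbers are hypothetical, not from the webinar):

```python
# Hypothetical illustration of the correlation approach: compare a learning
# signal (courses completed) with a business metric (engagement score).
# All data below is invented for this sketch.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented sample: courses completed vs. engagement survey score (0-100).
courses_completed = [0, 1, 2, 3, 5, 6, 8]
engagement_score = [55, 58, 60, 64, 70, 72, 80]

r = pearson(courses_completed, engagement_score)
print(f"correlation r = {r:.2f}")  # a high r suggests a relationship, not causation
```

The point of the framework is exactly the caveat in the last comment: a strong correlation is evidence worth reporting, but it does not prove the training caused the outcome.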

New model: Because correlation isn’t enough

Correlate: Even with the framework above, showing correlation on its own isn’t enough. So she proposes a new model for measuring learning in which correlation is one of three assessment types:

Multi-faceted Learning Measurement Model (Hebert-Maccaro, 2018)

Explore: In the proposed model, this assessment type looks at behavior and impact. It’s targeted, and so it’s easier to scale. She recommends observation- and inquiry-based methods: manager feedback, observational analysis, and the Success Case Method.

Monitor: In the model, this is where you will find smile sheets and pre- and post-training tests. For smile sheets, be sure to limit the questions to those about instructor effectiveness, applicability of the topics learned, and missing or confusing topics (not asking about the room temperature and the breakfast danish!).

Learning imperative: In the model, this is the underlying justification. Although it was emphasized in the webinar, I felt it didn’t state anything new or surprising: It’s hard to find good talent, employee turnover is expensive, the workforce must engage in (and be supported in) lifelong learning, people value development.

Note: She focused on millennials, but I think this is a mistake: people value development, not just millennials. We’ve always needed to engage in lifelong learning as technology changes; that’s nothing new to today.
