“How am I supposed to look good to senior leadership when I can’t prove success?”

“ROI? We all know you can’t measure ROI from learning.” 

How We Measure Learning

According to the LinkedIn Learning 2020 Worldwide Workplace Learning Report, measuring learning is the number one concern of learning professionals worldwide. 

Numerous learning companies say they measure learning effectiveness, and most do take a stab at it. We’ve made measuring learning one of our major focuses. Instead of relying on one or two learning measurement touchpoints, we use all of the major touchpoints plus several we’ve designed ourselves. In this document, we lay out how we do it.

Sample Measurement Timeline    

Background 

Since 1959, when Donald Kirkpatrick first published his learning measurement model, the Kirkpatrick Model has been the gold standard. Each of its four levels is important, and each shows a progressively more powerful impact. 

The traditional four levels of learning measured by the best learning programs are:

  • Reaction: What is the initial response to the learning?
  • Learning: Did they learn anything?
  • Behavior: Did learners take action?  
  • Results: Did the action make an impact?

We subscribe to the idea that a 5th level of measurement is necessary: ROI. 

Here's how we measure for each. Feel free to use our method.
Of course, if you’d prefer to engage us, we’ll be happy to partner with you. Ahem. 

Level 1 - Measuring Reaction

“Are they willing to do what we want them to do?”

If people don’t see value and aren’t willing to try something new, why bother measuring anything else? Specifically, we measure whether learners perceive value in the learning and whether they’re willing to do what we want. 

How we measure:

  • Measure before the learning:
    We measure openness to change and perceived value before the learning engagement starts. Our pre-learning activities, media, and thought stimulators are all designed to:
    • create motivation
    • build intention to use
    • create alignment to goals
    • establish an “I can do it” belief
  • During the learning:
    We collect metrics throughout the session. Attendees participate in brief polls around perceived value, willingness to try, and action plans. The facilitator records insights gained during conversation.

  • After the learning 

Immediately after the learning--before attendees leave the room--they fill out a short survey designed specifically to capture their thoughts. Did they find the learning relevant, important, and useful, and do they intend to use it?

We also measure whether attendees would recommend the learning to others in the organization, and boil this down to a Net Promoter Score, which the Harvard Business Review has described as the single most important metric you need for growth (link).   
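
For readers who want to see the arithmetic, here is a minimal sketch of the standard NPS calculation (the percentage of promoters minus the percentage of detractors on a 0-10 “would you recommend” question). The scores below are hypothetical, not client data.

```python
# Minimal sketch of a standard Net Promoter Score calculation.
# Hypothetical 0-10 answers to "Would you recommend this learning?"
scores = [10, 9, 9, 8, 7, 10, 6, 9, 3, 10]

promoters = sum(1 for s in scores if s >= 9)    # 9-10: promoters
detractors = sum(1 for s in scores if s <= 6)   # 0-6: detractors

nps = (promoters - detractors) / len(scores) * 100
print(f"NPS: {nps:.0f}")  # 40 for the sample scores above
```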

Level 2 - Measuring Learning (Knowledge)

“Are they able to do what we want them to do?”

If people intend to use the learning but don’t understand or retain it well, that’s not good.

The second level is measuring whether or not the learning sticks. We're really asking, “Did they take away what we wanted them to take away? Do they know what we want them to know?”

  • Pre and Post Knowledge Checks
    Attendees participate in pre- and post-tests that check their knowledge of the major learning points. Just as important as knowledge is confidence: if confidence grows as a result of the learning, attendees are more likely to try out their new skills. That’s why our pre- and post-tests also focus on confidence. (A short sketch of how we roll these numbers up appears after this list.)

  • Post Learning Support Campaign
    After the learning, our attendees are supported with additional engaging content, peer-to-peer meetings, real-world assignments, and one-on-one coaching opportunities. Each of these includes a short (painless) evaluation designed to measure whether employees understood what we hoped they would.
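
As an illustration of how the pre/post numbers can be rolled up, here is a minimal sketch (with entirely hypothetical scores and field names of our own choosing) that computes the average knowledge and confidence gain across a small cohort:

```python
# Minimal sketch: average knowledge and confidence gain from pre/post checks.
# All numbers are hypothetical; scores are on a 0-100 scale.
attendees = [
    {"pre_knowledge": 55, "post_knowledge": 82, "pre_confidence": 40, "post_confidence": 75},
    {"pre_knowledge": 62, "post_knowledge": 88, "pre_confidence": 55, "post_confidence": 80},
    {"pre_knowledge": 48, "post_knowledge": 70, "pre_confidence": 35, "post_confidence": 60},
]

def average_gain(records, pre_key, post_key):
    """Average point gain between the pre- and post-check for one measure."""
    return sum(r[post_key] - r[pre_key] for r in records) / len(records)

print(f"Knowledge gain:  {average_gain(attendees, 'pre_knowledge', 'post_knowledge'):.1f} points")
print(f"Confidence gain: {average_gain(attendees, 'pre_confidence', 'post_confidence'):.1f} points")
```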

Level 3 - Measuring Behavior

“Did they apply the learning in real life?” 

Engaging, fun, and fascinating workshops are great. But if people just go back to their normal life afterwards, what's the point? Sadly, most learning programs never bother to find out if the learning took hold and led to action.

  • Long-Term Surveys 

Our learning formula is specifically designed to lead to action. We test for behavior change with long-term (30- and 60-day) evaluations of attendees, their direct leaders, and peers. Getting leaders involved creates accountability, and, if necessary, we reach out personally to make sure we get participation in the evaluation process.

We don't just ask, “Did you use the learning?” We ask how the learning was applied, what challenges or barriers were faced, and whether (and how) they feel supported or enabled. This allows our clients both to see the impact of the learning and to identify opportunities for cultural and operational support. 

Level 4 - Measuring Results 

“Does the learning make an impact?”

We are super excited to change behavior. But did that behavior change make any difference? That's what we find out next. We make sure we don't lose sight of the fact that the ultimate goal of any learning program is to make a difference.

  • Long-Term Surveys
    We ask the attendee, their peers, and their direct managers if and how the learning made an impact, what that impact was, and how much of the impact is directly related to the learning.

    Impact analysis is the first step towards ROI. Sometimes, impact is more important than ROI. Many executives are fine with spending if that money leads to a positive emotional, social, cultural, or ethical result.

Level 5 - Measuring ROI

“Bottom line, what’s our return on investment?”

ROI has forever been the elusive Holy Grail of learning programs. Some people will tell you that you can't measure ROI for learning. We're here to tell you that's not true. 

First, we'd like to point out that we're not talking about some newly created calculation made specifically for learning programs. We use standard ROI formulas: the benefit-to-cost ratio (Benefits / Costs) and the ROI percentage ((Net Program Benefits / Program Costs) * 100).
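
To make those two formulas concrete, here is a minimal sketch with hypothetical numbers; it illustrates the standard calculations rather than any particular client's results.

```python
# Standard ROI formulas applied to hypothetical program numbers.
program_benefits = 125_000   # monetized benefits attributed to the learning (hypothetical)
program_costs = 50_000       # fully loaded program costs (hypothetical)

benefit_cost_ratio = program_benefits / program_costs   # Benefits / Costs
net_benefits = program_benefits - program_costs
roi_percent = net_benefits / program_costs * 100         # (Net Benefits / Costs) * 100

print(f"Benefit-to-cost ratio: {benefit_cost_ratio:.1f}")  # 2.5
print(f"ROI: {roi_percent:.0f}%")                          # 150%
```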


Here's how we do it:

  1. First, we determine objectives. What problems are we trying to solve or what outcomes are we looking to achieve? This way, measuring is targeted.

    Before the program we determine a “win measurement”: what would make a good target ROI? We might set the win value at the return the organization normally makes on a capital investment, or set it slightly higher to reflect the premium we expect from investing in people. Sometimes we simply ask the client, “What ROI would make you happy?”

  2. We isolate the effects of the training.
    To be able to truly say, “The learning caused this,” we must attempt to isolate the impact of non-learning factors on the outcome and ROI. To do this, we use the following techniques:

    1. Control group:
      Is there a comparable group that did not go through the learning? If so, a control group is one of the best ways to isolate the effect of the learning. There are times when this isn't feasible, ethical, or desirable, and sometimes the control group would be too small to draw reasonable conclusions. In that case, we turn to:

    2. Trendline Analysis
      Trendline analysis works by counting the incidence of specific events. For example, if an organization logs customer complaints, we can count complaints before and after the learning to see the direct result (a short sketch of this comparison appears below).

      This only works if we have “before” data and if we are able to understand what other variables are impacting results. For example, an improvement in the product may greatly reduce customer complaints unrelated to the training.

      In instances where “before” measurements are not available or variables are not accounted for, we use:

    3. Conservative estimation process:
      When attendees and their leaders are asked to estimate how much time or money the training directly saved, that data is generally met with suspicion, if not outright rejection. How can we accept a guess as reality? The real answer is that you can't--unless we can make the estimated data credible and conservative. 

      For example, someone may think they saved one hour a week as a result of learning, but how credible is that?  Estimates vary wildly and tend to be hyperbolic.

      We solve for this by applying a conservative formula designed to create credible results:
      1. First, we ask attendees, managers, and peers whether or not the training led to a behavior change. If the answer is yes:
      2. We ask whether or not the behavior change made an impact. If that answer is yes:
      3. We ask what kind of impact the behavior change made. Impact can vary. The attendee is presented with the most common impacts of learning: time, productivity, communication, personal growth, or relationship impact.

        For example, if the impact is a “time impact,” we ask how many hours the attendee, manager, or peer estimates were saved over a given period of time.

        This is the important part: We ask, “How confident are you in that answer?” Then, we multiply the perceived benefit by the confidence ratio.

        If a manager thinks that each team member saved 1 hour a week and they're 60% confident in that answer, we multiply 60 minutes by .6 to come to the conservative estimate of 36 minutes saved (60% of 1 hour). 
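
Here is a minimal sketch of that confidence adjustment, mirroring the manager example above. The hourly rate, team size, and time horizon added at the end are purely hypothetical and exist only to show how the conservative number can be carried forward.

```python
# Minimal sketch of the confidence-weighted (conservative) estimate.
# A manager estimates 1 hour saved per team member per week and is 60% confident.
estimated_minutes_saved_per_week = 60   # the raw, self-reported estimate
confidence = 0.60                       # "How confident are you in that answer?"

conservative_minutes_per_week = estimated_minutes_saved_per_week * confidence
print(f"Conservative estimate: {conservative_minutes_per_week:.0f} minutes/week")  # 36

# Hypothetical monetization of the conservative estimate for a team:
team_size = 8
hourly_cost = 45.0   # fully loaded hourly cost (hypothetical)
weeks = 48
annual_value = (conservative_minutes_per_week / 60) * hourly_cost * team_size * weeks
print(f"Annual conservative value for the team: ${annual_value:,.0f}")  # $10,368
```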

It's a myth that data scientists never make guesses. They just don't make guesses without a conservative and credible process-based formula.
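
Returning to the trendline technique above: where complaint or incident logs exist, the before/after comparison can be as simple as the sketch below. The monthly counts are hypothetical, and in practice we also check for confounders such as product changes.

```python
# Minimal sketch of a trendline-style before/after comparison.
# Hypothetical monthly customer-complaint counts around a learning program.
complaints_before = [42, 39, 45, 41, 44, 40]   # six months before the learning
complaints_after = [35, 31, 29, 30, 27, 26]    # six months after the learning

avg_before = sum(complaints_before) / len(complaints_before)
avg_after = sum(complaints_after) / len(complaints_after)
reduction_pct = (avg_before - avg_after) / avg_before * 100

print(f"Average complaints/month before: {avg_before:.1f}")
print(f"Average complaints/month after:  {avg_after:.1f}")
print(f"Reduction: {reduction_pct:.0f}%")  # ~29% for these sample counts
```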

  3. Convert impact to money:
    Often effective training leads to time saved, productivity gains, or closed deals--things that are easily measurable. When the benefits are less tangible, the good news is that there are studies and readily available databases showing average costs for things normally considered intangible, like turnover or conflict. 

    Many of our clients have internal experts who have already quantified specific impacts. In that case, we work with them or turn over our data for internal analysis.

  4. Compare monetary savings to cost:

Return isn't very helpful if we're not clear on what the investment is. That's why we help our clients understand the true cost that goes into a learning program. This includes elements like:

  1. Needs assessment costs
  2. Development costs
  3. Program material costs
  4. Instructor/facilitator costs
  5. Facilities/Venue costs
  6. Travel, lodging and catering costs
  7. Participant salaries and benefits
  8. Administrative and overhead costs
  9. The cost of the evaluation program itself

We make predicting ROI easy by providing a pre-built ROI calculator. You can download it now by clicking: (click here to view and download).

Once we have total costs and the monetized benefits, we apply the standard ROI formulas above to reach a final ROI calculation.
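
As a minimal end-to-end illustration (all figures hypothetical), the cost categories above can be totaled and compared with the monetized benefit to produce the final numbers:

```python
# Hypothetical end-to-end ROI calculation: total program costs vs. monetized benefits.
costs = {
    "needs_assessment": 3_000,
    "development": 12_000,
    "materials": 2_000,
    "facilitation": 15_000,
    "facilities": 4_000,
    "travel_lodging_catering": 6_000,
    "participant_salaries": 18_000,
    "admin_overhead": 5_000,
    "evaluation": 5_000,
}
total_cost = sum(costs.values())   # 70,000 in this hypothetical breakdown

monetized_benefit = 125_000        # from the impact-to-money step (hypothetical)

benefit_cost_ratio = monetized_benefit / total_cost
roi_percent = (monetized_benefit - total_cost) / total_cost * 100

print(f"Total cost: ${total_cost:,}")
print(f"Benefit-to-cost ratio: {benefit_cost_ratio:.2f}")  # ~1.79
print(f"ROI: {roi_percent:.0f}%")                          # ~79%
```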

  5. Communicate results to stakeholders:

Evaluation without communication is a wasted effort. Why bother with all of the data if you can't do anything with it?

Our reports are built with specific decisions in mind. Leaders need to answer questions like:

  • Did we get our money’s worth?
  • Is training more valuable than something else the attendee could have spent 2 hours doing? 
  • Is it worth it to expand the training program? 
  • Do we need to pivot our approach in any way?  
  • How successful were we in achieving the goal we set out to achieve?

Our reporting leads with the big picture while making the minutiae available. Decision makers get what they need in an easy-to-understand format, and people who like to climb into the data are happy as well.


NEXT:

If you'd like to talk about how Gravity Learning might help your organization, please reach out. We're also happy to just explore ideas or have a nerdy conversation about the psychology of fantastic learning.

Reach Out!

Questions about how we could help your team? (and prove it?)

While you're here, why not book a workshop for your team today?

View All Courses | Talk to Us