If you’ve ever designed, developed, delivered or managed a training program, you might get very excited when you receive rave reviews on your Level 1 smile sheets. And you should; it’s the first measure and earliest indicator of success. But no matter how well you do here, the question still remains: Is it going to have a positive impact on employee performance back on the job?

We want training to be fun and engaging, but we must also measure and report higher levels of impact to make our case to the business. If investors, stakeholders and leaders are spending millions of dollars on training, they want to ask (and have every right to ask) whether that fun and engaging experience will ultimately lead to better employee behaviors, greater profit and stronger growth. If we don’t give them this evidence, many skeptical stakeholders will continue to see even the most innovative training experiences as fun but frivolous investments.

So what exactly do these skeptical stakeholders want? And why don’t great Level 1 scores float their boat? To answer this, let’s look at how far “fun” will get us within our larger, more comprehensive six-level approach to training evaluation. Here’s a quick review of all the levels we can measure:

Evaluation Levels 1–6

Level 1: Did they like it? Level 1 measures the extent to which training participants react positively to the training experience. Were they engaged? Was it fun? Were they satisfied with the content and the way it was delivered? Was it relevant to their role? Was it worth their time?

Level 2: Did they learn anything? Level 2 measures the extent to which new knowledge and skills were acquired during the training. Are they leaving with critical knowledge and capabilities that will help them do their jobs better?

Level 3: Are they doing anything differently and better? Level 3 measures the extent to which participants are returning to their everyday jobs and actually applying what they learned in training. Do they do something better? Do they do something more effectively or more efficiently? Without this application and transfer of knowledge, training can never impact the business.

Level 4: Did it impact the business? Level 4 measures the extent to which training is improving critical business metrics. That is, did the behavioral improvements and application of new knowledge and skills lead to better business metrics and higher performance? What was the increase in productivity? What was the increase in sales revenue, customer satisfaction or cost-saving efficiencies?

Level 5: Was it worth the investment? Level 5 measures the return on investment (ROI): the extent to which the benefits of a particular training experience outweigh its costs. The final ROI is expressed as a percentage of the original investment (see the short worked example after this list).

Level 6: What factors maximize the ROI? Level 6 is an evaluation of what I call “ROI Maximizers.” This analysis tells you which environmental factors most influence the impact of your training back on the job. Are there things going on in the participant’s immediate work environment (e.g., direct manager support) that are either helping or hindering the impact of all your training efforts?
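To make the Level 5 calculation concrete, here is a quick worked example of the standard ROI formula. The dollar figures are hypothetical, invented purely for illustration:

```python
# Hypothetical worked example of the standard training ROI formula:
#   ROI % = (monetized benefits - training costs) / training costs * 100
program_cost    = 200_000  # total cost of design, delivery and participant time (hypothetical)
program_benefit = 350_000  # business benefit credibly attributed to the training (hypothetical)

roi_percent = (program_benefit - program_cost) / program_cost * 100
print(f"ROI = {roi_percent:.0f}%")  # 75%: every dollar invested returned $0.75 in net benefit
```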

Table 1.

Level   Measures
1       Satisfaction: Did they like it?
2       Learning: Did they learn anything?
3       On-the-job Improvements: Did they do something differently or better?
4       Business Impact: Did it impact business performance?
5       ROI: Was it worth it?
6       Transfer Climate: What factors maximize the impact of training?

If we look at Table 1, we see that a fun and engaging training experience only gets us past Level 1, and that’s it! Can we ask stakeholders and business leaders to feel good about all the dollars they spend if we only tell them how fun and engaging the training is? Of course not! There are five more levels that are even more important and more predictive of a good investment. Would you drop tens of thousands of dollars on a car that might fall apart tomorrow just because the test-drive was “fun”? How about investing in college tuition (as a paying parent, not a student) because you heard the campus was fun? We can’t expect business-minded bottom-liners to care about a one-time training experience that is fun without also telling them the longer-term story of impact.

The Value in Level 1

But that doesn’t mean you should scrap your Level 1! All is not lost; it’s still a valuable data point to collect. The bad news is that you need more. The very good news is that, despite assertions to the contrary, there actually is a correlation between how engaged learners are during training and the training’s later impact on their on-the-job application (Level 3) and business benefits (Level 4). The even better news is that there are ways to increase the predictive power of your Level 1 by including the right questions in the overall score.

Can Level 1 Reaction Predict Level 3 Employee Behavior Change on the Job?

Based on my own research and data collected over the past 15 years, the answer is yes, it does have some predictive power. The reason it has traditionally been so frustrating to draw this conclusion, or to find supporting research in the industry, is that most Level 1 assessments don’t ask the right questions, nor do they use the right scaling. Further, there is rarely any consistency across companies, or even within companies, in how they measure their Level 1 results (our independent variable). Add to that the scarcity of L&D organizations that are validly measuring Level 3 (our outcome variable), and you’ve got an almost impossible hypothesis to test.

So, what evidence do I have of a correlation? The strength of my findings comes from the consistency with which I’ve collected Level 1–6 data over the years. That is, I’ve gathered data and reported findings on over 150 different training programs in over a dozen different companies, using the same methodology and at least a core set of assessment questions to capture Levels 1 and 3. Using this data, the evidence of a relationship breaks down into two key findings:

Correlations Within Programs

Here, the Level 1 scores of thousands of internal employees (on a five-point scale) were correlated with their respective Level 3 scores (standardized to a five-point scale) for those very same training programs. The correlation for all participants in the study (N = 20,116) was 0.31, which is weak to moderate, but highly statistically significant at a sample of this size.
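As a rough sanity check (my own back-of-the-envelope arithmetic here, not part of the original analysis), you can verify that a Pearson correlation of 0.31 at N = 20,116 clears conventional significance thresholds by a wide margin:

```python
# Back-of-the-envelope significance check for a Pearson correlation,
# using only the two numbers reported above (r = 0.31, N = 20,116).
import math
from scipy import stats

r, n = 0.31, 20_116
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)  # t-statistic for Pearson's r
p = 2 * stats.t.sf(abs(t), df=n - 2)            # two-tailed p-value
print(f"t = {t:.1f}, p = {p:.3g}")              # t is roughly 46; p is effectively zero
```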

Differences Across Delivery Modes

Here, the overall Level 1 scores for different delivery modes (our independent variable) were collected and then compared to the overall Level 3 scores for those same delivery modes (our dependent variable). I used an analysis of variance (ANOVA) within and between groups (sketched below) to make sure any differences were significant, and so far the evidence is compelling. The main delivery modes in the study included training guides, training videos, self-directed web-based training, classroom training and virtual reality (VR) training. Essentially, the overall Level 1 scores went up as the training became more sophisticated: simple instructional training manuals and videos scored the lowest on “exciting and engaging,” and VR scored the highest. The next step was to look at any available Level 3 scores across all the delivery modes and see if this progressively higher engagement translated into a higher impact on the job. Since there were not many full Level 1–6 studies done for the low-investment training manuals and videos, I ended up comparing the aggregate results of just three delivery modes: self-directed web-based training, classroom training and VR training. Table 2 gives a short summary of the findings:

Table 2.

Delivery Mode          Avg. Overall Level 1 Score   Avg. Overall Level 3 Score
Self-directed Online   3.8                          3.0
Classroom              4.4                          3.9
Virtual Reality        4.9                          4.5

As you can see from the table, as the ratings go up for Level 1, so too do the scores for Level 3. The online training scored the lowest on Level 1 and also the lowest on Level 3. Classroom fared considerably better than online on both Level 1 and Level 3. And finally, VR scored the highest on both. So, from these findings at least, it appears that you may be able to predict which delivery mode will yield more employee improvement on the job (Level 3) by looking at its initial Level 1 scores.
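To make that within- and between-groups comparison concrete, here is a minimal sketch of a one-way ANOVA across the three delivery modes. The individual scores are hypothetical placeholders invented for illustration (only their group means loosely echo Table 2); the real per-participant data isn’t published here.

```python
# Minimal one-way ANOVA sketch: do Level 3 scores differ across delivery modes?
# All individual scores below are hypothetical placeholders for illustration.
from scipy import stats

level3_scores = {
    "self_directed_online": [2.8, 3.1, 2.9, 3.3, 3.0, 2.9],
    "classroom":            [3.7, 4.0, 3.8, 4.1, 3.9, 3.9],
    "virtual_reality":      [4.3, 4.6, 4.5, 4.7, 4.4, 4.5],
}

f_stat, p_value = stats.f_oneway(*level3_scores.values())
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # a small p suggests the between-mode
                                               # differences are unlikely to be chance
```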

Conclusion

Don’t stop measuring Level 1. If you’re asking the right questions, your smile sheets can at least give you an early indication of which training programs hold the highest potential for producing results back on the job. However, Level 1 data alone is nowhere near enough to convince your stakeholders the training was “worth it,” because it doesn’t capture the real story of impact: how much was actually learned, how employee behaviors changed months after the training, how much business benefit was realized, how much ROI was achieved, and how to improve that impact and ROI in the future.

After all my research over the years, I’ve come to think of Level 1 as the first baby step in a journey across a very long and hazardous bridge. On one side is the training event, where you take your first steps; all the way on the other side, across a very dark and scary abyss, lie the business results and the ROI destination you are trying to reach. All along the bridge are potential pitfalls where you can lose your footing, slip through the cracks and plummet to a dismal training failure. There’s no way to determine for sure who’s going to make it across this treacherous bridge by rating those very first baby steps, no way to predict exactly how long it might take them to get there, or how strong they will be when they arrive. But there’s no denying that the ones who start with the strongest steps and the sturdiest footing will inspire the most confidence, and probably have a better chance of making it all the way across to a positive ROI.