Editor’s Note: A recent research report sheds light on common challenges learning leaders face. These challenges were discovered through research and surveys conducted over the course of a decade. In this series of articles, we will explore these challenges and how learning leaders are responding. This article is the first of an eight-part series.
When speaking with learning and development (L&D) professionals, themes emerge that corroborate Training Industry research. One such theme is that L&D professionals know how crucial training is to business performance, but proving its value to the rest of the organization is far from easy.
Adam Kucera, director of sales and training support at DISH, says L&D’s challenge is to prove training can and should be essential to driving business results. “When most enterprises evaluate effectiveness…the time and effort of the training itself is often viewed as simply a cost.”
People often focus on how to design a training program and how to deliver it “but spend comparatively little effort on how to know that it had all the intended impacts,” says Tom Whelan, Ph.D., director of corporate research at Training Industry, Inc. However challenging it may be, proving the business impact of training is critical to the future of learning and work.
So how can an L&D leader overcome this challenge? The “role of training as a means to impact business metrics can be accomplished through rigorous use of data and analytics,” says Kucera.
“Give all the stakeholders one solid study you could defend with all the best research methods (trained groups vs. control groups, etc.) and be very conservative with your conclusions about the impact of training,” says Dr. Paul Leone, senior ROI consultant at Verizon. “Once they have confidence in your credibility, you can start expanding the scope of your measurement strategy.”
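As a minimal sketch of the kind of conservative, defensible comparison Leone describes, the snippet below compares a post-training business metric between a trained group and a control group using a standard two-sample t-test. The metric, group sizes and numbers are hypothetical placeholders, and a real study would also require carefully matched groups:

```python
# Illustrative trained-vs.-control comparison; all figures are hypothetical.
from scipy import stats

trained = [52.1, 48.3, 55.0, 50.7, 53.9, 49.2]  # e.g., monthly sales per trained rep
control = [47.5, 49.0, 46.2, 48.8, 45.9, 47.1]  # comparable reps who were not trained

t_stat, p_value = stats.ttest_ind(trained, control)

# Report conservatively: a small p-value suggests a difference the training
# *may* explain; on its own it does not prove the training caused it.
print(f"Trained mean: {sum(trained) / len(trained):.1f}")
print(f"Control mean: {sum(control) / len(control):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Keeping the first study this simple and stating its limits plainly is what builds the credibility Leone says you need before expanding your measurement strategy.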
The proof is in the numbers, but how do you know what to measure, and how do you use that data to communicate business impact to stakeholders?
Start at the Beginning
Evaluation cannot be an afterthought. To decide the most effective way to evaluate learning, first determine what it is you need to evaluate: What is driving your training program? For instance, if you’re looking for behavioral outcomes in an IT training initiative where employees are learning a new software program, you’re likely paying attention to whether they’re transferring that new knowledge and those skills to the job. That knowledge transfer is the purpose of the training program. Build the evaluation of this desired outcome into your training plan from the start of the design process.
Additionally, make sure the purpose is clear across the organization. Kucera says there can often be “a disconnect in the final evaluation of success” across departments based on different expectations and motives for training. For example, a sales training program implemented by L&D might focus on improving the performance and confidence of salespeople, which happens to result in an overall increase in sales calls. This outcome may cause a manager or senior executive to focus only on the potential percentage increase in sales, not on the behavioral outcomes.
This disconnect can be prevented by determining the purpose of training before designing the training program. “Get key stakeholders and finance brought in early so that you can be very clear about what everyone considers ‘success’ for a particular training program,” whether that be “a percentage increase in sales, a decrease in customers leaving, [or] a change in culture,” says Leone. If everyone knows what you’re evaluating from the start of the training program, you’ll have a better idea of how you should be evaluating it and how you can prove its impact.
One Size Does Not Fit All
Behavior change in the IT training scenario is not the most difficult learning outcome to measure. But what happens when the purpose or skills of the training are less concrete, as in compliance training? Defining what learning or training transfer looks like in these cases can be challenging, which makes evaluating the training’s effectiveness, and proving its impact, even more difficult.
Leone says that when it comes to learning evaluation and proving training’s impact to stakeholders, “you need to tell a story that makes sense to the business.” This story needs to consist of what you chose to evaluate and the business impact based on the evaluation results. For example, Leone says that it doesn’t help to only show the learning (Level 2 of Kirkpatrick’s model) if it doesn’t lead to a change in employee behavior (Level 3), or to say that employee performance is improving (Level 3) without showing how and in what ways those improvements have impacted the business (Level 4). In order to show the effectiveness of learning, he says it all has “to tie together in a beautiful story of impact.”
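One way to keep that Level 2 → 3 → 4 chain explicit, sketched below in Python, is to record up front which metric will represent each Kirkpatrick level for a given program. The program, metrics and targets shown are hypothetical placeholders, not prescriptions:

```python
# Illustrative only: making the "story of impact" chain explicit for one
# program. Metric names and targets are hypothetical.
impact_story = {
    "level_2_learning": {"metric": "post-test score", "target": ">= 85%"},
    "level_3_behavior": {"metric": "CRM records updated per week", "target": "+25% vs. baseline"},
    "level_4_results":  {"metric": "customer churn rate", "target": "-2 pts within 2 quarters"},
}

# Each level should explain the next: learning enables the behavior,
# and the behavior drives the business result.
for level, plan in impact_story.items():
    print(f"{level}: measure {plan['metric']} (target: {plan['target']})")
```

Writing the chain down before training begins makes it harder for any one level to be reported in isolation.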
The evaluation model you choose can help in developing that story. Kirkpatrick’s four levels of evaluation may be the best-known evaluation model, and it is often the one people use, since it is likely the easiest to communicate. It can be valuable, but it was not developed with the modern learner in mind: It dates from a time when training meant mainly classroom, or instructor-led, training (ILT). L&D professionals today must also account for virtual instructor-led training, blended learning, e-learning and more, on top of ILT. But how do you adapt a well-known evaluation method that was originally developed for only one type of training?
“Anything can be measured, you just have to work harder at some of them,” Kucera says. If you can’t rely on the quality of the data you’re collecting because the form of evaluation was not the best choice for the training, you must look elsewhere. “To assume that the same methods of evaluation are going to be equally effective across all training programs is being willfully blind to all of the types of training that someone can take at an organization, and the mix of evaluations that best fit the particular goals of each training initiative,” says Whelan. In other words, there is more than one answer when it comes to evaluation and presenting your “story of impact.”
The key to evaluating the effectiveness of training is to avoid forcing the measurement of training into one specific box. Just as training comes in all shapes and sizes, so, too, should evaluation.
Don’t miss the other articles in this series:
- Limited Access to Resources: A Learning Leader Challenge
- Content Relevancy: A Learning Leader Challenge
- Sustaining Training’s Impact: A Learning Leader Challenge
- Learner Experience Across Modalities: A Learning Leader Challenge
- Training Consistency: A Learning Leader Challenge
- Securing an Internal Champion: A Learning Leader Challenge
- Prioritization of Training: A Learning Leader Challenge