How to effectively evaluate training has been a common challenge in the world of learning and development (L&D) for decades. Even though most training practitioners know that evaluation is important, have read articles about it and have probably dabbled with a model or two, little is actually done beyond end-of-course questionnaires or check-the-box multiple-choice questions.
In our experience working with hundreds of L&D professionals over the years, we’ve found that the most formidable obstacles to getting started with evaluation are a lack of clarity about how to actually do it and apprehension that the results of training might not be good. However, not all is lost. Here’s a practical three-step process to better training evaluation.
Step 1: Improve Results.
The main mistake many learning leaders make when evaluating training is assuming that everything can or should be measured. That assumption usually leads to data that is either too broad to be useful or not beneficial to the business: Think of attendance or completion statistics, for example. Rather than trying to measure everything, it’s better to begin your evaluation adventure by choosing a training program that answers “yes” to the following questions:
- Can you expect clear and visible results from training?
- Is the intention of the program focused on strategic business issues?
- Is the training spread out over multiple sessions?
- Will the training have immediate application components?
- Is the program one that you want to repeat?
Once you’ve designed and developed a suitable training program, you should focus on strengthening the program’s results. This means building elements into your program where learning is transferred to on-the-job application. This learning transfer approach tends to produce results that are big, loud and obvious, and therefore easy to measure.
In addition, having something to show stakeholders that really pops as you go through a program will give you confidence to replicate your measurement strategy in other programs and help you to overcome the fear that evaluation will simply highlight failures. To improve results, you should focus on creating learning transfer by thinking about these design tweaks.
- Less is more. Often, training programs cram too much content into a single learning intervention. If you reduce the content to only the most targeted information, people will comprehend and remember more.
- Action is king. Design your programs to deliver output during and after training sessions. This means that most of the training should be practice-based. For example, a general rule could be 30% input and 70% output, blending traditional classroom training with learning application on the job. In this way, you can increase learners’ confidence in their roles.
- Sequences trump one-offs. Where possible, design learning as a series of events rather than a one-off. A series of learning events helps to integrate learning, practice, on-the-job application and reflection.
- Blended is best. Creating a series of training activities allows you to blend learning so that input happens offline, practice is done together, managers are involved at the right time and results are presented in an end-of-program roundup.
That said, not all programs are equal. Some are knowledge-heavy, others are skills- or habit-focused and others are designed to shift mindsets. Depending on what kind of training program you are measuring, the learning transfer elements, or the design considerations that get learning happening in the workplace, can change. However, here are some ways to improve learning transfer.
- Involve the learner’s manager throughout the program at targeted moments where they can have maximum impact.
- Get learners applying the learning content immediately during the program.
- Follow up on training with surveys, sharing best practices, etc.
- Set up, gather and publicize results that derive from workplace application as you go through the program.
Once you deliver training that addresses a key challenge, drives performance and leads to meaningful business results, then it’s time to measure.
Step 2: Measure Results.
To start, think about the various evaluation models that already exist and see if one matches your needs. Each model has features that will help you evaluate. For example:
- Kirkpatrick’s Level 4: tells you to think beyond learner satisfaction to application and on-the-job results.
- Phillips’ ROI Model: has great advice about how to calculate return on investment (ROI) and Level 4 impact.
- Thalheimer’s LTEM: illustrates what you need to do to design and measure so that knowledge is retained over time and can be applied rather than just memorized.
When it comes to evaluation, Brinkerhoff’s Success Case Method is simple to administer, quick to produce results data and cheap to apply when you have clear program results. The main idea with this model is to find the top 10-15% of learners who applied the training content effectively and achieved great business results, find out how that was possible, and then make sure the relevant stakeholders hear about it.
Have all learners apply the skills learned during training, gather the information that supports application as you go along and have them present their results directly to senior management at the end of the program. Clients often respond well when senior management assesses the level of impact (individual, team or organization level) for each presentation. Afterwards, conduct follow-up interviews with the top 15%.
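The selection step of the Success Case Method can be sketched in a few lines of code. This is a minimal illustration, not part of Brinkerhoff's published method: the learner names, the `impact_score` field and the scoring scale are all hypothetical assumptions made for the example.

```python
# Minimal sketch of picking the top ~15% of learners by reported
# business impact, the group to target for success-case interviews.
# Field names and scores are illustrative, not from any real program.

def select_success_cases(learners, top_fraction=0.15):
    """Return the top slice of learners ranked by impact score."""
    ranked = sorted(learners, key=lambda l: l["impact_score"], reverse=True)
    cutoff = max(1, round(len(ranked) * top_fraction))  # always keep at least one
    return ranked[:cutoff]

learners = [
    {"name": "A", "impact_score": 92},
    {"name": "B", "impact_score": 55},
    {"name": "C", "impact_score": 78},
    {"name": "D", "impact_score": 88},
    {"name": "E", "impact_score": 40},
    {"name": "F", "impact_score": 70},
    {"name": "G", "impact_score": 65},
]

top = select_success_cases(learners)
print([l["name"] for l in top])  # candidates for follow-up interviews
```

In practice the "impact score" would come from senior management's assessment of each learner's end-of-program presentation, as described above.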
During this process, try to find evidence to support the reported results, identify what enabled these learners to get such great results and look for any workplace application barriers. The idea is that by understanding the mechanism for impact, you can recreate it and help more learners get better results in future training. However, even with some sort of measurement, you will still need an effective way to communicate the results to stakeholders.
Step 3: Report Results.
When it comes to sharing the results of training, there isn’t a one-size-fits-all approach. What you choose to share and how it is presented depends on a few factors. The first is the evaluation model you used. If it is one of the well-recognized ones, then the report format is largely prescribed, so all you need to do is follow a templated design. The second is how your company shares business results. If there is a tried and trusted way to get your message across, then use that. Familiarity is key to being understood and getting appreciation for your training programs.
Finally, you need to consider what you’re trying to convey and how to relay this message effectively. Quite often, L&D professionals rely on learning jargon that flies over the heads of executives to explain program results. Try to avoid this where possible and use company jargon and established business expressions instead.
One best practice for presenting results is to turn data into pictures. This is known as data visualization, and it can be very impactful to show things like:
- Before and after comparisons.
- Data on application in the workplace.
- Areas that require additional support or focus.
- Places where training can be improved.
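Before any of these can be turned into a chart, the underlying numbers need to be prepared. The sketch below shows one hypothetical way to compute a before-and-after comparison and flag areas needing additional support; the assessment areas, scores and threshold are invented for illustration and are not from any real program.

```python
# Minimal sketch of preparing before/after comparison data for a
# results report. Areas and scores are illustrative assumptions.

def before_after_deltas(before, after):
    """Return per-area score improvement, plus areas that showed
    no measurable gain and may need additional support."""
    deltas = {area: after[area] - before[area] for area in before}
    needs_support = [area for area, delta in deltas.items() if delta <= 0]
    return deltas, needs_support

before = {"product knowledge": 58, "objection handling": 44, "closing": 61}
after = {"product knowledge": 81, "objection handling": 67, "closing": 60}

deltas, needs_support = before_after_deltas(before, after)
print(deltas)          # improvement per assessment area
print(needs_support)   # areas to flag for additional support or focus
```

Once computed, these deltas are exactly the kind of numbers that work well as a simple bar chart in a stakeholder report.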
However you’d like to visualize data, ensure that it’s simple and easy to digest, gets to the heart of the matter and proves your point effectively. Ultimately, remember to design and deliver training programs that will make an impact, make adjustments along the way, and find an evaluation method that effectively measures training success.