During the 1980s and '90s, training departments at most corporations published tables and graphs that displayed training days, batch occupancy and trainer utilization as key metrics to indicate how well the department was doing. The mantra was: "Get them into the classroom and everything will be right with the world."

When the millennium turned, someone wisely said that perhaps it was time to ask learners how they felt about the whole experience. That introduced measurement and analytic methods to track the employee experience, such as surveys and the Likert scale. Though useful in many cases, relying solely on the Likert scale to measure the impact of training can be misleading: a 4.1 on the Likert scale is celebrated, while a 3.9 is frowned upon. That difference of 0.2 can be the difference between a phenomenally successful training program and an abject failure. Really?
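To see why, here is a small illustrative sketch (the ratings are made-up samples, not real survey data, and the t-test is just one reasonable way to check the claim): with the spread typical of five-point Likert responses and a classroom-sized sample, the gap between a 4.1 and a 3.9 average is usually indistinguishable from ordinary noise.

```python
# Illustrative only: two made-up batches of five-point Likert ratings whose
# averages land near 4.1 and 3.9. A two-sample t-test asks whether that 0.2
# gap is distinguishable from ordinary survey noise.
from statistics import mean
from scipy import stats

cohort_a = [5, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 5, 4, 4, 4]  # mean ~ 4.13
cohort_b = [4, 4, 3, 5, 4, 3, 4, 5, 4, 3, 4, 4, 4, 4, 3]  # mean ~ 3.87

print(f"mean A = {mean(cohort_a):.2f}, mean B = {mean(cohort_b):.2f}")

t_stat, p_value = stats.ttest_ind(cohort_a, cohort_b)
print(f"p-value = {p_value:.2f}")
# With samples this size, the p-value comes out far above 0.05:
# the 0.2 difference is not evidence that one program was better.
```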

Then came net promoter scores (NPS) as a measure of training success, and once again, the industry went ballistic over this new way of figuring out how good its programs were. For example, if three out of five learners would recommend the program to their colleagues, it was a success; if not, it was a disaster. Since when did attending a training program get slotted into the same box as watching a movie or eating at a restaurant?

Ineffective Measurement

But the biggest drawback (and there are several) of these measurements is that you cannot do anything with a number. Looking back at the example: Is there a significant difference between 4.1 and 3.9? Is it important to know that 60% of the people who attended this program would recommend it to their colleagues? No, there isn't, and no, it isn't. There is no call to action. And more importantly, there is no attempt to measure something fundamental to corporate training: What difference has this made to the learner's performance on the job?

Organizations shouldn’t invest in their employees only to improve the business’s brand and reputation. Instead, employers should invest in training to improve business performance, increase sales, reduce mistakes, improve productivity, increase customer satisfaction and so on. They should invest in training to make an impact on relevant metrics that are tied to both an individual’s and the business’s success. Those are the only outcomes that should matter.

What Can You Do?

When training programs are designed to improve job performance and role productivity, learning leaders won't need to rely on complicated NPS algorithms and smiley sheets for measurement. Neither should they rely solely on post-training surveys to identify why the feedback score was 3.9 and not 4.1. Instead, learning leaders should be able to monitor and track an individual's performance on the job and how they apply their new skills in their role. That way, learning leaders can provide feedback and coaching to continuously improve skills, rather than rely on a bunch of ambiguous numbers. Key performance indicators (KPIs) are another great way to identify whether the training really worked.
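As a minimal sketch of what a KPI-based check could look like (the column names, the KPI and the figures below are placeholders, not real data), you could compare how a job KPI moved for employees who took the course against those who did not:

```python
# Hypothetical sketch: compare a job KPI before and after a training rollout,
# for the trained cohort versus an untrained comparison group. The column
# names ("trained", "kpi_before", "kpi_after") and the numbers are placeholders
# for whatever your business actually tracks (sales closed, error rate,
# tickets resolved, customer satisfaction, and so on).
import pandas as pd

records = pd.DataFrame({
    "employee_id": [1, 2, 3, 4, 5, 6],
    "trained":     [True, True, True, False, False, False],
    "kpi_before":  [12.0, 9.5, 11.0, 10.5, 12.5, 9.0],
    "kpi_after":   [15.0, 12.0, 13.5, 10.0, 13.0, 9.5],
})

# Change in the KPI for each employee.
records["kpi_change"] = records["kpi_after"] - records["kpi_before"]

# Average improvement per cohort: trained versus not trained.
cohort_change = records.groupby("trained")["kpi_change"].mean()
print(cohort_change)

# A naive "did training move the needle" estimate: the trained cohort
# should improve more than the comparison group did over the same period.
uplift = cohort_change.loc[True] - cohort_change.loc[False]
print(f"Estimated uplift attributable to training: {uplift:.2f}")
```

If the untrained group improved just as much over the same period, the training probably was not the cause; that comparison, not a smiley sheet, is what tells you whether the program worked.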

You can demonstrate "before and after" results, compare the performance of participants who went through the training course with that of those who did not, and make continuous course corrections to create a training program that is truly relevant to your employees' work. When the next learning initiative comes up for budget approval, ask for the program's success criteria: What business results will determine that this program is a success?

Pushing the envelope further, ask for failure criteria: Under what circumstances will this program be declared a failure? In both cases, the criteria must be clearly defined and must produce measurable results within the first quarter of rollout. If you cannot ensure that the program will meet the success criteria, you should take a step back and ask whether the training program is truly relevant to the organization.
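One way to keep those criteria honest is to write them down as data before rollout and evaluate them mechanically at the end of the first quarter. The sketch below is hypothetical; the KPI name, thresholds and observed value are placeholders:

```python
# Hypothetical sketch: record the success and failure criteria *before* the
# rollout, then evaluate them against first-quarter results. The KPI name,
# thresholds and observed value are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class ProgramCriteria:
    kpi: str                  # the business metric the program is meant to move
    success_threshold: float  # declared up front: success if the KPI reaches this
    failure_threshold: float  # declared up front: failure if the KPI stays at or below this

    def verdict(self, observed: float) -> str:
        if observed >= self.success_threshold:
            return "success"
        if observed <= self.failure_threshold:
            return "failure"
        return "inconclusive: revisit the program's relevance"

criteria = ProgramCriteria(
    kpi="first-contact resolution rate (%)",
    success_threshold=80.0,
    failure_threshold=70.0,
)

# At the end of the first quarter after rollout, plug in the observed result.
print(criteria.verdict(observed=76.0))  # prints "inconclusive: ..."
```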

Change the Narrative!

Over the past few years, learning and development (L&D) professionals have increasingly found themselves at the table. To stay relevant and earn a seat as business partners, L&D leaders are creating training programs that are relevant to their people. As a learning leader, design programs that are truly relevant to the business and that measure the right metrics, not the metrics that have always been measured before.

No one cares about a 4.1, a 60% or an action plan designed to nudge those numbers up by 10%. No one cares about learning objectives that read, "At the end of this course, you will be able to..." People care about what they can do at work after they attend the course. The next time you design or approve a training program, change the learning objectives: Make them measurable and relevant to the employee experience, and ensure that these metrics are tied to business success and can prove the impact of training.

Here are some examples: Replace "At the end of this course, you will be able to explain our sales process" with "Within one quarter of completing this course, participants will increase the sales they close." Replace "You will be able to describe our quality standards" with "Participants will reduce the mistakes they make on the job." Replace "You will be able to list our customer service principles" with "Participants will raise the customer satisfaction scores they receive."

As the old saying goes, "What you measure is what you will improve." If learning leaders wish to prove training's impact, they must measure individual employees' performance on the job, rather than rely solely on post-training surveys and questionnaires.