It’s important to tell an honest, balanced story about impact measurement for training programs and learning initiatives. Measuring impact can be difficult, but it’s also possible. In this article, I’ve identified three prevalent reasons why impact measurement fails and ways to respond that set it up for success.
Reason No. 1: You didn’t conduct a performance needs analysis.
Training and talent development fulfill their highest purpose when they have a measurable impact on employee performance and business goals. There is a way to measure the extent to which that purpose is fulfilled. It starts with asking the right questions, whose answers inform decisions about when training is the right solution, when it’s not and, most importantly, how you will measure purpose fulfilled.
A proper performance needs analysis uncovers business goals and the skills and capabilities, expressed as human performance, required to achieve those goals. It’s a proactive step in the right direction for measuring the impact of training and learning. My experience tells me that if you do not get this critical first step right, measuring impact will be difficult or, in the worst case, impossible.
There are nine questions that, when answered, give us the information we need to make proper recommendations for solutions that measurably impact performance. The answers also provide guidance and direction for what we’ll measure.
Here are some performance needs analysis questions:
- What is the business goal?
- What is the opportunity or problem to solve?
- Who is supporting the business goal?
- Whose performance is needed?
- What are the performance requirements?
- What is the performance gap?
- What are the performance activators?
- What are the measures and key performance indicators (KPIs)?
- What are the risks?
These questions are intentionally focused on employee performance, business goals and measurable outcomes. A discussion where these questions are asked in response to training requests transforms the conversation from order-taking to impact-making. Results from the performance needs analysis inform decisions about what you will measure.
Reason No. 2: You didn’t define impact before you tried to measure it.
This is the impact measurement failure I have encountered most often. It is the most prevalent challenge and most immediate opportunity. If you conduct a performance needs analysis first, you’ll have what you need to define impact before training is designed, launched and delivered.
When you proactively define impact, you describe expectations for what the influence of training looks like in a worker’s day-to-day behavior and actions. You can answer the question, “If training fulfills its purpose, in combination with other performance triggers and activators, what does impact look like?”
These are examples of impact, defined:
- In combination with new pricing, new sales technology and our new marketing strategy, we expect the training program for consultative selling will contribute to a 5% increase in market share.
- We are introducing an eLearning solution for product assembly, compensation incentives and quality controls for errors, and expect these combined efforts to contribute to reducing defective products by 10%.
- Our call center metrics for customer satisfaction are two points below goal, so we are introducing manager coaching tools, new call routing technology, job aids and training for customer engagement to raise customer satisfaction scores by three points.
Impact is defined as increasing market share by 5%, reducing the error rate by 10% and increasing customer satisfaction scores by three points. If you do not define impact before you try to measure it, you’re throwing darts in the dark. A post-mortem approach to impact measurement potentially leads to failure, but a proactive approach, where impact is defined before design, utilization and participation, sets you up for impact measurement success.
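Targets defined up front like these can be recorded in a simple structure and checked once results come in. The sketch below is a hypothetical illustration in Python; the metric names and measured numbers are invented for the example, not drawn from a real program.

```python
# Hypothetical impact targets, defined BEFORE training is designed
# and delivered (numbers mirror the three examples above).
targets = {
    "market_share_increase_pct": 5.0,
    "defect_reduction_pct": 10.0,
    "csat_point_increase": 3.0,
}

# Invented post-program results, for illustration only.
measured = {
    "market_share_increase_pct": 5.4,
    "defect_reduction_pct": 8.2,
    "csat_point_increase": 3.0,
}

def target_met(metric: str) -> bool:
    """True if the measured result reached the pre-defined target."""
    return measured[metric] >= targets[metric]

for metric in targets:
    status = "met" if target_met(metric) else "not met"
    print(f"{metric}: target {targets[metric]}, measured {measured[metric]} -> {status}")
```

The point of the structure is the sequence: the `targets` dictionary exists before the program runs, so measurement is a lookup against an agreed definition rather than a dart thrown in the dark.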
Reason No. 3: You didn’t manage expectations for isolating impact.
One of the impact analyses I am most frequently asked to perform is isolating the impact of training. There is interest in knowing training’s unique contribution. Control and test groups are the best solution.
Control and test groups can give the most compelling, reliable and credible signal for impact and isolate training’s effect. With all else equal, the groups are homogeneous; the only difference is that the test group receives training while the control group does not. Withholding training is not punitive or exclusionary; rather, it is done to compare what happens when the difference between groups is the introduction of training as a measurable variable.
Consider this example:
A health care provider wants to reduce waiting time for patient check-in. Group A (the test group) is trained on a new patient intake process while group B (the control group) continues the “old” process. A month after training, intake times for both groups are measured. Note: Other variables that impact intake time are considered as well.
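The comparison in this example can be sketched as a difference in average intake times between the two groups. The numbers below are invented for illustration, and the sketch assumes, as the example notes, that other variables affecting intake time are held roughly equal across groups.

```python
from statistics import mean

# Hypothetical check-in times (minutes) recorded a month after training.
test_group = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0]     # Group A: new intake process
control_group = [8.3, 7.9, 8.5, 8.1, 8.0, 8.4]  # Group B: old process

def estimated_training_effect(test: list, control: list) -> float:
    """Average reduction in intake time attributable to training,
    assuming the groups differ only in whether they were trained."""
    return mean(control) - mean(test)

effect = estimated_training_effect(test_group, control_group)
print(f"Estimated average reduction in intake time: {effect:.1f} minutes")
```

In practice you would also want enough observations per group, and a significance test, before treating the difference as a credible signal rather than noise.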
Working at the speed of business does not always allow for control and test groups (some might call this a pilot). That’s the reality. Training requests are often urgent, and time does not allow for the kinds of impact insights gained from test and control groups.
We must manage expectations for how far we can go with impact analytics. In the absence of test and control groups, we can estimate the isolated impact of training, but doing so will not produce the strongest signal for impact. Telling the truth about what we can and cannot measure, and exploring the pros and cons of test and control groups, manages expectations.
Opportunities for Success
There are no magic wands, pixie dust or magic potions for impact measurement success. No doubt, there are pitfalls and traps that lead to impact measurement failure. However, there are steps you can take to mitigate the risks: Conduct a performance needs analysis, define impact before you try to measure it and manage expectations for isolating impact. Measuring the impact of training may be difficult, but it’s absolutely possible.