Training measurement is a notoriously difficult aspect of the training manager role. However, measuring the impact of training is essential to proving the value of learning and development (L&D) to the business, and your own value along with it.

In this special episode of The Business of Learning, sponsored by GP Strategies, Bonnie Beresford, director of performance and learning analytics at GP Strategies, answers our most pressing questions about training measurement.

Additional Resources:

To learn more about training measurement, download the brochure for the Measuring the Impact of L&D Certificate course below. 

Speaker 1:

Welcome to The Business of Learning, the learning leaders podcast from Training Industry.

Michelle Eggleston Schwartz:

Hi there. Welcome back to The Business of Learning. I’m Michelle Eggleston Schwartz, editorial director at Training Industry, here with my co-host Sarah Gallo, a senior editor.

Sarah Gallo:

Welcome. This episode of The Business of Learning is brought to us by GP Strategies.

Ad:

GP Strategies enables people and organizations to perform at their highest potential, creating a world where business excellence makes possibilities achievable. Subscribe to the GP Strategies podcast, Performance Matters, where they interview industry experts, explore best practices and share innovative insights on topics like the one we’ll discuss today.

Michelle Eggleston Schwartz:

We live in a world driven by data, but say the word “measurement,” and many training professionals immediately get nervous. But why is measuring the impact of training so difficult? How can we identify key learning metrics early on? And most importantly, how can learning leaders master training measurement once and for all to help position themselves and the entire training function as a core business asset? To find out the answers to these questions and more, we’re speaking with Bonnie Beresford, director of performance and learning analytics at GP Strategies. Bonnie, thanks for speaking with us today.

Bonnie Beresford:

It’s my pleasure to speak with you and your guests on one of my favorite topics. Thank you.

Sarah Gallo:

Perfect. Yes, welcome, Bonnie. Well, to kick things off, I think it’d be helpful if you could define training measurement for our listeners. What do we really mean when we say “measure the impact of training”?

Bonnie Beresford:

Let’s start with that first question: How do you define training measurement? Because different things might come to people’s minds when they think of measurement, I like to position it around purpose: Why do we measure? We measure to inform, and we measure to answer questions. Whether it’s getting on the scale in the morning or checking our Fitbit to see how many steps we took, we’re using measurement to answer questions, to provide information and, really, to provide insights for decision making. These decisions come in all shapes and sizes, from getting on the scale and deciding whether to have the ice cream, to learning measurement, where we might be looking at efficiency, effectiveness and impact. When people start thinking about measurement, I would hope they would start with the questions they’re trying to have measurement answer. If you simply say, “I want to measure,” but you don’t know why or what for, it’s going to be really hard to come up with good measurement, because you won’t know what you’re looking for. It could be as simple as how many employees are taking our training; that’s a measure. Or did customer service reps find the training more relevant than salespeople, and why? Or how well are we utilizing our virtual classroom licenses? Or all the way to, did my million-dollar leadership program have a positive return on investment? Without a sense of what you’re looking for, you could just be sitting on a pile of data. You might have heard stories of people who have wonderful dashboards with all kinds of reports, but they’re not answering anybody’s questions. So, again, start with the question: What are you trying to answer, and how can you use data to get there?

Now I’ll come to your second point, the impact of training. For many, and especially more recently, measuring impact is kind of the holy grail: Is my program moving the needle? And if so, by how much? How much is it impacting the business? When we talk about measuring impact, in my perspective it should always be about business impact, using real business data. Getting outside the learning data to the business data to demonstrate business impact [is key].

Sarah Gallo:

Such a great point. Going back to one thing you said, Bonnie, about the importance of asking the right questions: What tips do you have for learning leaders who don’t know which questions they should be asking?

Bonnie Beresford:

You can start pretty simply, working just with the data you have. Do you know how many students you have? Do you have a description of your audience by job role? How many people there are in each role? Where they are geographically? Do you know what your course catalog looks like, and the age of your courses? How many people are taking new courses versus old courses? It could be, do I have too many trainers, or not enough? What’s my facility and trainer utilization? Or it could get into business questions like, is this training working better for new hires than for veteran employees? The types of questions are all over the board, and it really just takes stepping back and asking, “Gosh, what do I wish I knew?” It could be in the business impact realm, but it could very well be in the operational metrics as well.

Sarah Gallo:

Perfect, thanks for breaking that down for us. While we know that many learning leaders probably want to get started measuring the impact of training, like Michelle mentioned earlier, it’s not easy. Why do you think it’s so hard for so many of us to get training measurement right?

Bonnie Beresford:

The answer is twofold. First, we tend to stay within the walls of training to do our measurement, using the data that we own, that we love and that we can control. Second, it’s how we define success. Are we defining success on our own terms? In a learning organization, we might say we were successful if we completed our pilot and had a certain number of attendees complete the program, or if we converted 15 live courses to a virtual delivery platform. If that’s how we measure success, that’s within our own universe, not in the business world. We need to think about what success looks like in the business world to get to business impact. Did we improve new hire retention? Did we reduce on-the-job injuries? Did we increase production run rates? You can see the difference in the measures here: on one side, learning operational measures; on the other, business outcome and output measures. So it’s how we define success, and in the same breath, it’s the data that we use. If we confine ourselves to our own data because it’s easy to get and we understand it, we’re only going to get so far. We have to get comfortable talking about business data and showing the linkages between how our programs affect that 90-day retention, affect safety ratings, affect production rates. When we step outside the learning area and start getting integrated with the business, in terms of how they define success and what their measures are, we’re going to start having more success. But it’s uncomfortable for a lot of learning people to jump over that wall into the business world.

Michelle Eggleston Schwartz:

Those are some really great points, Bonnie. I especially like what you said about looking at success through the lens of the business. L&D professionals need to challenge themselves to get outside their comfort zone, in a sense, and immerse themselves in the business metrics and in what success looks like through the lens of stakeholders and senior leaders. It’s incredibly difficult, but it’s so important.

Bonnie Beresford:

To echo that, Learning Technologies Group just did a research project and identified that 96% of learning leaders want to measure impact, so we’re all talking about it. Brandon Hall did some follow-up research and found that only 16% of learning organizations feel they have the capability to measure impact, and that same research asked the respondents why: If you want to, why aren’t you? The number one reason was competing priorities, and I get that. Other things come up, and measurement takes time and focus. But the second reason was that they didn’t know how. They didn’t know what to measure, and they didn’t know how to measure. I scratch my head when I think about that, because many of us have grown up with the Kirkpatrick Model. We’ve got our four levels of evaluation, from satisfaction through learning gain, behavior and business impact, but we still aren’t there. I mean, the Kirkpatrick Model was first published in 1959, so that’s what, 62 years ago? And we still haven’t mastered it.

What’s missing is a mental model around how to get to that impact. How do we make that alignment? We’re missing the alignment between learning and business impact, or I could say learning and business outcomes, because we aren’t conversant in those business metrics. What we need to do is partner with our business stakeholders to identify what those outcomes should be. It shouldn’t be up to the learning organization to define the business outcomes. The business is coming to us saying, “I have a need. I’ve got a safety problem, I’ve got a production problem, I’ve got a turnover problem,” whatever it might be, and asking us to help them solve it. But we have to dig in and figure out the underlying behaviors that we could train that would address those business outcome problems. That’s the mental model: being able to have this conversation with the business partner and unpack that.

Throughout my career, I have been fortunate enough to work with great organizations, and I have developed a tool called a measurement map, and a process for building that map in collaboration with business partners to create a picture of that alignment. If this is your business problem, what’s the evidence of it? What would employees have to do differently to resolve that problem for you? How would you measure that? How would you know they were doing it? We back that all the way down, and pretty soon we’ve got learning objectives for our training programs. We also know what to measure, and we have the business partner on board. They have told us what’s important, what good looks like in their terms, what’s on their dashboard. We’re not having to invent this; they’re telling us, and we are partnering with them to define what success would look like and how our training could help close the gap.
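[Editor’s note: For readers who think in code, here is one minimal way to sketch the causal chain a measurement map captures. The questions, answers and example metrics below are hypothetical illustrations of the idea, not GP Strategies’ actual tool.]

```python
# A minimal sketch of the causal chain a measurement map captures, working
# backward from the business problem to learning objectives. The questions,
# answers and metrics here are hypothetical illustrations of the idea, not
# GP Strategies' actual tool.
from dataclasses import dataclass

@dataclass
class MapLevel:
    question: str  # what we ask the business partner at this level
    answer: str    # what they say success looks like
    metric: str    # how it shows up on their dashboard

measurement_map = [
    MapLevel("What is the business problem?",
             "New-hire turnover is too high",
             "90-day retention rate"),
    MapLevel("What would employees do differently if it were solved?",
             "Managers hold structured onboarding check-ins",
             "% of new hires with 30/60/90-day check-ins logged"),
    MapLevel("What would they need to learn to do that?",
             "How to run an onboarding check-in conversation",
             "Skill-check pass rate in the onboarding course"),
]

# Read top-down, this is the logic model ("if this, then this"); read
# bottom-up, the last level's answers become learning objectives.
for level in measurement_map:
    print(f"{level.question}\n  -> {level.answer} [{level.metric}]")
```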

Michelle Eggleston Schwartz:

Those are some good points, Bonnie. I really like that you brought up the industry research around the need: Everyone wants to measure the impact of training, but where do you start? Our own research here at Training Industry has found that over 60% of people surveyed feel they need more development in this area. We’ve also seen it firsthand in traffic to content related to measurement and analytics on our website this year alone, so there’s really been an increase in interest in this topic over the past year. Through that lens, where can L&D leaders start on this journey toward measurement? Do you have any recommendations?

Bonnie Beresford:

In my work with clients, from nonprofits all the way to the Fortune 10, learning organizations are struggling with this. I realized that this process I’ve developed over the years actually does work, and that others want to learn how. Initially, people would contract me to do this work with them, but my passion is more to enable other people to do this work themselves. I would love to see every learning organization have an embedded measurement and analytics function. I want to enable those people to have these kinds of business conversations and to build something like a measurement map, so that they too can get aligned, get to the business data and start integrating. It’s not an easy thing to do. If you don’t have a mental model and a process, you’re stuck trying to invent it yourself, but there are good ways to start coming at this if you’ve got the right tools.

Michelle Eggleston Schwartz:

Definitely. From your own experience, are there any types of programs in which it’s easier to measure impact than others?

Bonnie Beresford:

Yes, there are. When the learner or the worker is directly accountable for their own output, for their own metrics, it’s much easier. For example, salespeople are supposed to sell; they have volumes or dollars or units, something that’s easy to measure. Customer service reps often have very specific metrics, like customer satisfaction or call handle time, that are directly attributable to the individual. When you’ve got things that are that direct, it’s much easier, because you can look at people who got trained and people who didn’t, and look at their before performance and their after performance. What you have then is kind of a natural test and control group, and you’re all set up to do an observational study, which is the scientific approach to doing this. Those are much easier to measure.

Things that are a little more difficult would be leadership development. It’s easier for first-level leaders, because their measures might be closer to their job role. At the senior executive level, the metrics tend to be higher level, big numbers as opposed to individual outputs, so that gets a little more difficult. But there is a way. One of my favorite books is Douglas Hubbard’s “How to Measure Anything.” He’s an economist, not in the learning profession, so he’s coming at this from an econometrics perspective: How do I measure intangibles in business? Pretty much, if there’s a will, there’s a way; you can figure out how to measure it. I would encourage your listeners to check out that book, because it will give you several aha moments about how to measure those intangibles and give you a way of thinking about the problem at hand.

I want to build on that, too, because Douglas Hubbard’s work has been foundational in my own. You were asking earlier about how people can learn this, and my passion for sharing the how-to has gone into building out a measurement academy. I don’t want this to be a promotional spot for GP Strategies’ Measurement Academy, but it came out of our customers’ desire to do this work themselves. In the academy, we teach, through hands-on work, microlearning, videos, assignments and live sessions, how to build these measurement maps and how to put together a credible measurement plan, so that I can show impact in a credible way and causally link my training to the business outcomes. Again, it’s about passing on this knowledge, because the more people in our profession who understand and can do measurement, the more credibility we’re all going to have with our business stakeholders, and we’re going to elevate the entire industry because we’ll be able to show our value.
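[Editor’s note: Here is a minimal sketch of the natural test-and-control comparison Bonnie describes, using invented data and column names. The trained group’s before/after lift is compared against the untrained group’s, a simple difference-in-differences.]

```python
# A sketch of the natural test-and-control comparison described above:
# compare the before/after change for trained reps against untrained reps
# (a simple difference-in-differences). Data and column names are invented.
import pandas as pd

reps = pd.DataFrame({
    "rep_id":       [1, 2, 3, 4, 5, 6],
    "trained":      [True, True, True, False, False, False],
    "units_before": [100, 90, 110, 105, 95, 100],   # avg monthly units, pre
    "units_after":  [120, 105, 130, 108, 94, 103],  # avg monthly units, post
})

reps["change"] = reps["units_after"] - reps["units_before"]
lift = reps.groupby("trained")["change"].mean()

# The trained group's lift beyond the untrained group's lift is the estimate
# of the training effect, net of whatever else changed for everyone.
print(lift)
print(f"estimated training effect: {lift[True] - lift[False]:.1f} units/month")
```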

Sarah Gallo:

Definitely, thanks for sharing that, Bonnie. I love what you mentioned about how in some programs it’s just easier to measure numbers and revenue versus, say, someone’s emotional intelligence, so that makes complete sense. I’d love it if you could share a little bit about your own experience. When you first started getting involved with training measurement, and now that you’re empowering others in the community to do this work, what did your journey look like? How did you overcome some of those very real initial challenges that many learning leaders may be facing right now in terms of measurement?

Bonnie Beresford:

That’s a great question, and there was a very specific incident in my career that really pushed me in this direction. I was working for a major automotive company, supporting their retail training organization, which trained all the dealerships and all the dealerships’ salespeople across the country. The company had just launched a new model, and sales were flat. Your listeners may appreciate this: The sales department came to the training organization and said, “You need to improve your training, because sales are down.” They were blaming the training department; the reason the salespeople weren’t selling the car was that they weren’t trained well enough. The learning leader turned to me and said, “I’m really tired of being the fall guy. I’m tired of taking the blame every time a product doesn’t meet its expectations, when it might be the advertising, the marketing, the design, the engineering, the quality of the product. I don’t like it always being training that’s at fault. And I never get credit, by the way, Bonnie, when the product sells well. Nobody ever says, ‘Wow, that training must’ve been great.’” He challenged me to show the impact of his training.

At that point, I did not know how to do it, so I called in some experts I knew professionally through the International Society for Performance Improvement and had them consult with me. That’s when we started building out this concept of a causal chain of evidence from training to behavior, to outcomes, to the business goals. We were able to do one of those observational studies, looking at the before and after performance of people who got trained and people who didn’t, and we were able to show credible evidence of the impact the training was having. With that, I was hooked: Yes, you can do this. It doesn’t just apply to automotive, and it doesn’t just apply to sales training; we could do this everywhere. For me, it was that “aha” moment that yes, it can be done, and yes, there is a process you can follow.

Think about it kind of like a clinical drug trial. We’re all very much aware these days of clinical drug trials for medicines and how they use test and control groups, controlling for heredity, diet, exercise, sleep patterns and all that sort of thing. We try to control as much of that as we can in the workplace, too, by looking at tenure, job role, age, demographics and the region of the country somebody’s in. We control for those things and apply the same methodology used in the sciences, now applied to the human sciences of human capital, so we can show that cause-and-effect relationship.
One thing I always get challenged with is, “You can’t prove it; there’s other stuff going on. There’s a pandemic, there’s a recession, there’s a product shortage, there’s a chip shortage.” So, to make that causal argument for the effect of your training, you really need three things. First, the cause must precede the effect: Your training has to happen before you see the increase in sales. Okay, that’s easy. Second, there must be a correlation between the cause and the effect: As I train people on safety, say, the training is correlated with a reduction in injuries. Okay, that’s a correlation. Those two are pretty easy hurdles to clear when you’re making a causal argument. The third one is trickier, and that is to rule out everything else, all those plausible alternative causes. To do that, you need two things. You need a logic model that says, if this, then this; and if this, then this; and if this, then this. That’s where the measurement map comes in: It’s actually a causal model, a logic model. You need that logic to make a causal argument. The second thing you need is a good research design. That goes back to our measurement academy and what I’m teaching there, which is not only measurement mapping but also how to design a credible measurement plan, which is your research design. All of that goes into making a credible argument for your training. Because, again, we’re looking at business data and we’ve got skeptics, and we want to be sure we understand what success looks like. The business partners have told us what their metrics are. They’ve told us all the other influencing factors, so we can account for them in our analysis. They’ve given us all the “Yeah, but…” So we have that conversation with the business partner, figure out what success looks like in measurable terms, figure out all the other things that could be affecting sales or safety besides our training, and design a credible analysis. Then we’re ready to measure. It’s really about having a mental model for how to tackle business impact and then having a process that you can apply again and again to different initiatives and different programs.
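[Editor’s note: The three causal checks can be sketched in a few lines of analysis code. The simulated data and the covariates below (tenure, region) are assumptions for illustration; a real study would control for whatever factors the business partner raises.]

```python
# A sketch of the three causal checks described above: (1) the cause precedes
# the effect, (2) cause and effect are correlated, and (3) plausible
# alternative causes are ruled out by controlling for them. The data is
# simulated and the covariates are examples, not a prescribed list.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "trained":      rng.integers(0, 2, n),
    "tenure_years": rng.uniform(0, 20, n),
    "region":       rng.choice(["east", "west"], n),
})
# Simulated outcome: injuries fall with training and also vary with tenure.
df["injuries"] = (3 - 1.0 * df["trained"] - 0.05 * df["tenure_years"]
                  + rng.normal(0, 0.5, n))

# Check 1 is a data-collection rule: only count injuries recorded after each
# person's training date, so the cause precedes the effect.

# Check 2: correlation between training and the outcome.
print(f"correlation: {df['trained'].corr(df['injuries']):.2f}")

# Check 3: regress the outcome on training while controlling for tenure and
# region, so the `trained` coefficient estimates the effect net of those
# plausible alternative causes.
model = smf.ols("injuries ~ trained + tenure_years + C(region)", data=df).fit()
print(model.params.round(2))
```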

Michelle Eggleston Schwartz:

I like that there’s a process. I think you did a great job of reiterating that yes, there is a process that learning leaders can follow, and that this is achievable and doable. I’d be interested to hear what specific skills learning leaders need in order to do this effectively. Do you have any recommendations?

Bonnie Beresford:

This goes back to our very first question about starting with a question. I would say the number one skill for the leader is curiosity, and being able to frame that curiosity in the form of a question. I wonder if my training’s working. Okay, well, I wonder if my training is improving sales. I wonder if my training is improving customer satisfaction. I wonder if my training is reducing workplace injuries, or slip-and-fall injuries. I wonder if it’s different for people in one location versus another, or one job role versus another. It’s that curiosity, because when you start asking those questions, you enable your team to think along with you: “Gosh, how could we answer that?” So that’s one. Number two, for a learning leader, is basic analytic skills. If you’re the learning leader, you’re going to have a data analyst or somebody on your team who is really conversant with the tools you have, the number crunching, the merging of data and that sort of thing, but you should be comfortable enough with data yourself. I guess I would coin that as data literacy. The third is storytelling. Now that you’ve found your answer, how are you going to describe what you found? I’ll share with you: Early in my career, when I was doing this, I would do these studies and come up with a 30- or 40-page report that I thought was magnificent. The number one critique I got was, “Where’s the executive summary? Can you put this in three PowerPoint slides?” I was so excited by my research that I wanted to share it with everybody, but not everybody cares. They want to know: What did you find? What does it mean? What should I do next? So the curiosity tells you what questions you want to answer. The data literacy gives you comfort interpreting what your data analyst brings to you. And the storytelling is how you share it back to the business in a story that resonates, a story more about what you did for the business than what you did within the training, if you know what I mean. Because, again, our stakeholders are our business partners: How did we help them achieve their goals?

Michelle Eggleston Schwartz:

Definitely. I think that’s a great roadmap of key skills, that balance between technical skills and soft skills: curiosity leading the way, asking the important questions that lead to the right data and the right answers. I think that’s incredibly important for learning leaders to know. There’s another point I wanted to touch on: why it’s important for learning leaders to measure the impact of training. Bonnie, can you share the risk if learning leaders don’t measure the impact of training?

Bonnie Beresford:

I was talking with an organization just this morning about this, and they told me their risk right now was being outsourced. If they can’t show their value, their whole learning organization is under threat of being outsourced. That’s kind of extreme, but we’re all in a world of scarce resources: time, money and people. And the people are not just the people in the learning organization; they’re also the students consuming our training, and their time is precious. We need to make sure our training is on target, that it is aligned with the business outcomes the business needs to achieve, because then we’re not wasting their time. If we’re delivering training programs that are not moving the business needle, it’s a waste of everybody’s time. Organizations are looking for value in everything they do, and that includes their learning organizations. So if we want to be relevant, if we want to be recognized as part of the solution as opposed to just an expense, we need to show that value. We need to show how we are aligned. And with this process of getting aligned, of getting the business to articulate their success, I’ve had clients tell me that they are having better conversations and that their relationships with their business partners have improved so much, because the business now realizes: Oh, those people in L&D are trying to help me. They care about what’s on my dashboard. That’s what we’re working toward. It has shifted the relationship from the L&D people being order takers to thinking alongside the business: What does success look like? How would you measure it? What behaviors do people need to drive that success? It has really tightened the connection between the learning organization and the business, and that’s all part of showing the value of the learning department to the organization at large. You really need to have a process for engaging with the business and the internal expertise on your team to do it. The risk, at the far end of the spectrum, could be being outsourced.

Michelle Eggleston Schwartz:

That is a big risk. The process and the expertise are both needed to deliver that value to senior leaders, and that’s what they need to see for L&D to remain relevant within the organization. It’s so crucial to moving away from that order-taker status, as you said, and really delivering essential business value.

Bonnie Beresford:

Right, and some of it will be showing impact, and some of it is simply having those discussions with your business partner. They may have told you they want to improve productivity, but do you even know what they mean by that? What is productivity? How do you measure productivity here? For which job role do you mean productivity? If they have never articulated what they mean by productivity, but they’re holding you accountable for improving it, the learning department is in a pretty dire state, because they don’t know what productivity even looks like. If you talk to the business partner and they say, “Well, productivity looks like doing A, B and C,” now you know you need to train on A, B and C; it’s much clearer.

I have an example with a restaurant. Let’s say you are a restaurateur, you want to improve the profitability of your restaurant, and you know you need to train your front-of-house staff, your wait staff. If the owner of the restaurant said to the learning department, “Help me improve profitability,” I’m not sure the learning department would know exactly where to start. But if the learning consultant asked the restaurateur, “Well, how would you expect our wait staff to improve profitability?” he might say, “Well, I would like them to upsell people. We need to increase the amount of the average check per table.” “Ah, okay, and how would you do that? What would you expect them to do?” And he comes back and says, “Well, we have a very fine wine list and a lovely dessert menu. If we could train our wait staff to upsell on desserts and to pair wines with the dinner, so we would see a bottle of wine and a couple of desserts on every ticket, that would increase profitability.” Now, as the training department, you know what to train on. Now the business partner knows, too, so when the restaurateur sees people showing the wine menu and the dessert menu, it’s like, oh okay, that training is working. It’s connecting the dots all the way back: Whatever the metric is, productivity or profitability, define it in measurable, observable terms, down to the performer. That’s the process, and it enhances the relationship with the business partner, because it forces them to articulate things they may never have articulated themselves.
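[Editor’s note: Here is a tiny worked version of the restaurant example, with invented prices and ticket values, showing how the observable behaviors (wine and dessert on the ticket) roll up to the metric the owner watches (average check per table).]

```python
# The restaurant example in numbers: the observable behaviors (a bottle of
# wine and a couple of desserts on the ticket) roll up to the owner's metric
# (average check per table). All prices are invented, and the "every ticket"
# assumption is the owner's best case, not a prediction.
tickets_before = [42.00, 38.50, 45.00, 40.00]   # checks with no upselling
wine, dessert = 30.00, 8.00                     # hypothetical menu prices
tickets_after = [t + wine + 2 * dessert for t in tickets_before]

avg_before = sum(tickets_before) / len(tickets_before)
avg_after = sum(tickets_after) / len(tickets_after)
print(f"average check: ${avg_before:.2f} -> ${avg_after:.2f}")
```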

Sarah Gallo:

For sure. Going back to one other thing you mentioned, Bonnie, about the importance of storytelling and [telling stories to] stakeholders: Like you said, they want that executive summary. How can having those storytelling and interpersonal skills help training managers break down complex data in a way that clearly proves their value to the C-suite and to those making decisions?

Bonnie Beresford:

The storytelling can be so rich, and some of it is how you choose to visualize your data. Tables, charts and graphs all have a place. Usually, in an executive presentation, you want a couple of high-impact graphs that pop with the big numbers that changed, and your storytelling should reflect what the training has done: We set out to do this, the training was intended for this, and before training we were here, and after training we were here. Look at the improvement, and quantify that improvement for them, not in a formula but in a graphical way, whether it’s a big number that says “10% increase in sales” or, if it’s car sales, a picture of 10 more cars on the screen, something that carries the story. Another lovely thing to attach is any qualitative data you might have, whether that’s testimonials from students or comments on your Level 1 satisfaction surveys. When you can bring in anecdotes to add life to the data, to personalize it a little bit, it makes for an even more memorable story. In storytelling, it’s important to know your key message: What do you want them to walk away with? Figure out what that is, and make sure that message is front and center and clear. You can have an appendix a mile long with all the other details about the study population, how many people were in it and all that sort of thing, but keep the story concise, know what it is, and then figure out how to best tell it visually. And throw in some anecdotes for the color commentary.
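[Editor’s note: One possible sketch of the “one high-impact graph” idea in matplotlib, with invented numbers: a simple before/after bar chart with the headline percentage annotated front and center.]

```python
# One possible "high-impact graph": a before/after bar chart with the
# headline number annotated front and center. All values are invented.
import matplotlib.pyplot as plt

before, after = 100, 110  # e.g., average monthly sales per rep

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(["Before training", "After training"], [before, after],
       color=["lightgray", "steelblue"])
# The big number is the story; the bars are just context.
ax.annotate(f"+{(after - before) / before:.0%}", xy=(0.5, after * 1.05),
            fontsize=18, fontweight="bold", ha="center")
ax.set_ylabel("Avg monthly sales per rep")
ax.set_ylim(0, after * 1.25)
plt.tight_layout()
plt.show()
```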

Michelle Eggleston Schwartz:

Definitely. Adding life to the data, I love that. It’s critical to delivering your message effectively to the business, for sure. Well, before we wrap up today, Bonnie, you’ve shared so much. Do you have any final tips for our listeners out there who are struggling to measure the impact of training? Any last best practices?

Bonnie Beresford:

Sure. We’ve covered a lot of ground today, and I’ve gone all the way to the observational study and all of that. I would encourage people to just get started. Start small. Get into the data, start asking a couple of basic questions and see what you find. Start slicing your data by demographics, by region, by job role, and see if you learn anything different there. Learn how to build a measurement map; learn how to do this alignment with a friendly business partner. Learn what’s important to them: What are their KPIs? Get some of their data and merge it with your learning data; just try to bring the business data and the learning data together and see what you find. Do some data exploration there, with your questions in hand, and see what you learn. So start small, and remember: Even if you don’t find anything, you’ve learned something, because if a program doesn’t work, don’t you want to know that too?
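[Editor’s note: A minimal sketch of the “start small” advice above: merge learning data with a business partner’s KPI data on a shared employee ID, then slice by region and job role. All names and values are invented placeholders for whatever your systems export.]

```python
# A sketch of "start small": merge learning data with a business partner's
# KPI data on a shared employee ID, then slice by region and job role to see
# where the training is (or isn't) moving the metric. All names and values
# are invented placeholders for whatever your LMS and the business export.
import pandas as pd

lms = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "completed_course": [True, True, False, False],
})
kpis = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "region": ["east", "west", "east", "west"],
    "job_role": ["csr", "csr", "csr", "csr"],
    "csat_score": [4.6, 4.4, 4.0, 4.1],
})

merged = lms.merge(kpis, on="employee_id")
print(merged.groupby(["region", "completed_course"])["csat_score"].mean())
```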

Sarah Gallo:

Love that. Well, what a great note to end on, Bonnie. Asking those questions, maybe we’ll like the answers and maybe we won’t, but at least we’ll have that information, and that’s really what it’s all about. So thank you so much for speaking with us today on The Business of Learning. How can our listeners get in touch with you after today’s episode, if they’d like to reach out?

Bonnie Beresford:

Well, I am on LinkedIn as Bonnie Beresford, and also through gpstrategies.com. As my final thought, I would just like to leave you with my personal mantra: Measure to prove; measure to improve. Proving is one thing, but improving is really the game we’re all in. After all, we are all in the human performance improvement business.

Michelle Eggleston Schwartz:

That’s great. It was a pleasure talking with you today, Bonnie, on this very important and complex subject. Thank you. I think our audience and listeners are going to find this information incredibly valuable.

Bonnie Beresford:

Well, my pleasure sharing.

Michelle Eggleston Schwartz:

For more insights on training measurement, and to learn more about Training Industry’s measurement certificate, which can help you develop many of the skills we’ve discussed today, check out the show notes for this episode at trainingindustry.com/trainingindustrypodcast.

Sarah Gallo:

And as always, don’t forget to rate and review us on your favorite podcast app. We love hearing from you. We’ll see you next time.

Speaker 1:

If you have feedback about this episode or would like to suggest a topic for a future program, email us at info@trainingindustry.com or use the contact us page at trainingindustry.com. Thanks for listening to the Training Industry podcast.