Measuring the business impact of training has long been a notorious challenge for learning and development (L&D) professionals, and understandably so: It requires a certain level of business savvy, in addition to working with stakeholders well before rolling out a program.

In this episode of The Business of Learning, we spoke with industry experts Lindsey Clayton, a learning and development consultant at Caterpillar; Dr. Allan Church, managing partner at Maestro Consulting and former senior vice president of global talent management at PepsiCo; and Dr. Jaimie Krause, director of L&D in global learning and enablement at Indeed, for tips that can help “demystify” training measurement and prove the value of training to the business.

Listen now:

Additional Resources:

To learn more about Training Industry’s Measuring the Impact of L&D Certificate course, download the program brochure below. 

The transcript for this episode follows: 

Speaker:

Welcome to The Business of Learning, the learning leader’s podcast from Training Industry.

Sarah Gallo:

Hi, welcome back to The Business of Learning. I’m Sarah Gallo, senior editor here at Training Industry.

Michelle Eggleston Schwartz:

And I’m Michelle Eggleston Schwartz, editor-in-chief. Before we begin, here’s a brief message from our sponsor, Training Industry’s Measuring the Impact of L&D Certificate program.

Ad:

The Measuring the Impact of L&D Certificate program offers the tools you need to measure and elevate the business impact of training. Rooted in the research-based Training Manager Competency Model, you’ll leave this elite Training Industry course with actionable strategies and best practices that you can immediately apply in your role to better measure and assess training programs of all kinds. To learn more about the program, visit trainingindustry.com or check out the show notes for today’s episode. Where will training measurement take you?

Sarah Gallo:

Today, we’re talking about a topic that has long been a challenge for many L&D professionals, regardless of their level of experience: measuring the business impact of training. Proving training’s value to the bottom line is essential in positioning the training function as a lever for performance improvement and in gaining stakeholder buy-in and support. But as pretty much any L&D professional can tell you, connecting training to business outcomes isn’t easy.

Today, we’re speaking with some experts who can help demystify training measurement and hopefully offer some best practices that you’ll be able to apply after listening to today’s episode. With us, we have Lindsey Clayton, a learning and development consultant at Caterpillar; Dr. Allan Church, managing partner at Maestro Consulting and former senior vice president of global talent management at PepsiCo; and Dr. Jaimie Krause, director of L&D in global learning and enablement at Indeed. Lindsey, Dr. Church and Dr. Krause, welcome to the podcast.

Dr. Allan Church:

Thanks for having us.

Dr. Jaimie Krause:

Thanks for having me.

Michelle Eggleston Schwartz:

Yes, welcome. I’m excited for this conversation today, because as Sarah just mentioned, we constantly hear from our audience that measuring the business impact of training is a huge challenge and undertaking. I’d love to get your thoughts about why you think that is. Why is proving training ROI so difficult?

Dr. Jaimie Krause:

I’m happy to start us here. Hi, everyone, this is Jaimie Krause. Thanks so much for having me. In the recent conferences that I’ve attended and in many of the industry publications I’ve read, there are two reigning topics right now in L&D, and those are impact and AI. For impact, there’s so much activity and emphasis on those three little letters, ROI, which stand for return on investment, and it’s really exciting. Everyone is super, super excited to be able to show ROI on enablement solutions.

However, as you all said, it’s not easy. There are a couple of reasons for that. With training, it can be challenging to represent a cause-and-effect relationship, because we have to isolate the effect of training and identify what else could be responsible for the improvement to performance that we might’ve seen, such as employee incentives, marketing solutions, product enhancements, you name it.

The first thing is, depending upon the relationship that L&D has with the business, we may or may not have a line of sight into business strategy. Without that line of sight, we may not know to account for all the appropriate inputs as we tell our impact story. That access, I would say, is the first challenge right there, especially as we think about the ROI equation in reference to the Phillips model. This is all before considering the influence of our own L&D capabilities and bandwidth, which is another challenge that we have.

We have to have the bandwidth, and L&D is notoriously lean in staffing. We have to have the bandwidth to validate our evaluation instruments, to set aside time to analyze insights and to communicate impact. Capacity, I would say, is another. In a way, I suppose it sounds cheeky, but we have to ask ourselves, what is the ROI on measuring ROI? The ROI Institute recommends evaluating only about 5% to 10% of our trainings for return on investment, based on the size and scope of the program.

Lastly, I would say managing expectations with the business about what’s feasible to measure, and being able to really highlight where it does and doesn’t make sense to invest in that ROI demonstration. Just to summarize, access, capacity and credibility would be three reasons I think it’s challenging to show ROI.
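[Editor’s note: For readers unfamiliar with the ROI equation in the Phillips model that Dr. Krause references, here is a minimal sketch in Python. The formula itself is standard, net program benefits over fully loaded program costs, expressed as a percentage; the dollar figures below are hypothetical.]

# Phillips-model ROI: net program benefits divided by fully loaded
# program costs, expressed as a percentage.
def phillips_roi(benefits: float, costs: float) -> float:
    return (benefits - costs) / costs * 100

# Companion metric in the Phillips methodology: the benefit-cost ratio.
def benefit_cost_ratio(benefits: float, costs: float) -> float:
    return benefits / costs

# Hypothetical program: $80,000 fully loaded cost, $120,000 in benefits
# that have been isolated to the training and converted to money.
print(phillips_roi(120_000, 80_000))        # 50.0 (%)
print(benefit_cost_ratio(120_000, 80_000))  # 1.5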

Lindsey Clayton:

Yeah, and I’d like to add to that just a bit, because you mentioned a lot of the business context that might be responsible for the difficulty in linking training outcomes to business outcomes, but it can be broader than that too: market conditions, for example. Most of us are working with global companies, so conditions are so highly variable that, even if you have the business information, there is a believability issue when you try to link training outcomes specifically to business outcomes.

It’s really important to make sure that when you are designing programs and deciding what to measure, and I love the point you made about being very specific about how and when you measure, the conditions are right for being able to make that link. You could have the best data, you could have access to the data, you could have capability, but you’re still going to have a believability issue at the end of the day unless you have really strong controls over that. Oftentimes we don’t, and that’s another reason why it’s just so hard.

Dr. Allan Church:

Yeah, I think those are great points. I would say from my perspective, I’m going to take a little bit of an OD, or organization development, lens to this discussion at times, because to me, when training is thought of as just pure training, I think you can actually do a pretty decent job of measuring impact if it’s very straightforward. If it’s a global intervention or something that’s more complex, I think it’s a whole different ballgame. To me, a lot of what we do in organizations in that regard is more of a change intervention than just training.

I take it from an OD perspective, and that brings in a whole other set of lenses that I’m not sure everyone is equally comfortable with. But for me, many organizations will say, “Oh, give me some training to solve that problem. I want some training in this area. I want this. I want this,” without thinking it through. The senior leaders who are asking for that don’t always think through the implications: really why they want it, what they’re doing, what they expect to get out of it or, to Jaimie’s point, any realistic resourcing against it.

And Lindsey, your point too: linking it to the business, understanding what’s going on and the other factors in the organization. For me, it’s stepping back and saying, “All right.” It’s sort of like contracting, right? I know we don’t always have the opportunity to do that with senior leaders and we don’t always have visibility of strategy, but to the extent possible, it’s stepping back and saying, “All right, what is the definition of impact for this intervention? Why are we doing it? Why did you pick training in the first place? Let’s say it’s the right intervention, that’s great, but what else needs to go with it to make it work?”

It’s not just training. What else needs to go with it? Usually I would argue measurement, but there are other things; we can talk about that later. Understand the impact needed, and get realistic time horizons for measurement and for outcomes. Again, Jaimie, I think you touched on this a little bit, but there’s often this budget cycle every year that most companies operate under.

If you’re lucky, you might have budget for a couple of years for a project, but some training efforts take a long time, especially if they’re linked to culture change, behavior change, new types of manager capabilities or new skill sets. It might not be something that happens overnight or in a month. It’s thinking about that and aligning the clients with the timing needed, and then finally applying that systems thinking. Lindsey, to your point, everything else that goes with it: Is the reward system going to reinforce the new behaviors?

Are the communications reinforcing it? Do things like structure enable people to go apply new skills learned? There’s lots of other things going on that you need to look at holistically to make it work.

Sarah Gallo:

Yeah, that holistic approach is key. I love what all of you mentioned, especially you, Jaimie, about determining the ROI of measuring ROI. I think there’s this longstanding notion that every single training intervention needs to be measured. Being more strategic about that is definitely key.

Dr. Allan Church:

On that point, exactly. I forgot to mention this, and it’s coming up later, but one of the things I was told when I joined PepsiCo was that we don’t measure ROI. We don’t believe in ROI. Jaimie, your point triggered that; I meant to mention it. Part of the reason the head of learning at the time told me that ROI was circling the drain, at least at PepsiCo, is that in that organization, historically, training and development and learning and leadership development, which are bundled together there, have always been very effective.

Over the years that’s been a pretty standard approach to ROI, although I will say at times different leaders will come into the organization and start asking the same questions and you get into a scenario where you do need to do a study or prove something to get over that hump. But it is interesting, Jaimie. I agree. The ROI on the ROI is a great question.

Sarah Gallo:

Well said. Great. I think it also would be helpful for our listeners if you could each tell us more about your own experience with measuring training impact and specifically what have you found that works or doesn’t work when it comes to training measurement?

Lindsey Clayton:

I’ll jump in, and I want to build on what Allan just said about ROI. At Caterpillar, we’ve had very similar discussions. I remember a few years ago, a lot of the training literature was shifting from ROI to ROE, return on expectations, and that’s really where we’ve been living.

What works for us in our learning measurement is not so much trying to correlate training interventions with selling more tractors, engines and parts, but trying to ask: What outcomes are we trying to drive from that OD perspective, from the organizational perspective? What metrics do we have that we can track against to get a sense of what that looks like? And how do we create that measurement plan as part of the initial design of those programs?

Certainly I would say that when that is in place, that’s when I’ve been the most successful at creating that story and articulating it to stakeholders and anyone else who would benefit from having that information. What doesn’t work, and I think others started to mention this, is when it’s all ad hoc and there’s this expectation that the data is just available. It’s been very exciting: There’s a whole lot of people data around, but it’s not necessarily something that we can use readily.

I think that’s where there’s still a lot of struggle, of saying, “Ooh, I’d really like to know this tidbit.” The data might be there, but the resources and the capability around turning that data into something that’s actually actionable can be very difficult. Going into it, I think it’s really important to understand what you are trying to measure and to keep it very targeted versus adding in all these different curiosities. And that can be challenging.

Dr. Jaimie Krause:

Lindsey, that’s a great point, and I love what you said about what doesn’t work: thinking about things in an ad hoc, one-off sort of way. At Indeed, what we’ve found to be really effective is, as much as possible, standardizing our approach to measurement: standardizing our sentiment survey, working with our sales effectiveness teams to build and maintain dashboards that we can self-serve, and standardizing how we tell our impact story so the format is clear and easy to follow. Folks know what they’re getting when they’re getting it.

We’ve also required observable behavior change for mandatory training requests. To your point about aligning our work with business goals and business priorities and having available data, we want to ensure that when there is a training request, we know what knowledge, skills or attitudes are expected to be impacted, and we have the data sources to tell us what that impact looks like. I think those are some of the things that have been really effective at Indeed.

I think the last thing I’ll say is that what doesn’t work is not agreeing on what impact looks like before accepting a training request. I think we’ll get to it in a little while, but it’s about being able to understand and align with the business, not only on what we’re hoping to impact, but also about being really fair about… Allan, I think you said this really well.

Training will account for a percentage of the performance changes that we see, but what else might be contributing to that? Trying to account for all the business strategies that are at play as we’re looking to demonstrate the impact of a request, I think that’s really important.

Dr. Allan Church:

Let me share a little story from my time at PepsiCo; there’s a Harvard Business School case that you can look up on this, I think from 2009, about Steve Reinemund, the CEO who took over at the time. I literally joined the day he took over as CEO, and he had a very large DE&I agenda. Back then it was more diversity and inclusion, but it’s the same thread. It was the first time there was ever a center-led, focused agenda on DE&I for the organization. It had always been decentralized over the years.

Part of that process, and there’s a whole data argument for why we were doing it from a turnover and culture perspective, but part of that agenda in the mid-2000s was literally to create a new organization focused on learning around inclusion: new training, the first-ever centrally funded, centrally led training and development intervention for the entire organization. Up until that point, it had been distributed to different groups, and there were COEs, but they did things in pockets, not large-scale interventions.

We approached it from a culture change perspective, as you can imagine, because that’s going to be my orientation, and linked a lot of other processes to that work. We started by asking Steve, “What are you really trying to achieve?” It’s not just, “I want to put training in place.” Yes, I want everybody to go through training over the next two or three years and I want to see impact, but impact on what? We took a look at culture: How would culture be measured? We looked at our org surveys and changed those.

We implemented a brand new pulse survey that went out every quarter and looked at people who had attended the training and people who hadn’t, because we could segment that, and looked at their attitudes in real time as they went to the training and came back: what they remembered, what they felt, how they felt about the organization and our agenda, versus a sample of peers who did not go. We tracked that for three years. We also linked things to PMP, our performance management objectives, and to talent metrics.

But that pulse survey process, and linking it to behaviors in the following years through 360s and other feedback tools, really enabled us to say, “Here are the messages. Here’s what we expect from you when you attend. Here’s how we want to track how you’ve changed, your perceptions, and here’s the behavior set we want you to have afterwards, and we’re going to measure you on it and reinforce it.” That really drove that training agenda incredibly. In the end, and I have the data here for you, we went from 33% of the organization feeling like we were making progress to 71% over three years.

That was driven by, yes, the penetration of the training, for sure, but also by the fact that we were able to measure it and show that. And by showing that data back, kind of the ROI of the ROI in a way, by being able to show the impact through simple, targeted pulse surveys, we could show that it was making a difference, and that reinforced senior leader communications to get people engaged. It created the whole energy source that drove that program for five, six years.
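[Editor’s note: A minimal sketch of the segmented pulse-survey comparison Dr. Church describes; the favorability scores are hypothetical. The idea is to track attendees against sampled peers who have not yet attended, quarter over quarter, on the same survey items.]

from statistics import mean

# Quarterly favorability (1-5 scale) on the same survey items.
attended = [3.9, 4.1, 4.3, 4.4]      # people who went through the training
not_attended = [3.5, 3.5, 3.6, 3.6]  # sampled peers who have not gone yet

# The quarter-by-quarter gap is the signal: if the training is landing,
# the trained cohort should pull away from its untrained peers.
gap = [round(a - b, 2) for a, b in zip(attended, not_attended)]
print("quarterly gap:", gap)                # [0.4, 0.6, 0.7, 0.8]
print("average gap:", round(mean(gap), 2))  # 0.62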

Michelle Eggleston Schwartz:

That’s great. Thank you all for sharing your experiences. I think you all helped illustrate just how important it is to really understand the impact and the outcomes and the behaviors that you’re trying to change before you even agree to that training request. Thank you all for sharing. As we all know, there are so many models and frameworks out there to help learning and development professionals measure training’s impact. I’d love to hear which you’ve found to be the most helpful and if you have any tips for our listeners looking to begin using these models or frameworks on the job.

Dr. Jaimie Krause:

I’m happy to jump in on this one. At the Association for Talent Development conference in 2022 in Orlando, I had the good fortune of attending a session led by Kristopher Newbauer, the CHRO at Rotary International, on aligning ADDIE, the instructional design model that we all know and love, with business impact. He takes ADDIE and almost creates an accordion out of it.

He’s found three areas across the ADDIE model in which to really incorporate evaluation. Keeping evaluation in mind throughout the design process is a really nice way to ensure that we are able to measure the impact of our training all along. We’ve been touching on it throughout the podcast, but it starts with the analysis phase, right off the bat: making sure that we are identifying the business challenge and quantifying a business goal in a language that resonates with the business, which is often money.

It could be metrics, and metrics tie back to money, but we’re ensuring that the work we’re doing is laddering up and that we’re focused on the right things in relation to the request. The next thing he talks about is determining whether training is an appropriate strategy to influence this goal. Because I’m sure we’ve all seen it: Training is not always the answer, or it’s not the only answer. There may be lots of other things that we need to consider if we’re trying to achieve a goal.

Let’s imagine, though, that we determine training is the answer. Then we have to think about what knowledge, skills and attitudes would be important to reach the goal and identify learning objectives from there. Also in the analysis phase, Kristopher talks about establishing evaluation criteria: What does it look like if we achieve this goal? What are the metrics associated with achieving it? How are we going to measure this?

And then in the design phase, it’s thinking about, all right, we may have existing instruments that we can leverage, but are there others that we need to design as part of our evaluation strategy? Of course, we have to think about at what level we want to measure the impact of this work, levels one through five. Is ROI something that we want to consider as part of this?

And then lastly, if you’ve been looking at the ADDIE model as this accordion, and you’ve established that evaluation criteria, you’ve aligned with the business, and you’ve identified and designed your evaluation instruments, then for that last E at the end, the evaluation, all you need to do at this point is execute it. Everything is really nicely set up for you, and you’ve been accounting for evaluation all the while.

That’s really helped me think differently about how to incorporate evaluation at every step, so it’s not that we’ve accepted a training request, executed it, and now have to think about how we’re going to measure it. Measurement is throughout the process.
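[Editor’s note: One way to picture the “accordion” Dr. Krause describes is as an evaluation plan whose fields get filled in during analysis and design, so the final E is pure execution. A minimal sketch follows; the field names are the editor’s illustration, not Newbauer’s.]

from dataclasses import dataclass

@dataclass
class EvaluationPlan:
    # Analysis phase: anchor the request to the business first.
    business_challenge: str         # the problem, in the business's language
    quantified_goal: str            # often money, or metrics that tie to money
    training_is_appropriate: bool   # training may be only part of the answer
    target_ksas: list               # knowledge, skills and attitudes to influence
    evaluation_criteria: str        # what achieving the goal looks like, and how it's measured

    # Design phase: choose instruments and levels before delivery.
    instruments: list               # existing or newly designed surveys, dashboards, etc.
    levels_to_measure: list         # which of levels 1-5 to report, incl. ROI at level 5

# By the evaluation phase, nothing is left to invent: you simply execute the plan.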

Dr. Allan Church:

I’m obviously a fan of all the old traditional models. I go back to Kirkpatrick, even before Holton and some other people worked on the model, and every version I’ve seen over the years is pretty much the same in my mind, with some tweaks. The fundamental basics are five levels, or however many you want to have: something around how people feel about it initially, what they’ve learned over time, the application behaviors that people see in terms of change, overall impact, which I might put in the context of culture or other types of things, and then return, which is the hardest one to measure, as we all talked about.

I tend to have those five in mind on every project I’m looking at, whether training or anything else, and have that sense of survey data, feedback data, performance data and culture data, and then, if you can with enough metrics, you start to look at business impact, but that’s the hardest piece. I also like to talk through, and Jaimie, I think this is what you’re getting at with the accordion, helping the client, helping my organization, understand the difference between the micro, mezzo and macro perspectives on some of this.

Because again, as I said earlier, I think core training for some types of skills at the micro, individual level is pretty straightforward. You can do it, you can check it, you can see if it works, and you can look at whether job performance changes and hopefully roll that up. But when you start to pull that up to higher levels, you start looking at broader organization development constructs like team climate and team dynamics, and then the macro level of culture, and that takes broader measurement strokes.

It takes a bit more of a time horizon too, as I mentioned. Thinking about all of it together really can help you lay out where you have outages in your measurement strategy. If you think about the three levels crossed with the five types of data you want (reaction, learning, application, impact, return), you can really get strategic and figure out: Have I got everything I need in place? Maybe I don’t need it all, but have I thought about all the bases before somebody comes to me after the fact?

Because the hardest thing in any kind of evaluation work is figuring out that you needed to measure something after you’ve passed the point at which you should have started measuring it, right? You forgot to take a baseline. Getting a baseline is key.
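[Editor’s note: A minimal sketch of the “grid construct” Dr. Church describes: the three organizational levels crossed with the five types of data, used to spot gaps in a measurement strategy while a baseline can still be taken. The planned measures below are hypothetical.]

LEVELS = ["micro", "mezzo", "macro"]
DATA_TYPES = ["reaction", "learning", "application", "impact", "return"]

# Hypothetical coverage: which cells already have a planned measure.
planned = {
    ("micro", "reaction"): "post-session survey",
    ("micro", "learning"): "knowledge check",
    ("micro", "application"): "manager observation at 90 days",
    ("macro", "impact"): "quarterly culture pulse survey",
}

# Walk the full grid; every empty cell is a question to settle now,
# before the baseline moment passes.
for level in LEVELS:
    for data_type in DATA_TYPES:
        measure = planned.get((level, data_type), "GAP: baseline needed?")
        print(f"{level:>5} x {data_type:<11} {measure}")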

Lindsey Clayton:

I mean, you both covered that really well. I’ll just add that at Cat, to Jaimie’s point earlier, we’re also using standard models to define what we’re going to measure across programs. We may not measure everything for every program, but we still have a standard approach. On the question you asked about resources, I’ll share that we are currently using the “Evaluating the Impact of Leadership Development” book from the Center for Creative Leadership.

I think Allan covered it all. It just has a very methodical way of addressing all those things that you might want to measure and helping you think through designing that. But again, it really comes back to the beginning: The more forethought and the more proactive you can be when designing your measurement, no matter what model you’re using, the better.

Sarah Gallo:

Perfect. Thanks so much for sharing more about those models and frameworks. We know there’s a lot out there, and we’ll link to some other job aids and helpful tools in the show notes for this episode. But we’ve covered a lot of ground so far and we’ve definitely talked about why it’s important to identify what impact looks like before rolling out a solution or even working on a solution. But can you each shed some more light onto how to actually identify which learning metrics you should track? Do you have any examples of common training metrics that our listeners should consider?

Dr. Allan Church:

To me, one way of pushing this, and it’s a bit of the diagnostic discussion, but in order to know what to measure, you have to know why you’re doing it: why you’re engaging in this particular intervention, why you’re doing training. I think most of us know about the five whys, but I would always start there and push the client, the senior leader, the sponsor, whomever it is that’s driving the training request, to explain why they want this, why they need to have it, why they expect to see change and what’s important about the whole thing, so you can get to the actual set of expectations and changes that they want to see.

You get to the real level at which you can measure it. Because a lot of times, from what I’ve seen, it’s going to be, “I want some of this training, I want some of this intervention, because it’s cool or it’s popular, or because we have some money and I want to spend it, or because I think this is important,” but they haven’t really thought through why they think that, or what the real purpose is and how it might link.

If you can push them to articulate the strategy for you, the real reason, the root cause behind why something’s being asked for, you can figure out how to measure it, because fundamentally you’re going to get to: What needs to change? What do I expect to see? Jaimie, to your point, and Lindsey, you were getting into this too, at some level the training may be part of the answer, or not the answer, or maybe the perfect answer, but you need to know why you’re doing it.

Dr. Jaimie Krause:

Allan, I think you spoke really well to this idea we talk about a lot: Let’s not be order takers; let’s be consultants, let’s be credible business partners. I think getting down to that root cause is a great route to becoming the consultants that we really want to be. Because you’re right, sometimes business leaders have come and said, “Hey, we would like a training on time management. That’s the thing that we need right now.”

In engaging in that analysis with the business leader and asking them why, we may find that time management isn’t at all the thing that we should be focusing on, and the challenge is in another lane altogether, or something deeper, or maybe part of it but not all of it. I think those conversations are so useful, and I do think they mark a real change in the industry and in how we’re positioning ourselves as enablement professionals. Rather than saying yes, we’re asking why.

Lindsey Clayton:

Yeah, and I’ll add on to that too. Once you get to the why part, and I know part of the question was about standard measures, it’s really hard to be standardized and say we are always going to measure this when you’re looking at the why and developing questions around that. But there are the precursors: What was the learner’s perception of the value of the training? How does the learner feel about their ability to apply the training?

Across all of our training programs, we measure both learner reaction immediately following training, your level one, and level three with some space, six to nine months after training: How are you feeling about it now? Depending upon the program, we may also get manager or direct report input as well. Those, I think, are pretty standard measurements that are relatively easy to implement across training programs, and they are very impactful.

So much of what we’re talking about today is the business outcome, but a lot of times our stakeholders also understand that what we’re trying to do is impact culture. The anecdotal feedback, and even turning that into quantitative feedback via surveys, is really significant for telling the story: Did the training do what we wanted it to do? How did it impact our employees?

Dr. Allan Church:

And that raises a really interesting point, Lindsey, for me. Sometimes we assume that ROI means you have to go all the way to the end. I mean, it is return on the investment. A lot of the models talk about that as level five or level six.

But in some cases, if culture change is the agenda and you’re driving, let’s say, the inclusion capability I was talking about before, which was manager behaviors, et cetera, or something else around leadership, whatever it might be, if you’re driving something like that and the ultimate goal is to influence the culture, it’s not your job as the training leader to carry the outcome past the organization’s culture to the business. You’re really influencing the culture.

Is it really fair to make that argument? I think it’s a great point. There are cases where, as we discussed, you may want to say, “Listen, the realistic expectation and the realistic ROI here is culture change, as measured by some survey work or by employees or by turnover, rather than trying to go all the way to pure business results.”

Lindsey Clayton:

I just want to add: A few months ago, I had an article on trainingindustry.com about our frontline manager development program at Caterpillar, and we’re measuring it right now; we’re conducting an impact study. That’s exactly what we’re doing. Throughout the program, we’ve done level one reaction, that immediate response. We’ve gone back and resurveyed everybody who took the training in 2022 and gotten their perceptions of how that training is continuing to impact them, or not.

We’re coupling that with some HR data, like our employee insights, and seeing if we can glean any themes or key takeaways from that. But really, to that point, what we’re trying to measure here and what we’re trying to articulate is that this has provided value to our business by improving the culture and improving leadership capabilities at the front line. That is the ROI, and it’s not direct. It’s not dollars and cents necessarily. We could extrapolate it out to that when you talk about attrition and things like that, but really the buck stops at: What is attrition looking like?

Has that bar changed? That’s a very robust measurement, I guess is what I’m saying. While ROI is the blue dot in the sky, as everybody has stated here, it may not really tell the story better than other measurements and methodologies.

Dr. Jaimie Krause:

Lindsey, that is so well said, and it sounds like you all are doing a great job of building that chain of evidence really nicely across various levels. I think what’s interesting too, and this is maybe somewhat paradoxical to say inside of this conversation, is that ROI is expensive to measure, or certainly can be, because of the investment in demonstrating it. At the same time, we can’t overlook level one, the employee sentiment on the training, especially when we’re designing new programs. It’s vital to understand how a program landed with our audience, with our learners.

In fact, the science of learning tells us that if we don’t capture folks’ attention and engagement, we’re not going to be able to move toward any other level. We’re not going to see knowledge transfer occur, as we’d measure through level two, or see behavior change happen, as we’d measure through level three, if we haven’t captured their engagement and attention. I think that’s a really important callout. ROI is exciting, but the other levels can’t be ignored either.

Dr. Allan Church:

I would totally agree. Going back to that grid construct, I think you need to get data at all levels as much as you can and link it together across different types of measures, even if they’re similar, so manager surveys versus culture surveys versus initial reaction surveys, et cetera, because it tells a story. And you can’t skip the reactions. I mean, you can’t do it. People expect to be able to give you feedback on any program, especially new programs. I think those are great points.

Sarah Gallo:

Such great points there. I love what all of you said and especially what Allan said about that realistic ROI. That’s great to keep in mind. We’ll be right back after a brief message from Training Industry’s Certified Professional in Training Management Program.

Advertisement:

The Certified Professional in Training Management credential, or CPTM, is designed to convey the essential competencies you need to manage a training organization. When you become a CPTM, you gain access to alumni resources like monthly peer roundtables and a full registration to the Training Industry Conference & Expo. If you start today, you can earn the CPTM credential in as little as two months. To learn more, visit cptm.trainingindustry.com.

Michelle Eggleston Schwartz:

As we record this episode in July 2023, many companies in the U.S. are experiencing layoffs and budget cuts due to the current state of the market. How can measuring the impact of training prove the value of L&D to the business, which is even more important in uncertain economic times? I’d love to hear your thoughts.

Lindsey Clayton:

I’d certainly be interested in what my co-presenters here have to say as well, but for me, it’s really about the storytelling. People love to hear a compelling story, and having the data behind the impact that you’re making with your training can really put the exclamation mark on the point that you’re making. However, you can’t just send numbers; I guess that’s the point I’m trying to make. It’s not that the data alone, or measuring the impact, will do it.

You have to also combine that with the messaging. I think that’s what we were talking about all the way through: What is the experience for the learners? How is this impacting the people in the business? So much of training is people focused. You can’t leave that part out. But then you couple that with, “We’ve got this great, compelling human story, and here’s some information about the way that human story is impacting our business performance.” To me, that’s how we prove the value of L&D. It’s still absolutely human led.

Dr. Allan Church:

Yeah, I think that’s a great point, Lindsey. I think storytelling is a key skill for just about every change-related intervention, HR intervention, talent intervention, anything. We’re professionals in the field. We’re all used to kind of looking at the data that we have and presenting it, but storytelling for impact is critical. You can make amazing stories out of data that’s not as robust as it might appear, and those stories can still have a significant impact on the vibrancy of a program, or an entire function, to be honest.

I don’t know, Michelle; you asked this question, but I’m not sure that much has changed. I mean, I’ve been around a long time in the field, and L&D and OD have always been targets during different business cycles over the decades. I’m not sure I see that much changing. What I do see is a little bit more emphasis on global business services and the move of functions, learning functions in particular, but also analytics and some others, to other parts of the world, offshoring elements of them.

I worry a little bit about that, because to me it ties right into, Lindsey, your point about the ability to tell stories, if other people are really doing all the execution and are deep in the data. You might have a small, or even smaller, L&D group. Jaimie, you mentioned they’re lean to begin with, right? You might get even smaller and just have somebody doing strategy and maybe some content work, with all the other work going away. That makes it much harder to get your hands around what’s really going on.

To me, being able to tell a compelling story, figuring out going forward where you’re going to pick your story and going deep on it to make it meaningful, is what you need to do to bolster the organization you’re trying to keep alive and vibrant. I would say it’s important to tell that story, and it’s important to think very carefully about where you want to put your emphasis so that you can go as deep, end to end, as you possibly can to support the need for your work and the impact of your work.

Dr. Jaimie Krause:

Allan, I totally agree. And yes, Lindsey, you said it really nicely: being able to tell that impact story in a really compelling way. But if we’re measuring and reporting on the business impact of training, it assumes that we’re aligning with the business on priorities and goals and what matters to them. That will help L&D have a seat at the table, and it will help us identify the enablement strategy aligned to business impact. We will focus on the things that matter to the business, and in this way we’ll become business partners.

I think that’s one thing that we can continue to do in our field: really work with our business partners and our business leaders to ensure that the things we’re focused on are the things that matter to them and the things that they’re trying to drive toward. We should really be seen as partners, because we can impact so much. I mean, Allan and Lindsey, you both were talking about all the ways that training can influence everything from an individual rep’s skills to the organizational culture.

There’s so much that we can do, but we can also help the business better align their requests with things that are going to ultimately impact the things that matter to them too.

Sarah Gallo:

Perfect. Well, everybody, we’ve touched on so much here and I know we could keep this conversation going and going. But before we do wrap up, are there any final takeaways that you’d like to leave our listeners with today?

Dr. Allan Church:

I’d like to just build on Jaimie’s point. I think the whole idea is having the training function, or the learning organization or capability, if you will, there are lots of different names, hold a strategic place in the HR function and in the business. One of the things I would ask of all of my training colleagues and all of my OD, IO psychologist and TM colleagues is to work together, because there’s a tendency for specialists and specialties to silo, even when it’s an integrated agenda or an integrated business strategy you need to solve for.

I think by coming together and learning about different specialties, whether through cross-training or simply partnering better, you do a much better job of integrating your impact and thinking about all the connectivity points. In my years at PepsiCo, I never built programs where I wasn’t working closely with the head of learning and development, whether it was that person building training to support my interventions or me building data tools, surveys, feedback tools and performance metrics to support his or her learning interventions.

I think it’s really critical you bring them all together and work not only with your clients, which is, Jaimie, absolutely critical as well, and your HR business partners, but also the other professionals who can make the impact of your training even greater.

Dr. Jaimie Krause:

Yeah, that’s a great point, Allan. I love that. You’re right: If we can work to centralize change management, which I think is what we’re really getting at, across all the various parties who contribute to it, we’ll all be laser focused on the things that matter the most to the business, and it’ll be a much more compelling story across all of our work.

Lindsey Clayton:

I agree. I don’t think I have much to add; both of you have summed it up really nicely. Collaboration is definitely key. Jaimie, one of your last points was being tied into the business, ensuring that that connection is clear and understood by all sides. I think that is probably even more important than the true ROI: It’s about what we are measuring together and how we will go about achieving it. Thank you.

Sarah Gallo:

On that note, thank you all so much for speaking with us today on the podcast. How can our listeners get in touch with you if they’d like to reach out?

Dr. Allan Church:

Email for me is fine, allanhc2020@gmail.com.

Lindsey Clayton:

I’m on LinkedIn; you’re welcome to add me there. Lindsey Clayton should be relatively easy to find, and I’m sure there may even be a link provided with the podcast or some sort of reference point, so feel free to reach out.

Dr. Jaimie Krause:

I agree. LinkedIn is great for me too. I have a common name with an uncommon spelling, so do make sure you put J-A-I-M-I-E Krause in LinkedIn. I’m looking forward to connecting with you all.

Michelle Eggleston Schwartz:

To learn more about training measurement, visit the show notes for this episode at trainingindustry.com/trainingindustrypodcast.

Sarah Gallo:

If you enjoyed this episode, let us know. Leave a review wherever you listen to your podcasts. Until next time.

Ad:

If you have feedback about this episode, or would like to suggest a topic for a future program, email us at trainingindustry.com or use the contact us page at trainingindustry.com. Thanks for listening to the Training Industry podcast.