Most learning and development (L&D) professionals agree on the necessity of evaluating training. However, implementing a robust training evaluation program is difficult, as I learned the hard way during the nearly three years I spent building one for a medium-sized training office.

The training office used evaluation forms for Kirkpatrick levels 1 and 2 and disseminated an annual training needs survey. Some programs did a better job of using the evaluation forms than others, and recordkeeping of the results was inconsistent. The survey results were valuable sources of information, but, at best, 19% of the workforce responded; the average response rate was between 8% and 9%. And, as with similar surveys, it was difficult to extract valuable information from the qualitative comments.

The data was useful to the training office, but we needed more to demonstrate our office’s value to upper management. We trained staff in the Phillips Return on Investment (ROI) Model, which helped us collect data that verified the business impact of the training programs. Demonstrating that business impact elevated the conversation beyond the training office’s needs to the needs of the whole organization.

The History of Training Evaluations

We can trace training evaluation back to the late 1940s, when Raymond Katzell formulated a four-step process to measure training effects. Donald Kirkpatrick referenced Katzell’s four-step process in creating his four levels of training evaluation in 1959, and Jack Phillips created the ROI method of training evaluation in the 1970s. Other training evaluation methods soon came onto the learning and development scene.

Most training evaluation methods start by measuring participants’ reactions to the program and then whether they learned anything. Then, there are measures to determine if the training led to behavior changes that benefit the organization. Some evaluation methods add a top layer to discover if the training was efficient and cost-effective.

Lessons From the Agile and Lean World

I came to the training and development field from the information technology (IT) project management field, which was revolutionized by Agile and lean thinking. Agile and lean approaches focus on optimizing value by eliminating waste and releasing smaller batches of products more frequently. Metrics are vital to lean and Agile management, especially metrics based on leading indicators that can predict value. When I became a training professional, I searched for a similar set of metrics for training evaluation.

Objectives and Key Results (OKRs)

During the 1990s and 2000s, many of my IT projects involved building business dashboards. In the process, I learned about concepts such as the balanced scorecard, key performance indicators (KPIs) and management by objectives. When I became an L&D professional, I looked for similar concepts in training evaluation but found little other than using ROI to predict a training course or program’s future value. I wanted to build a training analytics dashboard to demonstrate to C-level managers the value of the training office.

What I needed was objectives and key results (OKRs), an update of the 1970s management by objectives system. Organizations establish one to three objectives every quarter, written as qualitative statements such as, “Significantly increase customer satisfaction with the organization’s services.” The key results are three to five measurable milestones under each objective that indicate when the objective has been achieved. For the customer satisfaction objective, for example, the key results might be, “Customer satisfaction scores increase by 10%” and “Service renewals increase by 25% this quarter.”

Measuring Training through Objectives, Results and Enablers (ORE)

I paired OKRs with ROI by creating the ORE system. ORE stands for objectives, (key) results and enablers. I used the organization’s strategic plan and mission statement to create three objectives with associated key results. For each key result, I developed two to three enablers, which were training outcomes. Suppose the goal is to significantly increase customer satisfaction and one of the key results is that service renewals increase by 25% this quarter. In that case, one training outcome might be to help salespeople use persuasive selling techniques, and another might be to increase salespeople’s ability to troubleshoot common service issues.
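To make the shape of the system concrete, the ORE hierarchy described above can be sketched as a small set of data structures. This is only an illustrative model, not the author's actual tooling; the class and field names are assumptions chosen for clarity.

```python
from dataclasses import dataclass, field

# Illustrative sketch (not the author's actual tooling) of the ORE
# hierarchy: each quarterly objective carries three to five measurable
# key results, and each key result carries two to three training enablers.

@dataclass
class Enabler:
    """A training outcome expected to drive a key result."""
    outcome: str

@dataclass
class KeyResult:
    """A measurable milestone that signals progress on an objective."""
    description: str
    target: float          # e.g. 25.0 for "renewals increase by 25%"
    actual: float = 0.0    # filled in after the quarter ends
    enablers: list[Enabler] = field(default_factory=list)

@dataclass
class Objective:
    """A qualitative quarterly goal drawn from the strategic plan."""
    statement: str
    key_results: list[KeyResult] = field(default_factory=list)

# The customer satisfaction example from the text, encoded in this model:
objective = Objective(
    "Significantly increase customer satisfaction with the organization's services",
    key_results=[
        KeyResult(
            "Service renewals increase by 25% this quarter",
            target=25.0,
            enablers=[
                Enabler("Salespeople use persuasive selling techniques"),
                Enabler("Salespeople troubleshoot common service issues"),
            ],
        )
    ],
)
```

A structure like this maps directly onto a dashboard: objectives become panels, key results become gauges against their targets, and enablers link each gauge back to specific courses.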

I use predictive ROI methods to forecast the impact of training outcomes on the key result or results. After the quarter is over, the organization discovers which objectives we achieved based on the key results. Then, using ROI isolation techniques, I calculate the training outcome’s contribution to the key result(s) and adjust the training programs and courses to better achieve the key results and objectives. With ORE, I can build an analytical dashboard and demonstrate the measurable impact of training courses, programs and the training office itself.
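The isolation step above can be sketched numerically. In the Phillips methodology, the monetary benefit attributed to training is adjusted by an estimated contribution percentage and a confidence percentage before computing ROI as net benefit over cost. The figures below are invented for illustration, not taken from the article.

```python
def isolated_roi(gross_benefit: float, contribution_pct: float,
                 confidence_pct: float, program_cost: float) -> float:
    """Phillips-style ROI with an isolation adjustment.

    gross_benefit    -- monetary value of the key-result improvement
    contribution_pct -- estimated share attributable to training (0-1)
    confidence_pct   -- estimator's confidence in that share (0-1)
    program_cost     -- fully loaded cost of the training program
    Returns ROI as a percentage.
    """
    adjusted_benefit = gross_benefit * contribution_pct * confidence_pct
    net_benefit = adjusted_benefit - program_cost
    return net_benefit / program_cost * 100

# Hypothetical numbers: a $200,000 renewal gain, with training credited
# for 40% of it at 80% confidence, against a $50,000 program cost.
# Adjusted benefit = 200,000 * 0.40 * 0.80 = 64,000
# ROI = (64,000 - 50,000) / 50,000 * 100 = 28%
roi = isolated_roi(200_000, 0.40, 0.80, 50_000)
print(f"{roi:.1f}%")  # → 28.0%
```

The conservative double discount (contribution times confidence) is what makes the resulting number credible to C-level audiences: any error is an understatement of training's contribution, not an overstatement.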

Training evaluation was not only crucial to training and development offices in the past; it is vital to the field’s future. The key is linking past work on training evaluation to the broader field of organizational metrics to tell training’s story better.