Knowing whether participants apply what they learned in a training program back on the job is a critical issue for both learning and development (L&D) and the business executives L&D supports. Demonstrating that learning is applied on the job is central to L&D being viewed as a credible partner by senior executives.

Unfortunately, measuring on-the-job behavior change is not an area where many L&D professionals excel. According to ATD’s report, “Effective Evaluation: Measuring Learning Programs for Success,” only 54% of organizations evaluate at least some of their learning programs at Level 3 (behavior), and those organizations evaluate only 34% of their programs at that level. Yet when asked what value Level 3 evaluation data had for their organization, 79% said it had high or very high value. The message is clear: L&D isn’t providing senior executives with the Level 3 training transfer data they want.

The reasons given for this disconnect include: “It’s too hard,” “I don’t know how,” “My leader doesn’t require it,” “I don’t have access to Level 3 data,” and “It costs too much.” While there may be an element of truth in some of these, none of them absolves L&D professionals of the responsibility to provide business executives with the data they want.

What’s needed is a new approach to conducting Level 3 evaluations: one that is easy to implement, produces credible, high-value data and provides clear direction for corrective actions to improve training transfer. Let’s examine one such method to illustrate how to make evaluations simpler and more effective.

Easy to Implement

To succeed, an evaluation method must be easy to implement. The simple Level 3 evaluation method described here requires only collecting data from 25 to 30 participants and asking three questions.

When selecting participants, random selection is essential: it ensures that the data you collect represents the entire participant group rather than a subset, which increases the credibility of your results. While the full group may include more than 30 participants, a randomly chosen sample of 25 to 30 is generally adequate.
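For example, here is a minimal Python sketch of the random draw, assuming your full roster is available as a list; the roster contents and the sample size of 30 are illustrative, not prescribed by the method:

```python
import random

# Hypothetical roster of everyone who completed the program.
all_participants = [f"participant_{i:03d}" for i in range(1, 121)]

# Draw a random sample of 30 so the data represents the whole group.
sample = random.sample(all_participants, k=30)
print(sample)
```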

Once you have identified the participant group, you next need to decide how to collect the data. Three options are available:

  • A focus group (virtual or face-to-face).
  • Interviews with program participants.
  • A survey.


Of the three options, a focus group is the most efficient and effective. It takes less time than conducting one-on-one interviews and allows you to ask follow-up questions, which a survey does not.

However, before gathering any data, there are two essential prerequisites. The first is to wait at least 30 days post-program. The second is to create a job aid that summarizes the program content and provide a copy to participants before collecting any data.

The 30-day time lag allows participants to apply what they learned in the program back on the job. Consequently, you will collect more valid training transfer data. However, 30 days is only a guideline. You can extend it if more time is needed for participants to apply what they learned.

Creating a job aid that summarizes the program content is crucial because it will counter the forgetting curve effect — the decline of memory retention over time.

Distribute the job aid to participants at the start of the focus group session, review the information and answer any questions about the program content. If you are running a virtual focus group, send the job aid to participants in advance of the meeting and follow the same process. Either way, you will have refreshed participants’ memory of the program content and are ready to start asking the questions.

The data collection process itself is simple and revolves around getting answers to three questions:

  • What percent of the material taught in the program are you applying back on the job?
  • How confident are you that your estimate is accurate, where 0 = no confidence and 100 = complete confidence?
  • If you are not applying 100% of the program material back on the job, what obstacles have prevented you from using what you learned?

The data collected from the first two questions are the basis for calculating the amount of training transfer associated with the learning program. Question three, in contrast, lays the groundwork for taking targeted corrective actions to improve training transfer.

Produce Credible, High-Value Data

With the data collected, you are ready to calculate the amount of training transfer associated with the learning program. A great option is the expert estimation technique, which Jack Phillips of the ROI Institute developed in 1983. You can learn more in Jack Phillips’ and Patti Phillips’ book, “Real World Training Evaluation.”

The easiest way to perform the various calculations is to use a spreadsheet program. Set up your spreadsheet with the following column headings: participant identification number, percent program applied back on the job, confidence level of estimate, potential error in estimate, potential +/- error range, best-case adjusted training transfer percentage and worst-case adjusted training transfer percentage. Next, let’s look at the associated calculations.
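If you would rather script the setup than build it by hand, this minimal Python sketch writes the seven column headings to a CSV file that any spreadsheet program can open; the file name is an illustrative assumption:

```python
import csv

# The seven column headings described above.
headers = [
    "Participant Identification Number",
    "Percent Program Applied Back on the Job",
    "Confidence Level of Estimate",
    "Potential Error in Estimate",
    "Potential +/- Error Range",
    "Best-Case Adjusted Training Transfer Percentage",
    "Worst-Case Adjusted Training Transfer Percentage",
]

with open("training_transfer.csv", "w", newline="") as f:
    csv.writer(f).writerow(headers)
```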

To begin the training transfer score calculation, enter each participant's data in the first three columns. Note: Responses to question one go in the percent program applied back on the job column, and responses to question two go in the confidence level of estimate column. See Table 1 for a visual depiction with sample scores from four participants.

Table 1.

| Participant Identification Number | Percent Program Applied Back on the Job | Confidence Level of Estimate | Potential Error in Estimate | Potential +/- Error Range | Best-Case Adjusted Training Transfer Percentage | Worst-Case Adjusted Training Transfer Percentage |
| --- | --- | --- | --- | --- | --- | --- |
| 011 | 70 | 60 | 40 | 28 | 98 | 42 |
| 022 | 10 | 90 | 10 | 1 | 11 | 9 |
| 012 | 50 | 35 | 65 | 32.5 | 82.5 | 17.5 |
| 004 | 90 | 100 | 0 | 0 | 90 | 90 |

  • Column Four (Potential error in estimate): Subtract the confidence level of estimate in column three from 100. For example, participant 011 has a confidence level of 60, so the potential error equals 40.
  • Column Five (Potential +/- error range): Multiply the percent program applied back on the job by the potential error in estimate, then divide by 100. For participant 011, 70 multiplied by 40 equals 2,800, which divided by 100 gives an error range of 28.
  • Column Six (Best-case adjusted training transfer percentage): Add the potential +/- error range to the percent program applied back on the job. For participant 011, 70 plus 28 equals 98.
  • Column Seven (Worst-case adjusted training transfer percentage): Subtract the potential +/- error range from the percent program applied back on the job. For participant 011, 70 minus 28 equals 42.
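To make the column calculations concrete, here is a minimal Python sketch that reproduces the Table 1 rows from the question-one and question-two responses; the data structure and output format are illustrative, not part of the method itself:

```python
# Raw responses: (participant ID, % of program applied on the job, confidence level).
responses = [
    ("011", 70, 60),
    ("022", 10, 90),
    ("012", 50, 35),
    ("004", 90, 100),
]

for pid, applied, confidence in responses:
    error = 100 - confidence              # Column four: potential error in estimate.
    error_range = applied * error / 100   # Column five: potential +/- error range.
    best_case = applied + error_range     # Column six: best-case adjusted percentage.
    worst_case = applied - error_range    # Column seven: worst-case adjusted percentage.
    print(pid, applied, confidence, error, error_range, best_case, worst_case)
```

Running this against the four sample participants reproduces the Table 1 values, for example 40, 28, 98 and 42 for participant 011.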


Calculating the Best, Worst and Most Likely Case Training Transfer Percentages

With all the column calculations completed, you are now ready to calculate the best, worst and most likely case training transfer percentages. To compute the best-case training transfer percentage, add the best-case adjusted training transfer percentage scores and divide by the number of participants. Do the same for the worst-case adjusted training transfer percentage scores.

To calculate the most likely case training transfer percentage, add the best-case and worst-case percentages and divide the total by two. The resulting three percentages enable you to credibly report the amount of training transfer associated with the program. The training transfer data is credible because the estimation process accounts for error in each participant’s estimate of the percentage of program material applied back on the job, and because it reports training transfer as a range rather than a single number.
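For example, using the four sample participants in Table 1, the best-case training transfer percentage is (98 + 11 + 82.5 + 90) ÷ 4 ≈ 70.4%, the worst-case percentage is (42 + 9 + 17.5 + 90) ÷ 4 ≈ 39.6%, and the most likely case percentage is (70.4 + 39.6) ÷ 2 = 55%.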


Provide a Clear Direction for Taking Targeted Corrective Actions

The first two data collection questions helped to calculate the amount of training transfer associated with the learning program. Now, let’s focus on the obstacles question: If you are not applying 100% of the program material back on the job, what obstacles have prevented you from using what you learned? The biggest challenge with qualitative data is organizing it so you can make sense of it. A proven way to do this is the following four-step approach:

  1. Analyze the obstacles preventing learners from applying what they learned by looking for themes or patterns in the items.
  2. Consolidate similar obstacles into clusters.
  3. Count the number of obstacles in each cluster.
  4. Rank the clusters from highest to lowest count.

Quantifying the data lets you prioritize which obstacle clusters to address first, taking targeted corrective actions to eliminate or mitigate them and increase training transfer. To maximize training transfer, continue working through the clusters, taking corrective action on each one.
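Steps three and four are easy to script once each reported obstacle has been tagged with its cluster. Here is a minimal Python sketch; the cluster labels are illustrative assumptions, not findings from any actual program:

```python
from collections import Counter

# Each reported obstacle, tagged with the cluster it was consolidated into (step 2).
clustered_obstacles = [
    "no time to practice",
    "lack of manager support",
    "no time to practice",
    "tools unavailable",
    "lack of manager support",
    "no time to practice",
]

# Steps 3 and 4: count each cluster and rank from highest to lowest.
for cluster, count in Counter(clustered_obstacles).most_common():
    print(f"{cluster}: {count}")
```

In this hypothetical output, "no time to practice" tops the list, so it would be the first cluster to target with corrective action.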

This process offers a simple, easy-to-follow, credible and actionable method for conducting Level 3 evaluations. Hopefully, it also encourages you to start performing more Level 3 evaluations with your training programs. In addition, the data you collect will benefit the L&D department and the business executives you support.