What is arguably the most prevalent issue facing learning professionals today? The answer is scrap learning: the gap between the training that is delivered and the training that is actually applied on the job. It is the opposite of training transfer, and it is a critical issue for both learning and development (L&D) and the organizations L&D supports because it wastes money and time.

Two studies, one by Rob Brinkerhoff and Timothy Mooney in 2008 and one by KnowledgeAdvisors in 2014, found scrap learning to be 85% and 45%, respectively, in the average organization. I’ve conducted three scrap learning studies over the past several years with three different organizations, each using a different training program, and found scrap learning rates of 64%, 48% and 54%. Averaging the percentages from all five studies yields roughly 60%.

To further highlight the magnitude of the problem, consider the effect scrap learning has on time and money. According to the 2018 ATD State of the Industry research report, the average training expenditure per employee in 2018 was $1,299, and the average number of training hours consumed per employee was 34. Table 1 shows how much scrap learning costs the average organization.

Table 1. Cost of Scrap Learning in Wasted Dollars and Time

Average per-employee training expenditure: $1,299 × 60% scrap learning = $779 wasted
Average per-employee training hours consumed: 34 × 60% scrap learning = 20 hours wasted
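
For readers who want to scale these figures to their own organization, the Table 1 arithmetic is sketched below in Python. The constants are the averages cited above; the 500-person headcount is an assumption for illustration.

    # Table 1 arithmetic, scaled to an organization's headcount.
    AVG_SPEND_PER_EMPLOYEE = 1299  # USD, 2018 ATD State of the Industry
    AVG_HOURS_PER_EMPLOYEE = 34    # training hours consumed per employee
    SCRAP_RATE = 0.60              # five-study average cited above

    headcount = 500  # illustrative organization size

    wasted_dollars = AVG_SPEND_PER_EMPLOYEE * SCRAP_RATE * headcount
    wasted_hours = AVG_HOURS_PER_EMPLOYEE * SCRAP_RATE * headcount

    print(f"Wasted spend: ${wasted_dollars:,.0f}")  # $389,700 for 500 employees
    print(f"Wasted hours: {wasted_hours:,.0f}")     # 10,200 hours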

How to Combat Scrap Learning

A possible new solution to combat scrap learning is predictive learning analytics™ (PLA). PLA gives L&D professionals a systematic, credible process for optimizing the value of corporate L&D investments by measuring and monitoring the scrap learning associated with each program. Unlike other training transfer solutions, which focus almost exclusively on training design and delivery, PLA takes a holistic approach to increasing training transfer. The methodology is founded on three research-based training transfer components and 12 research-based training transfer factors, listed below; a sketch of how they might be captured as survey items follows the lists.

Learning Program Design

Learners:

  1. Acquire new information.
  2. See the program as relevant to their job.
  3. View the program as something that will enhance their career.
  4. See improvement in a critical department business metric if new information is applied. 

Learner Attributes

Learners:

  1. Are personally motivated to use the new information.
  2. Are confident in their ability to apply the new knowledge learned.
  3. Reflect on lessons learned and how they can improve their performance.
  4. View the program as an opportunity to learn new things.

Learner Work Environment

Learners:

  1. Are actively engaged by their manager before attending the training to discuss how the program will improve their performance.
  2. Are actively engaged by their manager post-program regarding how learning will be applied.
  3. Are supported by colleagues.
  4. Have an immediate opportunity to use the new information learned.
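
Taken together, the three components and 12 factors form the backbone of the PLA survey. One way to capture them in code, purely as an illustrative sketch (the key names are assumptions, not part of the PLA specification), follows:

    # Component -> factor mapping for the 12-item transfer survey.
    # Each factor corresponds to one survey question, rated on the
    # seven-point scale used in the examples later in the article.
    TRANSFER_FACTORS = {
        "learning_program_design": [
            "acquired_new_information",
            "program_relevant_to_job",
            "program_enhances_career",
            "improves_department_metric",
        ],
        "learner_attributes": [
            "motivated_to_use",
            "confident_in_ability",
            "reflects_on_lessons",
            "open_to_learning",
        ],
        "learner_work_environment": [
            "manager_pre_training_support",
            "manager_post_training_support",
            "colleague_support",
            "immediate_opportunity_to_apply",
        ],
    }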

Predictive Learning Analytics Methodology

The PLA methodology consists of three phases and nine steps, and it gives L&D professionals insight into the actions required to maximize training transfer.

Phase 1: Data Collection and Analysis

The objective of Phase 1 is to identify the underlying causes of scrap learning associated with a training program. During this phase, five specific data sets are identified: two are predictive, and three are data driven. The two predictive data sets pinpoint:

  • Which participants are most and least likely to apply what they learned back on the job.
  • Which managers of the participants are inclined to provide support for the training.

The three data-driven data sets pinpoint:

  • Which research-based training transfer components and research-based training transfer factors contribute to training transfer.
  • Obstacles participants encountered post-training that prevented them from applying what they learned on the job.
  • A just-in-time measure of scrap learning.

Data for calculating the two predictive data sets are collected via a survey administered to participants immediately after the learning program. The survey consists of 12 questions, one for each of the 12 training transfer factors described earlier.

Data Set 1

To predict which learners are most likely to apply what they learned in a training program, each participant’s survey responses are summarized into an average score. The average scores are then sorted from highest to lowest and split into the top 15%, middle 65% and bottom 20% (a sketch of this segmentation follows the list below).

These percentages align with the results Brinkerhoff and Mooney found in their 2008 training transfer research:

  • 15% of participants applied what they learned in training back on the job.
  • 65% of participants tried to apply what they learned but reverted to their old ways within 30 days.
  • 20% of participants made no effort to apply what they learned back on the job.
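
Here is a minimal sketch of that segmentation, assuming each participant’s 12 survey responses have already been averaged into a single score. The function and participant names are illustrative, and with small groups the bands can only approximate the 15/65/20 split.

    def segment_learners(avg_scores):
        """Rank participants by average survey score and split them into
        the 15% / 65% / 20% bands described above. Returns the
        (top, middle, bottom) segments as lists of (name, score) pairs."""
        ranked = sorted(avg_scores.items(), key=lambda kv: kv[1], reverse=True)
        n = len(ranked)
        n_top = round(n * 0.15)
        n_bottom = round(n * 0.20)
        return ranked[:n_top], ranked[n_top:n - n_bottom], ranked[n - n_bottom:]

    # Illustrative data: participant -> mean of the 12 items (7-point scale)
    scores = {"P01": 6.4, "P02": 5.9, "P03": 5.7, "P04": 5.2, "P05": 4.8,
              "P06": 4.5, "P07": 4.1, "P08": 3.6, "P09": 3.2, "P10": 2.7}
    top, middle, bottom = segment_learners(scores)
    print("Most likely to apply:", [name for name, _ in top])
    print("Least likely to apply:", [name for name, _ in bottom])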

Data Set 2

To predict which managers of the learners are inclined to provide support for the training requires two scores per manager: a composite score based on the 12 survey items described earlier, and an average manager training support score. A manager’s composite score is the average of the 12-item survey scores of the employees that manager sends to training. For example, if a manager sends three employees to training and their average scores on the 12 questions are 5.92, 5.43 and 5.69, the composite score would be 5.68 (on a seven-point scale).

Manager training support scores are likewise calculated across the employees a manager sends to training. The score is the average of employee responses to the two survey items measuring the support the manager provided before and after the training. For example, suppose a manager sends three employees to training and each scores the pre-training and post-training manager support items as follows:

  • Employee 1 pre-training support 6, post-training support 7
  • Employee 2 pre-training support 3, post-training support 2
  • Employee 3 pre-training support 4, post-training support 4

The training support score for the manager would be 4.33 ((6 + 7 + 3 + 2 + 4 + 4) ÷ 6 ratings = 26 ÷ 6 ≈ 4.33).

Predictions regarding which managers are inclined to provide support for the training are calculated by subtracting the composite score from the average manager training support score. Positive difference scores indicate that a manager is inclined to provide active support for the training; low and negative scores suggest weak support. In the example above, the manager would have a difference score of −1.35 (4.33 − 5.68), indicating he or she is inclined to provide weak support. Only managers with three or more employees attending the training are included in the predictions to ensure valid results.
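
The worked example above translates directly into code. In this sketch, each employee is a tuple of (average of the 12 items, pre-training support rating, post-training support rating); the function name and data layout are assumptions for illustration.

    def manager_difference_score(employee_scores):
        """Return (composite, support, difference) for one manager.
        Requires three or more trained employees, per the validity rule."""
        if len(employee_scores) < 3:
            raise ValueError("need at least three trained employees")
        composite = sum(avg for avg, _, _ in employee_scores) / len(employee_scores)
        ratings = [r for _, pre, post in employee_scores for r in (pre, post)]
        support = sum(ratings) / len(ratings)
        return composite, support, support - composite

    composite, support, diff = manager_difference_score(
        [(5.92, 6, 7), (5.43, 3, 2), (5.69, 4, 4)]
    )
    print(f"composite={composite:.2f} support={support:.2f} diff={diff:.2f}")
    # composite=5.68 support=4.33 diff=-1.35 -> weak support expected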

Data Set 3

Data Set 3 identifies which of the training transfer components and factors described earlier are contributing to training transfer. Scores are calculated by computing an average for each of the 12 training transfer factors, grouping the factors under the component with which they align and then averaging each group into a component score. A statistical test determines whether any differences between component scores are significant. Components identified as contributing the least to training transfer are candidates for corrective action.
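
Below is a sketch of that roll-up, reusing the component-to-factor mapping sketched earlier. The article does not name the statistical test, so the one-way ANOVA shown here (scipy.stats.f_oneway) is an illustrative choice rather than the PLA specification.

    from scipy.stats import f_oneway

    def component_scores(responses, factor_map):
        """responses[factor] holds every participant rating for that factor;
        factor_map groups the 12 factors under their three components."""
        scores = {}
        for component, factors in factor_map.items():
            factor_means = [sum(responses[f]) / len(responses[f]) for f in factors]
            scores[component] = sum(factor_means) / len(factor_means)
        return scores

    def components_differ(responses, factor_map):
        """One-way ANOVA across the pooled ratings of each component's factors.
        A small p-value suggests the component differences are significant."""
        groups = [[r for f in factors for r in responses[f]]
                  for factors in factor_map.values()]
        return f_oneway(*groups)  # returns the F statistic and p-value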

Data Sets 4 and 5

Data for calculating the final two measures are collected from participants 30 days post-program, using a survey or focus groups built around three questions:

  1. What percent of the program material are you applying back on the job?
  2. How confident are you that your estimate is accurate?
  3. What obstacles prevented you from utilizing all that you learned?

Waiting 30 days post-program is critical because it allows the “forgetting curve” effect to take place, so participants’ estimates reflect sustained application rather than immediate post-class enthusiasm.

The scrap learning percentage score provides a baseline against which follow-up scrap learning scores can be compared. These comparisons make it possible to monitor the effect of targeted corrective actions on training transfer.
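
The article does not spell out how the scrap learning percentage is computed from the survey; a common convention, assumed in the sketch below, is 100% minus the mean self-reported application percentage from question 1.

    def scrap_learning_pct(application_pcts):
        """Scrap learning as 100% minus the mean reported application
        percentage (an assumed convention, not the PLA specification)."""
        return 100.0 - sum(application_pcts) / len(application_pcts)

    baseline = scrap_learning_pct([40, 55, 30, 45, 50])   # illustrative responses
    follow_up = scrap_learning_pct([55, 70, 50, 60, 65])  # after corrective actions
    print(f"baseline scrap: {baseline:.0f}%, follow-up scrap: {follow_up:.0f}%")
    # baseline scrap: 56%, follow-up scrap: 40%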

The obstacles data identifies barriers participants encountered that prevented them from applying what they learned. Waiting 30 days to collect the data allows for the full range of training transfer obstacles to emerge since some are likely to happen almost immediately while others will occur later. Frequently mentioned obstacles are candidates for targeted corrective actions to increase training transfer.

Phase 2: Solution Implementation

While pinpointing the underlying causes of scrap learning is valuable, monitoring and managing the targeted corrective actions taken to address them is of even greater significance. That is the focus of Phase 2, and it is where the “rubber meets the road”: where you can be strategic and use data to connect a training program with on-the-job application. It is also an opportunity to demonstrate creative problem-solving and the ability to manage a critical business issue to a successful conclusion.

Phase 3: Report Your Results

The objective of the third phase is to share your results with senior executives. Deliver the data as a story and take the executives on a journey of discovery. Start with a hook, tell the truth without bias and provide context.

In summary, scrap learning has been around forever. However, there is now a way to measure, monitor and manage it: predictive learning analytics™.