Published in Summer 2022
The world loves data. That includes those of us in learning and development (L&D):
- Stakeholders want evidence their training investments are being handled wisely and learners’ time is being used productively.
- L&D leaders want to connect learning experiences to performance and ensure they deliver the right training to the right people efficiently, at scale.
- Learning designers want to understand the impact of their user experience (UX) design choices, so each new project isn’t yet another first-best-guess at what to teach and how.
The Kirkpatrick Model provides a useful framework for learning analytics. It posits four levels of measurement: reaction, learning, behavior and results.
Let’s focus on Kirkpatrick’s Level 1, reaction: the degree to which participants find the training favorable, engaging and relevant to their jobs. How do you obtain Level 1 data? Typically, it’s collected via a survey of participants after the training is complete (often called a “smile sheet”).
Smile Sheets Can Be Useful
A smile sheet is a survey provided to learners following a learning experience, typically containing questions like:
- “Was the training engaging?”
- “Can you apply what you learned to your job?”
- “Did you like the style or method of training?”
- “Would you recommend the training to a colleague/friend?” (Net Promoter Score)
If learners actually fill out smile sheets (typically only a small fraction do), the responses can be useful. What learners think about the training is an important data point for stakeholders, leaders and designers alike.
Smile Sheet Data Is Limited
Are the people who complete the survey disproportionately haters or fanboys? Do their responses represent the many others who took the training but stayed silent? Unless you required a response from everyone, there’s no way to tell: it’s not a random sample.
Finding out that learners loved or hated the training, overall, tells us very little about what specific improvements are warranted, and improvement is inevitably warranted. How did they feel about the video setup? How about the general approach? Where was coaching useful, and where not? If they had trouble completing the training, where did our learner experience (LX) structure go wrong?
Smile sheets alone don’t provide sufficient visibility into learners’ actual engagement. Learners may give a high score to a microlearning module only because it was short, or to an animation only because it was cool, even if neither led to any change of behavior. Or they may give a low score to something that was difficult or uncomfortable (breakout group, stretch assignment, team project), which in hindsight became a pivotal learning moment in their development.
Our Timing Is Off
Smile sheet data tells us little about how to improve a learning experience at the time we’re designing it, when that feedback would do the most good. Even if we have the luxury of doing a pilot and collecting survey data, the limited information we gain doesn’t provide much insight into adjustments we might consider.
And that’s if there’s time to adjust. The more common scenario is that once training is rolled out, L&D needs to get on to the next project in the pipeline. When the program is redesigned, two to five years down the road, the data is even less useful.
Enter xAPI
The L&D ecosystem offers a way to complement smile sheet surveys: send usage data to a learning record store (LRS) to measure Kirkpatrick’s Level 1, painting a richer picture of how people actually interacted with the training.
With xAPI (the Experience API), we don’t have to rely on the learner’s impression of the training experience, as smile sheets do. We can track what learners actually did: a dashboard camera for gaining learning insight.
This approach is technology agnostic. Any conformant LRS (stand-alone or integrated into a suite or platform) should be able to accept xAPI statements, and free options are available to work with.
With xAPI and an LRS, the things learners do within a digital learning experience can be dutifully recorded: when each learner enters, exits, clicks, scrolls, answers a question, restarts, explores a hot graphic, navigates, asks for help, moves on, plays or skips a video, you name it. Each event is transcribed as a statement in the LRS, and we can then analyze those statements to distill interesting patterns.
xAPI Answers Some Interesting Questions
By aggregating this stream of xAPI data at a macro level (the overall experience of an audience) and a micro level (the experience of an individual or group), L&D can answer important questions.
On the macro level:
- How did an audience engage with the training overall?
- How many days or weeks did it take learners to complete training?
- How many times did they visit, and how long did they spend per session?
- What were their most common mistakes? What did most get right?
- When xAPI data is joined to HR data, were there aggregate differences in these values by role, region, seniority or age?
And on the micro level:
- Is there insight into what learners chose to do, component by component:
  - Flip cards they hit or skipped.
  - Video they watched or paused (or rewatched).
  - Animation they watched or slid past.
  - Where they dwelled, where they raced through.
- Are there interesting patterns of behaviors? For example, after failure on a knowledge check, how many went back and studied, how many immediately re-tried, how many moved on and how many quit the module?
- Are there correlations between the above and behavior in other training experiences?
- What sequence of decisions did they make in a sim or game with many degrees of freedom and many possible outcomes?
- What additional resources and support did they seek to transfer the training to their job? What did they skip?
- Do attitudes toward the subject matter shift as learners are trained, as measured by a survey or like buttons embedded in the training?
And much more.
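
To make one of these questions concrete, here is a minimal sketch, in JavaScript (the same language typically used to send the statements), of how the "most common mistakes" question might be tallied from statements already pulled out of an LRS. The verb identifier is a standard one published by ADL; the field names follow the xAPI statement format described in the next section, and everything else is illustrative.

```javascript
// Minimal sketch: tally the most common mistakes across an audience.
// Assumes `statements` is an array of xAPI statements already retrieved from the LRS.

const ANSWERED = "http://adlnet.gov/expapi/verbs/answered"; // standard ADL verb IRI

function mostCommonMistakes(statements) {
  const missCounts = {}; // activity id -> number of incorrect answers

  for (const stmt of statements) {
    const isAnswer = stmt.verb && stmt.verb.id === ANSWERED;
    const isWrong = stmt.result && stmt.result.success === false;
    if (isAnswer && isWrong) {
      const activity = stmt.object.id; // e.g. a question's identifier
      missCounts[activity] = (missCounts[activity] || 0) + 1;
    }
  }

  // Sort questions by how often they were missed, most-missed first.
  return Object.entries(missCounts).sort((a, b) => b[1] - a[1]);
}

// Example: the five most-missed questions across everyone who took the training.
// console.log(mostCommonMistakes(statements).slice(0, 5));
```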
Love the Questions, How Might One Get the Answers?
Getting answers to these questions is relatively straightforward: embed JavaScript functions in an eLearning module that recognize events such as clicks, views and scrolls, and dutifully send an xAPI statement to an LRS to record each one.
An xAPI statement consists of an ACTOR, a VERB and an OBJECT. For example:
person@whatever.com ANSWERED ChoiceC in Question1 of Module1
That is, a learner clicked the third choice in the first multiple choice question (MCQ) in the first module. Each statement is timestamped so we can assemble sequences of behaviors for each learner, or aggregate results.
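
Under the hood, that plain-English statement travels as JSON. Here is one hedged sketch of how the example above might be rendered; the verb identifier is a standard ADL verb, while the activity identifier, name and result values are purely illustrative.

```javascript
// One possible JSON rendering of the statement above (identifiers are illustrative).
const statement = {
  actor: {
    mbox: "mailto:person@whatever.com"              // who did it
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/answered",  // standard ADL verb
    display: { "en-US": "answered" }
  },
  object: {
    id: "https://example.com/course/module1/question1",        // made-up activity id
    definition: { name: { "en-US": "Module 1, Question 1" } }
  },
  result: {
    response: "ChoiceC",   // which option the learner selected
    success: false         // whether that option was correct
  },
  timestamp: "2022-06-01T14:32:07Z"
};
```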
Here is a sequence of statements that may offer interesting insight:
- person@whatever.com CLICKED Tab2 in Component3-10-2 in Module8
- person@whatever.com PLAYED Video4 in Component3-11-1 in Module8
- person@whatever.com PAUSED Video4 in Component3-11-1 in Module8
- person@whatever.com ANSWERED Choice2 in Component3-12-1 in Module8
- person@whatever.com CLICKED Resourcelink3 in Component3-13-2 in Module8
Dozens or hundreds of such statements might be sent for each individual, depending on what we want to track. Tracking can be as simple as noting when they arrived and left and whether they finished, or as detailed as chronicling each movement and choice.
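
As a rough sketch of the embedding described above, the snippet below builds a statement and posts it to the LRS's standard statements resource whenever a tracked element is clicked. The endpoint, credentials, element id and activity identifiers are placeholders, and in practice an authoring tool or wrapper library often handles this plumbing for you.

```javascript
// Rough sketch of an embedded tracking helper (endpoint, credentials and ids are placeholders).
const LRS_ENDPOINT = "https://lrs.example.com/xapi";   // assumption: your LRS base URL
const LRS_AUTH = "Basic " + btoa("key:secret");        // assumption: basic-auth credentials

function sendStatement(verbId, verbName, activityId) {
  const statement = {
    actor: { mbox: "mailto:person@whatever.com" },     // in practice, the current learner
    verb: { id: verbId, display: { "en-US": verbName } },
    object: { id: activityId },
    timestamp: new Date().toISOString()
  };

  // Every conformant LRS exposes a statements resource and requires the version header.
  return fetch(LRS_ENDPOINT + "/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": LRS_AUTH,
      "X-Experience-API-Version": "1.0.3"
    },
    body: JSON.stringify(statement)
  });
}

// Example: record it when the learner clicks a resource link.
document.getElementById("resource-link-3")?.addEventListener("click", () => {
  sendStatement(
    "http://adlnet.gov/expapi/verbs/interacted",       // standard ADL verb
    "interacted",
    "https://example.com/course/module8/resource-link-3"
  );
});
```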
Agency Is Critical
If we provide learners with genuine choices in a learning experience, the data becomes richer.
Suppose we’ve locked all content, so learners must click every flip card and hotspot, watch every video, open every doc and so on to complete it. In this case, xAPI data offers little insight into what they are thinking, and hence into how to make the training better.
Instead, suppose we present a short piece on a topic, and allow learners free choice of what to study next and how. Each choice then becomes interesting. Which resources did they consult, when? Which ones did they skip? Which flip card, hotspot, or other item did they click? Did they slide past an animation or video, or watch the whole thing? This data gives us more insight into learner choice and attention, and hence engagement.
Get the Full Picture
With xAPI and learning design built to deliver it, L&D can gain rich insight into what’s working and what needs to be improved, rather than rely on post-training feedback alone. Imagine having access to this data after user testing or a pilot. It would illuminate specific tweaks that could be made to improve the training before you roll it out to a larger population.
xAPI data also complements smile sheet data. Let’s say learners’ reaction survey scores were high, but their engagement (xAPI data that shows how long they lingered, what they clicked, number of times they returned, etc.) was low. What does that tell us about the training — perhaps people liked it because it didn’t ask much of them? Or smile sheet scores were low, but engagement data high — perhaps it was challenging or uncomfortable? A whole lot to decipher, debate and decide on.
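
As one hedged sketch of putting the two side by side: derive a couple of simple engagement figures per learner from their statements (how many interactions they logged, how many distinct days they returned) and place them next to that learner's smile sheet score. The surveyScores object is assumed to come from wherever your reaction data lives.

```javascript
// Sketch: pair simple engagement figures from xAPI with smile sheet scores.
// `statements` is an array of xAPI statements; `surveyScores` maps a learner's
// mbox to their reaction-survey score (both assumed already loaded).

function engagementVsReaction(statements, surveyScores) {
  const byLearner = {};

  for (const stmt of statements) {
    const learner = stmt.actor.mbox;
    const day = stmt.timestamp.slice(0, 10); // YYYY-MM-DD, a crude "visit" marker
    byLearner[learner] = byLearner[learner] || { interactions: 0, days: new Set() };
    byLearner[learner].interactions += 1;
    byLearner[learner].days.add(day);
  }

  return Object.entries(byLearner).map(([learner, e]) => ({
    learner,
    interactions: e.interactions,        // how much they did
    daysVisited: e.days.size,            // how often they came back
    surveyScore: surveyScores[learner]   // what they said they thought of it
  }));
}
```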
Furthermore, comparing data from different learning programs can yield discoveries that improve learning design across the board, making your whole team better at creating effective learning.
So continue to collect smile sheet data, but don’t stop there. Use xAPI.