Expanding your learning portfolio into virtual reality is like jumping from “Flatland” into the fourth dimension: It’s a whole new world. And as learners move through virtual reality experiences and interact with the environment, they can generate robust data that offers a fuller picture of their performance and skilling needs.
It’s exciting to imagine our learning and data in 3D — but before we break ground on a new virtual world, we need a detailed understanding of our organization’s business and learning landscape. That means starting with a thorough needs analysis, just as we would for learning in 2D.
Lead With the Needs
The outcomes of our needs analysis help us establish some hard points for our virtual reality (VR) learning solution. Here are a few of the items we’ll want to understand:
- Current and future business needs.
- Existing hardware and infrastructure resources.
- Development timeline, budget and human resources.
- The learner performance gap, or what learners are currently doing vs. what they should be doing.
- Learner audience(s) and their preferences, opinions, feelings, goals and interests.
Knowing what’s possible from an IT, hardware and budget perspective helps us identify constraints governing the level of immersion, interactivity and complexity our VR learning experience will have. From a learning and development (L&D) perspective, identifying what’s important for learners to know and do tells us which in-experience data points will be most impactful to measure. Finally, allocating a team dedicated to data analysis helps us leverage these rich sources of learner information to create adaptive learning experiences, improve current curricula and add supplemental learning and reinforcement experiences to our VR learning landscape.
These adaptive experiences can even be delivered live via VR instructor-led training (VR-ILT), in which an instructor leads one or more learners in a live practice — while delivering feedback and modifying the challenge level and pace to respond to learner performance in real time.
In short, leading with the needs sets us up to take full advantage of VR learning’s data analytics superpowers — and tell the story of the impact we’re making for learners and the business.
A Spatial Smörgåsbord: Data Analytics in Native VR Applications
A native VR application is ideal for performance-based tasks that require learners to use their entire bodies to navigate a 360° environment, such as conflict de-escalation, surgery, hazard response or equipment maintenance. Native VR applications can be downloaded and experienced within a headset and offer learners maximum immersion and interactivity — and a safe 3D space where they can practice hands-on, high-stakes skills without harming people, property or relationships.
Here are just a few of the powerful insights L&D can track with the headsets and hand controls learners use to access native VR applications:
- Completion and completion time: These classic metrics can be tracked and recorded within VR learning solutions — and displayed and stored in an existing learning management system (LMS) or learning experience platform (LXP).
- Performance: Assessments for performance-based tasks encompass more than just “the right answer.” They might include gesture, tone of voice and movement. Native VR applications show us exactly what learners are doing right … and where there’s room for improvement.
- Response time: Timing is everything when it comes to embodied, performance-based tasks such as surgery, hazard response and de-escalation. Headsets and hand controls offer insight into what’s giving learners pause.
- Position: It’s impossible to secure a perimeter or repair a machine from far away — or from too close. Position data from headsets and hand controls help learners approach tasks optimally.
- Posture: Posture is a major part of nonverbal communication, and it can mean the world when it comes to displaying empathy or defusing a tense situation. Native VR applications can offer a world of insight into the (often unconscious) messages learners are conveying.
- Gestures: Whether learners are repairing equipment or leading a difficult conversation, their movements speak volumes about their mood, attitude, competence and intentions. Data helps us ensure that they’re making the motions that matter.
- Gaze: In-headset eye tracking helps us measure what (and whom) learners are attending to, while pupillometry (measuring pupil dilation) can even indicate their cognitive load. Because eye movements are unconscious and often involuntary, feedback about gaze provides learners with a wealth of a-ha moments.
- Word choice and sentiment: In-headset voice and tone analytics powered by AI help us analyze whether what learners say is consistent with our message — for example, whether it matches a branded customer service process. And because delivery matters, learners’ tone can also be analyzed to ensure they’re conveying the care that makes the message meaningful. The generated transcripts and sentiment analyses can then be used by L&D teams to update and improve the learning experience.
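Many VR learning platforms report events like these using the Experience API (xAPI), a widely used standard that packages each learner action as an actor–verb–object “statement” an LMS or learning record store can ingest. Here is a minimal sketch in Python; the statement shape follows the xAPI convention, but the activity URL, extension URIs and values are illustrative, not from any particular platform:

```python
import json

def build_xapi_statement(learner_email, verb_id, activity_id, result=None):
    """Package a VR learning event as a minimal xAPI-style statement."""
    statement = {
        "actor": {"mbox": f"mailto:{learner_email}"},
        "verb": {"id": verb_id},
        "object": {"id": activity_id, "objectType": "Activity"},
    }
    if result:
        statement["result"] = result
    return statement

# Hypothetical de-escalation scenario: record success and duration, plus
# illustrative extensions for response time and voice sentiment.
stmt = build_xapi_statement(
    learner_email="pat@example.com",
    verb_id="http://adlnet.gov/expapi/verbs/completed",
    activity_id="https://example.com/vr/de-escalation-scenario-1",
    result={
        "success": True,
        "duration": "PT4M30S",  # ISO 8601 duration: 4 minutes 30 seconds
        "extensions": {
            # Illustrative extension URIs -- not part of the core xAPI spec
            "https://example.com/xapi/mean-response-time-s": 2.4,
            "https://example.com/xapi/voice-sentiment": "calm",
        },
    },
)

print(json.dumps(stmt, indent=2))
```

In a live deployment, the headset application would POST statements like this to the LMS or learning record store as the learner works through the scenario, building the data trail the analytics team later mines.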
No Headset? No Problem! Data Analytics in WebXR Learning Experiences
Rolling out a native VR application can present budget and logistics challenges to organizations with vast, distributed learner audiences.
That’s where WebXR content comes in: As with any website or web application, learners can access these VR experiences either via a headset or via a web browser on their laptop or desktop computers.
Learners who access a WebXR experience with a headset will have a fully immersive, 360-degree experience. Learners who join via web browser won’t be quite as immersed, but they can still enjoy a highly interactive, engaging experience comparable to a first-person game.
WebXR learning experiences are also a rich source of data and can provide insight into many of the same metrics as a native VR application, such as word choice, sentiment and response time.
However, because WebXR content must be designed for the “least common denominator,” or least powerful hardware, it cannot measure full-body data such as:
- Position.
- Posture.
- Gestures.
- Gaze.
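An analytics pipeline therefore needs to know which metrics to expect from each delivery mode before it can compare learner data. A simple sketch of that logic (the metric names and mode labels are illustrative, not from any particular platform):

```python
# Metrics every delivery mode can capture vs. full-body data, which
# requires a headset running a native application. (Names illustrative.)
COMMON_METRICS = {"completion", "completion_time", "response_time",
                  "word_choice", "sentiment"}
FULL_BODY_METRICS = {"position", "posture", "gestures", "gaze"}

def available_metrics(delivery_mode):
    """Return the set of metrics we can expect for a delivery mode."""
    if delivery_mode == "native_vr":  # headset + native application
        return COMMON_METRICS | FULL_BODY_METRICS
    if delivery_mode in ("webxr_headset", "webxr_browser"):
        # WebXR content targets the least powerful hardware, so
        # full-body data is out of scope even for headset users.
        return set(COMMON_METRICS)
    raise ValueError(f"Unknown delivery mode: {delivery_mode}")

print(sorted(available_metrics("webxr_browser")))
```

Gating the expected metrics this way keeps dashboards honest: a browser-based learner isn’t flagged as “missing” gaze data they were never able to generate.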
Thanks to these learner experience (LX) and data analytics capabilities, WebXR is becoming an increasingly popular, scalable way for L&D teams to meet immersive learning needs that don’t require full-body presence.
VR Data Analytics: Living and Thriving in the LMS or LXP
Despite the benefits of launching a VR learning strategy, many L&D teams have been hampered by organizational constraints requiring all content to live in an LMS or LXP. Whether for reasons of storage, privacy, data collection or all of the above, these constraints have prevented these teams from adding VR content to their learning portfolios.
Today, the LMS Integration Tool (LIT) enables both native VR applications and WebXR content to live in, and launch from, an organization’s existing LMS or LXP.
With LIT, learners launch a VR experience directly from an existing eLearning course. (For WebXR experiences, learners continue by choosing the device they’ll be using: headset or computer.) LIT then enables learner data sharing between the LMS and the VR content, creating a seamless experience on the front and back ends: The VR experience “recognizes” each learner and sends their performance data back to the LMS for analysis and storage.
Of course, some VR performance data lies outside the bounds of what our LMSs and LXPs can currently compute. These platforms simply may not have the means to process such 3D learner data as gesture or pupillometry.
This is where the LIT Analytics Dashboard comes in. This customizable feature captures and displays complex performance data that the LMS or LXP can’t process. Depending on storage capacity and infrastructure, these data can either live in the LMS or be hosted by the VR vendor partner.
There’s no question that VR can help us level up our learning design and spark an entirely new level of learner engagement. What’s even more exciting, however, is the opportunity it presents to focus more on evaluation and measurement, not only of learners’ performance but also of the effectiveness of our learning experiences.
As L&D leaders and professionals, we’re often called upon to produce — quickly, and within ever-shrinking budgets. These pressures lead to what I call “maker’s bias,” or the tendency to value “building a thing” over evaluation and analysis. We need a balance between the lively carpe diem spirit that drives us to build and the reflective spirit that guides us to inventory our needs and analyze the effectiveness of our existing programs. Planning a VR learning strategy offers L&D teams fresh opportunities to strike this balance over the long term — and to cement our reputations as valuable consultative partners to both our internal and external clients.