TAP3D Platform

Integrating AI to Reduce Training Drop-off

A data-driven approach to solving silent churn in B2B VR training

Checklist MVP UI · AI Chat Agent · TAP3D Dashboard
Impact
+38pp
Issue resolution rate
+25pp
Task start rate
+33pp
User engagement
Team
1 Product Designer (me)
Web Feature Lead
6 Cross-functional partners
PM · FE/BE Engineers · QA
Timeline
6 Months
VR Training Platform · B2B
Domain
EdTech · Advanced Manufacturing
Context / Problem

Users Were Dropping Off — But Why?

TAP3D is a B2B VR training platform preparing workers for advanced manufacturing roles, offering a seamless experience between VR and Web.

Despite this seamless experience, internal data revealed a significant lag in the training completion cycle: users were struggling to finish training and reach the review stage, which directly hurt customer retention metrics.

Expected: complete training → higher retention. Reality: early drop-off.
Training Completion Rate (monthly cohort analysis · illustrative): Module 1 — 92% · Module 2 — 74% · Module 3 — 41% ⚠ · Module 4 — 28% · Module 5 — 19% · Review — 12%
Sharp drop-off at Module 3 — data showed where, but not why
* Values are illustrative. Actual data is confidential.

Usage logs showed a clear pattern — learners were dropping off early, unable to keep pace with initial modules. The quantitative data showed where they dropped off, but not why.

Onboarding (100%) → Initial Modules (struggling) → Drop-off (churn risk) → Review Stage (unreached)
Research

9 Interviews. 3 Barriers.

To uncover the root cause, I conducted 9 user interviews and performed affinity mapping on 200+ coded quotes to identify recurring themes.

My research centered on two key questions: What triggers a user to start right now? and What is the minimum information needed to commit?

User Interviews — 9 in-depth interviews with learners
Affinity Mapping — 200+ coded quotes mapped for recurring themes
Affinity Map — Research Synthesis
9 user interviews conducted · 200+ coded quotes analyzed via affinity mapping · 3 critical cognitive barriers confirmed

Research confirmed three critical barriers that explained the drop-off:

Barrier 1 — Commitment Debt: "Feels too long." Learners deferred because a "full lecture" felt like a heavy burden.
Solution — Bite-sized Checklist: AI recommends 5–10 min actions instead of full lectures, making starting feel effortless.

Barrier 2 — Decision Paralysis: "I have so much to do, but I don't know what to do first."
Solution — Personalized Suggestion: AI picks the best next step with a one-line rationale — "Here's a 5-min safety check. Do this now."

Barrier 3 — Support Gap: "No quick help." Without an immediate way to get answers, learners would defer tasks to "tomorrow."
Solution — Context-Aware Handoff: Users upload a recording → AI auto-drafts a message with context for the instructor. A "rage quit" becomes a seamless escalation.
Core Goals
Make starting feel bite-sized. Make "why today" obvious. Provide a quick help path.
Strategy

Ship Fast, Prove More Later

I evaluated three options to address these goals, considering development cost and user value.

Option 1 — Human Mentor: high trust, handles complex issues. Low availability, high cost, slow.
Option 2 — Checklist (MVP ✓): low dev cost, leverages existing LMS metadata. Quick delivery, immediate value.
Option 3 — AI Agent: personalized, 24/7, context-aware. High risk, needs validation first.
Impact vs. Feasibility matrix — Human Mentor, Checklist, and AI Agent plotted by impact and feasibility.
The Phased Approach
1. Phase 1 — Now: Checklist MVP (low cost, quick win)
2. Validate: WoZ Simulation (prove ROI, n = 24)
3. Phase 2 — Ship: AI Agent (full rollout)
Trade-off

The AI Agent clearly won on impact, but engineering capacity was constrained — the team was still finalizing the AI integration inside VR sessions, and latency and token costs remained open concerns.

Decision

We chose the Checklist as the MVP to deliver immediate value. I designed a "Checklist Card" pinned to the top of the training page to clarify tasks and reduce cognitive load.

MVP Checklist Design
Validation

Simulating AI to Prove ROI

What the MVP couldn't solve
MVP — Checklist:
· See task list — ✓ solved
· "What should I do first?" — unsolved
· "I'm stuck, need help now" — unsolved
· "What did I do last week?" — unsolved

Proposed — AI Layer:
· See task list
· Personalized "do this now"
· Context-aware issue handoff
· Weekly progress summary

To justify the investment and convince skeptical stakeholders, I led a Wizard of Oz simulation with 24 users — I acted as the AI backend in real time.

Wizard of Oz Simulation
n=24 users · I acted as the AI backend · Real behavior data
01
Issue Handoff — Reducing Friction

Learners upload a short web/VR recording; the system auto-drafts a message with context for the instructor.

Demo — a learner stuck on Module 3 uploads a 0:42 screen recording (screen-recording-module3.webm).
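
As a rough sketch of the flow, the auto-draft step boils down to assembling the learner's context into a message the instructor can act on. All names and fields here (HandoffContext, draftHandoffMessage) are illustrative assumptions, not TAP3D's actual API:

```typescript
// Hedged sketch of the context-aware handoff draft.
// Field names are illustrative, not the real system's schema.

interface HandoffContext {
  learner: string;
  module: string;       // e.g. "Module 3"
  lastAction: string;   // last logged step before the learner got stuck
  recordingUrl: string; // uploaded web/VR screen recording
}

function draftHandoffMessage(ctx: HandoffContext): string {
  return [
    `${ctx.learner} is stuck on ${ctx.module}.`,
    `Last action: ${ctx.lastAction}.`,
    `Recording: ${ctx.recordingUrl}`,
  ].join("\n");
}

// Example draft an instructor might receive:
console.log(
  draftHandoffMessage({
    learner: "Learner A",
    module: "Module 3",
    lastAction: "Retried the lockout step twice",
    recordingUrl: "screen-recording-module3.webm",
  })
);
```
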
Issue resolution rate: 33% before → 71% after (+38pp)
02
Personalized Suggestion — Iterative Design

AI recommends a 5–10 minute action with a one-line rationale.
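
To make the rule concrete, here is a minimal TypeScript sketch — the Task shape and the selection logic are assumptions for illustration, not the production recommender:

```typescript
// Hedged sketch of the suggestion rule: surface one 5–10 minute
// action with a one-line rationale.

interface Task {
  title: string;
  estMinutes: number;
  done: boolean;
}

function suggestNext(tasks: Task[]): { task: Task; rationale: string } | null {
  // Only short, unfinished actions qualify — starting must feel effortless.
  const candidates = tasks.filter(
    (t) => !t.done && t.estMinutes >= 5 && t.estMinutes <= 10
  );
  if (candidates.length === 0) return null;

  // Pick the shortest qualifying action.
  const task = candidates.reduce((a, b) => (a.estMinutes <= b.estMinutes ? a : b));
  return { task, rationale: `Here's a ${task.estMinutes}-min ${task.title}. Do this now.` };
}

// Example: suggestNext([{ title: "safety check", estMinutes: 5, done: false }])
// → "Here's a 5-min safety check. Do this now."
```
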

Real-time Iteration
Users added items but didn't start — "Oldest-First" hid new tasks. I iterated to "Newest-First" with a "New" indicator (sketched below).
Before — Oldest First: Jan 3 · Jan 5 · Jan 8 (new item buried at the bottom)
After — Newest First: Jan 8 ✨ New · Jan 5 · Jan 3
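
A minimal sketch of that fix, assuming each checklist item carries an addedAt timestamp and we know the learner's last visit (both assumptions for illustration):

```typescript
// Hedged sketch of the "Newest-First + New indicator" fix.

interface ChecklistItem {
  title: string;
  addedAt: Date;
}

function orderForDisplay(items: ChecklistItem[], lastSeen: Date) {
  return [...items]
    .sort((a, b) => b.addedAt.getTime() - a.addedAt.getTime()) // newest first
    .map((item) => ({
      ...item,
      isNew: item.addedAt > lastSeen, // drives the "New" badge
    }));
}
```
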
Task start rate: 38% before → 63% after (+25pp)
03
Report Summary — Context Retention

AI summarizes weekly/monthly progress and proposes the immediate next step to sustain motivation.

Your Weekly Summary
Modules completed 3 / 5
Time invested 2h 15m
Safety score ↑ 72% → 88%
Suggested Next Step
Complete "Equipment Lockout" quiz (8 min) to finish your safety certification this week.
User engagement: 25% before → 58% after (+33pp)
Outcome

The CFO Said Yes

I presented a side-by-side comparison (AI Agent screens vs. MVP screens) to Engineering, the PM, and the CFO.

The CFO was initially worried about ROI and development costs, but the conversion metrics from the simulation provided concrete evidence. We decided to roll out the AI Agent immediately after the VR AI integration wrapped.

Side-by-side comparison — MVP Checklist vs. AI Agent (final product).

The AI Agent is now live across the training flow. I designed the AI widget as a modular component to ensure consistency across the design system.
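
One way to enforce that consistency is a single typed contract the widget exposes wherever it mounts — a hedged sketch, since the real design-system API may differ:

```typescript
// Hedged sketch of the modular AI widget's contract: one prop set,
// identical on every surface, so the design system guarantees a
// consistent experience. Names are illustrative assumptions.

type Surface = "dashboard" | "training-page" | "review";

interface AIWidgetProps {
  surface: Surface;
  locale: "en" | "zh"; // "zh" anticipates the Chinese localization
  onStartTask: (taskId: string) => void;
  onHandoff: (recordingUrl: string) => void;
}

// Example mount configuration:
const dashboardMount: AIWidgetProps = {
  surface: "dashboard",
  locale: "en",
  onStartTask: (id) => console.log("start task", id),
  onHandoff: (url) => console.log("handoff", url),
};
```
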

I pitched that a consistent AI experience produces job-ready talent more efficiently, and that localizing for markets with high workforce demand is key to scaling. The CFO agreed, and Chinese localization is now underway.

AI Agent Live — across the training flow
Modular Component — part of the design system
Global Expansion — Chinese localization in progress
Collaboration

One Designer, Six Partners

As the sole designer working alongside six cross-functional partners, I had to balance speed with alignment. Collaboration wasn't a phase — it was embedded in every step of the process.

Weeks 1–3
User Research
Conducted 9 in-depth interviews, coded 200+ quotes, and built an affinity map to surface recurring patterns.
Collaboration
PM
Co-planned interview scripts and scheduled participant access
ENG
Weekly sync to present affinity map findings — aligned on problem framing before solutions
Week 4
Problem Framing & Strategy
Synthesized three cognitive barriers and evaluated three solution options on an impact-vs-feasibility matrix. Proposed a phased approach: Checklist MVP first, AI Agent after validation.
Collaboration
PM
Jointly prioritized barriers by severity and frequency to focus the MVP scope
ENG
Flagged AI latency and token cost constraints early — shaped the phased rollout decision
Weeks 5–8
Design & Prototype
Designed the Checklist Card MVP and AI Agent prototype in Figma. Iterated through multiple rounds — from wireframes to high-fidelity interaction specs.
Collaboration
ENG
Bi-weekly Figma critiques — iterated on feasibility constraints like API response time and component reuse
QA
Reviewed prototypes for edge cases — empty states, error handling, network failure scenarios
Weeks 9–12
WoZ Validation
Ran the Wizard of Oz simulation with 24 users. I acted as the AI backend in real time while the team observed and captured behavior data.
Collaboration
PM
Observed sessions live — together we caught the "Oldest-First" sort issue and fixed it between test rounds
QA
Monitored edge cases in real time, flagging failure patterns I couldn't see from the operator seat
Weeks 13–24
Ship & Scale
Handed off annotated specs, built the AI widget as a reusable design system component, and kicked off Chinese localization.
Collaboration
ENG
Co-built the AI widget as a reusable component — paired on interaction states and responsive behavior
PM
Defined CN localization scope together — created layout guidelines for CJK text reflow
Reflection

What I'd Carry Forward

Pre-launch simulation testing wasn't common at TAP3D. This project demonstrated its value to the C-suite, helping seed a culture of data-driven design.
Validation Culture — Simulation testing proved its value, seeding a culture of data-driven design and early validation at TAP3D.
Cross-Functional Leadership — Navigated constraints with PMs and Engineers, making strategic trade-offs to arrive at the optimal phased solution.

This project taught me to navigate constraints and make strategic trade-offs. It also clarified my direction: I want to design services that scale globally and tackle complex ecosystem problems through cross-disciplinary collaboration.

Pitching WoZ simulation results to the C-suite for AI rollout approval