How to Set Up xAPI Statements in 8 Simple Steps
Setting up xAPI statements for detailed tracking can feel a little intimidating at first. I remember staring at the spec thinking, “Okay… but what do I actually send?” The good news is you don’t need to overcomplicate it—you just need a clear plan for what you’re tracking and a consistent way to turn those events into statements.
In my experience, the fastest way to get unstuck is to treat each step like a small deliverable: pick events, map them to verbs/activities, add the context you care about, standardize it with profiles/rules, then prove it works by testing against your LRS. That’s exactly what the steps below walk through.
Quick preview: you’ll go from “we want better learning analytics” to having copy/pasteable statement examples, a validation checklist, and a practical testing routine you can run before you trust your dashboards.
Key Takeaways
- Start with a short list of high-value events (completions, quiz answers, replays) and ignore the “nice to have” stuff until later.
- Use the xAPI statement structure consistently: actor, verb, object, plus result and context when it adds meaning.
- Add detail the right way: durations, response correctness, attempt number, device/screen size, and session metadata.
- Use profiles and rules to enforce consistent fields (so you don’t end up with half your team sending different shapes of data).
- Connect to an LRS and confirm your endpoint is accepting statements (auth, content-type, and response codes).
- Test with real payloads and real timestamps. Validate required fields and check that the LRS stores what you think it stores.
- Follow best practices: keep verb/activity naming stable, avoid tracking redundant noise, and revisit your plan as your course changes.
- Use a checklist to document your tracking plan, templates, and what “valid” means (so you can troubleshoot when something breaks).
Step 1: Identify Key Events (and the Data Behind Them)
First, figure out which learner actions actually map to your learning goals. Not every click deserves to become an xAPI statement. If you track everything, you’ll drown in events that don’t help you make decisions.
For example, these are usually high-value:
- Lesson completion (use when you want progress metrics)
- Quiz attempts (use when you want mastery or remediation signals)
- Question answers (use when you want item-level performance)
- Replays / rewatching (use when you want engagement and difficulty hotspots)
- Time-on-task (use when you want to detect confusion vs. speed)
What I do in practice is make a simple table: Event, Why we track it, Where it happens (module/video/quiz), and What fields we’ll include (duration, correctness, attempt #, etc.). Once that’s done, designing statements gets way easier.
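For example, a starter version of that table might look like this (the module and quiz names are placeholders for your own content):

| Event | Why we track it | Where it happens | Fields to include |
| --- | --- | --- | --- |
| Lesson completion | Progress metrics | Modules 1-4 | duration |
| Quiz attempt | Mastery / remediation signals | Quiz 1 | attempt #, score, duration |
| Question answer | Item-level performance | Quiz 1, questions A-E | response, correctness, duration |
| Video replay | Engagement / difficulty hotspots | Intro video | replay count, duration |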
Step 2: Design Core xAPI Statements (Copy/Paste-Friendly)
Once you know what to track, you turn each event into an xAPI statement. In plain terms: “[actor] [verb] [object]” and then you add optional fields like result or context.
Here’s the core structure you should expect every time:
- actor: who did it (mbox, account, or other supported identity)
- verb: what happened (completed, answered, watched, replayed…)
- object: what it happened on (activity + ID)
- timestamp: when it happened (ISO 8601)
- context (optional): extra info like registration, category, parent activity
- result (optional): score, success/failure, response, duration
Below is a minimal working example for a quiz answer. I’m including the exact fields you’ll need to populate.
Example: “answered” a question
POST your statement to your LRS endpoint (you’ll wire the exact URL in Step 5):
Statement JSON:
{ "actor": { "mbox": "mailto:learner@example.com", "name": "Jane Learner" }, "verb": { "id": "http://adlnet.gov/expapi/verbs/answered", "display": { "en-US": "answered" } }, "object": { "id": "https://example.com/activities/quiz/quiz-1/question-A", "definition": { "name": { "en-US": "Quiz 1 - Question A" }, "type": "http://adlnet.gov/expapi/activities/question" } }, "result": { "response": "B", "success": true, "duration": "PT8S" }, "context": { "registration": "b1a2c3d4-1111-2222-3333-444455556666", "contextActivities": { "parent": [ { "id": "https://example.com/activities/quiz/quiz-1", "definition": { "name": { "en-US": "Quiz 1" }, "type": "http://adlnet.gov/expapi/activities/quiz" } } ] } }, "timestamp": "2026-04-13T14:22:10.000Z" }
What to watch for (common mistakes):
- timestamp format must be ISO 8601 (e.g., `2026-04-13T14:22:10.000Z`).
- object.id should be stable and unique per activity (question A shouldn’t share an ID with question B).
- verb.id should match a known verb URI when possible (or use your own consistent verb IDs).
- result.success should be a boolean, not a string like “true”.
Also: don’t rely on random IDs for activities. I generate activity IDs from a predictable pattern (example: /module/{moduleId}, /quiz/{quizId}/question-{questionId}) so they don’t drift when you redeploy content.
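To make that pattern concrete, here’s a minimal sketch of an ID helper, assuming your content exposes stable module/quiz/question identifiers (the base URL and helper names are mine, not from any xAPI library):

```typescript
// Build activity IDs from a predictable pattern so they stay stable
// across redeploys. BASE is a placeholder; use your own domain.
const BASE = "https://example.com/activities";

const activityId = {
  module: (moduleId: string) => `${BASE}/module/${moduleId}`,
  quiz: (quizId: string) => `${BASE}/quiz/${quizId}`,
  question: (quizId: string, questionId: string) =>
    `${BASE}/quiz/${quizId}/question-${questionId}`,
};

// The same content always yields the same ID, no matter when it's generated:
console.log(activityId.question("quiz-1", "A"));
// -> https://example.com/activities/quiz/quiz-1/question-A
```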
Step 3: Enhance xAPI Statements for Detail (Without Making Them Messy)
Core statements tell you what happened. Detailed statements tell you how it happened. That’s the difference between “completed” and “completed after three retries with long pauses on question C.”
Here are the details that usually matter most:
- duration (time spent watching / time to answer)
- response (what the learner selected/typed)
- success (correct/incorrect or pass/fail)
- attempt (attempt number, if you can compute it)
- device (mobile/desktop, screen size if you have it)
- session metadata (course run ID, player/session ID)
For video, the “gotcha” is that you need to decide what “watched” means. Is it 100% completion? 50%? First play? In my projects, I track both:
- viewed when they start/engage
- completed when they hit the threshold (like 90% watched), as sketched below
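Here’s a minimal sketch of that threshold decision, assuming a player that reports playback progress (the 90% cutoff is a project choice, not an xAPI rule, and shouldSendCompleted is a name I made up). It also coerces the percent to a number up front, which matters for the reporting story below:

```typescript
// Decide when a progress event should trigger a "completed" statement.
// "viewed" goes out on first play; "completed" fires exactly once, when
// the learner crosses the cutoff.
const COMPLETION_THRESHOLD = 0.9;

function shouldSendCompleted(
  rawPercent: number | string, // players sometimes hand you a string like "0.82"
  alreadyCompleted: boolean
): boolean {
  const pct = Number(rawPercent); // keep the extension value numeric
  return !alreadyCompleted && pct >= COMPLETION_THRESHOLD;
}
```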
Example: “watched” a video with pause/replay context
{ "actor": { "mbox": "mailto:learner@example.com" }, "verb": { "id": "http://adlnet.gov/expapi/verbs/watched", "display": { "en-US": "watched" } }, "object": { "id": "https://example.com/activities/video/intro-biology", "definition": { "name": { "en-US": "Video: Introduction to Biology" }, "type": "http://adlnet.gov/expapi/activities/media" } }, "result": { "duration": "PT8M12S", "extensions": { "https://example.com/xapi/extensions/timeWatchedPercent": 0.82 } }, "context": { "registration": "b1a2c3d4-1111-2222-3333-444455556666", "extensions": { "https://example.com/xapi/extensions/deviceType": "desktop", "https://example.com/xapi/extensions/sessionId": "sess-9f2a1c" }, "contextActivities": { "parent": [ { "id": "https://example.com/activities/module/module-1" } ] } }, "timestamp": "2026-04-13T14:28:44.000Z" }
Quick real-world scenario (what I ran into): on one project, the “timeWatchedPercent” extension was being sent as a string (e.g., "0.82") because the front-end stored it as text. The LRS accepted the statement, but downstream reporting treated it like categorical data instead of numeric. The fix was simple—send it as a number and keep extension keys consistent. That one change made the charts behave immediately.
Also, if you’re using tools like LearnRecord or similar middleware, just make sure you’re not losing fields when the tool maps your event payload to xAPI. I always validate by sending one statement manually first (Step 6) before trusting an automated pipeline.
Step 4: Implement xAPI Profiles and Rules (So Data Stays Comparable)
If you want your analytics to be reliable, you can’t let every statement drift into a slightly different shape. That’s where xAPI profiles and rules help.
Think of a profile as a contract: for a specific statement type (like “answered a question”), you define which verb/object combinations are allowed and which fields must appear in result and context.
Example profile idea (quiz answers):
- verb.id must be `http://adlnet.gov/expapi/verbs/answered`
- object.definition.type must be `http://adlnet.gov/expapi/activities/question`
- result.success must be boolean
- result.response must be present
- context.registration must exist (so you can group attempts)
Minimal rule-style checklist you can enforce in your implementation (a validator sketch follows the list):
- Required fields present: actor.mbox, verb.id, object.id, timestamp, context.registration
- Type checks: result.success is boolean, result.duration is an ISO 8601 duration (`PT#H#M#S`)
- Allowed verbs: only use “answered” for question answers (don’t reuse “completed” for answer events)
- Allowed object IDs: question IDs follow your pattern (example: `/quiz/{quizId}/question-{questionId}`)
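Here’s that checklist as a minimal validator sketch; the field paths follow the JSON examples above, and the function name and error strings are mine:

```typescript
// Rule checks for "answered" statements, mirroring the checklist above.
// Returns a list of violations; an empty list means the statement passes.
// The duration regex is simplified to the PT#H#M#S shapes used in this article.
const ISO_DURATION = /^PT(\d+H)?(\d+M)?(\d+(\.\d+)?S)?$/;

function validateAnswered(stmt: any): string[] {
  const errors: string[] = [];
  if (stmt?.verb?.id !== "http://adlnet.gov/expapi/verbs/answered")
    errors.push("verb.id must be the 'answered' verb URI");
  if (stmt?.object?.definition?.type !== "http://adlnet.gov/expapi/activities/question")
    errors.push("object.definition.type must be the 'question' activity type");
  if (typeof stmt?.result?.success !== "boolean")
    errors.push("result.success must be a boolean, not a string");
  if (typeof stmt?.result?.response !== "string")
    errors.push("result.response must be present");
  if (!stmt?.context?.registration)
    errors.push("context.registration must exist so attempts can be grouped");
  if (stmt?.result?.duration && !ISO_DURATION.test(stmt.result.duration))
    errors.push("result.duration must be an ISO 8601 duration like PT8S");
  return errors;
}
```

Wire it in front of whatever sends your statements, and reject (or at least log) anything that comes back with errors before it reaches the LRS.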
Personal take: profiles aren’t glamorous, but they save you weeks later. Without them, your reporting team ends up writing custom parsing logic for every “almost the same” statement variant. I’d rather define the contract up front.
Step 5: Connect to a Learning Record Store (LRS)
An LRS is where statements go so you can query them later. If you don’t connect correctly, everything else is just sending JSON into the void.
When I set up an LRS, I usually check three things immediately:
- Endpoint URL (the exact statements endpoint)
- Authentication (Basic, OAuth, or whatever your LRS uses)
- Content-Type header (`application/json`)
Some teams use platforms like Scoresware or Learning Pool for learning infrastructure, but the important bit for xAPI is the LRS endpoint you’ll POST to—not the LMS marketing name.
Example endpoint (generic):
- Statements endpoint: `https://your-lrs.example.com/xapi/statements`
Example Postman request setup (a scriptable equivalent follows the list):
- Method: POST
- URL: your LRS statements endpoint
- Headers: `Content-Type: application/json` and `Authorization: Basic base64(user:pass)` (or an OAuth token)
- Body: raw JSON (the statement payload)
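If you’d rather script the same request, here’s a minimal fetch sketch under the same assumptions (generic endpoint, Basic auth placeholders). One thing Postman setups often miss: the xAPI spec also requires an X-Experience-API-Version header on every request, so it’s included here:

```typescript
// POST one statement to a generic LRS statements endpoint.
// The URL and credentials are placeholders; use your own LRS values.
async function postStatement(statement: object): Promise<void> {
  const response = await fetch("https://your-lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // btoa is available in browsers and modern Node; swap in Buffer if needed.
      "Authorization": "Basic " + btoa("user:pass"),
      // Required by the xAPI spec on every request.
      "X-Experience-API-Version": "1.0.3",
    },
    body: JSON.stringify(statement),
  });

  if (!response.ok) {
    // 4xx bodies usually name the field that failed validation, so surface them.
    throw new Error(`LRS rejected statement (${response.status}): ${await response.text()}`);
  }
}
```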
What you should expect back: most LRS implementations return a 2xx response when the statement is accepted. If you get 4xx, don’t guess—inspect the response body and error message. It will usually tell you what field failed validation.
One more thing: if your LRS supports it, enable a “statement inspection” or “debug” view so you can confirm what actually got stored. That cuts down troubleshooting time massively.
Step 6: Test and Validate Your xAPI Statements (Don’t Skip This)
Here’s the truth: if you don’t test, you don’t really have xAPI—you have hope. I run a small suite of tests every time I change statement templates or extensions.
Use tools like Postman (or any REST client) to simulate events and verify they land correctly in your LRS.
Test checklist (with concrete assertions; a runnable sketch follows the list)
- Assertion 1: required fields
- actor.mbox exists
- verb.id exists
- object.id exists
- timestamp exists and is valid ISO 8601
- Assertion 2: timestamp sanity
- timestamp ends with `Z` (UTC) or includes an offset
- duration values use the `PT` format (e.g., `PT8S`, `PT1M30S`)
- Assertion 3: identity consistency
- actor.mbox (or account) matches the identity format your reporting expects
- you’re not mixing mbox and account for the same learner in different statements
- Assertion 4: stored values match what you sent
- result.response is the exact string (e.g., “B”)
- result.success is boolean (true/false), not “true”/“false”
- extensions values are correct types (numbers stay numbers)
- Assertion 5: context wiring
- context.registration exists so attempts can be grouped
- contextActivities.parent IDs match your content structure
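Here’s a minimal sketch of assertions 1, 2, and 4 as plain checks (no test framework assumed; the regex is simplified to the timestamp shapes used in this article). Run it against the statement you’re about to send and against the stored statement you query back; both should come back clean:

```typescript
// Plain-function checks mirroring the assertion checklist above.
// Returns failure descriptions; an empty array means the statement passes.
const ISO_TIMESTAMP = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/;

function assertStatement(stmt: any): string[] {
  const failures: string[] = [];

  // Assertion 1: required fields are present
  const required: [string, unknown][] = [
    ["actor.mbox", stmt?.actor?.mbox],
    ["verb.id", stmt?.verb?.id],
    ["object.id", stmt?.object?.id],
    ["timestamp", stmt?.timestamp],
  ];
  for (const [label, value] of required) {
    if (!value) failures.push(`missing required field: ${label}`);
  }

  // Assertion 2: timestamp ends with Z or carries an explicit offset
  if (stmt?.timestamp && !ISO_TIMESTAMP.test(stmt.timestamp))
    failures.push("timestamp is not valid ISO 8601");

  // Assertion 4: types survive the round trip (booleans stay booleans)
  if (stmt?.result?.success !== undefined && typeof stmt.result.success !== "boolean")
    failures.push("result.success must be boolean, not a string");

  return failures;
}
```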
Validation tip I use: send one statement with a known timestamp (like 2026-04-13T14:22:10.000Z) and then query it back from the LRS. If the stored timestamp is different (or missing), fix your client-side timestamp generation before you scale up.
Also, test negative cases. For example: omit context.registration once and confirm your LRS rejects it (or confirm how it behaves). That tells you whether your profiles/rules are actually protecting you.
Step 7: Follow Best Practices for Recording Detailed xAPI Data
Once your statements work, the real work starts: keeping your dataset clean as content grows. Here are the best practices I stick to.
- Capture meaningful detail: durations, correctness, attempt number, and replay behavior usually pay off. Random UI events usually don’t.
- Standardize verbs and activity names: if you sometimes use “completed” and other times “finished” for the same event, reporting gets annoying fast.
- Use consistent IDs: stable URLs/IDs for activities are worth their weight in gold.
- Keep extensions disciplined: use extension keys under your own domain (like `https://example.com/xapi/extensions/...`) and keep value types consistent; the shared-keys sketch after this list is one cheap way to do it.
- Don’t overload statements: if you add 30 extensions to every action, you’ll slow down ingestion and make debugging miserable.
- Document your tracking plan: what you track, why you track it, and what “valid” means.
- Review periodically: courses evolve. Your tracking plan should too.
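One cheap way to enforce that extension discipline is a single shared module for extension keys, so nobody hand-types a URL (the keys mirror this article’s examples; the module shape is just a suggestion):

```typescript
// Single source of truth for extension keys. Import this everywhere
// instead of hand-typing URLs so keys can't drift between statements.
export const EXT = {
  timeWatchedPercent: "https://example.com/xapi/extensions/timeWatchedPercent",
  deviceType: "https://example.com/xapi/extensions/deviceType",
  sessionId: "https://example.com/xapi/extensions/sessionId",
} as const;

// Usage inside a statement:
// result.extensions = { [EXT.timeWatchedPercent]: 0.82 }  // a number, not "0.82"
```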
One thing I don’t like is unsupported “stats” thrown around without sources. Instead of repeating a random number about “data points per learner,” I’ll tell you what I’ve actually seen: when teams add duration and attempt/context metadata, dashboards usually become more useful immediately—because you can segment learners by behavior (fast guesses vs. slow retries) instead of just outcomes.
Step 8: Create a Quick Checklist for Setting Up xAPI Statements
- Define key activities: list modules, videos, quizzes, and question IDs you’ll track (and how you’ll name them).
- Standardize verbs and activity types: use consistent verb IDs and object.type values (e.g., question, quiz, media).
- Decide what goes in result vs. context:
- Use result for outcome data (success, response, score, duration).
- Use context for metadata (registration, parent/category activities, device/session extensions).
- Build statement templates: create reusable JSON templates for your top 5 events (complete module, view video, answer question, etc.); a factory sketch follows this checklist.
- Enhance with detail (sparingly): add duration, correctness, attempt number, and device info only where it helps you interpret behavior.
- Implement profiles/rules: enforce required fields and type checks for each statement type.
- Connect to an LRS: confirm endpoint URL, headers, and auth. Test with one known statement.
- Validate with assertions: check required fields, timestamp format, types, and that values match what you sent.
- Document IDs and mappings: keep a mapping doc for verb IDs, activity ID patterns, and extension keys.
- Re-test after changes: whenever you update content structure or statement templates, run the same tests again.
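For the template item above, here’s a minimal factory sketch for one event type, the “answered” event from Step 2 (field shapes follow the earlier JSON examples; the interface and function names are mine):

```typescript
// Reusable template for "answered" statements: everything variable is a
// parameter, everything fixed (verb URI, activity type) lives here once.
interface AnsweredEvent {
  email: string;
  quizId: string;
  questionId: string;
  response: string;
  success: boolean;        // boolean by construction, never "true"
  durationSeconds: number; // whole seconds, serialized as PT#S
  registration: string;    // UUID that groups one attempt
}

function answeredStatement(e: AnsweredEvent) {
  return {
    actor: { mbox: `mailto:${e.email}` },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/answered",
      display: { "en-US": "answered" },
    },
    object: {
      id: `https://example.com/activities/quiz/${e.quizId}/question-${e.questionId}`,
      definition: { type: "http://adlnet.gov/expapi/activities/question" },
    },
    result: {
      response: e.response,
      success: e.success,
      duration: `PT${e.durationSeconds}S`,
    },
    context: { registration: e.registration },
    timestamp: new Date().toISOString(),
  };
}
```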
FAQs

What learner actions should I track with xAPI?

Track the actions that reflect learning progress and mastery—things like module completions, quiz attempts, question answers, and meaningful engagement signals (like replays or time-on-task). If it doesn’t help you answer a learning question later, it’s probably not worth the data volume.

How do I design xAPI statements that capture useful detail?

Start with clear verbs (completed, answered, watched, replayed) and specific activity objects with stable IDs. Then only add result and context fields that help interpret the outcome—response, success, duration, registration, and parent activity are usually the big winners.

How do I keep statements consistent across a team?

Use xAPI profiles and rules (plus a shared template library) so the same event always produces the same statement shape. Consistency is what makes reporting and troubleshooting painless later.

Why do I need a Learning Record Store (LRS)?

An LRS stores your xAPI statements so you can query them across time and systems. Without it, you don’t have a reliable source for analytics, auditing, or cross-platform learning history.