How To Create Competency-Based Assessments For Online Learners

By Stefan · August 31, 2024

Competency-based assessments for online learners can feel like wandering into a maze—there are a lot of paths, and most of them don’t connect to what you actually need to measure. I’ve seen teams end up with “assessments” that check boxes (a quiz here, a discussion there) but don’t really prove mastery.

So the real question I always ask is simple: how do your assessments show that learners can do the thing? Not just that they read the material. Not just that they clicked through. Can they demonstrate the competency at the level you claim?

In this post, I’ll lay out a practical framework you can use to design competency-based assessments that work in online courses—clear benchmarks, repeatable rubrics, measurable scoring, and a way to keep improving after you pilot. No fluff.

Key Takeaways

  • Write competencies as observable outcomes (verb + object + context), then build assessments that produce evidence for each one.
  • Use a mastery rubric with 3–4 levels (e.g., 1–4) and anchors that describe what “meets” looks like—not just percentages.
  • Map each competency to 1–3 assessment artifacts (quiz items, project tasks, simulations, oral/video demos) so scoring is defensible.
  • Define a scoring rule (e.g., “Mastery = level 3 or higher on all rubric dimensions”) before you collect learner data.
  • Track both quantitative results (scores, pass rates, time-to-mastery) and qualitative evidence (reflection, peer review, rationale) to interpret results.
  • Give feedback that points to the next action: which rubric dimension to fix, what example to follow, and what to resubmit.
  • Pilot with a small cohort, calibrate scoring across graders, and revise competencies/assessments when data shows consistent gaps.

Ready to Build Your Course?

Try our AI-powered course builder and create amazing courses in minutes!

Get Started Now

Steps to Create Competency-Based Assessments for Online Learners

Here’s the workflow I use when I’m building competency-based assessments from scratch. It’s not complicated, but it is structured—because without structure, you end up with subjective grading and learners who can’t tell what they need to improve.

  • Step 1: Define competencies as observable outcomes. Don’t write “understands debugging.” Write something like “debugs a provided code snippet by identifying root cause and applying a correct fix.”
  • Step 2: Build a competency-to-evidence map. For each competency, list the artifacts that will prove mastery (quiz items, project tasks, simulations, oral demo, portfolio artifact).
  • Step 3: Create a mastery rubric with anchors. Use 3–4 performance levels and describe what “Meets” looks like using concrete behaviors.
  • Step 4: Set scoring rules. Decide how rubric scores convert to “Mastered/Not yet,” and what happens after a learner misses.
  • Step 5: Pilot and calibrate. Run a small test group and check scoring consistency between graders (or between your rubric and real submissions).
  • Step 6: Measure mastery and improve. Use analytics to find which competencies are failing consistently, then revise either instruction or the assessment.

To make this less abstract, I’ll show an example competency and what “good” evidence looks like.

Understanding Competency-Based Assessment

Competency-based assessment evaluates learners against specific competencies—skills and knowledge they can demonstrate—rather than measuring time spent in a course. In practice, it’s about the evidence you collect.

For me, the biggest difference is this: traditional courses often ask, “Did you finish the module?” Competency-based courses ask, “Can you do the task to the standard we defined?”

Example competency (real-world style):

  • Competency: “Create a structured troubleshooting plan for a web app bug.”
  • Evidence artifact: a short written plan plus a recorded walkthrough (5–8 minutes) showing the steps taken.
  • Rubric dimensions: (1) problem framing, (2) hypotheses & tests, (3) correctness of fix, (4) communication clarity.

Now you’re not guessing. You’re collecting proof that matches what you claim learners can do.

One more thing I’ve learned the hard way: if you don’t define mastery levels, “competency-based” turns into “opinion-based.” The rubric is what keeps it fair and consistent.
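To make that concrete, here’s a minimal sketch of what a rubric with explicit level anchors could look like, using the troubleshooting-plan competency above. I’ve written it as a Python dictionary purely for illustration; the anchor wording is my own example, not a prescribed standard.

```python
# Illustrative rubric for "Create a structured troubleshooting plan for a web app bug".
# Each dimension gets anchor descriptions for levels 3 ("Meets") and 4.
rubric = {
    "problem framing": {
        3: "States the bug, who it affects, and expected vs. actual behavior.",
        4: "Also scopes the problem: what is ruled in and out before testing.",
    },
    "hypotheses & tests": {
        3: "Lists plausible causes, each paired with a concrete test.",
        4: "Orders tests by likelihood and cost, and explains the ordering.",
    },
    "correctness of fix": {
        3: "The fix resolves the original failing case.",
        4: "The fix is validated against the failing case plus a regression check.",
    },
    "communication clarity": {
        3: "The walkthrough follows the written plan and stays within 5-8 minutes.",
        4: "The walkthrough also flags tradeoffs and open risks.",
    },
}
```

Anchors like these are what make scores comparable from one grader to the next.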

Identifying Competencies for Online Learning

Identifying the right competencies is where the whole system either takes off or collapses. If your competencies are vague, your assessments will be vague too—and learners will suffer.

My rule: every competency should answer “what can they do, and what does good look like?”

Start with your course goals or program outcomes. Then break them down into skills. For a coding program, you might end up with competencies like:

  • Compare algorithms and explain their tradeoffs using a real scenario.
  • Debug code by identifying root cause and applying targeted fixes.
  • Design a user interface that meets usability requirements.

Next, bring in stakeholders. This doesn’t have to be a formal committee. In my experience, even 2–3 subject matter experts reviewing your competency list is enough to spot mismatches—like skills that sound important but aren’t actually used day-to-day.

You can also use content mapping to visualize which parts of your course support each competency. That’s useful because it forces you to ask: “Do we teach what we’re going to assess?”

Prioritization tip: rank competencies by impact and frequency in the target job role. If you have 12 competencies but only time to assess 6 well, cut or defer the rest; a competency you never collect real evidence for doesn’t benefit anyone.

Designing Effective Assessment Tools

This is where you decide what kind of assessment actually produces evidence for each competency. And no, it doesn’t have to be only one format.

Here’s a simple decision guide I use:

  • Use quizzes when the competency includes recall, concept checks, or identifying correct/incorrect reasoning.
  • Use projects when the competency is about building, applying, or producing something.
  • Use simulations when the competency needs realistic decision-making (customer support tickets, incident response, sales calls).
  • Use oral/video demos when the competency includes explanation, process, or communication.

Variety matters, but not because it’s “fun.” It matters because different competencies require different types of proof.

Digital portfolio idea that works well online: learners submit artifacts tagged by competency (e.g., “C2 debugging plan,” “C3 UI usability checklist”). If your LMS supports file submission and metadata, you can track mastery per competency without digging through everything manually.

Accessibility check (non-negotiable): if your assessment requires video upload, provide an alternative (text transcript, written explanation, or live audio recording). Also, keep instructions short and plain. If learners can’t understand the task, you’re not testing competency—you’re testing confusion.

And yes, self-assessment can help. But here’s what I recommend instead of generic reflection prompts: tie reflection to rubric dimensions.

  • After a submission, ask: “Which rubric dimension do you think you met (and why)?”
  • Then: “What is your next action to improve the dimension you didn’t meet?”

That turns reflection into something measurable, not just journaling.

Quick template: competency-to-assessment mapping matrix (copy this structure)

  • Competency: (name + observable outcome)
  • Evidence artifact(s): (quiz items / project task / simulation / demo)
  • Rubric dimensions: (2–5 dimensions max)
  • Scoring rule: (e.g., “Mastered if all dimensions ≥ 3”)
  • Feedback loop: (what learners resubmit and what changes you expect)

This matrix is the “glue” that keeps your assessment aligned with what you’re trying to teach.
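If you want to keep the matrix machine-readable (handy later for generating checklists or tagging LMS submissions), here’s one way it could look. The field names mirror the template above; the values are just the debugging example, not a required schema.

```python
# One row of the competency-to-assessment matrix, filled in with the debugging example.
competency_matrix = [
    {
        "competency": "Debugs a provided code snippet by identifying root cause "
                      "and applying a correct fix",
        "evidence_artifacts": ["written troubleshooting plan",
                               "5-8 minute recorded walkthrough"],
        "rubric_dimensions": ["problem framing", "hypotheses & tests",
                              "correctness of fix", "communication clarity"],
        "scoring_rule": "Mastered if all dimensions >= 3",
        "feedback_loop": "Resubmit only the dimensions scored below 3",
    },
]
```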

Implementing Assessments in Online Courses

Implementation is where most competency-based systems either become usable… or fall apart. If learners don’t know what to do, they won’t produce evidence. And if your course schedule is unclear, everything feels chaotic.

What I look for when implementing:

  • Clear timing: formative checks happen before the final mastery assessment.
  • Transparent expectations: rubric + examples shown before learners submit.
  • Simple submission workflow: one place to submit, one place to see feedback.

Communication matters. I usually include:

  • When assessments open/close (include time zone).
  • What “Mastered” means for each competency.
  • What learners can do if they don’t meet mastery (resubmission rules).

For the technical side, I’ve had good results using an LMS that supports submissions and rubric scoring. Moodle is solid for structured grading, and Teachable (and similar platforms) can work well when you want a streamlined learner experience.

One practical setup tip: if your LMS allows it, create assignment categories that map directly to competencies. Then you can export or review results by competency instead of treating everything as one big grade.

Also, include formative assessments. Not because “more assessment is better,” but because it reduces failure at the final stage.

  • Example checkpoint cadence: week 2 (micro-quiz + short draft), week 4 (project milestone), week 6 (final competency submission).

Measuring Learner Performance

Measuring learner performance in a competency-based system isn’t just “what score did they get?” It’s “what evidence do they have per competency, and do they meet the mastery threshold?”

Define your metrics before you launch. Here are concrete metrics that actually help you make decisions (I’ll show a quick computation sketch after the list):

  • Rubric mastery rate per competency: % of learners scoring at mastery level (e.g., Level 3 or 4) on each dimension.
  • Overall mastery attainment: % of learners who meet the scoring rule for the competency (not just average rubric score).
  • Time-to-mastery: number of days from first submission attempt to mastery.
  • Resubmission frequency: how many attempts are needed before mastery—this often signals instruction gaps.
  • Item/skill difficulty (for quizzes): % correct per question group mapped to competency subskills.

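To make those metrics concrete, here’s a rough sketch of how you might compute a few of them from an exported submissions table. It assumes a hypothetical CSV with one row per submission attempt and columns named learner_id, competency, mastered, and submitted_at; your LMS export will almost certainly use different names.

```python
import pandas as pd

# Hypothetical export: one row per submission attempt, with columns
# learner_id, competency, mastered (True/False), submitted_at (timestamp).
df = pd.read_csv("submissions.csv", parse_dates=["submitted_at"])

# Overall mastery attainment: % of learners who ever reach mastery, per competency.
mastery_rate = (
    df.groupby(["competency", "learner_id"])["mastered"].any()
      .groupby("competency").mean() * 100
)

# Resubmission frequency: average number of attempts per learner, per competency.
attempts = df.groupby(["competency", "learner_id"]).size().groupby("competency").mean()

# Time-to-mastery: days from the first attempt to the first mastered attempt.
first_attempt = df.groupby(["competency", "learner_id"])["submitted_at"].min()
first_mastery = df[df["mastered"]].groupby(["competency", "learner_id"])["submitted_at"].min()
time_to_mastery = (first_mastery - first_attempt).dt.days.groupby("competency").mean()

print(mastery_rate, attempts, time_to_mastery, sep="\n\n")
```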
Here’s a scoring example I’ve used successfully:

  • Rubric dimensions: 4 dimensions, each scored 1–4.
  • Mastery rule: Mastered if all dimensions are ≥ 3 AND at least 2 dimensions are scored 4.
  • Not yet rule: If any dimension is < 3, learner gets targeted feedback and resubmits only the missing part.

That approach keeps resubmissions from turning into “redo the whole thing.” Learners focus on what they actually need.
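A rule like that is small enough to sanity-check in code before you launch. This is just a sketch of the rule as stated; plug in whatever dimension names your rubric uses.

```python
def is_mastered(scores: dict[str, int]) -> bool:
    """Mastery rule above: every dimension >= 3 AND at least two dimensions scored 4."""
    return (all(s >= 3 for s in scores.values())
            and sum(s == 4 for s in scores.values()) >= 2)

# Meets the rule: all dimensions >= 3, two of them at 4.
print(is_mastered({"problem framing": 4, "hypotheses & tests": 3,
                   "correctness of fix": 4, "communication clarity": 3}))   # True

# Misses the rule: one dimension below 3, so the learner resubmits that part only.
print(is_mastered({"problem framing": 4, "hypotheses & tests": 2,
                   "correctness of fix": 4, "communication clarity": 3}))   # False
```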

Use analytics from your LMS to track trends. But interpret carefully. If learners fail a competency repeatedly, it could be:

  • the assessment is unclear (rubric mismatch),
  • the instruction didn’t build the needed skills, or
  • the scoring rubric is too strict (or inconsistent).

And don’t rely only on numbers. Qualitative evidence like reflections, peer feedback, and instructor notes can explain why learners got stuck—especially when multiple attempts happen.

Providing Feedback and Support

Feedback in competency-based assessment should do more than judge. It should tell learners exactly what to change next.

My feedback checklist:

  • Specific: mention the rubric dimension(s) and what was demonstrated.
  • Actionable: provide the next step (what to revise, what to include, what to resubmit).
  • Timed: ideally within 24–72 hours for assignments that feed into later tasks.

Instead of “Good job,” I prefer something like:

  • “You met dimension 2 (hypotheses & tests)—your test plan was clear and logically sequenced. Next, improve dimension 3 (correctness of fix) by validating the fix against the original failing case.”

Video or audio feedback can be great here. I’ve noticed learners respond better when they hear the “why,” not just the score. Even a 60–90 second Loom-style recording can reduce back-and-forth questions.

Also, keep the growth mindset real. Don’t just say “mistakes are part of learning.” Tie it to the system: “Here’s what to do differently on the resubmission.” That’s how you build confidence.

Support systems should be built into the assessment workflow, not added as an afterthought.

  • Discussion forums for competency questions (“C2 debugging plan confusion”).
  • Virtual office hours during assessment windows.
  • Sample submissions that show what mastery looks like.

Peer review can help too—especially when you structure it with the same rubric dimensions. If peers don’t have rubric anchors, peer feedback becomes “I liked it” instead of “here’s what needs improvement.”

Evaluating and Revising Your Assessments

If you want competency-based assessments to stay trustworthy, you have to treat them like products: measure, learn, iterate.

What to evaluate:

  • Which competencies show low mastery rates?
  • Are learners misunderstanding instructions or rubric expectations?
  • Do resubmissions improve outcomes—or do they stall?
  • Are graders scoring consistently?

When data shows a repeated problem, use a decision rule (there’s a small automation sketch after this list).

  • If > 40% of learners miss the same rubric dimension across multiple cohorts, revise the instruction (teach the missing skill) before you change the assessment.
  • If learners score inconsistently (different graders give very different rubric scores for the same evidence), calibrate rubric anchors and retrain graders.
  • If learners report confusion in feedback surveys (“I didn’t know what you wanted”), rewrite the assessment prompt and add 1–2 examples.
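The first decision rule is easy to automate if you keep rubric scores per cohort. Here’s a rough sketch, assuming a hypothetical table with cohort, learner_id, dimension, and score columns:

```python
import pandas as pd

# Hypothetical rubric results: one row per learner per rubric dimension,
# with columns cohort, learner_id, dimension, score (1-4).
scores = pd.read_csv("rubric_scores.csv")

# Share of learners who miss each dimension (score below 3), per cohort.
miss_rate = (scores["score"] < 3).groupby([scores["cohort"], scores["dimension"]]).mean()

# Flag dimensions missed by more than 40% of learners in every cohort observed.
flagged = miss_rate.groupby("dimension").min() > 0.40
print(flagged[flagged].index.tolist())  # candidates for revising instruction first
```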

Don’t skip piloting. I’m a big fan of running a small cohort test first—like 8–15 learners—because it surfaces issues you won’t catch in planning.

Pilot what?

  • Clarity of instructions (can they complete it without questions?)
  • Rubric effectiveness (does it separate mastery vs not yet?)
  • Time estimates (how long does it take learners to submit properly?)
  • Grading load (how long does it take you to score accurately?)

Best Practices for Competency-Based Assessments

Let me be blunt: “best practices” only matter if they show up in how your assessments are built and graded. Here are the ones that consistently make a difference.

  • Align assessments to the competency, not just the topic. “Knows about X” isn’t the same as “can do X.” If your competency is performance-based, your assessment must be too.
  • Write measurable rubric criteria. Aim for 3–5 rubric dimensions. More than that usually creates noise and grading drift.
  • Use scoring anchors. For each dimension, include 1–2 example excerpts (or described behaviors) for Level 3 and Level 4. This improves consistency fast.
  • Build a clear feedback loop. Learners should know what to fix after they get results. If they can’t resubmit or don’t know what to change, you lose the competency angle.
  • Include realistic tasks. Case studies and simulations work well because they mirror how the skill is used. If it feels like an academic exercise only, learners may pass without being able to transfer the skill.
  • Calibrate grading. If you have multiple graders, do a calibration session with 5–10 sample submissions. Discuss disagreements and update anchors if needed.
  • Track outcomes per competency. Don’t just watch overall course completion. Watch mastery rates by competency dimension so you can pinpoint what’s failing.

One more practice I like: keep your assessments “tight.” If a submission prompt is 2 pages long, you’ll get inconsistent evidence. Short prompts with specific deliverables are easier to score and easier to improve.

FAQs


What is competency-based assessment?

Competency-based assessment evaluates learners based on their ability to demonstrate specific skills and knowledge, rather than traditional methods that focus on time spent in a course or other metrics.

How do I identify the right competencies for an online course?

Identify competencies by analyzing course objectives, consulting industry standards, and collaborating with subject matter experts to ensure that selected competencies align with both learner needs and course goals.

What are best practices for designing competency-based assessments?

Best practices include aligning assessments with learning objectives, using diverse assessment formats, ensuring clarity in instructions, and incorporating opportunities for learner reflection to enhance the assessment experience.

How should I give feedback on competency-based assessments?

Provide feedback promptly and focus on specific strengths and areas for improvement. Use examples from the work submitted and encourage a growth mindset to promote ongoing learner engagement and development.
