Design Thinking To Solve Problems in 7 Practical Steps

By Stefan · May 10, 2025

Ever notice how you can feel totally stuck on a problem… even though you “know what to do”? Most of the time it’s because you jumped into solutions before you actually understood the real problem.

That’s where design thinking comes in. I’ve used it on everything from messy internal processes to course planning, and what I like most is that it doesn’t pretend you can get it right on the first try. It’s built for figuring things out with people—not just for people.

It’s basically a practical, repeatable way to tackle problems by looking at the situation through the user’s eyes, generating options, building something you can test, and improving based on what you learn.

Key Takeaways

  • Design Thinking is a structured problem-solving approach that starts with understanding the user’s needs (not your assumptions).
  • The process typically runs through 5 steps: empathize, define the real problem, generate ideas, prototype quickly, and test with real people.
  • You should expect to cycle back—often multiple times—because feedback changes what you think the problem is.
  • It’s not a rigid checklist. You can adapt the steps to match your timeline, budget, and how risky the decision is.

Ready to Create Your Course?

Try our AI-powered course creator and design engaging courses effortlessly!

Start Your Course Today

Use Design Thinking to Solve Problems Effectively

Let’s be honest: most “problem solving” at work turns into arguing about solutions. Someone says, “We should do X,” and suddenly you’re debating X instead of figuring out whether X actually solves the right thing.

Design Thinking is different. It’s a method that’s grounded in empathy, creativity, and experimentation. You start with the people involved, not with your preferred answer.

Do big companies use it? Sure—IBM Design Thinking is one example you can look up. I’m not interested in name-dropping, though. What matters is whether it helps you make better decisions faster.

In my experience, the biggest payoff is clarity. You stop guessing. You build a small version of the solution, test it, and learn quickly. That’s especially useful when the problem is “messy” (multiple stakeholders, unclear requirements, or user behavior that doesn’t match what the team expects).

And it’s not only for product teams. If you’re working on a course syllabus or building your first masterclass online, design thinking can help you structure what you teach based on what learners actually need—so you’re not just creating content you hope will land.

Steps in the Design Thinking Process

Alright, how does it actually work in practice? Here’s the version I use most often—simple, but not shallow.

Design Thinking is commonly described as a 5-step loop. To match the title, I'll show a "7 practical steps" flow you can actually run: the classic five, plus two lightweight checkpoints (because in the real world, you'll want them).

Step 1: Empathize

  • Gather what users experience: frustrations, motivations, context, and constraints.
  • Artifact to use: an empathy map with fields like Says, Does, Thinks/Feels, and Pains/Gains.

Step 2: Define

  • Turn insights into a problem you can work on.
  • Artifact to use: a problem statement and a How Might We question.

Step 3: Ideate

  • Generate lots of options without locking into the first “good idea.”
  • Artifact to use: a short list of idea “themes” (not just random suggestions).

Step 4: Prototype

  • Build the smallest thing that lets a user experience the idea.
  • Artifact to use: a prototype with a clear fidelity level (paper sketch, clickable wireframe, script/video, etc.).

Step 5: Test

  • Run a test with real people and a simple script.
  • Artifact to use: a test plan with success criteria and what you’ll measure.

Step 6: Learn & Decide (checkpoint I recommend)

  • Summarize findings into “What we expected vs what happened.”
  • Decide: persevere, pivot, or stop.

Step 7: Iterate

  • Update the definition, prototype, or both—and run another cycle.

And yes—repeat. Design Thinking isn’t linear. You’ll bounce back through stages when you learn something new.
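If it helps to see the loop as code, here's a minimal sketch. Every name here is illustrative (there's no real framework behind it); the point is just the shape: cycle through the steps until the step-6 decision is no longer "pivot."

```python
# Illustrative sketch (assumed names, not a real tool): the 7-step
# loop as a function you repeat until step 6 says persevere or stop.

def run_cycle(observations, attempt):
    # Steps 1-2: turn raw observations into a problem statement.
    statement = f"Users struggle because: {max(observations, key=observations.count)}"
    # Steps 3-4: pick one idea theme and build the smallest prototype.
    prototype = {"statement": statement,
                 "fidelity": "paper" if attempt == 0 else "clickable"}
    # Step 5: stand-in for a real test session; here we simply simulate
    # that the second, higher-fidelity prototype passes with users.
    passed = prototype["fidelity"] == "clickable"
    # Step 6: learn & decide.
    return "persevere" if passed else "pivot"

observations = ["unclear next step", "unclear next step", "too much text"]
attempt, decision = 0, "pivot"
while decision == "pivot":          # Step 7: iterate.
    decision = run_cycle(observations, attempt)
    attempt += 1
print(decision, attempt)            # -> persevere 2
```

The only real takeaway from the sketch is the `while` loop: the decision to stop iterating is an explicit output of the process, not something you drift into.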

Here’s what that looks like with a real scenario I’ve personally run: we were trying to improve learner engagement for an online course module that “should” have been interesting (based on the topic), but completion and quiz attempts were low. Instead of adding more content, I ran quick empathy interviews (5 learners, 20 minutes each), pulled patterns into an empathy map, and rewrote the problem statement around confidence and next-step clarity—not “students weren’t interested.”

Then we prototyped a revised learning flow as a short clickable sequence (lesson intro, one practice task, feedback loop) and tested it with another small group. What changed? The module went from feeling like information to feeling like “I know what to do next.” Completion rose, and the number of learners who attempted the practice task increased—because the prototype directly addressed the moment where confusion was happening.

That’s the whole point: the process forces you to find the real friction point before you spend time building a full solution.

Understand Users’ Needs Through Empathy

Empathy is basically your reality check. It’s how you stop designing for your internal logic and start designing for how people actually behave.

When I talk about empathy, I don’t mean “be nice” or “ask a few questions.” I mean getting specific about what users do and why they do it.

Here’s how you can practice it:

  • Observe: Watch someone use the product/course path. Don’t just listen—look for hesitation, re-reading, backtracking, or moments where they stop.
  • Ask open questions: “What were you hoping would happen?” “Where did you get stuck?” “What did you try next?”
  • Listen for tradeoffs: People don’t only have needs—they have constraints (time, budget, skill level, tools, confidence).

Quick example: in online shopping, cart abandonment is extremely common (the often-cited industry average is around 70%, and for many sites it comes from friction like unexpected shipping costs, forced account creation, or a confusing checkout flow). The empathy move is to identify which friction is happening for your users, then test small changes to reduce it.

If you’re building courses, empathy might look like: “When learners feel lost, they don’t fail because they’re lazy—they fail because they don’t know the next step.” That insight is gold: it directly supports student engagement because you’re designing for momentum, not just information.

Practical tip: after each interview/observation, write one sentence starting with: “They keep doing X because they believe Y, but that belief breaks when Z.” That sentence usually becomes the backbone of your problem definition.


Clearly Define the Problem Statement

If your problem statement is fuzzy, your team will “solve” random things. I’ve watched this happen way too many times.

When you define the problem, you’re trying to answer one question: what’s the real obstacle for the user?

Here’s a simple structure that works:

  • User: who is struggling?
  • Need: what are they trying to do?
  • Friction: what gets in the way?
  • Impact: what happens because of it?

Example (course context): “Online learners struggle to stay focused during the first 10 minutes of a module because the learning steps aren’t obvious, so they don’t attempt practice tasks and they lose confidence.”

Notice how that statement gives you testable directions. Compare it to a vague one like “We need better retention.” Retention is the outcome. You need the reason behind it.

Actionable tip: write a How Might We question right after your problem statement. For example:

  • How might we help learners know the next step within the first 10 minutes—so they can confidently start practice?
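If you want to force yourself to fill in all four fields every time, you can even treat the structure as a tiny template. This is a hypothetical helper (the function names are mine, not part of any method), with the field names matching the structure above:

```python
# Hypothetical helpers: assemble the four fields (User, Need,
# Friction, Impact) into a problem statement plus an HMW question.

def problem_statement(user, need, friction, impact):
    return f"{user} struggle to {need} because {friction}, so {impact}."

def how_might_we(user, need):
    return f"How might we help {user} {need}?"

s = problem_statement(
    user="online learners",
    need="stay focused during the first 10 minutes",
    friction="the learning steps aren't obvious",
    impact="they don't attempt practice tasks and lose confidence",
)
print(s)
print(how_might_we("online learners", "know the next step early on"))
```

If you can't fill one of the four arguments, that's the signal you need more empathy work before moving on.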

If you’re stuck, try this exercise: list the top 3 things users said during empathy research. Then ask, “What do those statements have in common?” That common thread is usually the problem.

Generate Ideas with an Open Mind

This is where you stop trying to be “right” and start trying to be useful. Ideation is messy on purpose.

I like to set a timebox for idea generation—like 15–25 minutes—because otherwise it turns into overthinking.

Here are a few tactics that actually help:

  • Silent brainstorm first: everyone writes ideas separately for 5–10 minutes, then you share. It prevents the loudest voice from steering everything.
  • Use constraints: “Ideas must be testable in 48 hours” or “Ideas should reduce confusion at the first step.” Constraints produce better ideas than blank brainstorming.
  • Cluster ideas: group them by theme (e.g., clarity, motivation, feedback, pacing). This makes prototyping easier later.

For example, if your problem is “remote learners disengage,” ideas might include:

  • Interactive “choose-your-path” scenarios
  • Short quizzes that trigger specific next content
  • Micro-assignments with instant feedback
  • Examples that match learners’ real jobs

One thing I’ve learned: don’t throw away “weird” ideas immediately. Sometimes the weird one becomes the best test because it reveals a new angle on the user’s needs.

Create Prototypes to Visualize Solutions

Prototyping is where design thinking stops being talk and starts being reality.

And no—you don’t need a perfect build. You need a version that lets someone react to the idea.

What kind of prototype should you build? Match the prototype to the question you’re trying to answer.

  • If you need to test clarity: build a simple script, storyboard, or short video walkthrough.
  • If you need to test flow: build a clickable wireframe (even low fidelity).
  • If you need to test communication: prototype the message copy and structure (headings, prompts, examples).

Real-life tip: choose the lowest fidelity prototype that still answers the core question. Perfection wastes time. A “good enough” test is better than a beautiful guess.

Course example: if you’re testing a new lesson structure, create a 2–3 minute “lesson preview” video (or a slide-to-video mock) showing the exact flow: hook → what you’ll do → quick practice → feedback. Then ask learners to complete a tiny task while you watch what confuses them.

In my experience, that kind of quick prototype can save weeks. You find out early whether the issue is the content, the pacing, the instructions, or the feedback loop.

Test Solutions with Real Users

Testing is where you stop debating internally and start learning externally.

Give real people your prototype and watch what they do. Then ask questions that uncover thinking—not just reactions.

Here’s a simple test script you can reuse:

  • Task: “Try this for 5 minutes—tell me what you expect to happen.”
  • Probes: “What’s confusing?” “What would you do next?” “Why?”
  • Feedback: “What should we change first?”
  • Confidence check: “How confident do you feel about completing the next step?” (1–5 scale)
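One practical way to keep sessions comparable is to store that script as plain data and fill in answers per participant. A minimal sketch (the wording mirrors the script above; the structure is just an assumption, use whatever note-taking format you like):

```python
# Illustrative: keep the reusable test script as data so every
# session asks the same things, in the same order.

TEST_SCRIPT = [
    ("task", "Try this for 5 minutes; tell me what you expect to happen."),
    ("probe", "What's confusing? What would you do next? Why?"),
    ("feedback", "What should we change first?"),
    ("confidence", "How confident do you feel about the next step? (1-5)"),
]

def session_notes(answers):
    """Pair each scripted prompt key with what the participant said."""
    return dict(zip((key for key, _ in TEST_SCRIPT), answers))

notes = session_notes(["expects a quiz", "unsure where to click",
                       "shorten the intro", 3])
print(notes["confidence"])   # -> 3
```

The payoff shows up when you compare sessions: identical prompts mean differences in the notes reflect differences between users, not differences in how you ran the test.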

When you collect feedback, focus on:

  • Emotions and friction: Where do they hesitate? What do they misunderstand?
  • Behavior: Do they complete the task? Do they abandon the flow?
  • Language: What words do users use to describe the problem? Those words often become your messaging.

About metrics: it’s tempting to chase vanity numbers. Instead, use metrics that map directly to the problem you defined.

For customer-facing products, teams often track CSAT (Customer Satisfaction Score) or NPS (Net Promoter Score). The key is how you set them up.

Mini playbook I use for CSAT/NPS-style testing:

  • Baseline: measure the current experience first (even a small sample).
  • Success threshold: decide what “meaningful improvement” looks like (example: +0.5 on a 5-point CSAT, or +5 NPS points).
  • Sample size: if you’re doing usability testing, 5–8 users can uncover big issues; if you’re doing survey metrics, you’ll need more people for stability.
  • Interpretation: if satisfaction rises but task completion doesn’t, you might have “pleasant confusion.” Fix the flow first.
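The playbook above is really just two numbers and a rule: a pre-agreed lift threshold and a minimum sample before you trust a survey metric. Here's a small sketch of that check (all scores and thresholds are made-up example values):

```python
# Illustrative check (assumed numbers): did the new flow clear the
# success threshold we set before testing? Scores are on a 1-5 CSAT scale.

def meets_threshold(baseline_scores, new_scores, min_lift=0.5, min_n=30):
    baseline = sum(baseline_scores) / len(baseline_scores)
    new = sum(new_scores) / len(new_scores)
    lift = new - baseline
    stable = len(new_scores) >= min_n   # survey metrics need a larger sample
    return lift >= min_lift and stable, round(lift, 2)

ok, lift = meets_threshold(
    baseline_scores=[3, 4, 3, 3, 4] * 8,   # 40 pre-change responses, mean 3.4
    new_scores=[4, 4, 5, 4, 4] * 8,        # 40 post-change responses, mean 4.2
)
print(ok, lift)   # -> True 0.8
```

The important part is that `min_lift` and `min_n` are fixed before you look at the results—otherwise any wobble in the numbers can be rationalized as "meaningful improvement."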

And yes—stay open-minded. If users are critical, that’s not failure. It’s data.

Iterate and Improve the Solution

Iteration is just improvement through learning. It’s not “redo everything because we’re bored.”

Here’s how to iterate like a pro:

  • Compare what you expected with what actually happened.
  • Update the prototype based on the highest-impact issues first (the ones that caused abandonment, confusion, or failed tasks).
  • Decide whether to persevere (keep the direction), pivot (change the approach), or stop (the idea isn’t worth pursuing).

If you’re building online courses, iteration can look like swapping lesson order, rewriting instructions, or changing the practice/feedback timing.

One practical example: if learners can follow the explanation but don’t attempt practice, don’t just add more practice. Add “micro-success” steps—smaller tasks that build confidence quickly, then scale.

And if the data suggests your original direction was wrong, pivot. That’s not a setback. It’s the design thinking loop doing its job.

Flexibility in Applying Design Thinking

Design thinking isn’t rigid. The whole method is meant to be adaptable.

In practice, you’ll adjust based on:

  • Timeline: can you do one cycle in a week, or do you need a month?
  • Budget: what’s realistic for prototypes and testing?
  • Risk: how expensive would a wrong decision be?
  • Team size: do you have enough people to run interviews and test sessions?

For small projects, you might do a lightweight version: quick empathy (3 users), define the problem, build one prototype, and test it. For bigger initiatives, you’ll likely run multiple cycles and go deeper into research.

If you’re designing course curriculum, you’ll often benefit from repeated empathy and testing—because learners’ needs aren’t always obvious until you watch them try.

The goal isn’t to “follow the process.” The goal is to build understanding, reduce risk, and improve outcomes for real users.

FAQs


Why is empathy important in design thinking?

Empathy helps you understand what users are really experiencing—their frustrations, motivations, and day-to-day constraints. When you observe and talk to people directly, your solutions become more relevant because you’re designing around real pain points and unmet needs, not assumptions from inside the team.


Why does prototyping matter?

Prototyping turns ideas into something people can interact with. That matters because you can’t truly validate assumptions with slide decks or opinions. A simple prototype lets you test early, discover misunderstandings fast, and refine the solution before you invest heavily in full development.


What’s the best way to generate ideas?

Idea generation works best in a judgment-free environment. Start with quantity (lots of ideas), then sort and combine later. Techniques like timed brainstorming and having people write ideas separately first can prevent dominant personalities from steering the group too early. The goal is to explore possibilities before you decide what’s best.


Why is iteration necessary?

Iteration is how you improve based on what you learn from testing. Instead of assuming your first version is “close enough,” you refine the prototype and sometimes even rewrite the problem statement. This reduces the risk of building the wrong thing and increases the chances that your final solution actually fits the user’s needs.
