Tina Iurkova


Design Usability Testing: 11 common pitfalls you should avoid


Usability testing helps us cut down on our biases because we get to see how differently other people use digital products. Yet it’s not until you’ve run a dozen tests that the rules you read online about how to conduct them really start to sink in.

Here are 11 traps I’ve stumbled into in my own testing sessions that you might want to steer clear of in your future tests:

 

Trap 1: Being overly friendly

I used to believe that being excessively friendly would help participants ease into things, so I’d use jokes and try to connect on a personal level before getting started. But I found out it’s actually better to stay neutral right from the get-go, not just during the process. Being too friendly might prompt users to feel they should give back that same vibe, potentially leading to more positive feedback.


Trap 2: Turning the test into an interview

When resources are tight, every testing session feels like gold. It’s tempting to ask more questions. Gathering extra data isn’t a bad thing, but if you notice your users are focusing more on you than their screen, it might not be a usability test anymore. The key difference between interviews and usability tests lies in the thinking-aloud technique. This technique helps uncover what truly works for the users, rather than just what they think.

 

Trap 3: Avoiding awkward silence

You know those awkward moments of silence? In usability testing, they’re your allies. When you stay quiet, people tend to open up more. They start explaining things more thoroughly and thinking harder about why they do what they do. If your participant isn’t chatty, don’t rush to break the silence. Ask something like, “What’s going through your head?” if they’re looking a bit lost on what to say.

 

Trap 4: Not moving forward when you should

That’s a tricky one, but it is essential if you conduct testing with sensitive groups like children, the elderly, people with disabilities, or participants who show a high level of distress. Once I had a testing session with a vulnerable group. While some of the users had no big problems navigating the prototype, one user struggled with every task. In my desire to get at least something, I started guiding the user, observing how much guidance would help them complete the task. Eventually, I talked too much, making the session overly long and probably tiring. I should’ve switched tasks sooner instead of lingering on one.

Yet don’t ignore participants when they struggle or get stressed. You can always show appreciation by saying, “Thanks for trying. What did you think about the task before we move on?” This way, you acknowledge their effort and gather insights without making the session overwhelming.

 

Trap 5: Intervening too soon

As a moderator, it’s best to keep your talking to a minimum so your participants can fully dive into the scenarios without distractions. If someone gets quiet suddenly, give it a sec before asking what’s going on in their head. They might just be caught up in reading.

 

Trap 6: Recruiting participants from design communities or re-using past participants

Bringing in participants from design circles, or re-inviting those who’ve been in previous tests, might seem very pragmatic: it speeds up the process and gets us to the analysis phase more quickly. But there is something different about users who have done usability tests before. Over time, they can shift from being product users to becoming design critics. Instead of engaging with the product, they might start focusing on layout specifics, spacing, or suggesting ideas from other platforms.

Testing repeatedly with the same individuals might lead to designing specifically for them rather than the broader user base. That’s why we roll with smaller test groups for usability testing — each time, it’s a fresh set of faces bringing in new perspectives.

 

Trap 7: Expecting all participants to show up

No matter how carefully you schedule, some participants will cancel at the last minute or simply not show up. Plan for it: recruit one or two backup participants, send a reminder shortly before each session, and have something useful to do with an empty slot so a no-show doesn’t derail your day.

 

Trap 8: Asking for opinions

Our questions should be focused on what works for the users, not on what they think would work. Questions like “What do you think of the visual design?” or “Would you prefer a side or top navigation layout?” might not bring valid insights.

It is not that you should ignore any preferences or opinions shared. An opinion a participant volunteers is a data point; an opinion you directly ask for is not. Sure, when you do ask, participants always have something to share. That is why, when you review your notes at the end, you might spot recurring themes around the questions you asked and assume they were big deals for users. Yet those answers might not give you the real picture of the user experience, as they’re not coming from a natural place. Keep your focus on observing behavior; it tells the real story.


Trap 9: Fixating on usability metrics when testing with small samples

Some key usability metrics may fit your goals even with a smaller sample size. For instance, if you’re assessing whether the length of your new onboarding assessment is just right, tracking the time-on-task metric could be insightful. Yet relying solely on these metrics when your sample size is small might result in incomplete or misleading conclusions.

It’s important to distinguish between formative and summative usability testing. Formative testing, involving 5–8 users, focuses on gathering qualitative insights, catching errors in an iterative process. On the other hand, summative testing, with 15–20 participants, measures design using specific UX metrics like task completion rate, success/failure rates, time spent, error rates, and overall user satisfaction.
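To make those metrics concrete, here is a minimal sketch in Python of how completion rate, time on task, and error rate could be computed from session notes. The data structure and field names are hypothetical, purely for illustration; they are not from any specific research tool.

```python
from statistics import mean, median

# Hypothetical session log: one record per participant attempting the onboarding task.
sessions = [
    {"participant": "P1", "completed": True,  "seconds": 142, "errors": 1},
    {"participant": "P2", "completed": True,  "seconds": 98,  "errors": 0},
    {"participant": "P3", "completed": False, "seconds": 260, "errors": 4},
    {"participant": "P4", "completed": True,  "seconds": 175, "errors": 2},
    {"participant": "P5", "completed": True,  "seconds": 120, "errors": 1},
]

# Task completion rate: share of attempts that ended in success.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Time on task is usually reported for successful attempts only.
times = [s["seconds"] for s in sessions if s["completed"]]

# Average number of errors per attempt.
error_rate = mean(s["errors"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")       # 80%
print(f"Time on task (median): {median(times)} s")     # 131.0 s
print(f"Errors per attempt (mean): {error_rate:.1f}")  # 1.6
```

With a formative sample of five, numbers like these are better treated as a sanity check alongside your qualitative observations than as evidence on their own.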

 

Trap 10: Testing in a waterfall fashion

If you often find yourself waiting around for the next testing need while other designers polish up the final UI, you might be stuck in a testing waterfall. Instead, think of testing as “rounds” of research rather than doing all of it upfront. You can do quick user testing every sprint, evaluating initiatives you are working on or those areas of your product you would like to improve.

 

Trap 11: Not reflecting on the previous user sessions you moderated

Reflecting on the user sessions you’ve conducted is a goldmine for personal growth as a researcher. I greatly improved my moderation skills by listening to recordings of my past sessions. That’s where I could spot that I talked too much or over-explained my questions in ways I should have avoided.

 

Usability testing gets talked about a lot, yet we keep stumbling on the same mistakes. But if you recruit the right participants for your product, set tasks that mirror real-life actions, and focus on capturing behaviour instead of running a conversation, you can’t go very wrong in the beginning. Keep reflecting on the process, and it’ll become more intentional and effective over time.

If you want a stress-free setup for your next usability test, take a look at this Usability Test Planner I’ve created. It covers everything from the setup phase to analysis, making it easy for you to get started.

 