How to run better mockup and prototype usability tests | Heart Internet Blog

No matter how skilled or experienced you are, designing a fully functional website takes time and effort. The more iterations you go through, the more time and effort you have to expend, and if designing websites is how you make your living, then every additional iteration can cut into your profit margin.

That’s why designers test – to make sure they know what potential users will want from a site before work starts on the final design. But testing isn’t magic – the results you get are only as good as the tests you run, and sloppy testing can be as bad as no testing at all.

In this post, we’re going to look at ways to make sure you run the best possible mockup and interactive prototype tests. We’ll do this by highlighting some common testing mistakes that could be skewing your results. By refining your testing process, you’ll be able to work on your final site designs with more confidence than ever before.

Mockup testing pitfalls

Despite their obvious limitations, mockups provide a crucial stepping stone between IA testing and interactive prototypes. Mockups allow you to confirm the results of your earlier testing without a huge investment of time and resources.

But all those advantages disappear if you get your tests wrong. Here are the pitfalls to avoid.

Ignoring your IA test results

Sometimes, it can be tempting to ignore previous test results and view mockups as a chance to start from scratch. Perhaps your instincts as a designer are telling you the IA results you got can’t possibly be right, or maybe there’s pressure from the client to follow their personal preference over what the data is telling you. Whatever the reason, you need to do your best to resist this temptation.

If you’re concerned about the validity of your IA tests, then instead of pushing on through the testing process, run more IA tests.

If there’s pressure from your client to ignore earlier test results, then consider testing mockups based on their preferences, as well as the ones you’ve created based on your IA test results. That way, you should be able to show them that what they want to do will prove unpopular with the site’s users.

Remember – there should be a clear, unbroken line running from your IA test results to your final design.

Unfocused testing

Do you need to test whether users can find the “About Us” page from the homepage? Or whether they know where to click to leave a comment on a blog post?

The chances are, the answer to both of these questions is no. And if you are running tests like this, you’re probably wasting both time and money.

Good testing is always aligned with the overall purpose of the specific webpage being tested and/or the overall purpose of the site as a whole.

Remember – your real aim in all of this is to create process flows that people find easy to use, so make sure the bulk of your testing is focused on that.

Using an inappropriate test

With five main test types available to you (first click, five-second, navigation, preference and question) it can be easy to slip into the trap of picking the wrong one for the job.

One of the biggest temptations is to use question testing as your go-to option. But although question testing is an excellent way to get an overview of how people feel about your designs, if you try to use it to replace, say, first click testing, then you’ll quickly discover its limitations.

Preference testing is another option that is sometimes misused – it’s really good for things like comparing logos, or gathering general preferences on two potential designs.

But if you try to use it to gather data on process flows, then you’re not going to get reliable results – what people say they prefer in theory can be very different from what they actually do in practice.

Hoping to gain a lot of information quickly through using question and preference testing in this manner may feel like a shortcut, but you won’t end up with reliable data.

First click and navigation testing are the key to developing proper process flows. Don’t try to cut them out.

The wrong number of participants

The more people participate in a test, the better your results will be, right? Well, sort of. Research by the Nielsen Norman Group shows that each additional participant uncovers fewer new usability problems than the last – and the returns diminish so quickly that it’s rarely worth using more than five participants.

Why might you want to use more participants? Well, sometimes you might be designing for multiple target audiences, in which case you’d want to test with two different types of people. Or you might have a client who wants extra participants because it helps them to convince their superiors that your results are accurate.

But really, in most cases, there’s no need to use more than five people.
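The diminishing returns behind the five-participant rule come from Nielsen and Landauer’s problem-discovery model, which isn’t spelled out in this post but is easy to sketch: the share of usability problems found by n participants is 1 − (1 − L)^n, where L is the chance that a single participant hits a given problem (roughly 31% in Nielsen’s data). A minimal illustration, assuming that model:

```python
def problems_found(n: int, l: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n participants,
    per the Nielsen/Landauer discovery model: 1 - (1 - L)^n."""
    return 1 - (1 - l) ** n

# Returns flatten out quickly: five people already find most problems
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems found")
```

With L = 0.31, five participants already surface around 85% of problems, while tripling the budget to fifteen only nudges the figure towards 100% – which is why extra participants are usually better spent on a second round of testing.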


Interactive testing pitfalls

Interactive testing is your last chance to make sure things work as they should before you commit to building the final site.

This stage of testing can become both complex and expensive. That means mistakes are even more costly.

Here are things to avoid.

Failing to record your tests

Prototype testing is complex, but that means it offers you the chance to gather a huge amount of information.

If you’re not taking that opportunity, then you’re likely missing out on some very useful data.

Videoing your subjects as they interact with your prototypes will allow you to see exactly what they do, and you’ll also be able to record their responses when they explain why they did something.

Videoing should be a given for remote testing and usability lab situations.

However, if a tight budget has pushed you towards guerrilla testing, you still need to make sure you’re videoing your subjects. (With permission, of course.)

Using the wrong facilitation method

It’s possible to use the wrong facilitation method in all types of usability testing, but doing so is likely to have more of an effect on your prototype testing results.

Why? Well, both IA and mockup testing are essentially artificial, whereas with prototype testing, you’re testing how people will interact with the site once it’s finished and live.

Picking the correct facilitation method is crucial when you’re looking at things like how long it takes users to complete a certain activity.

If you’re using the concurrent think aloud method and/or concurrent probing in a time to completion test, then participants will be distracted from their task and take longer to complete it, skewing your results.

Retrospective think aloud testing might mean longer sessions and hence cost you more money, but in this case it will give you much better results.

The wrong number of participants

As with mockup testing, you can probably get away with just five participants for your interactive tests – especially if you’re only interested in identifying usability issues that are measured qualitatively.

However, assuming you’ll also have more quantitative goals at this point, you’ll probably want to aim for 20+ users for the purposes of statistical confidence.

As always though, the number of people you use may depend on your budget.
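To see why quantitative goals push the participant count towards 20+, it helps to look at the confidence interval around a measured task success rate. The post doesn’t prescribe a method, but a common choice for small usability samples is the adjusted Wald interval (the `adjusted_wald` function and the example figures below are illustrative, not from the article):

```python
import math

def adjusted_wald(successes: int, n: int, z: float = 1.96):
    """95% adjusted-Wald confidence interval for a task success rate.
    Adds z^2/2 pseudo-successes and z^2 pseudo-trials before computing
    a standard Wald interval, which behaves better at small n."""
    p = (successes + z ** 2 / 2) / (n + z ** 2)
    margin = z * math.sqrt(p * (1 - p) / (n + z ** 2))
    return max(0.0, p - margin), min(1.0, p + margin)

# Same 80% observed success rate, very different precision:
print(adjusted_wald(4, 5))    # 5 participants: a very wide interval
print(adjusted_wald(16, 20))  # 20 participants: noticeably tighter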

Introducing bias into your tests

(This one also applies to mockup testing.) If you’re asking your participants questions, then there’s a chance you might inadvertently influence your results. And if that happens, your results probably won’t be much use.

One of the biggest issues you face here is the fact your test subjects will (either consciously or unconsciously) want to please you by giving you the “right answer”.

Of course, as far as you’re concerned, the “right answer” is a summation of the participant’s true experiences of the test. But the chances are your participants won’t know this.

That means if you ask questions like “Did you find the site easy to navigate?” then people are likely to say “yes”, even if they were very confused by the whole thing. They’re not trying to mislead you – they just think yes is the answer you want to hear.

Avoid this by using open questions such as “What was your experience of navigating the site?”. That way, you’re more likely to get an answer which accurately reflects the respondent’s real feelings.

Summing up

“Rubbish in, rubbish out” applies to a lot of things in life, including usability testing. Although testing and learning from your mistakes is to be encouraged, if you don’t know you are making mistakes then your tests will never improve.

Make sure you stay up to date on both the theoretical and practical side of testing – that way you’ll have the greatest possible confidence that you’re getting things right.
