How to Conduct a Usability Study: Quickstart Guide to Remote Usability Testing


We all know that keeping your users at the core and testing consistently are crucial for any online product or service. However, that’s usually easier said than done. What happens when your customers are thousands of miles away? Do you have the right tools for the job?

You’ll come across tons of tools, articles and techniques for proper user research and testing on the internet. And you’ll quickly find out that you need a team of dedicated researchers to be able to do it all. Obviously this brings up the question of resources. Only a few companies will be able to spontaneously create a research team out of thin air.

We are not one of them, either.

Nevertheless, any conversation you can have with a current or potential customer, any insight into how they benefit from your product, is worth more than almost anything else you can do to move your business forward.

Remote usability testing is one way to do just that. However good or bad, any kind of user testing is far better than no testing, no conversation at all.

Let me tell you how we roll at JotForm.


Planning

The first thing you should do is decide how much time and money you are willing to spend. Exactly how much of each you’ll need depends on a few parameters, explained below.

It’s pretty much unanimously accepted that 3 to 5 participants is the ideal number for a usability study. Test with fewer and you won’t see enough variance; test with more and the same old issues keep resurfacing, drowning out any new ones.

It comes down to your budget, your schedule and how quickly you iterate.

We reached an equilibrium: running a single study each week with 3 participants. Lowering the density and increasing the frequency of the studies works for us because the user paths we’re currently testing are close to each other.

This costs us $150 and 3 to 4 hours per week. The $150 goes to a 3rd party testing service at $50 per test. About 2 hours are spent watching and discussing the results, and another 1–2 hours are spent ordering tests and reporting.

Note that we have an absurdly fast development cycle, which may not be the case at other companies. You can order a single study with 3 participants each month and still gain substantial benefit.

Like many other things in life, the key here is to find what works for you and do it in a continuous cycle.

Finding Testers

Now that there’s a basic plan of action, you need to find testers; you have two options. You can recruit them yourself or you can use an external panel of testers from a 3rd party service.

You should start by deciding on the demographic you’ll be testing. Bring out your persona sheets and make sure that you’re recruiting the right people. An 18-year-old tech-savvy student will have a much different experience than a 40-year-old teacher. Establishing target demographics will also allow you to prioritize what’s most important in your Information Architecture.

If you would like to recruit them yourself, you can use JotForm or a dedicated tool like ethn.io to create a screener survey, find candidates right on your website, and manage a group of testers manually.
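Once the screener responses come in, picking out qualified candidates is mostly a filtering job. Here’s a minimal sketch in Python, assuming you export the responses as a CSV; the column names and criteria are made up for illustration and should match your own screener:

```python
import csv

# Illustrative screener filter: the columns and criteria below are
# hypothetical; adjust them to match your own screener survey.
def qualifies(response):
    return (
        response["occupation"].strip().lower() == "teacher"
        and 30 <= int(response["age"]) <= 50
        and response["has_website"].strip().lower() == "yes"
    )

with open("screener_responses.csv", newline="") as f:
    candidates = [row for row in csv.DictReader(f) if qualifies(row)]

for person in candidates:
    # Follow up with these people to schedule sessions.
    print(person["email"])
```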

Alternatively, you can work with 3rd party usability testing tools, which usually maintain their own panel of testers.

While recruiting your own testers can be free, it costs a lot of time. You would need to correspond with a lot of people and schedule testing sessions with your users.

They will need to take precious time out of their daily routines to help you out, so offering some sort of compensation is a good way to persuade them, especially when your user numbers are small.

A $5 Amazon gift card, a free month of your service, or priority support should suffice. The key here is that the compensation should not be large enough to become the main point of interest in the study.

On the other hand, nearly all online testing tools let you select a target demographic, and testers from their panel apply to take part in your test, saving you a lot of time in exchange for a fee.

Preparation

Nobody goes into testing blind, and you shouldn’t either. But how exactly should you plan your usability tests?

Establish a Scenario

Usability testing, in simplest terms, is observing people while they interact with your product. The problem is that when you set out to do a testing session, your users won’t necessarily have any motivation to interact with your product.

This is why you should start by creating a scenario, preferably containing an ultimate goal, such as “Add a contact form to your website”. Here’s a sample scenario that we used recently:

Imagine that you’re building a new portfolio site for yourself. You have coded up everything, and the only thing that remains is a contact form. You can do it yourself, but that’s going to take some time, so you’re looking for alternatives. That’s when you stumble upon JotForm.

Next you need to figure out which tasks your testers should complete and give them a goal to accomplish.

Creating Tasks

Testers from 3rd party panels will usually know the drill, but if you’re recruiting your own testers, encourage them to think out loud and speak up. There is usually a big difference between what you see on the screen and what the user is trying to accomplish.

When creating tasks for a test, there are a few things that we pay attention to. Here’s the initial draft of a task we want the testers to complete:

Click on the “Integrations” button and integrate your form with Google Spreadsheets.

First, we describe what they should be trying to achieve rather than how they can go about doing it. So instead of what’s written above, we say:

Integrate your form with Google Spreadsheets so when your form is filled, the response is automatically added into a spreadsheet.

Second, we try not to give the user any hints upfront. Once a user sees a word in your task text, they will immediately look for that word in your interface. So “integrate” has to go:

Connect your form to Google Spreadsheets so when your form is filled, the response is automatically added into a spreadsheet.

And lastly, we try to avoid big words and keep the copy as simple as possible in order to avoid confusion:

Connect your form to Google Spreadsheets so when a new response arrives, it’s also added into your spreadsheet.
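The “no hints” rule is the easiest one to slip on, so it can be worth double-checking your task copy against the labels in your interface. Here’s an illustrative sketch; the label list is hypothetical and should come from your own UI copy:

```python
# Illustrative check: flag task copy that reuses words from your interface.
# The label list is hypothetical; pull the real one from your own UI.
UI_LABELS = {"integrations", "integrate", "settings", "publish", "widgets"}

def hint_words(task_text):
    # Normalize words by stripping punctuation and lowercasing,
    # then intersect with the set of interface labels.
    words = {w.strip(".,!?\"'").lower() for w in task_text.split()}
    return sorted(words & UI_LABELS)

task = ("Connect your form to Google Spreadsheets so when a new "
        "response arrives, it's also added into your spreadsheet.")
print(hint_words(task) or "No hints found")  # -> No hints found
```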

Testing Tools

Now that we’ve got our scenario and some tasks laid out, it’s time for the actual testing.

Unmoderated Testing

Nielsen Norman Group has an excellent article on what tools are available for unmoderated usability testing, as well as tips for choosing the right one.

There are a variety of tools in this domain, including some that let you run your own tests, and we have tried our fair share of them since we run mostly unmoderated tests.

WhatUsersDo is one such service provider and we’ve had nothing but good experiences with them.

We had good experiences with User Testing as well, until the recent price hike. Their testing tool is definitely the best on the market in terms of how easy it is for testers to use. This matters a lot when you’re testing with your own users.

One important lesson we learned is that the brevity and quality of your instructions matter a lot in unmoderated testing, since there’s no one to guide testers if they get stuck.

Moderated Testing

While moderated tests yield richer insights, they are usually a lot more expensive than unmoderated ones.

Some tools provide moderation of usability testing sessions as a paid service, though it tends to be pricey.

Alternatively, you can do it yourself by scheduling a conference call over join.me or Skype and recording the call with QuickTime. The downside of moderating the tests yourself is that each test takes anywhere from 30 minutes to an hour to schedule and record.

The benefit of doing moderated testing is that you can immediately respond to user behavior and ask them questions about what they were thinking or trying to achieve in order to better understand their reasoning.

However, it’s not always sustainable in the long run, which is why we rarely do moderated tests. Admittedly, it wouldn’t hurt if we did them more often.

Analyzing Results

As software developers and designers, we tend to live in a bubble. We make decisions and assumptions that affect the everyday lives of our customers, yet we rarely check back on whether we’re doing the right thing.

We believe that developers and designers should actively seek out and observe the results of their actions, so we watch the results of our testing efforts together every Friday (a.k.a. Demo Fridays). Everyone gets to see the big picture, and the findings often spark some heated discussions in the team demos that follow.

Another lesson we learned was not to jump to conclusions. As stated earlier, there is often a big difference between what you see on the screen and what a user is trying to do. You may sometimes think that your testers are acting erratically, while their behavior is completely logical from their perspective.

Reporting

When all is said and done, any findings you have tend to be forgotten and disappear. Creating weekly reports and sharing them with everyone bolsters the importance of our findings inside the office, and it creates an easy-to-access reference point.

At JotForm we write our findings in a Google Document and create tasks for actionable items on our Asana board every week. We love that we are always learning something new about our customers and how they think. It keeps us lean. *ba dum tss*
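We create these tasks by hand, but if you’d rather script the handoff, Asana’s REST API makes it a one-liner per finding. A minimal sketch in Python; the token, project ID, and findings below are placeholders:

```python
import requests

ASANA_TOKEN = "your-personal-access-token"  # placeholder
PROJECT_GID = "1234567890"                  # placeholder project ID

# Hypothetical findings from this week's sessions.
findings = [
    "Testers missed the Google Spreadsheets option on the Integrations page",
    "Two of three testers hesitated at the publish step",
]

for finding in findings:
    # Create one Asana task per actionable finding.
    resp = requests.post(
        "https://app.asana.com/api/1.0/tasks",
        headers={"Authorization": f"Bearer {ASANA_TOKEN}"},
        json={"data": {"name": finding, "projects": [PROJECT_GID]}},
    )
    resp.raise_for_status()
```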

Epilogue

In many ways user testing is a magic wand that will increase conversion rates, sales, or whatever metric you’re tracking. It’s your customers telling you what sucks and what doesn’t. It’s a given.

Here’s the catch: if you wait too long and make too many decisions without testing them, it might be too expensive to pivot when (not if) your users discover a blatant usability issue that is costing you customers left and right.

Consider this cautionary figurative fairy tale: If you’re late past midnight, then you’re stuck standing there with a pumpkin, a bunch of vermin and a missing shoe. Good times, eh?

Start testing as soon as possible. Don’t wait, even if your setup isn’t perfect.

Be brave enough to be wrong sooner rather than being stuck later.


Author: Ege Görgülü

Ege Görgülü is the Director of Product Design at JotForm. He is a multi-disciplinary designer with a passion for usability and all things SaaS.