When’s the Best Time to User Test? A Case Study

On a recent project, we had enough time to do a small user test, which got me thinking about the optimal time to get feedback. We discussed two options: at the visual design stage with a clickable prototype, or with a rudimentary minimum viable product.

Our client already had users who knew their existing product, but they were looking to expand. Some of their potential users for the new product had years of experience with the company’s branding, while others did not.

The client had done some initial, but informal, surveying of potential users to help shape the foundation of the product. The resulting user interface relied heavily on numerous filters and a few key elements to control the data in the application.

Given the limited time our deadlines allowed for testing the product design, we debated whether a clickable prototype with visual design or a simplified implementation of the app would give us more valuable feedback.

Tradeoffs

When choosing between user testing with a clickable prototype and a scaled-down application, there seemed to be a tradeoff on each side. On one hand, the clickable prototype would give users the cohesive look and feel they expect, hopefully letting them focus their attention on the features and value we were trying to create. On the other hand, it would allow only mouse clicks along a linear path through the application to simulate the feel of a working product.

On the opposite end, the MVP version of the application would let users freely explore and complete the main value proposition, albeit through sub-optimal controls and without branding or polish.

Our Approach

I voted for the clickable prototype because it would allow us to establish a feedback loop earlier, test a linear path through the product, and reduce the risk of feature changes and developer re-work. We didn't head in that direction, but the discussion led us to break up user stories in a way I hadn't tried before.

We divided the user interface element stories into two categories. The first set implemented basic, lightly styled HTML elements. The second set focused on adapting those HTML elements into the custom elements already defined in the client's style guide.
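As a rough illustration of that split, the sketch below shows a plain native control shipped in the first story and a style-guide wrapper added in the second. The element names, the regions attribute, and the sg-select class are all hypothetical stand-ins; the project's actual elements and style-guide conventions were its own.

```typescript
// Story 1 (hypothetical sketch): a plain, lightly styled native control that is
// enough to put the filtering flow in front of users.
function renderRegionFilter(regions: string[]): HTMLSelectElement {
  const select = document.createElement("select");
  select.setAttribute("aria-label", "Region filter");
  for (const region of regions) {
    const option = document.createElement("option");
    option.value = region;
    option.textContent = region;
    select.appendChild(option);
  }
  return select;
}

// Story 2 (hypothetical sketch): wrap the same behavior in a branded custom
// element so the style guide's markup and classes can be layered on later.
class BrandedRegionFilter extends HTMLElement {
  connectedCallback() {
    const regions = (this.getAttribute("regions") ?? "").split(",");
    const select = renderRegionFilter(regions);
    select.classList.add("sg-select"); // hypothetical style-guide class
    this.appendChild(select);
  }
}

customElements.define("branded-region-filter", BrandedRegionFilter);
```

Because the branded element reuses the same rendering function, whatever we learn from testing the plain control carries over when the styled version lands.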

Splitting the work into multiple stories allowed us to defer a significant cost and thus improve our chances of meeting our primary objective. It also let us work on stories in parallel, which reduced risk when blockers arose.

This approach was likely an over-optimization. However, it did help reduce risk for the project, and the team found it useful during development. All in all, it was a win-win.