Checking whether the product influences users’ decisions

As part of the product team, I helped demonstrate the product’s value by learning about users’ motivations and behaviour after they used the Triage symptom checker.

Background

Triage is a white-label symptom checker that, based on a quick interview and symptom assessment, suggests possible conditions to consider and next steps to take. A version of this product is available to the general audience as a free app called Symptomate.

One of the promises of Triage is to influence users’ behaviour, guiding them to the most appropriate and efficient care. To learn whether the product fulfils this promise, the Triage team designed a survey that asked users what they planned to do about their symptoms before and after using the tool. I joined the team as a product designer at an exciting moment: making sense of the first rounds of data and mapping out assumptions.

Responsibilities

  • user research and testing

  • user recruitment

  • user interviews

  • data analysis

Results

  • UI design

  • user recruitment process

  • research summary presentation

  • increase in users following recommendations from 72% to 77%

Challenges

Measuring value is crucial not only to sell the product with confidence but also to support decision-making and keep the product on the right track. Thanks to the newly designed metric, we could identify areas of growth and work on them as a team.
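As a rough illustration, here is a minimal sketch of how such a metric could be computed from the survey answers. The field names and values below are assumptions made for the example, not the actual production schema.

    # Hypothetical sketch: share of respondents whose post-check plan
    # matches the triage recommendation (field names are assumed).
    from dataclasses import dataclass

    @dataclass
    class SurveyResponse:
        recommendation: str  # e.g. "self_care", "doctor", "emergency"
        plan_after: str      # what the user declared they would do

    def follow_rate(responses: list[SurveyResponse]) -> float:
        """Percentage of respondents whose plan matches the recommendation."""
        if not responses:
            return 0.0
        followed = sum(r.plan_after == r.recommendation for r in responses)
        return 100 * followed / len(responses)

    sample = [
        SurveyResponse("doctor", "doctor"),
        SurveyResponse("self_care", "doctor"),
        SurveyResponse("emergency", "emergency"),
    ]
    print(f"{follow_rate(sample):.0f}% followed the recommendation")  # 67%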

How might we increase the percentage of people following Triage’s recommendations?

Based on desk and user research, we mapped our assumptions on an Opportunity Solution Tree and set priorities. Hypotheses around trust were our first concern. We ran numerous unmoderated tests on the target market to challenge our assumptions as quickly as possible and ship the most impactful ideas to production.

Opportunity solution tree

We closely observed the metric to check how the changes affected the product. Unfortunately, after some promising changes the metric plateaued and nothing we tried moved it. What do you do when you feel lost and the possible directions dwindle? Get inspired by the users, of course! :)

Process

We realised that to move forward we needed information about the motivation behind our users’ choices, which we simply couldn’t get from quantitative data. I proposed scheduling user interviews and adding a comment field to the survey to learn how users understood the available options, whether they had any trouble picking between them, and why they chose one over another.

Comment analysis

I categorised a full month of comments and then looked for recurring themes and the most popular categories. I kept in mind that the sample was not big, but the learnings could direct further investigation.
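For a sense of the tallying step, here is a minimal sketch, assuming each comment gets hand-tagged with a single category; the comments and category labels are invented for illustration.

    from collections import Counter

    # Hypothetical sketch: comments hand-tagged with one category each
    # (the labels below are assumed, not the actual coding scheme).
    tagged_comments = [
        ("I didn't know which option fit my situation", "unclear options"),
        ("I'd rather hear it from a real doctor", "trust"),
        ("I had already booked a visit before the check", "existing plan"),
        ("None of the answers matched what I wanted to do", "unclear options"),
    ]

    counts = Counter(category for _, category in tagged_comments)
    for category, count in counts.most_common():
        print(f"{category}: {count}")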

Learnings

The comment analysis taught me about the themes and trends in the answers, and how those trends changed for specific user groups (e.g. by result, age, or sex). I also identified the top insights that guided me later in the process.

Based on the analysis and the top insights, I knew which scenarios were hard for users to match to the available answers. I prepared a scenario-matching exercise on usertesting.com and used it to test different sets of new options, to see which were the easiest to understand and match.

User interviews

In parallel with collecting and analysing comments, I took care of user interviews. There was no process for recruiting real users of Triage or Symptomate, so I was eager to run such an initiative for the first time in the company. I designed the recruitment process, the invitation card, and all the copy templates.

The invitation was displayed on the results page, the final page of the symptom checker interview. It directed users to a screening form where they left their contact information; I then reviewed the applications and emailed those who fit the criteria. We hurried to release it, but the results were anticlimactic. Something was off: the response rate was far too low for us to get answers in time.

After looking into potential problems, I changed the flow of the survey to maximise automation and get users to the Calendly booking page as quickly as possible. To do this, I turned the screener into a branched survey with conditional questions, so that only the eligible group reached the Calendly link on the last screen.
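A minimal sketch of what that branching boils down to, assuming made-up screening questions and eligibility criteria (the real screener used different ones):

    # Hypothetical sketch of the screener branching: only eligible
    # respondents ever reach the final screen with the Calendly link.
    CALENDLY_URL = "https://calendly.com/..."  # placeholder

    def next_screen(answers: dict) -> str:
        # Assumed example criteria, not the actual ones.
        if not answers.get("used_symptom_checker"):
            return "thank_you"  # ineligible: branch out early
        if not answers.get("adult"):
            return "thank_you"
        return "booking"        # final screen showing CALENDLY_URL

    print(next_screen({"used_symptom_checker": True, "adult": True}))  # booking
    print(next_screen({"used_symptom_checker": False}))                # thank_you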

I also changed the rules for showing the invitation to include a broader, less specific group. Additionally, after some hiccups, I kept improving the copy of the email templates after each response to minimise misunderstandings.

The effects were almost immediate, and we saw a significant spike in scheduled interviews. That was not the end of the problems, though: after many no-shows, I succeeded in talking with four users. Despite the small number, it was really exciting to finally talk to users and learn how and why they use the tool.

Learnings

On top of learning about users’ general pains, needs, and opportunities for improvement, I also identified the top insights from that part of the research.

Results

This was an intense learning experience, thanks to which we managed to increase the percentage of users following the recommendations from 72% in May to 77% by the end of June.

Additionally, because the recruitment process was designed in detail and tested, other product teams soon used it to recruit users and test out their own ideas.

The road to our goal was bumpy, so at the end I shared our learnings and missteps transparently in a presentation, so that other teams could avoid the same mistakes.