My 4-step process for analyzing user research

Kira Street · Published in Bootcamp · 8 min read · Aug 15, 2022

Three people using sticky notes to annotate a user journey

Starting the research analysis

Imagine this. You’ve just finished conducting your user research, whether it’s a survey, a series of interviews, or a usability test on a new design. You’ve got a ton of data to sift through and sort, and maybe you’re not sure where to start.

Do you create charts? Start copying graphs out of whatever interview software you were using? Maybe you rewatch the interviews and fix transcripts. Whatever you choose first, starting the research analysis can be daunting.

In the research work I’ve done so far, I’ve developed a 4-step process for myself to make the analysis more streamlined. Of course, before I get to these steps, I’ll have already formed a research plan and conducted the research. Having a well-defined plan and success metrics helps the analysis go smoothly so I’m not left guessing how to analyze the data. Perhaps I’ll write a post later on how I plan UX Research, but for now, assume that we have a well-defined research plan with data that matches the questions we’re trying to answer.

In a nutshell, the 4 steps are:

  1. Centralize
  2. Classify
  3. Find patterns
  4. Make recommendations

Just so we’re not talking about these steps in the abstract, I’ll walk through my analysis process using a recent research project I did for implementing in-product feedback for my in-house work at Guild Education.

Context of the problem

At the time, our product didn’t have an easy way for users to give feedback in the product. Past projects had relied on embedded Google surveys or secondhand feedback from education coaches who interacted with our users. But our product team wanted to implement a way to get feedback directly from users so we could address their needs, quickly catch issues, and track feedback trends so we could design a better product for them.

The constraints: it needed to be a global solution, modular enough to attach to any page with minimal development effort, and visible enough that users could find it easily.

As I was designing the solution, I came up with three options for where to put a link to give feedback:

  1. In the footer (Blue version)
  2. In a module at the bottom of the page (Red version)
  3. In a sidebar tab (Yellow version)
Three design options for feedback link placement (left to right: Blue, Red, Yellow)

I then conducted an unmoderated, remote click test via UserZoom. My goal was to understand where users would expect to find the link for feedback, and I was looking for the best-performing option, thinking that option would be the most visible. I also asked other questions related to their expectations, preferences, and ease of use.

Among the data I received were click heatmaps (with percentage success and failure), answers to multiple choice questions, ease of use ratings, and answers to open-ended questions.

With that context out of the way, let’s see how I went about analyzing the data.

1. Centralize

For me, it’s immensely helpful to have all the data in one place and in the same/similar format. It’s annoying to have to flip between multiple windows or tabs to read through and understand the data.

For example, seeing the quantitative data in UserZoom is quite simple. I can usually glance at the graphs and understand them. But I still have to click through to each question. Jumping between questions makes it hard for me to find patterns since my context is constantly switching.

Enter Google Sheets. I love how UserZoom (I promise this is not sponsored; I just liked using it) lets me copy and paste the data into a spreadsheet so I can see everything at a glance. For this research, I did exactly that, copying all the data into Google Sheets to cross-tabulate each question, whether multiple choice or open-ended. I organized the sheet so each user had their own column and each question its own row, which made it much easier to scan the answers.
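
If you'd rather script this step than paste by hand, here's a minimal sketch of the same cross-tabulation in Python with pandas. The column names and data are my own placeholders, not UserZoom's actual export schema:

```python
import pandas as pd

# Hypothetical long-format export: one row per (participant, question).
# Column names and values are placeholders, not UserZoom's real format.
raw = pd.DataFrame({
    "participant": ["P1", "P2", "P1", "P2"],
    "question": [
        "Where would you expect to give feedback?",
        "Where would you expect to give feedback?",
        "Rate the ease of use (1-7)",
        "Rate the ease of use (1-7)",
    ],
    "answer": ["Near the footer", "Bottom of the page", "6", "5"],
})

# Pivot so each question is a row and each user is a column --
# the same at-a-glance layout described above.
grid = raw.pivot(index="question", columns="participant", values="answer")
print(grid)
```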

Granted, this can be tedious and time-consuming (and I’d love to hear how others centralize their data), but I find it valuable. The main reasons why I take the time to centralize the data are:

  • I can comb through and familiarize myself with the data, making it easier to discuss with others
  • It’s easier to manipulate, sort, and analyze the data
  • It creates one link or place to share data with other stakeholders
  • I can easily add notes on key insights

I’ve also used Dovetail as a centralizing platform for other work. I especially love how it transcribes interviews and gives the ability to tag quotes. Whichever tool you use, find one that allows you to see all the raw data in one place before you start the formal analysis.

2. Classify

Once all the data is in one place, I start classifying it into groups. Are there certain themes that come up in user interviews? Color-code or tag your notes or the transcript to find those themes more easily. Do you have a sheet of how often different users sorted cards in certain categories? Use conditional formatting in Google Sheets to show a color scale from 0–100% that you can sort by category.
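
To make that card-sort example concrete, here's a rough Python sketch of building the 0–100% matrix (the participants, cards, and categories are invented; in Sheets, the colors would come from conditional formatting):

```python
import pandas as pd

# Invented card-sort results: one row per (participant, card) placement.
sorts = pd.DataFrame({
    "participant": ["P1", "P2", "P3", "P1", "P2", "P3"],
    "card": ["Billing", "Billing", "Billing",
             "Profile", "Profile", "Profile"],
    "category": ["Account", "Account", "Settings",
                 "Account", "Settings", "Settings"],
})

# Percent of participants who sorted each card into each category (0-100).
pct = (pd.crosstab(sorts["card"], sorts["category"],
                   normalize="index") * 100).round()
print(pct)

# In a notebook, pandas can mimic Sheets' color scale:
# pct.style.background_gradient(cmap="Greens", axis=None)
```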

Affinity maps are your friend here too. If you used Miro or FigJam or another whiteboarding tool to take notes on interviews, use digital sticky notes to capture key themes and arrange them by subject.

Here are a few examples of how I’ve classified data:

  • Color-coding subjects on open-ended questions
  • Marking successful tests green and failures red
  • Marking interesting or noteworthy insights blue
  • Using conditional formatting to code a number scale
  • Creating graphs for certain demographics (e.g. age ranges)

For this research project, I ended up color-coding the answers to the open-ended question, “Thinking of typical websites or desktop apps that you use, where would you expect to give feedback or submit an idea for the product?” to see what users’ preferences were. This helped me see that about 50% expected a feedback link to be near the bottom of the page, whether on its own or near a “Contact Us” link.
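
Here's a toy version of that color-coding step in Python, using keyword matching as a stand-in for manual tags. The answers and keywords are made up for illustration:

```python
import pandas as pd

# Made-up open-ended answers about where users expect a feedback link.
answers = pd.Series([
    "Probably in the footer with the other links",
    "At the bottom of the page, near Contact Us",
    "In a sidebar or help menu",
    "Somewhere near the bottom, I think",
])

def tag(answer: str) -> str:
    """Crude stand-in for manual color-coding: tag each answer by keyword."""
    text = answer.lower()
    if "footer" in text or "bottom" in text:
        return "bottom of page"
    if "sidebar" in text:
        return "sidebar"
    return "other"

# Share of answers per theme -- e.g. the ~50% "bottom of page" finding.
print(answers.map(tag).value_counts(normalize=True))
```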

3. Find patterns

Next, my goal is to see how the different pieces of data relate to each other. I first go back to my original research goals and questions. If I did my research plan right, I should have the data I need to start looking for patterns.

It’s also key to define ahead of time what the success metrics are. Are you looking for a majority agreement? Or simply the option that performs best, even if it falls short of 50%?

Based on the data collected, compare and contrast the results. There are several analysis methods for qualitative and quantitative data that you can use in this part (here are a couple of resources from NNGroup and User Interviews; I also like browsing Universal Methods of Design for inspiration).

How you relate the data points depends on what you’ve collected, but here are some examples of questions to ask yourself when looking for patterns (I sketch the third one in code after the list):

  • Is there a correlation between how users rate ease of use and their demographics (e.g. age range, industry experience, etc.)?
  • How does user behavior compare with user-reported data?
  • How does the time it takes to complete the task relate to how users rated the difficulty of completing the task?
  • Does the way users describe the design relate to how friendly they feel it is?
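
As promised, here's one way to check that third question in Python, with invented numbers. Spearman correlation suits an ordinal rating scale:

```python
import pandas as pd

# Invented per-participant results: time on task plus a 1-7
# self-reported difficulty rating for the same task.
results = pd.DataFrame({
    "seconds_to_complete": [12, 45, 30, 80, 25],
    "difficulty_rating": [1, 4, 2, 6, 2],
})

# A high positive value means slower participants also rated
# the task as harder.
corr = results["seconds_to_complete"].corr(
    results["difficulty_rating"], method="spearman"
)
print(f"Spearman correlation: {corr:.2f}")
```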

For the in-product feedback research, I looked at my data on user behavior (where they clicked), user preference on the given options (through a multiple choice question), and user expectation (from an open-ended question).

Using those three data sources, I saw the pattern that when the feedback link was placed near the bottom of the page, more users were able to find it and click on it. That behavior corresponded with their preferred options and their expectations that such a link would be near the bottom of the page.
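
The numbers below are placeholders, but this is roughly the shape of that triangulation: one row per design option, one column per data source, so agreement (or disagreement) between sources is obvious at a glance:

```python
import pandas as pd

# Placeholder percentages, one row per design option:
# click-test success (behavior) vs. stated preference (multiple choice).
options = pd.DataFrame(
    {
        "click_success_pct": [62, 71, 38],  # from the click heatmaps
        "preferred_pct": [34, 48, 18],      # from the preference question
    },
    index=["Blue (footer)", "Red (bottom module)", "Yellow (sidebar tab)"],
)

# If the same option leads on both columns -- and matches the open-ended
# expectation that the link lives near the bottom -- the pattern holds.
print(options.sort_values("click_success_pct", ascending=False))
```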

Having multiple data sources to rely on made my argument stronger once I got into recommendations.

4. Make recommendations

Now to make these findings practical and actionable. This is the point where you consider how to communicate the research findings to stakeholders in a way they can understand and act on. A few questions I like to think through are:

  • How are the recommendations related to the data and patterns I found?
  • How much effort can I expect for the recommendation? Do I need to suggest options for a higher or lighter lift based on ideal UX, engineering level of effort, or business goals?
  • What other stakeholders would be affected by or need to be involved in this recommendation? (e.g. UX Writer, Engineering, Data scientist, etc.)

Depending on the research, I may have multiple recommendations for different parts of the design. Sometimes I’ll just have one. Either way, lay out clear next steps for how you’d implement or work with your team to implement the recommendations based on your research.

For my in-product feedback example, I only had one recommendation: to place the link near the bottom of the page in a module that could be re-used across the whole product. It was a recommendation that considered the user experience (from research, the options with the link at the bottom of the screen performed the best and were expected) and the technical limitations (the solution needed to be modular and simple so it could be implemented by any engineering team across the product).

My next steps were to clean up the design for the module and create specs for it that could be implemented on any page long after I was done with the project.

Design specs for the feedback module (top to bottom: full-width, partial-width, and mobile)
Examples of the module in the product (left to right: Student Home and Applications)

Conclusion

What I’ve shown here is just one way to implement my 4-step process. The specific actions inside each step may change depending on the type of research you’re doing, the data you have, and the software/service you’re using to collect the data.

However, I find it to be a clear, general way to think through analyzing any user research.

Tell me, how do you analyze your user research? Do any of these steps resonate? Do you do something different? Maybe you have ideas on how to improve this process. I’d love to hear your thoughts so we can learn together on how to analyze user research.


I am a freelance product designer and a maker, with a passion for education and mental wellness.