Qualitative research that allows firms to understand unmet and emerging needs is now the bottleneck for the specification, development, and delivery of significant new products. This is the result of substantial investment over the last two decades in tools for software development, quantitative evaluative research, marketing, and sales. It has fundamental implications for how product development, sales, and support operations need to be organized and how they need to collaborate.
How To Scale Up Qualitative Research Efforts
Q: I’m working in a large organization (around 15,000 employees) that has been committed to design thinking and lean startup for the last two years. I am a senior user researcher specializing in qualitative methodology. Everyone in the organization is now starting to have customer interactions, but they have no methodological competence in either qualitative data collection or analysis.
The good thing is that we are getting an empathetic mindset within our culture.
The negative side is that we are basing our decisions on unreliable data and analysis, i.e. not solving the right problem. The situation is also affecting our professional studies. We are having trouble recruiting customers (because everyone is talking to them). We are also having trouble getting professional insights heard, because now everybody thinks they know what the customer needs since they have talked to customers themselves (without any self-reflection on how reliable those insights are).
Could you share your experiences and also perhaps offer some solutions?
Sean Murphy at SKMurphy, Inc.
A: I think it definitely represents progress to have many employees interested in learning how customers are using your products and what their future needs are. Your original set of questions kicked off an extended discussion among Mary Sorber of Practical Insights, Jeff Allison, and me at SKMurphy. We have been exploring and refining some approaches to qualitative research and product roadmap definition. Here is a short summary of our discussion from three perspectives, starting with mine.
I would look at this as an opportunity to provide direction and structure to a lot of currently ad hoc activities that could yield a number of insights once they are better organized and subject to a more systematic data collection and review process. One approach to consider is offering internal brown bag lunches, workshops, seminars, or training on how to capture insights from customer conversations based on your group’s current set of tools and practices.
Mary Sorber at Practical Insights
I have been in a similar situation and can share both what I did and what I would do if faced with the same situation today.
I worked for a large company doing qualitative UX research on a product team. The executives brought in a CX consultant, who undertook an effort to have each member of the product team visit customers. The single goal was “increasing empathy”. She did some light training on how to listen, but nothing extensive and nothing on how to record, analyze or extract insights. From a research perspective, it was a real mess. The output was a gigantic journey map that was plastered up in a conference room. It was research theatre. And like your situation, I felt it was poisoning the water for my team’s real research.
I wasn’t at all supportive of this effort and shared that opinion with the consultant and my leadership team. I came with criticism of what was wrong with their approach instead of suggestions for what might be done differently. And I viewed these actions as a threat to my team. You might guess that this was not effective in changing the situation, and it didn’t help my internal reputation.
In hindsight, and with a few more years of perspective, what I would do differently is use this as an opportunity to extend the reach of the research team (which in my case was always spread too thin). I would think about the following:
- What kind of training do people need in order to do decent, basic qualitative research?
- What collection methodology would need to be in place in order for me to be able to trust the information gathered?
- How should interviews be documented? Can we create a standard template that enforces some meaningful consistency across diverse interviewers?
In addition, bandwidth permitting, I would:
- Take responsibility for the analysis of the visits, perhaps collaboratively in order to expose the broader team to the difference between quotes from a single customer and themes across customers.
- If I had a repository, I would establish criteria or qualification for when something is good enough to be included in the repository (based on age, or methodology, or audience).
- Ask my team to ride-along on some visits to “audit” how much the training stuck and whether to trust the data from specific individuals.
Then I would use the opportunity to educate the leadership team on good qualitative methods and the questions to ask to determine quality. In my experience–and yours may match this or not–the quality of insights derives from the methodology of collection and analysis, which is not always made visible or communicated transparently in the end results or findings. Even when it is, executives may just look at the results and not be able to tell the difference between findings that are sound and those that may not be.
Also, if you don’t already know about it, there is a very active Google group for UX research folks. You could also post your query there to see if it generates additional advice from seasoned practitioners.
Jeff Allison at SKMurphy, Inc.
Because your qualitative research is in service of product planning or roadmap development it might be useful for you to help the product team to take a step back and look at what other practices need to be in place for product planning to be effective. Just as you hold yourself accountable for rigorous and transparent insight collection processes, it’s reasonable to hold the full product team accountable for the product planning process.
We see four foundational practices that must be in place for a strategic conversation about the final roadmap to take place:
- Project post-mortem process (also known as after actions or retrospectives): Have you determined what you have learned from past product development efforts and product support activities that you will apply to the next set of products the team develops?
- Risk analysis for feature content (premortem): Have you identified key risks in your current product plan and put mitigation plans in place to address them? Features with high development risk might deserve special scrutiny as to their desirability.
- Learning objectives: What expertise gaps exist that prevent the product team from delivering on feature requests in the product plan? Capabilities may need to be acquired in parallel with new feature development. In turn, these new capabilities may enable additional feature content in the downstream roadmap.
- Organized abandonment: Knowing what you know now, what current features will you give up to be able to do new things? How much backward compatibility do you need to maintain? How many legacy platforms and interfaces do you need to carry forward? What are implications for how and when customers need to migrate?
Without these practices in place, a product team that focuses only on new products and features skips the chance to learn from the past, to develop a working consensus on future risks, to be explicit about capability development objectives, and to identify what should be abandoned to make room for the new. Even with a sound methodology for qualitative analysis of customer requirements, if you lack a structured approach to product planning you may not deliver the best product your team is capable of.
Related blog posts
- Managing Change in an Organization: An Incomplete Resource List
- Video and Slides from “The Limits of I’ll Know It When I See It”
Image licensed; © Kanstantsin Shcharbinski