Recognizing that many of the decisions we had made over this past year were not driven by, or even supported with, real evidence, I decided it was time to change how the Product team gathered and used data in our product development process.
Metrics and the related analysis are as important to Product teams as they are to any other department. But in my experience, it can be hard to nail down exactly what data your Product team should be monitoring. Because we interact with so many other parts of the organization, we may find ourselves gravitating toward (or flat-out borrowing) what other departments are (or should be) measuring. For example, it may seem obvious for us to latch on to customer conversion metrics from the Sales funnel. But then we also recognize that Customer Success interacts directly with actual paying customers, so maybe Net Promoter Score is something we should be tracking too.
I have not found a single, universal measure that works for every Product team at every stage of its growth. I think good Product teams must continuously evaluate what data they need to help them make the best decisions. It is safe, if not self-evident, to start with metrics that ultimately tie back to and support the organization's high-level goals. But settling for superfluous or vanity metrics goes against the prevailing wisdom, which suggests adopting metrics that are truly actionable.
Product teams must continuously evaluate what data they need to help them make the best decisions.
I was interested in advancing my own organization's capabilities around gathering and using product data. In this article, I will share what came out of the exploratory discussions I initiated with members from the Product, UX, Engineering, Operations, and Data Analytics teams. In a companion article, I will report on how this analysis led to the next set of decisions to improve our overall proficiency.
What drove this decision
If you were to ask me to defend any of the decisions I've made with the teams over the past 10-12 months, I would not be able to reach for charts or graphs that clearly show one or more metrics trending up or down over time. In fact, if pressed, I might rattle off mostly anecdotal evidence like:
- We've experienced little to no pushback from internal stakeholders about this past year's product roadmap
- There has been a noticeable absence of customer complaints around this year's product releases
- The Product team and I collectively lack any real regrets in our decisions so far (i.e. no major screw-ups)
Oh, and we closed two of the largest deals in the company's history this past year!
Had we been getting lucky by landing indifferent customers? Had the Product team simply taken the easy path or picked the low-hanging fruit to avoid complexity or confrontation? I don't think so. I think the more likely story is that we had sensibly chosen to tackle the most pressing issues, the ones that were the highest priorities for all parties.
But it is not as though we were operating in a bubble. Along the way, we had certainly been talking directly with end users and also indirectly with the internal folks who themselves talk with customers. But many times we pushed forward without adequate data, and that hindered our ability to understand the impact of the changes we were making to the products.
I knew we could make better decisions if we had more information with which to work.
The decision: Confirm with the team exactly what data we were missing and what we would need to do to get our hands on it.
I want to be clear that this was an internal Product team endeavor driven by a desire to improve our own capabilities. Good or bad, the organization had not reached a point where my team was responsible for reporting KPIs or similar measures to our senior leadership or other internal stakeholders.
But before I could dream about some impressive analytics dashboard that might make Edward Tufte smile, I thought we could start small and work our way up from there.
Plan of attack
There was unanimous agreement that the Product team would benefit from collecting both qualitative and quantitative data. Thanks to an outstanding UX team, we had been making steady progress with capturing and evaluating qualitative data from customer interviews, surveys, and the like. That work had indeed driven some major initiatives, including the release of our new product earlier this year.
We determined that we would focus instead on complementing what we already had with more cold hard numbers from our own customer databases.
Focus on understanding feature usage to assess the impact of changes
The team concluded that the most obvious place to start would be to get a better grip on which customers were using what features and how. Much of our ongoing work would continue to revolve around revamping the existing platform components to provide a better user and administrative experience for our customers' primary use case.
Our challenge was gauging the impact of any given change. Which customers would be affected? What migrations would be necessary? How much revenue would be at risk?
I shared the following story with the team to drive home the point:
Earlier this year, I had to push back on an anxious Customer Support team that was, as it turned out, overly worried about an upcoming feature migration I had scheduled for the entire customer base. Based on no real evidence, they had gotten themselves worked up over what they believed would be a disruptive conversion. After running some simple reports, I concluded that fewer than 50 customers would be affected, and most of them would barely notice, as they had not really invested much in the (soon-to-be) legacy version of the feature. In the end, we pulled off the feature migration without a hitch and never received a single customer complaint.
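For illustration, here is a minimal sketch of the kind of simple report I mean, written with pandas against a toy dataset. The feature names, columns, and the "heavy user" threshold are all hypothetical; the real version would run against our own customer databases.

```python
import pandas as pd

# Toy stand-in for usage events exported from the customer database.
# Every name and number here is made up for illustration.
events = pd.DataFrame({
    "customer_id": [101, 101, 102, 103, 103, 104],
    "feature": ["legacy_editor", "legacy_editor", "legacy_editor",
                "new_editor", "new_editor", "legacy_editor"],
    "actions_last_90_days": [42, 17, 1, 250, 96, 2],
})

# Which customers touched the legacy feature at all?
legacy = events[events["feature"] == "legacy_editor"]
usage_per_customer = legacy.groupby("customer_id")["actions_last_90_days"].sum()
print(f"{len(usage_per_customer)} customers would be affected by the migration")

# Of those, who is invested enough in the legacy feature to notice a change?
heavy_users = usage_per_customer[usage_per_customer >= 20]
print(f"{len(heavy_users)} of them are heavy users")
```

Even a crude cut like this turns "we think the migration is risky" into a number the team can actually argue about.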
Update our product hypotheses to include outcome-based metrics
The Product team had been getting better at creating problem hypotheses to help drive individual enhancements and sometimes even entirely new products. The hypotheses helped us focus on real problems and generally followed this pattern:
“We believe that [user persona] is struggling to [complete this task, achieve this outcome, ...]”

“We believe if we provide [solution] to [customer], it will result in [outcome] as measured by [measurable success metric].”
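To make the pattern concrete, a filled-in hypothesis (this one is entirely hypothetical, not from our backlog) might read: “We believe if we provide saved report templates to account administrators, it will result in faster report creation, as measured by a 25% drop in time-to-first-report.”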
If we could tie measurable outcomes to the problems we were attempting to solve, it would certainly give us some specific metrics to zero in on.
Help the team with product and design (re-)discovery
I am convinced that the 10-year-old software platform I inherited had accumulated much more code than was necessary to attract and retain customers. So in an attempt to reduce the size of the product set, I committed to devoting some time in every upcoming release to begin paring down the code base.
But we needed to be prudent about how to do this. As the teams looked to rebuild single pages or overhaul entire features, we desperately wanted to know what had to stay and what could be eliminated. We agreed that we could draw good data from our existing customer base to help make smarter design decisions and prevent us from (re-)building things that were not needed.
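As a sketch of how that might look in practice (again with made-up page names and an arbitrary cutoff), even a quick pass over page-view data can surface candidates for elimination:

```python
import pandas as pd

# Hypothetical page views over a reporting window; all names are invented.
views = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 3, 3, 4, 5],
    "page": ["bulk_export", "dashboard", "bulk_export", "dashboard",
             "dashboard", "saved_views", "dashboard", "dashboard"],
})

# How many distinct customers reach each page?
reach = views.groupby("page")["customer_id"].nunique().sort_values()

# Pages that reach almost nobody become candidates to cut, pending a
# sanity check against revenue, contracts, and the qualitative research.
candidates = reach[reach < 2]
print(candidates)
```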
I felt a little deflated at the end of this week's exercise. In the back of my mind, I had known that we should be operating better, but it did not truly hit home until we assessed our current situation. It was more than a little frustrating that we had so little data to go on.
Ours was not the first Product team at the company, and our predecessors had apparently attempted similar efforts. I tracked down a previous Head of Product and learned that this had been a struggle for him as well.
I am not discouraged though and am prepared to lead the team through the next stage in this process. Look for the companion article that describes the results of our push to acquire the product data identified here.
Look for more reports from theProductPath around product data, metrics & analysis, and product culture here on PM Decisions.