Metric-driven Product Management


If you can’t measure it, don’t do it. 

Losing focus

Developing Web-based software is a complex process, often involving a diverse range of stakeholders, each with their own goals and their own set of personal biases and opinions. Even when there’s a clearly defined product owner, satisfying all these stakeholders can become a never-ending quest. In many cases, this quest ends up diluting the efforts of contributors and not substantially increasing the utility of the website (e.g., focusing exclusively on the pain points of content editors often means no benefit to the actual end user).

It can be challenging to even notice that this dilution and loss of focus is occurring: after all, one thinks, we accomplished these eleven tasks over the last fiscal year, each of which was deemed important enough to spend time on. Surely the net effect has been positive? At least we’re moving forward! But did those efforts advance the strategic goals of the project?

If we can’t measure something, it’s not worth doing, because we’ll never know if we succeeded. 

We’ve all been through an exhausting strategic planning process, potentially lasting months and involving a large team. We’ve documented our biggest goals for the project (hopefully keeping our focus narrowed to a couple of main objectives). We’ve decided what features need to be built to make progress towards those objectives, and begun an implementation cycle. All too often, that’s where the product development cycle stops. Perhaps a few months later, some effort is made to determine whether the feature is doing what it was meant to do; but this often takes the simplistic form of “look how many visits it’s received!”

Metrics should drive all development work 

Metric-driven product development is a way to ensure that all efforts go towards satisfying one or more strategic goals, and ensures that development doesn’t stop until those goals have been met. By this rubric, everything that doesn’t result in progress towards those goals is superfluous. For this to work, each goal must be associated with one or more quantifiable and measurable metrics. 

There’s a pattern in programming called “test-driven development,” or TDD. At the risk of oversimplifying: with this technique, the product owner writes requirements for a particular feature, and the developer, before writing a line of production code, decides on a series of tests that must pass for the feature to meet those requirements. For instance, if the feature lets a customer customize their coffee mug, the tests might look like this:

Tests for coffee mug customization feature:

  1. A “Customize” button appears on appropriate products.
  2. If a user clicks this button, they are presented with an input field, allowing them to type between 10 and 25 characters.
  3. If input doesn’t meet criteria, do not allow a user to proceed until it passes.
  4. A preview of the customized mug is generated.
  5. And so on…

By design, since the feature doesn’t exist yet, each of these tests will fail when the test suite is first run. If the tests are written accurately and comprehensively, all a developer needs to do is write code that satisfies each test; when they add the “Customize” button, the first test should pass, while all the remaining tests will continue to fail. When all the tests pass, the feature is complete and ready for further testing.
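To make this concrete, here’s a minimal sketch of tests two and three (the 10–25 character input rule) as executable checks. The names here are hypothetical, not from any real codebase, and in true TDD the test functions would be written first and fail until the validator below is implemented:

```python
def validate_customization(text: str) -> bool:
    """Accept custom mug text only if it is between 10 and 25 characters."""
    return 10 <= len(text) <= 25


# In TDD, these tests exist before validate_customization does;
# each one encodes a single requirement from the feature spec.
def test_rejects_short_input():
    assert not validate_customization("Hi!")  # under 10 characters

def test_accepts_valid_input():
    assert validate_customization("World's Best Boss")  # 17 characters

def test_rejects_long_input():
    assert not validate_customization("x" * 26)  # over 25 characters


if __name__ == "__main__":
    test_rejects_short_input()
    test_accepts_valid_input()
    test_rejects_long_input()
    print("all tests pass")
```

Each test maps one-to-one to a requirement, so a green suite means the requirement list is satisfied.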

Applying TDD to product development


How can we apply this software development pattern to product management? One obvious limitation is that the results of these tests won’t be immediately available; if our strategic goal is to ensure 35% of users who begin the coffee mug customization process actually complete it, it’s going to be some time before we discover if our efforts are successful. It’s more useful to break that goal down into smaller, testable components. We can start by thinking about the steps users will move through on the way to customizing their coffee mug. 

Steps a user follows to customize a coffee mug: 

  1. First, users need to select a base product that meets their needs.
  2. Next, users should be able to see what customizations are possible before they begin, so they don’t waste their time.
  3. If there are multiple parameters that can be customized, it should be clear to the user which are dependent on others: maybe a given option is only available in black, or only in 64 oz. sizes.
  4. We should update the image dynamically after each parameter is changed. 
  5. And so on…
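The steps above form a funnel, and each step can carry its own conversion metric. A small sketch, with invented step names and counts, of how step-by-step conversion reveals where users drop out on the way to the overall 35% completion goal:

```python
# Hypothetical funnel data: (step, number of users reaching it).
funnel = [
    ("visited product list", 5000),
    ("selected a base product", 1900),
    ("viewed customization options", 1400),
    ("changed a parameter", 900),
    ("completed customization", 650),
]

# Conversion from each step to the next shows where users drop out.
prev = funnel[0][1]
for step, users in funnel:
    print(f"{step:30s} {users:6d} users  ({users / prev:6.1%} of previous step)")
    prev = users

overall = funnel[-1][1] / funnel[0][1]
print(f"overall completion: {overall:.1%}")
```

With these invented numbers, overall completion is 13%, well short of a 35% goal, and the step with the steepest drop-off is the obvious first target for improvement.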

We can attach metrics to each stage of this process: maybe we set a goal for what percentage of users visiting the product list choose a product to begin customizing, and define a conversion metric around moving users to that second stage. If we’re trending below that percentage goal after a couple of weeks, we can borrow a technique from direct response marketing and run an A/B split: maybe half our users are presented with a product overview page emphasizing unique product features, while the other half are given a case study showing how the product improves outcomes. We can then measure how many users from each pool moved to the next stage of the cycle, and decide which approach is more effective.
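Evaluating that A/B split comes down to comparing conversion rates per variant. A minimal sketch, with invented counts (a real analysis would also check statistical significance before declaring a winner):

```python
# Hypothetical results for the overview-page vs. case-study experiment.
variants = {
    "A: feature overview": {"visitors": 1200, "started_customizing": 264},
    "B: case study":       {"visitors": 1180, "started_customizing": 224},
}

# Conversion rate = users who moved to the next stage / users exposed.
for name, v in variants.items():
    rate = v["started_customizing"] / v["visitors"]
    print(f"{name}: {rate:.1%} moved to the next stage")
```

Here variant A converts at 22.0% against B’s 19.0%, so A would win this round, provided the sample is large enough for the gap to be meaningful.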

More complex decision points like step three may require some creativity to measure. Ideally, we are capturing each mouse click and movement, so we can construct a model of users’ goals and outcomes. For these more intricate interactions, I find that testing the UX with a small group of users provides more insight than Web analytics alone.

Keeping attention on strategic goals 

The most important analytics data we can collect are those that help us make decisions about the product and how it should evolve. It’s worth thinking hard about those inflection points and which metrics will best inform us. If we measure, say, profile completion across device types, we may learn that mobile users are far underperforming desktop users; that finding suggests a new feature to improve the usability of mobile profile editing, or perhaps incentives for completing a profile.
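Segmenting a metric this way is straightforward once the data is captured. A brief sketch, with invented numbers, of the device-level breakdown that would surface a mobile usability gap:

```python
# Hypothetical profile-completion data segmented by device type.
segments = {
    "desktop": {"users": 8000, "completed_profile": 4400},
    "mobile":  {"users": 9500, "completed_profile": 1900},
}

for device, d in segments.items():
    rate = d["completed_profile"] / d["users"]
    print(f"{device}: {rate:.0%} profile completion")
```

A 55% vs. 20% split like this one turns a vague sense that “mobile is worse” into a concrete, measurable target for the next feature.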

Just as the test-driven development process requires writing the tests before any code can be written, metric-driven features should define their criteria for success before the feature requirements are completed. In the strictest sense, if we can’t measure something, it’s not worth doing, because we’ll never know if we succeeded. So it’s worth applying effort and creativity to determining those metrics.
