I’ve taken on Product Management for a set of internal tools, and found myself lost in 700-some open tickets (including meta-tickets and sub-tickets and all that goodness). Product’s a relatively new discipline at the company, the tools team is saddled with technical debt and severely resource-constrained, and my early discussions with internal customers ran strong with discontent.
As a fan of Melissa Perri generally and “Rethinking the Product Roadmap” in particular, I wanted to see if a Problem Roadmap meeting would help.
I hoped a problem roadmap would give us all agreed-on, prioritized problems we could evangelize and pursue, going from being ticket-or-project focused (700 tickets!) to outcome-focused, and start reducing the iteration time from months to weeks and soon, days. Then I’d be able to start culling that backlog like crazy and lining up ideas and bugs against outcomes we were pursuing, and we’d all have clear success metrics we could look to.
I invited members of the development team and a cross-section of interested people in the support organization for two hours. We ended up with ~12 people.
To start, I presented the company goals relevant to the discussion: where we needed to get to with customer satisfaction overall, and our goals specific to the customer support organization.
I introduced what we were trying to do in the meeting, along with an example problem and a metric that could track it. On the giant whiteboards, I drew two columns: the problem, and the metric to measure it.
Then I asked “What are the problems we’re facing getting to our goals?”
Early on, our conversations were specific: “Bug X is hurting us,” which in turn led to “Oh, we’re working on that” (which I was guilty of). We’d come up with metrics to measure those and move on. As we filled each whiteboard up, I’d move to the next board (or take pictures and erase all three).
We quickly moved to larger issues, and the discussions got into new, interesting problems I knew we weren’t already discussing. Which led to eager participants jumping to “how we could fix that.” This was challenging: when do you bring that back, and when do you let it run?
I’d explain (or reiterate) that once we’d defined problems and metrics, we’d vote and then pursue solutions. But some of the ideas were so good, it was hard to rein them in.
With more problems, we got better at defining the metrics we’d use, and it led to a focus I hadn’t seen in other meetings trying to address this. In some cases, needing metrics meant reconsidering what we thought the problem meant, sometimes discovering there was more than one problem.
New, more specific descriptions often illuminated issues there’d long been angst but not clarity around, and the metrics provided a way for us to target them. For example, a problem that a tool didn’t work right resulted in us defining three issues: workflows, tool design, and then the technology, all with metrics. That clarity would have made this worth doing on its own.
Requiring metrics also forced the uncomfortable discovery that we didn’t have useful measurements against our goals, which alone would have been worth holding the meeting to learn.
Towards the end, we’d gotten to amazing discussions I hadn’t seen anywhere else: new problems sitting just under the company and organizational goals, and, in considering those, a question of whether the organization was even structured to pursue our larger goals.
I’ll offer two examples of the kinds of problems we came up with: one from early on, and one from later, as conversation opened up:
| Problem | Metric |
| --- | --- |
| Portland coffee is 10% worse than Seattle’s | Survey of employee satisfaction; dev team velocity; chats answered/hour |
| … | … |
| We can’t see if we’ve met our goals if we can’t measure them | Yes/no: are there metrics in place that measure x/y/z? |
Yup. 90 minutes from Bug A (Bug A, measure Error A) to sweeping, actionable metrics (Organizational issue B, employee satisfaction, workflow measurement, other good stuff).
Then came the voting. I rewrote the list from the photos to save space, then gave everyone five votes, multi-votes okay. Here’s what happened:
| Problem | Votes |
| --- | --- |
| Huge existential thing we’d never talked about | 9 |
| Large systemic issue with banky thing A | 5 |
| Large systemic issue with banky thing B | 4 |
… followed by a long tail of 2s and 1s.
We’d never talked about the top item before! Anywhere! It wasn’t on a roadmap! It wasn’t in any of the 700 tickets! Brand new! I’m using exclamation points!
Both the problems and the metrics we came up with around the second- and third-place priorities clarified huge problem clouds: dozens of tickets filed against something, each with different solutions or issues, all without metrics or an overall goal.
That’s gold. I’m so happy.
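As an aside, the tallying side of that voting scheme (five votes each, stacking allowed) is simple enough to sketch. A minimal example, with entirely hypothetical ballots and problem names:

```python
from collections import Counter

# Each participant gets five votes and may stack them on one problem.
# The ballots and problem names below are made up for illustration.
ballots = [
    ["existential thing"] * 5,                        # all-in on one problem
    ["existential thing", "systemic issue A", "systemic issue A",
     "systemic issue B", "bug X"],                    # spread across several
    ["systemic issue A", "systemic issue B", "systemic issue B",
     "existential thing", "existential thing"],
]

# Flatten every ballot into one stream of votes and count them.
tally = Counter(vote for ballot in ballots for vote in ballot)

# Highest-voted problems first; ties are broken arbitrarily.
for problem, votes in tally.most_common():
    print(f"{votes:2d}  {problem}")
```

The long tail of 1s and 2s falls out naturally at the bottom of `most_common()`, which is where I’d start the cull.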
I’d recommend this approach to anyone doing Product (or Program) Management looking for a way to re-focus conversation on outcomes. I’ll report back on how evangelizing the findings goes. The discussions it inspires around the problems, and how to measure them, made it worthwhile.
I can see where it might be less valuable if you’re tightly bound to prescribed work… but also, I can see where it might help you break out from that.
Now, some random miscellany.
Logistical challenges running the meeting:
- It was hard for me, as the person at the whiteboard, to keep up with discussions as they picked up pace, especially when Problem A would come up, inspiring someone to shout out Problem B, sparking discussion, and leaving Problem A to languish
- The layout of the conference room: I wrote on the wall of whiteboards while everyone else faced me from the other side. I’d like to find a better way to do that, but every conference room is going to have some version of this problem
Questions I’m considering for next time:
- Do I do this again with different people from the same teams? When?
- How to better communicate what the next steps will be, and does that improve focus?
- Is there a better way to introduce the concept of the meeting?
- Would a note-taker help?
- How would a meeting like this incorporate remote employees?
- What’s the best way to manage voting on a list like that? Are the differences between voting methods even meaningful?
You may try giving everyone a fixed set of votes to spend. Say everyone holds five voting chips/stickers: they can put all of them on an issue they’re passionate about, or put two on an issue they care a lot about and then distribute the rest. Best of luck!
Hey, great write-up. Thanks!
Re your question about voting, paired comparison analysis might be helpful: https://www.mindtools.com/pages/article/newTED_02.htm
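For anyone curious, the paired comparison analysis that comment links to boils down to scoring every head-to-head matchup and ranking by wins. A minimal sketch, where the options and the “judgments” are hypothetical placeholders (in a real meeting, each preference would come from group discussion):

```python
from itertools import combinations

# Paired comparison: rank options by comparing every pair head-to-head.
# The options and recorded preferences below are made up for illustration.
options = ["problem A", "problem B", "problem C"]

# In practice each entry is a group judgment; here they're canned.
preferences = {
    ("problem A", "problem B"): "problem A",
    ("problem A", "problem C"): "problem C",
    ("problem B", "problem C"): "problem C",
}

# Each pairwise winner earns a point; totals determine the ranking.
scores = {option: 0 for option in options}
for a, b in combinations(options, 2):
    scores[preferences[(a, b)]] += 1

ranking = sorted(options, key=scores.get, reverse=True)
print(ranking)
```

The appeal over dot voting is that every option gets explicitly weighed against every other one; the cost is that the number of comparisons grows quadratically, so it suits a short list better than a wall of whiteboards.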