After my last blog post making the case that IS-related compliance (e.g. SOC 2) has become feasible even for very small organisations thanks to Agile/iterative ways-of-working and Lean practices, I thought I’d take the opportunity of a recently concluded consulting gig to write about a more “conventional” topic, one I’ve come across several times in my last 10 years of field-experience.
The brief: advise a tech company and subsequently develop a lightweight methodology to support the Product team in validating and prioritising their backlog of new feature requests for back-end and front-end solutions and, in the process, help them become more efficient in reaching a “Ready-For-Engineering” state.
So, the problem statement was: our Product Development Backlog (in Jira) is getting longer by the day, and there is a growing struggle to validate and prioritise all these feature requests so that they are ready for development work (engineering).
I’d be surprised to learn that anyone, even in more junior product management roles, has never come across this issue, especially considering modern go-to-market strategies based on a “the-sooner-the-better” approach which, inevitably, swells the number of competing product features trying to get over the (production) line.
Observation
Following my (ritual) period of observational analysis, this time specifically aimed at assessing the status quo in terms of ways-of-working and whether a collaborative model for prioritising new features existed at all, I soon had confirmation of the main issue at hand, one that is pretty common in tech workplaces: validating a new product-related idea, articulating it into a feature request and then prioritising it in the backlog did not follow any specific thought process; those calls were instead left to a few senior Product or Engineering team members, often under pressure from circumstantial criteria.
In these scenarios there is a strong likelihood of subjective interpretations by those very few stakeholders, with not enough “Systems-Thinking” contributing to the decision-making. One could argue that decision-making is faster when fewer people are involved, but what is underestimated is that, statistically speaking, fewer contributors are more likely to produce a greater number of competing work packages than when wider collaboration is in play.
The chance that the backlog becomes very top-heavy, with many competing MUST, URGENT, CRITICAL and HIGH priority items, is almost a certainty!
So, with this in mind, off I went to design a Product Development Methodology covering the pre-engineering lifecycle of a new feature, with some core principles of “Design Thinking” put in place:
- processing of an idea (insights, user research, prototyping, etc.)
- articulation of a feature request along with epics and/or stories and relevant info
- finalisation of effectively prioritised development tickets
An intrinsic objective was also to make the top-down pull of Product Backlog items into the Engineering Sprint backlogs or Kanban boards simpler.
Prioritisation
A key “make or break” detail was going to be setting up a more effective prioritisation model, one that would minimise the likelihood of competing/conflicting backlog items and, in the process, remove as much ambiguity as possible from the priority of tickets pending transfer to engineering work.
It had to be something more tailored than what the likes of Atlassian or Productboard (just to mention a few ALM providers) advertise as off-the-shelf support for prioritising tickets (primarily a more-or-less wide choice of known techniques); I’m afraid these are often too abstract, especially when different areas of expertise are contributing to the prioritisation effort… and many years of field-experience have taught me that.
First and foremost, the assigned priority level should be defined by a numeric score: numbers provide a clarity that a high/medium/low rating can’t, because the thresholds for each of those can be very subjective. I’m sure we all agree that 1 and 2 are far less ambiguous than, say, urgent and critical, which is why tackling ambiguity is so crucial for prioritisation!
The solution is then to rate a few key variables through no more than a handful of options each, with a numeric value associated to every option, ready to be used in a calculation that follows. Without getting into too much detail, I defined a first round of prioritisation scoring in which the Product team members categorise the product feature in question (variable #1), adopting notions similar to the “Purpose Alignment Model”. A second round then takes place with the same Product team members, this time joined by other stakeholders potentially involved in the priority scoring, who have their say on the added value brought by the feature request (variable #2), relying on the “MoSCoW technique”.
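To give a flavour of what such a scoring model can look like, here is a minimal sketch in Python; the category names come straight from the Purpose Alignment Model and from MoSCoW, but the numeric values are illustrative assumptions rather than the exact figures used with the client.

```python
# Round 1 - the Product team categorises the feature (Purpose Alignment Model).
# The 1-4 values are illustrative assumptions, not the client's actual scale.
CATEGORY_SCORES = {
    "differentiating": 4,  # drives market differentiation
    "parity": 3,           # keeps us at parity with competitors
    "partner": 2,          # better served with/through a partner
    "who_cares": 1,        # low strategic relevance
}

# Round 2 - a wider group of stakeholders rates the added value (MoSCoW).
VALUE_SCORES = {
    "must": 4,
    "should": 3,
    "could": 2,
    "wont": 1,
}
```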
Outcome
The priority score ultimately associated with a feature request is then calculated through a specific formula which returns a number that can even carry a decimal place, all of which ensures the overall Product backlog stays nicely prioritised top-down, with far fewer competing tickets and far less ambiguity.
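Purely as an illustration (the actual formula stays behind the scenes), even a simple weighted combination of the two scores produces a decimal value that naturally separates tickets which would otherwise tie:

```python
# Continuing the sketch above: the weighted-average formula and the 0.6/0.4
# weights are assumptions made for illustration, not the client's real formula.
CATEGORY_SCORES = {"differentiating": 4, "parity": 3, "partner": 2, "who_cares": 1}
VALUE_SCORES = {"must": 4, "should": 3, "could": 2, "wont": 1}

def priority_score(category: str, value: str,
                   category_weight: float = 0.6, value_weight: float = 0.4) -> float:
    """Combine the two scoring rounds into a single decimal priority score."""
    return round(CATEGORY_SCORES[category] * category_weight
                 + VALUE_SCORES[value] * value_weight, 1)

# A "differentiating" feature rated "should" scores 3.6, while a "parity"
# feature rated "must" scores 3.4 - the decimal keeps them from competing.
backlog = [("PROD-101", "differentiating", "should"),
           ("PROD-102", "parity", "must")]
ranked = sorted(backlog, key=lambda t: priority_score(t[1], t[2]), reverse=True)
print(ranked)
```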
When I mentioned “other stakeholders potentially involved…” earlier, my point was that this methodology, coupled with the specific scoring model, can adapt seamlessly to different organisations with different team settings; in defining product development priorities, feedback can also be gathered from people operating in Customer Success/Support, Pre-Sales and Sales, and insights from end-users and customers can likewise be taken into consideration (one of the key Agile assumptions).
I then set up a spreadsheet with calculated formulas, primarily to speed up the scoring work, with the added bonus of injecting some asynchronous feeds of Jira tickets to provide the necessary context for each item; this is now the tool that allows a (custom) priority score field in each Jira ticket to be populated during backlog review sessions.
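For those curious about the Jira side, writing a calculated score back into a custom field can be done with the standard issue-edit endpoint of the Jira Cloud REST API; the site URL, credentials and custom field id below are placeholders, so treat this as a sketch rather than the exact integration built for the client.

```python
import requests

# Hypothetical values - replace with your own Jira site, credentials and
# the id of your custom "Priority Score" field.
JIRA_SITE = "https://your-domain.atlassian.net"
AUTH = ("bot@example.com", "api-token")
PRIORITY_SCORE_FIELD = "customfield_10042"

def set_priority_score(issue_key: str, score: float) -> None:
    """Write the calculated priority score into the issue's custom field."""
    resp = requests.put(
        f"{JIRA_SITE}/rest/api/3/issue/{issue_key}",
        json={"fields": {PRIORITY_SCORE_FIELD: score}},
        auth=AUTH,
    )
    resp.raise_for_status()

set_priority_score("PROD-101", 3.6)
```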
Besides already providing a more efficient, precise and engaging experience when backlog review sessions are due, a further outcome of my work is greater transparency in decision-making, thanks to the adoption of a more collaborative approach.
If what you’ve read here reflects similar challenges you are facing in your organisation, or you’d like to learn more about some of the “behind-the-scenes” details, don’t hesitate to drop me a line via LinkedIn or over email.