I am reading Scrum and XP from the Trenches, by Henrik Kniberg. This is the online edition, written a couple of years ago, so I am catching up on some reading, what can I say? Anyway, in the foreword to the book, Jeff Sutherland conveys a story from a London conference where there was a discussion about Google's implementation of Scrum. By the sound of it the discussion was not Google-specific, as others participated in it as well. Jeff asked how many attendees were doing Scrum, and truly iterative development, by the Nokia standard. This standard apparently asserts, amongst other things:
- The Product Owner must have a Product Backlog with estimates created by the team
- The team must have a Burndown chart and know their velocity
These two must-haves in Scrum, as measured by the Nokia iterative standard, are interesting, and I will address them in two parts, as they open up the following questions:
- When is the Product Backlog considered to be a true backlog, given that estimates from the team are required to make it so?
Surely the Product Owner is responsible for developing a meaningful, prioritized product backlog. By meaningful, I mean one written in the form of user stories (including epics). This backlog is prioritized based on the following needs:
- expressed by customer(s)
- identified as gaps in the marketplace that not only differentiate the product from its competitors, but critically appear on the product roadmap as having strategic value
In a sense the backlog provides the scope horizon for the product. Clearly, in the A, B, C of steps to follow, the items of highest priority on the product backlog will have been reviewed by the team, broken down further if need be, and sized with estimates on an arbitrary number scale that the team has agreed to use as a common measure. This is a relative scale, uniquely representative of this team: it expresses the mix of skills, experience and appetite for risk that this team has, which by the way is unlikely to be represented in exactly the same way by another team. The sizing exercise usually takes place during formal planning meetings, but there is no reason why a Scrum team can't decide to size one or more stories (based on some rule agreed by the team) each week during the current sprint. Whenever needed I encourage this practice, with the mindset that "a story or two sized keeps long planning sessions at bay". Okay, it isn't as good as an apple a day keeps the dentist away.
Based on my experience to date, I have found this practice to be a sustainable approach to keeping a backlog primed for the next sprint. Don't get me wrong, I am not advocating a long, fully detailed, planned-out backlog; clearly this would not be lean, and as a process it doesn't bind the team with commitment in mind either. Let's say: just do enough planning to help prime part of the next sprint. In the meantime the Product Owner, who after all has been interfacing with the customers and has identified what is valued and needed in the marketplace, can seed the remaining backlog with estimates for size. This helps express a target of what can be expected to be of value in the next release, in turn allowing other stakeholders from such quarters as engineering, operations, support, sales and marketing to influence and help refine this target. Where you have multiple product teams, one can expect discussions around cross-dependencies that will impact teams, with priorities being reset or the product backlog refined as a result in order to mitigate the impact.
When it comes to estimates for size, it is well known from traditional project management techniques that single-point estimates for sizing just don't work. It is far better to use statistical estimation: three-point estimates and, if need be, Z-scores to express a percentage confidence level supporting the basis of the estimate. A three-point estimate covers the bases of pessimistic, optimistic and most likely estimates, accounting for the expert viewpoint, the individual who is committing to do the work, and the voice of caution addressing the risks, be it in terms of resources available or all the testing that may be needed to verify and validate the functionality.
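To make the three-point idea concrete, here is a minimal sketch using the common PERT weighting, where the expected value is (O + 4M + P) / 6 and the standard deviation is (P - O) / 6. The weighting and the Z value are my assumptions for illustration; the choice of formula and confidence level is up to the team.

```python
# Sketch of a PERT-style three-point estimate. The (O + 4M + P) / 6
# weighting and z = 1.645 (roughly a one-sided 95% confidence level)
# are common conventions, assumed here for illustration.
def pert_estimate(optimistic, most_likely, pessimistic, z=1.645):
    """Return (expected, std_dev, upper_bound) for a three-point estimate."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    upper_bound = expected + z * std_dev
    return expected, std_dev, upper_bound

# Hypothetical story: 3 points optimistic, 5 most likely, 13 pessimistic.
expected, std_dev, upper_bound = pert_estimate(3, 5, 13)
print(expected, round(std_dev, 2), round(upper_bound, 2))
```

Note how the pessimistic outlier pulls the expected value above the most likely estimate, and the Z-adjusted upper bound gives the team a number they can commit to with stated confidence rather than a single guess.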
So really the conclusion I, and many others, long ago came to is that the product backlog is to be imagined as a pebble dropped in a pool of water. The drop creates a set of waves: some of relatively high amplitude, marking close proximity to the center, and others of relatively low amplitude the further away you are from the center. The larger ripples closer to the point of the drop mean greater information is available, with higher fidelity of knowledge supporting it; in contrast, the ripples at the edge carry information that is imprecise and of low fidelity. Yes, the product backlog is a mix, with higher-priority items floating to the top, and as they float up they are adorned with more information, including the decomposition of epic-level backlog items into component backlog items, as well as attributes such as the acceptance criteria by which customers will measure whether their need is satisfied.
As for the second must-have, I will address it in a follow-on article.