Hey Mike - thoroughly enjoy reading your articles.
In regards to the team following commitment driven sprint planning, do you have in your mind the number of hours available at planning?
For example, our standard working day is 7.5hrs so 75hrs per 10 day sprint - but with meetings, team discussions, making cups of coffee and general outbreaks of morale(!) this tends to be lower.
How do we factor in these 'unknown' distractions?
The best approach is to just take a guess for the first sprint and then adjust from there. So maybe say 5 hours, plan to that, run the sprint, see how far off that was, and adjust. Within two sprints, most teams figure out a reasonable average amount of work to bring in.
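The "guess, then adjust" loop described above can be sketched as a tiny calculation. Everything here is hypothetical (the numbers, the damping factor, the function name); it only illustrates nudging the next sprint's guess toward what actually got done:

```python
# Sketch of the "guess, then adjust" loop for committable hours per day.
# All numbers (and the damping factor) are made up for illustration.

def adjusted_hours(previous_guess: float, hours_actually_done: float,
                   damping: float = 0.5) -> float:
    """Move the next sprint's guess partway toward what was actually
    completed; damping < 1 smooths out one-off noisy sprints."""
    return previous_guess + damping * (hours_actually_done - previous_guess)

guess = 5.0                        # first-sprint guess: 5 hours per day
for done in [3.5, 4.5, 4.0]:       # what three successive sprints showed
    guess = adjusted_hours(guess, done)
    print(round(guess, 2))         # 4.25, then 4.38, then 4.19
```

A damping factor below 1 keeps one unusually noisy sprint (say, a week of meetings and outbreaks of morale) from swinging the next guess too far.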
Also note that it won’t be the same for each person. For a variety of reasons, you might be able to commit to 6 while I commit to 4.
Great - thanks for the heads up.
My team are going to experiment with this approach for our next sprint planning. Will keep you posted on our experience....
Great. I look forward to hearing how it goes.
Mike - here's the results of our experiment
We have run it for 6 sprints now and have some measurements that show we've had great success with this approach. We have improved our predictability (a measure of stuff done against stuff undone) and our velocity: predictability has gone up from 60% to 73% and velocity from 22 to 63. Whilst I wouldn't attribute these improvements solely to this style of planning, I am convinced that it has made a significant contribution.
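The "predictability" metric mentioned here (finished work as a fraction of what was planned) can be computed like this. The function name and sample numbers are illustrative, not Darren's actual data:

```python
# "Predictability" as a ratio of work finished to work planned for a sprint.
# The function name and sample numbers are illustrative, not real data.

def predictability(committed_points: int, done_points: int) -> float:
    """Fraction of the sprint's committed points that were finished."""
    if committed_points == 0:
        return 1.0   # nothing planned, nothing missed
    return done_points / committed_points

# A sprint that committed 30 points and finished 22 of them:
print(f"{predictability(30, 22):.0%}")   # 73%
```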
Wow, Darren. Thank you for sharing this. That’s fantastic and I’m glad you tried and especially glad you compared. Thanks!
In the commitment-driven approach, are tasks estimated in hours only?
I also prefer this commitment-driven approach: it is based more on the gut feeling of the team than on velocity data (which is a good input but not absolutely accurate), and it progresses one item at a time instead of taking a whole set of items at once.
Thank you Mike for this post,
Do you feel the sum of hours from previous sprint planning sessions can present the same risk of anchoring as velocity numbers from previous sprints? For example, a team that normally tasks out to about 250 hours would likely be hesitant to take an additional story once they've reached those 250 hours, much like a team with an average velocity of 20 would likely be hesitant to take an additional story once they've reached the 20 mark. I'm guessing you feel the exercise of tasking and estimating in hours is more precise than the story-point estimates and therefore carries less risk of anchoring becoming an issue?
Not really, Michael. Anchoring is using irrelevant information to make a decision. Sure, a team can be overly influenced by their previous 250, but I’ve rarely seen a team overly or unduly tied to their previous or average number.
Hi Mike, this was a great read, thanks for that. I enjoyed the discussions below the most.

My team has been working together for about the last 6 months, doing commitment-based sprint planning and consistently missing their commitments, which unfortunately has caused the team a lot of stress in the last few months. So we are trying to introduce velocity-based sprint planning to give us a better judgement of how much work we should take into the sprint. Before reading your article I was quite convinced that velocity-based sprint planning was the way to go, but now I am starting to understand that there is no right or wrong and we just need to try what works for us.

I am all about being effective whilst having a great time at work, and I think meeting commitments is part of that. So we just need to figure out the best way to start meeting commitments whilst also meeting customer expectations.
When teams miss more than once, I stress that they reduce the amount they commit to in the next sprint to prevent it becoming a habit.
Great read, and I agree with most of your comments, particularly that there's no one-size-fits-all approach. We're also finding that the velocity-driven approach works once you've collected enough appropriate data for team members (average story points completed per sprint) and can then act upon that data meaningfully.
My question is slightly tangential, but curious what you've seen other teams do if team members miss their commitments consistently?
If a team repeatedly fails to finish what they plan to do in a sprint, I have them dramatically reduce the plan for the next sprint. I never want that to happen twice in a row. It’s one of the worst habits a Scrum team can get into. It’s discussed in a couple of videos in my Scrum Repair Guide at https://www.mountaingoatsoftware.com/training/courses/scrum-repair-guide
Hi Mike, your reply here caught my attention, as I myself try to avoid overcommitment by the team, mostly because of the huge effect under-delivery has on morale. That, however, leads to the following side effect: we are under-committing, and the team is done with the scope, e.g., 2 days early. They half expect to be able to do anything they want (pure R&D stories, for instance) for the remainder of the sprint; I, of course, am pushing more items into the sprint backlog to have more value delivered. I wonder if you have discussed a similar situation in any of your blog posts here? (I only joined recently.) What to do when we are ready early? I couldn't find a straightforward answer in the Scrum Guide, nothing to support the "let's do more stories" answer, which I feel is only natural :-)
I think one reason you can't find much written about bringing in more stories is because that is the obvious thing to do. I bet the "Handbook for Waiters" probably doesn't spend a lot of time saying "if a diner drops a fork, bring a new one."
If bringing more stories (work of value to the product owner) were not the right thing, sprint planning would become a game for the team of trying to do as little of that pesky "work" as possible so that they have as much time for their own "fun stuff" as possible.
So, yes, if a team undercommits in planning, they bring more in as soon as that is discovered during the sprint (which is likely before they are actually all the way out of work).
Thanks a lot, Mike. Sounds very reasonable!
PKD
Awesome, thanks Mike!
Velocity is a long-term predictor and should be used for release planning. I don’t buy into the argument of “With commitment-based planning we are estimating things twice using two different scales, once for longer-term planning (and prioritisation) in story points and then task-based estimates in hours for short-term iteration planning.” These two levels of planning happen on two different levels: one for the release, one for the sprint. I would say a team either spends time splitting user stories into similar sizes or discussing/defining/estimating tasks for user stories; it doesn’t matter which way. I certainly very much recommend capacity-driven iteration planning for Scrum teams, especially new ones. Team velocity is just a sanity check for the team against over- or under-commitment.

Regards,
Zoran
It sounds like you and I agree completely—I’m glad!
In my Scrum experience we always used the velocity-driven approach, but I think the commitment-driven approach definitely has value, although I agree with you, Mike, on how velocity is helpful for planning from that angle. As we discuss this with many clients, they always want estimates and plans, and we need some way of providing them. The anchoring issues you mention are interesting to see affect a team; even when we point these out, the team will still fall back on those numbers.

Thanks,
Tom
I think it is quite feasible to provide estimates to clients (and others) about the medium term. It does require a few things—including more upfront thinking about the product backlog than many of us would like. But if someone wants a “guarantee” then additional upfront thinking is required and that client is making a decision to trade off a bit of agility for a bit of predictability. In some cases, that could be a very reasonable tradeoff to make. While I prefer a commitment-driven approach to sprint planning, I believe I remain the world’s biggest fan of velocity and story points for planning over the medium to longer terms. That is, when a team has at least a handful of sprints, the variability in velocity will average out and very reliable forecasts can be made.
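The claim above that velocity variability averages out over a handful of sprints can be sketched as a simple forecast. The velocities and remaining backlog size below are invented purely for illustration:

```python
# With a handful of sprints, mean velocity gives a usable medium-term
# forecast. Velocities and remaining backlog below are invented.
import math
import statistics

past_velocities = [18, 24, 20, 23, 19]   # points completed per sprint
remaining_backlog = 120                  # points left in the release

avg = statistics.mean(past_velocities)               # 20.8
sprints_needed = math.ceil(remaining_backlog / avg)  # 6
print(avg, sprints_needed)
```

Individual sprints here swing from 18 to 24 points, yet the mean is stable enough to say "about six more sprints" with some confidence, which is exactly the medium-term use of velocity being described.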
Hey Mike, I would agree with your assessment. The key is the tendency for teams to become anchored to completing the same number of story points per sprint; commitment-driven planning allows the team to challenge themselves. The second point is that commitment-driven planning lets your team self-manage their workloads, and a key part of Scrum is for the team to be self-organizing.
Good points. Thanks for raising them.
Hi Mike - this has been the subject of some debate since I started as a scrum master in a new organisation.
I have always been in the velocity-driven camp, although I'm very careful not to let velocity rule the scope of a sprint; I only use it as a starting point. If our velocity indicated we should pull in 20 story points but that only accounted for 70% of our capacity once we broke out and estimated tasks, we'd bring in another story. We've always established a stable velocity after 5 or 6 sprints, which has helped us with longer-term indicators of when stories will be developed. I've never really seen any evidence of anchoring - maybe I've just been lucky, or I've been blind to it - but I think the key is that our past velocity is the starting point for planning, not the target for the sprint.
I sat in on a sprint planning session with another scrum master on my first day, and his sprint planning session was very brief: the team picked stories off the wall, discussed them and put those they felt they could achieve on the table. After about 20 minutes the meeting finished and the table was declared the scope of the sprint.

I spoke to the scrum master and asked how he related the content of the sprint to their velocity, and he looked at me like I was a loon. "Velocity is all about history... it's nothing to do with what you will achieve." Hmm, not so sure about that - "yesterday's weather" and all that. "So when do you break it down into tasks?" Apparently they don't - the team feels that they can deliver the stories and that's good enough; they commit to deliver the work. So for me they are taking "commitment-based planning" literally, but without the necessary reality check of whether their gut feel is achievable.

I investigated further and looked at the burndowns from their previous sprints: the number of story points planned per sprint swung wildly - 60, 110, 90, 140, 50 - and the actual delivered story points varied +/- 40% from the "commitment". I don't see how this velocity can be used in any meaningful way for release planning (which the team is doing).
I realise that they weren't really doing commitment-based planning as you describe it, but it highlights why I think that velocity-based planning is actually better for new teams, and commitment-based planning may be better for more experienced teams. By skipping the rigour of the dual reality check - "is this reasonably consistent with what we've done in the past?" plus the more detailed breakdown of "this is what we think we need to do to deliver it; do we have enough time?" - an inexperienced team is more likely to come up with an unrealistic sprint scope. Once teams are more experienced, that is when you can rely more on intuition and streamline the process.
Nice stories. Thanks for sharing these.
The most interesting thing to me is that you conclude velocity-based is best for new teams, but if you read other comments here you’ll see others saying the exact opposite! It’s not an easy issue; either can be good. When I train on this topic I state my preference for commitment-driven planning (as I describe it) and make a few of the arguments here to support that claim. But I’m also always careful to point out that if velocity-driven planning is working for a team, don’t change it just because *I* have a preference for one approach. There are teams who can avoid any anchoring bias, and for them velocity-driven planning is appropriate.

I think we can both agree, though, that a bad approach is the one you describe sitting in on. While I don’t expect velocity to be exactly the same from sprint to sprint (that can indicate problems, too), I certainly wouldn’t expect it to bounce between 50 and 140 as in your example. Any time I see that, it is indicative of a team not really grasping the true purpose of sprint planning, which is to figure out some set of items they can realistically expect to complete and to have a general idea of how to go about achieving that. Thanks for your comments.
I normally start teams with commitment based planning but when teams are moving from Shu to Ha level, understand the rituals, turning into a performing team and have an established velocity I seed the idea for them to consider velocity based planning (it's always the teams call).
Estimation is a wasteful activity; it does not directly contribute to creating valuable working software. With commitment-based planning we are estimating things twice, using two different scales: once for longer-term planning (and prioritisation) in story points, and then task-based estimates in hours for short-term iteration planning. I see shifting to velocity-based planning as a journey towards potentially losing estimates altogether in the end, and potentially losing the timebox too (in a shift to kanban), forecasting instead based on cycle time.
In this interim model (Scrumban) we have used stretch stories to reflect the variability in velocity.
Thanks for your comments.
I don’t see how a team becoming better at Scrum (Ha level, in your terms) helps them get past the problems with velocity-driven planning. Velocity is still too variable to be a reliable short-term indicator.

As for waste: if estimating is waste, then so is any other time spent thinking. By that logic, programmers should be able to build programs without having to think about them first, which is ridiculous. The pursuit of eliminating “waste” is overstated. Software development is about as far from factory work as we can get, and creativity can occur in time that could be considered waste. Software development is knowledge work, and creating knowledge is rarely waste.

I will agree that estimating *can be* waste if no actionable decisions are made based on the estimates. But to the extent that actionable decisions are made based on estimates, the knowledge created by estimating is not waste. If no one does anything with the knowledge created by estimating, then the estimates were waste. But if a team selects a different set of product backlog items because of the estimating, the estimating was not waste. If the team has a better idea of how to accomplish a set of product backlog items during a sprint, the estimating was not waste. But when a boss asks for estimates just to have something to hold against the team if they miss, that is waste.
So I take it by that response you're not in the #noestimates camp :-)
I don't think we have to estimate to create knowledge. We have to think about the work, break it down, consider the design, etc. You could argue that process happens through estimating; it could equally happen without producing an estimate. We estimate in story points to avoid estimating in ideal days, which we consider takes us longer, and we always want to spend less time estimating (referring to your estimation value curve). Wouldn't we be quicker if there was no estimate at all? Just a thought...
Anyway, I think there are both positives + negatives so as always the strategy is normally "it depends..."
What's your view on cycle time, though? If we break things down sufficiently (for short release-planning horizons) and we have empirical evidence (metrics) on our cycle time, we could simply count the number of backlog items. That has a certain appeal, I think.
I’m definitely not in the no-estimates camp. I’ve asked those who are to provide a list of company executives who agree with them. So far, that list is empty. I get why programmers don’t like estimating but wonder how they would feel about executives pushing a #nobugs movement.

Your point about doing everything involved in estimating but not coming up with a number is a pretty picky point. If I do everything you list, I could spend 2 more seconds to write down a number. OK, that was 2 seconds of waste in your terms. But my argument would be that it’s not the estimate I care about; it’s the knowledge created getting there, and the pursuit of an estimate leads to the discussions I want. If you’d been in my CSM class today, you would have heard me tell them exactly that: I don’t care about the estimate, but the estimate is what guides the discussion. And I don’t advocate estimating in points to “avoid estimation in ideal days.” I advocate points because ideal days lead to an intractable problem: you think it will take 5 days, I think 10, and we are both right. That cannot be solved with ideal days. It can with points, but I won’t go into that in a blog comment since I have entire posts already devoted to it.

I think cycle time is a key metric. I’ve written about it in at least one of my books, and I’ve advocated putting a datestamp on all product backlog items so that we can track how long each takes to go through the development process. That doesn’t mean, though, that I want to break all items down into similarly sized chunks. That has a big problem: to think two things are the same size, I must make at least quick implicit estimates of their sizes (and I’ve heard somewhere that estimating is waste ;) Velocity can achieve the same things as a pure cycle-time calculation but can more easily accommodate units of work of varying sizes. Rather than saying “we code 1 widget a day,” we say “we have a velocity of 10 per two-week sprint.”
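The contrast drawn above, counting items (a throughput/cycle-time style) versus summing point estimates (velocity), can be shown side by side. All numbers here are invented: the scenario assumes a team that historically finished about 4 items (roughly 10 points) per sprint, while the items left in the backlog happen to be larger than that historical average:

```python
# Contrast of two forecasting styles. Suppose the team historically finishes
# about 4 items (~10 points) per sprint, but the items remaining in the
# backlog are larger than the historical average. All numbers are invented.

backlog = [3, 5, 2, 8, 5, 3, 13, 2]   # remaining items, as point estimates

# Throughput-style: just count items and divide by items finished per sprint.
items_per_sprint = 4
sprints_by_count = -(-len(backlog) // items_per_sprint)   # ceiling division

# Velocity-style: sum the points and divide by points finished per sprint.
velocity = 10
sprints_by_velocity = -(-sum(backlog) // velocity)

print(sprints_by_count, sprints_by_velocity)   # 2 vs 5
```

When the remaining items are not similarly sized, the raw count and the point sum suggest quite different forecasts; weighting each item by its size is the accommodation velocity provides.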
"I get why programmers don’t like estimating but wonder how they would feel about executives pushing a #nobugs movement."
I just tweeted this, with your name attached. This is the best response to the #NoEstimates movement I've ever read, and it deserves to be shared.
We boost performance through group accountability for results and shared responsibility for team effectiveness.
Does group accountability mean you don't hold people individually accountable for the quality of their work?
Oh no, now I’m going to get more hate mail from the “no estimate” crowd. ;) Oh well, they probably have a #nomikecohn movement going too. ;)
Okay, I understand your point of view, enlightening so thanks :-)
The one thing I would say is that I have seen teams spend less time in iteration planning meetings when they switched to velocity-based planning (NB: only after becoming "better @ scrum"), as they did not have to break stories into tasks and estimate again in hours. Their stories were broken down small enough that they travelled through the board quickly, so task cards and a secondary form of estimating were not required (the discussions can still occur, JIT and incrementally as required, in a co-located workspace around a whiteboard within the iteration). Empirical evidence showed that this worked for this team.
I like cycle time as a metric to support an eventual shift to Kanban and continuous flow. My experience is often that teams are in flow in mid-iteration but the timebox of the sprint/iteration gets in the way as we shift in and out of flow when finishing and then preparing the next batch (sprint backlog) for the next timebox.
Absolutely: teams doing velocity-driven sprint planning spend less time in sprint planning. However, with the cost of having a less reliable sprint plan. I agree: sprint boundaries can be a problem for some teams. However, sprints also provide focus and a sense of urgency, which can be beneficial to some teams. All of this is why there isn’t one perfect one-size-fits-all agile approach.
Thanks for sharing your preference here.
From your descriptions of the two approaches, it seems to me that they can both come down to more or less the same thing:
If a team uses a velocity-driven approach and includes the optional steps 3 and 4, they may well be using velocity to provide an initial set of items for the sprint, but they then 'identify tasks' and 'estimate the tasks'. As they're encouraged to add or remove items on the basis of this estimation, any items they don't remove they are implicitly committing to complete.
Am I understanding correctly? If so, does the main difference between the approaches come down to whether you actively try to avoid the anchoring effect? Or are you mainly considering the 'velocity-driven' approach as one where the task-level estimation would be skipped?
The end result can be the same for a team doing velocity-driven planning and going all the way to hours. In both cases the team will end up with a sprint backlog as a list of tasks and some estimates to go with each task. However, it’s unlikely that the same tasks will have been identified or that the same estimates will have been put on them.

In my experience, teams doing velocity-driven planning do not have a step of asking “can we commit to these product backlog items?” but rather a slightly different “does this seem like the right amount of work?” That’s a slightly different question and, of course, a lesser commitment to finishing the work identified. BTW: I am not making a judgment *in that instance* that committing is better than saying it seems like the right amount of work. Either could be better given the full context in which the team works. That is, some teams are under too much pressure already, and a looser “this seems like what we can do” attitude could be beneficial in those contexts. Where I am making claims is that commitment-driven is better IF it is important that a team be more likely to finish all they bring into a sprint (which isn’t always the case).