This is a repost of a series of articles I originally published for Songbird.
The first step was to add a cost to everything. We introduced a new cost field in Bugzilla (our issue tracking system) and put a cost value on everything according to our new scale of 1, 2, and 3 points. With costing in place, we were in a position to compute how many points the team was able to complete in a typical work week. That total, normalized per work day, became our team velocity.
Below is a chart representing our velocity over many one-week iterations during the 0.3 release (code name Bowie). The blue line is the number of points the engineering team completed, averaged per work day. The red line tracks the cost of new work being introduced, normalized per work day. The green line tracks the net velocity.
It quickly became apparent that as the team took things off the pile, new work was being identified and added. We had to keep track of this and take it into account. We named it intake, to collectively represent new functionality, regressions, and newly discovered bugs introduced during a cycle.
The net velocity gave us an indication of how well we were doing overall. When it dips into negative territory, we are losing ground.
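The bookkeeping above amounts to simple arithmetic. Here is a minimal sketch of it in Python; the function names and point totals are illustrative, not actual Songbird data:

```python
def net_velocity(points_completed, points_intake, work_days):
    """Completed rate minus intake rate, per work day.
    A negative result means the team is losing ground."""
    return (points_completed - points_intake) / work_days

# 15 points completed, 20 points of new intake, over a 5-day week:
print(net_velocity(15, 20, 5))  # -1.0 -> losing ground
```

The same function with intake set to zero gives the raw (blue line) velocity.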
Below you can see three events that had a clear impact on intake: scope creep (some features were not well defined upfront), and two waves of bug intake driven by public feedback, first from a blessed build and then from a release candidate.
Also noticeable was a week when the team was not as productive as usual. With that information in hand, we were able to have open conversations about the state of progress and try to determine the cause. Sometimes a low velocity is simply because work accumulates in one week and does not get checked in until the next. We named this carry over, and it smooths out over time. Other times there were inevitable distractions such as an office move, interviews, or equipment failure. In other cases, the team was just having a bad week.
With this in hand, I was able to better understand the rate at which the team was completing work. I created a burn down chart that tracked the total points of known work left. Using the team velocity, I could forecast an expected release date with a simple best-fit projection. When the line crosses the X-axis, we’d be done and would ship.
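The best-fit projection can be sketched as an ordinary least-squares line through the (iteration, points remaining) samples, solved for its X-intercept. This is a hypothetical reconstruction of the idea, with made-up numbers, not the actual tooling:

```python
def projected_finish(iterations, remaining_points):
    """Fit a straight line to (iteration, points remaining) pairs and
    return the iteration at which the fitted line crosses zero,
    i.e. the projected ship date."""
    n = len(iterations)
    mean_x = sum(iterations) / n
    mean_y = sum(remaining_points) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(iterations, remaining_points))
    slope /= sum((x - mean_x) ** 2 for x in iterations)
    intercept = mean_y - slope * mean_x
    return -intercept / slope  # x where the fitted y hits 0

# Remaining work burning down ~10 points per iteration from 60:
print(projected_finish([1, 2, 3], [50, 40, 30]))  # 6.0
```

In practice the projection is only as good as the net velocity feeding it, which is why intake matters so much in the next paragraph.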
If you compare the two charts, you can see how much influence intake had on the release. This became a key component of our planning. Budgeting points for intake allowed us to reserve some engineering capacity for change upfront and produce a more realistic schedule.
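Intake budgeting changes the schedule math in a simple way: planned work is divided by the capacity left over after the intake reserve. A minimal sketch, with illustrative numbers and a hypothetical function name:

```python
def schedule_days(planned_points, daily_velocity, daily_intake_budget):
    """Work days needed when part of each day's capacity is
    reserved to absorb expected intake."""
    return planned_points / (daily_velocity - daily_intake_budget)

# 60 planned points at 3 points/day, reserving 1 point/day for intake:
print(schedule_days(60, 3, 1))  # 30.0 days

# Ignoring intake would have promised an unrealistic 20-day schedule:
print(schedule_days(60, 3, 0))  # 20.0 days
```

The gap between the two numbers is exactly the schedule risk that untracked intake hides.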
Where Does Intake Come From?
By formally tracking our intake, we were able to better characterize the nature of change. Most of our intake comes from changes requested once features start to materialize in the product and can be tested. This is a desirable effect of adopting an Agile practice. Another contributor is defects reported by existing users, which is a benefit of early releases. Less desirable intake comes from regressions, or from new tasks that resulted from bad assumptions or misunderstandings. Those can usually be mitigated by increasing unit test coverage and doing more detailed upfront planning.
Full Agile Cycle
Let’s take a look at another release cycle. Dokken was our second release, and the first in which we used the new process from inception. The charts below represent velocity and burn down respectively. Note that the scale on the velocity chart has increased; there is more dynamic range. Because the release started from day 0, we noticed a ramp-up in the velocity. We learned not to be alarmed by this: in a new release cycle, the team needs some time to “prime the pump” of development. As the release progressed and we got better visibility into the team’s progress, we decided to defer feature work that was identified as nice-to-have. This was another thing we did during planning: we prioritized work into three buckets — must have for the release, hope to have, and nice to have (mainly cosmetic changes and low-risk bugs). This gave us a pre-negotiated way to easily shift features as the release progressed.
By actively tracking and managing the intake, we were able to steer the release and deliver within weeks of the projected date. Not great, but better than our previous releases. Dokken was a particularly difficult release, as we undertook a lot of device support work, which can lead to nasty device compatibility problems.
The next release, named Eno, presented some interesting characteristics. The intake was relatively high throughout the release, but the team was also maintaining a higher velocity. This is a good example of the team achieving a good level of agility: change is being introduced throughout the release, and the team is well prepared to tackle it. Also notice that the spike in intake due to feedback from the release candidate has a corresponding increase in team velocity. This is due to the shortening of the feedback loop. With proper ownership, the developers are able to respond in real time to issues as they are discovered.
Notice how the burn down trend converges almost linearly. This means the team is achieving a sustainable pace, leading to a more predictable ship date.
Achieving High Velocity
Our latest release, Fugazi, is another example of a successful cycle. This was the shortest release cycle the team ever undertook, with only 4 weeks of planned development work and 3 weeks of QA. In order to hold our original release date, we had to defer some lower-priority work in iteration 3. Despite the shorter release period, the team actually completed more points per iteration than in any other release. We maintained an unprecedentedly high velocity, which in turn allowed a high level of intake to be absorbed, so all the planned must-have features could be kept in the release while still hitting our original target date.
In the last part of the series, we'll cover the tools we created to make this tracking easier, and how they are used on a daily basis to help us steer the release. Stay tuned.