Wednesday, December 14, 2011

Before the First Scrum: Velocity

Before an organisation adopts Scrum, it is most likely following a waterfall paradigm and using Function Points, Lines of Code (LOC), analogy, and similar techniques to estimate software in hours per task. It probably also uses Earned Value or Microsoft Project to measure progress and, ultimately, profitability. In Scrum, story points are generally the estimation mechanism, and Velocity is used to measure progress and, indirectly, profitability.

Velocity In Hours

If the Scrum development team chooses to use hours as its estimation mechanism, then velocity is easily arrived at: (number of team members * effective hours/day) * number of days in the sprint. Using hours has a certain appeal to teams unfamiliar with Scrum and story points, but it implies an estimation precision that simply doesn't (and never did) exist. Another issue with using hours is that the 'User Stories' can read more like the waterfall-ish requirements that preceded them and be of a technical nature. Although these may capture the basic functionality, they may fail to grasp the end users' point of view, give little or no hint of usability, and tend to read like development tasks, telling the development team how to implement the feature.
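For completeness, here is that capacity arithmetic as a tiny sketch (the team size, effective hours, and sprint length below are made-up numbers):

```python
# Hours-based "velocity" is simply the team's capacity for one sprint.
team_members = 6
effective_hours_per_day = 5.5   # after meetings, email, support calls, etc.
sprint_days = 10                # a two-week sprint

capacity_hours = team_members * effective_hours_per_day * sprint_days
print(f"Sprint capacity: {capacity_hours:.0f} hours")   # 330 hours
```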

Unlike hours, story points make no pretence of precision; they are, at best, only an approximation of the effort required. The Scrum Master should steer the team toward using story points as soon as practical.

Velocity In Story Points

Prior to the first sprint, have the development team define a reference user story. A reference user story is one where the entire development team can reach agreement on the effort involved in developing the feature, the complexity of developing it, and the risk inherent in it. This can be one of the user stories the product owner has written for the team, or it can be some small bit of previously implemented functionality that is familiar to the team. In either case, the reference story is usually something that can be completed by the whole team within a day or two. Once the reference user story is established, give it 2 story points. All other user stories are then estimated by comparison to this 2-point reference story. A story assigned a 4 should be about twice the effort of the 2-point reference story; a story assigned a 1 is about half the effort. The actual length of time eventually becomes unimportant. Planning Poker is probably the best estimation technique to use when estimating user stories.

All stories considered for the first sprint are estimated by comparison to the reference story. The team selects and estimates stories they believe can be completed to the satisfaction of the team's Definition of Done in the first sprint (see Definition of Done: How to Ensure Compliance). The total count of story points becomes the team's initial estimated velocity.
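As a minimal sketch of how that initial number falls out of the first planning session (the story names and point values below are invented for illustration):

```python
# Stories estimated with Planning Poker relative to the 2-point reference story.
backlog_estimates = {
    "Log in with email and password": 2,          # about the same as the reference
    "Reset a forgotten password": 1,              # roughly half the reference
    "Stay signed in between sessions": 2,
    "Lock the account after 5 failed logins": 4,  # roughly twice the reference
}

# Stories the team believes it can finish to its Definition of Done in sprint 1.
committed = [
    "Log in with email and password",
    "Reset a forgotten password",
    "Lock the account after 5 failed logins",
]

initial_velocity = sum(backlog_estimates[story] for story in committed)
print(f"Initial estimated velocity: {initial_velocity} story points")  # 7
```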

Velocity Is For The Scrum Team

Once the team has arrived at an estimated velocity, the Scrum Master can then begin using this as a KPI for the team. However, care should be taken to ensure that velocity is not used to compare teams (see Velocity as a KPI in Scrum). The Scrum Master can track the team's velocity and, over time, measure the team's performance (see Increasing Velocity vs. Stable Velocity).


Saturday, November 12, 2011

Before the First Scrum: Management

You're a manager, a process person, a developer, or a product manager, and your business success has been OK, but you think it could be [much] better if your development strategy switched from waterfall to Scrum. How will you begin the transition? This entry provides some advice on what needs to happen to convince a reluctant management team of the advantages of adopting the Scrum framework.

First Steps For Management

You've heard it over and over again that management needs to support the philosophy and practical applications of Scrum in order for it to work and be successful. In my situation, the R&D Manager and Product Management Manager were both experienced with Agile and were eager to experiment with Scrum to determine if the results would be better than the waterfall methodology then followed. The key point was that it was an experiment and not a full commitment; Agile would have to prove itself, and that proof would be measured in the quality of the product and the speed at which the delivery was made. The business itself was not fussed about the means but was very keen on the results. Having the two department managers directly responsible for new product development supporting Scrum was, I believe, crucial to our ultimate success in adopting Scrum.

But what if there's no management support or, worse, opposition to change, i.e., to adopting Scrum?

I worked at a defence company in the early 80's, and the company's policy was that no one would be hired as an engineer unless they had a 4-year college degree. My manager wanted to bring in someone as an engineer but without the necessary degree. The manager ended up writing a memo (remember those?) and eventually got the person in as an engineer. He later told me that the length of the memo was directly related to the hurdles before him, and getting the person hired as an engineer required an eight-page memo. The point, of course, is that the bigger the obstacle blocking your goal, the bigger the effort needed to make your case for reaching that goal. To convince management that Scrum is a better alternative to current development methodologies, usually waterfall, you need to make the business case that Scrum can improve both quality and productivity; that is, get better products out to the paying customers more quickly.

You must sell the idea of Scrum to management starting on the first day and every day following, without letup. Management needs to learn that through Scrum there's higher productivity, greater innovation, quick reaction to customer or competitive changes, high visibility of progress (or lack of progress), higher product quality, and more potential evaluation deliveries to customers after each sprint. At a training session given by Jeff Sutherland, he stated that even if you do nothing else but have a daily scrum meeting, you should see a 20-30% increase in productivity. Jeff Sutherland has lots of practical information and statistics that can be used to help convince your management that Scrum can help your business. There are also many, many resources on the web with anecdotes and other stories that can help bolster your arguments for Scrum adoption. You will need to understand exactly what management's opposition to Scrum is and address those issues directly.

I also think that trialling Scrum is a better approach than trying to completely change the way things are done all at once. By trialling, management is more likely to allow a single team to adopt Scrum without, in their minds, risking everything. When my company first trialled Scrum, only one team was formed. After the team was able to deliver a new product in only three months, a feat that had never happened before with waterfall, the CEO asked why all of development wasn't doing Scrum. 

If you have any stories on how you succeeded in winning over management to use Scrum, please let me hear from you.

Tuesday, October 11, 2011

The Self-organizing Agile Team's Scope of Power and Authority

The means for the Development Team to fully realize the power and authority granted them by Agile is through self-organization.

From the Agile Manifesto:
  • “The best architectures, requirements, and designs emerge from self-organizing teams.”
The Scrum Guide defines the power and authority of the team in terms of self-organization:
  • defining the Scrum Team: “Scrum Teams are self-organizing and cross-functional. Self-organizing teams choose how best to accomplish their work, rather than being directed by others outside the team. Cross-functional teams have all competencies needed to accomplish the work without depending on others not part of the team.”
  • defining the Development Team role: “They [Development Team] are self-organizing. No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality.”
  • defining the Scrum Master role: "The Scrum Master serves the Development Team in several ways, including: Coaching the Development Team in self-organization and cross-functionality."
  • defining the Sprint Planning Meeting: "The Development Team self-organizes to undertake the work in the Sprint Backlog, both during the Sprint Planning Meeting and as needed throughout the Sprint...By the end of the Sprint Planning meeting, the Development Team should be able to explain to the Product Owner and Scrum Master how it intends to work as a self-organizing team to accomplish the Sprint Goal and create the anticipated Increment."
  • defining the Daily Scrum: "Every day, the Development Team should be able to explain to the Product Owner and Scrum Master how it intends to work together as a self-organizing team to accomplish the goal and create the anticipated increment in the remainder of the Sprint."
It's clear the Scrum Guide expects the Scrum Team, and more specifically the Development Team, to be a self-organizing body. However, the Scrum Guide gives no hints on how self-organization is achieved and maintained.

What is a Self-organized Team?

In his book, "The Leader's Guide to Radical Management: Reinventing the Workplace for the 21st Century", Stephen Denning defines a self-organizing team as a team that can "decide how the work will be organized, who will do what, and in what order; select leaders and spokespersons; and can decide who should be a member of the team" *.

Qualities of a Self-organizing Team

Self-organizing teams are given the freedom to find the best solution to the problems given them. The Development Team is in full control of finding a solution to a given business problem and is not directed to a specific solution from outside the team. This means that although outsiders may provide solicited or unsolicited advice to the team, the team has the freedom to ignore that advice and follow their own instincts and intuition.

Self-organizing teams can react to problems quickly. Instead of waiting for a manager’s approval, the team has the authority to take necessary actions by itself. The team 'owns' the solution and Scrum provides the team all the latitude necessary to achieve it.

Self-organizing teams are cross-functional. The Development Team does not have dependencies on others outside the team to accomplish their work. The team doesn't have divisions of work based on specialized skills within the team, e.g. the whole team is responsible for testing, not just the "tester". By being cross-functional, the team has the ability to organize themselves to respond to changes in projects, customers, technologies, and requirements. Being a cross-functional team does not mean team members have equal knowledge and talent. Creativity and innovation largely result when team members of differing knowledge and talents debate possible solutions from their respective viewpoints.

Self-organizing teams value self-improvement and continuous learning. The team actively pursues self-improvement to find better and innovative ways to work smarter and to satisfy the customer. For example, in order for the team to be cross-functional, they need to learn from each other; the "tester" on the team teaches all the team members about testing. Furthermore, teams need to learn about the customer's needs and get frequent feedback from customers. Self-organizing teams are given the freedom to choose their own team goals every iteration to facilitate and implement self-improvement actions.

Self-organizing teams have a common focus. The self-organized team collectively understands the goal of the iteration, release or project and has a shared understanding of the solution. The team needs a compelling goal; one which the team feels is worthwhile to achieve. The team needs to reinforce the understanding of the goal throughout the iteration. This is done through the inspect-and-adapt Daily Stand-up meeting.

Self-organizing teams have mutual trust and respect. Self-organizing team members are open and honest, are able to voice ideas and opinions without fear of ridicule or rejection, and know that others will accomplish the tasks assigned to them. Self-organizing team members value other team members for their unique contributions and opinions.

The Role Of Managers and Customers On The Self-organizing Team

In a recent InfoQ interview, Rashina Hoda says two environmental factors need to be in place to enable self-organization to emerge. These are:
  1. Senior management at the teams' organization must be able to provide freedom to the teams so that they can self-organize themselves.
  2. Customers must support the teams by being actively involved in the development process through providing regular requirements, clarifications, and feedback as required.
Senior management

"Self-organizing Agile teams ... require organization structures that are informal in practice, where the boundaries of hierarchy do not prohibit free flow of information and feedback. In an informal organizational structure, the senior management is directly accessible by all employees (maintaining an ‘opendoors’ policy), and accepts feedback—both positive and negative."
- from "Supporting Self-Organizing Agile Teams What’s Senior Management Got To Do With It?" by Rashina Hoda, James Noble, and Stuart Marshall, 

The Customer's (or customer surrogate's) task is to set the priorities, decide when each requirement is satisfied, and write stories and acceptance/functional tests together with the team. Self-organizing teams are responsible for collaborating effectively and often with the customers to elicit and understand the product requirements. (See "Agile Undercover: When Customers Don’t Collaborate" for more on how customers can help or hurt a team's ability to self-organize.)

Roles Facilitating Self-Organizing Agile Teams

In the paper, "Organizing Self-Organizing Teams", Rashina Hoda, James Noble, and Stuart Marshall concluded that Scrum Teams adopt 6 roles to facilitate their team’s self-organization (see the table below). Although any Scrum Team member could take on any of the 6 roles, it was generally the Scrum Master (agile coach) and Product Owner (business analyst) who took them on. If the Scrum Team doesn't have the capacity or ability to fulfill these roles, or someone outside the Scrum Team is occupying one of them, it may indicate the team is not yet self-organizing.


Role | Definition | Played by
Mentor | Guides and supports the team initially, helps them become confident in their use of Agile methods, and encourages continued adherence to Agile practices. | Agile Coach
Coordinator | Acts as a representative of the self-organizing Agile team to coordinate communication and change requests from customers. | Developer, Business Analyst
Translator | Understands and translates between the business language used by customers and the technical terminology used by the team, in an effort to improve communication between the two. | Business Analyst
Champion | Champions the Agile cause with the senior management within their organization in order to gain support for the self-organizing Agile team. | Agile Coach
Promoter | Promotes Agile with customers and attempts to secure their involvement and collaboration to support the efficient functioning of the self-organizing Agile team. | Agile Coach
Terminator | Identifies team members threatening the proper functioning and productivity of the self-organizing Agile team and engages senior management support in removing such members from the team. | Agile Coach



*Although Stephen Denning states that the self-organizing team "can decide who should be a member of the team", I would suggest looking at Mike Cohn's removing team members blog and the role of leaders on a self-organizing team blog. There may be times when the Scrum Master or management juggle the team make-up to help foster self-organizational improvements.

Friday, September 30, 2011

Increasing Velocity vs. Stable Velocity

The question often comes up whether a Development Team can reach a point where their sprint velocity becomes stable. I think that velocity may tend to stabilize, but it won't be truly stable until the Development Team executes its sprint perfectly and nothing new exists for the team to experiment with. Here are some ideas that I believe can help the Scrum Team achieve a stable velocity.

If the Development Team is doing:
  • Product Backlog grooming (stories and acceptance criteria are understood and broken down to an appropriate size),
  • Small User Stories (each story taking 1 or 2 days depending on how many people work on it),
  • Defining acceptance tests before the start of the sprint (understand the acceptance criteria, the scope of testing, and the most probable approach to testing),
  • No "unplanned" work in the sprint (e.g. Support calls),
  • No unnecessary "manual" work (identify and analyze manual work to see if it's cost effective to automate),
  • Pair Programming and/or following a Coding Standard for code review,
  • Test Driven Development (TDD),
  • Acceptance Test Driven Development (ATDD),
  • Continuous integration (new/modified tested code is integrated daily for automatic component/functional/system/regression testing),
  • Code Refactoring (to improve the code quality and the overall design),
  • Automated Unit Tests,
  • Automated Integration/Component Tests,
  • Automated Functional Tests,
  • Automated User Story Acceptance Tests,
  • Automatic Regression Tests (Team is probably not doing good regression testing unless these are automated).
And the Development Team is:
  • Cross-functional (has the necessary expertise among its members to take a user story from its initial concept to a fully deployed and tested piece of software within one sprint),
  • Working "normal" work hours (Development Team is working at a "sustainable" pace),
  • Happy as individuals and as a team (no one "dreads" coming to work),
  • Not interrupted or distracted by anyone or anything outside the team (the Scrum Master stands guard if necessary),
  • Raising impediments to the Scrum Master quickly (hopefully before there's any impact),
  • Not distracted by email (turn off the mail server),
  • Not distracted by the phone (have a phone available to the team in a separate room),
  • Not adding to its Technical Debt during sprints (adopt a zero introduced defects as part of the definition of done),
  • A cohesive entity (has a shared view on the development objectives, the design ideas in the code and what makes for good code),
  • Self-organizing (always on the lookout for ways to improve).
And the Scrum Master and Product Owner are:
  • Shielding the Development Team from "unplanned" work,
  • Shielding the Development Team from outside distractions,
  • Removing the Development Team's impediments in a timely manner.

If all the items above are being done then I would say that the Team's velocity will probably be fairly stable.

However, the Scrum Master is responsible for helping the Development Team to improve even when it appears that the Team is doing everything as efficiently as it can. From the Scrum Guide: “The Scrum Master encourages the Scrum Team to improve, within the Scrum process framework, its development process and practices to make it more effective and enjoyable for the next Sprint.” The point being that trying to improve the team's velocity should be the norm – not standing still. To achieve improvements, the Scrum Master should ask the question:  Is the Scrum Team trying new things every sprint to help improve themselves? The Scrum Team should be able to identify how it can improve its velocity by working smarter, not harder.

Thursday, September 22, 2011

Definition of Done: How to Ensure Compliance

Near the end of the Scrum Guide, there’s a section on the Definition of “Done”:

When the Product Backlog item or an Increment is described as “Done”, everyone must understand what “Done” means. Although this varies significantly per Scrum Team, members must have a shared understanding of what it means for work to be complete, to ensure transparency. This is the “Definition of Done” for the Scrum Team and is used to assess when work is complete on the product Increment.

Who is everyone? Everyone includes the Product Owner, Development Team, and Scrum Master, although key customers, sales, product management, product support, and managers should also be aware of what the team understands 'Done' to mean.
 
The Scrum Master ensures that a definition of done exists for the scrum team.

How to Define 'Done' for Your Scrum Team

Chris Sterling of Sterling Barton (www.gettingagile.com) has written an excellent paper on "Building a Definition of Done", in which he identifies a four-step process for arriving at a definition of done. These are:
  1. Brainstorm – write down, one artifact per post-it note, all artifacts essential for delivering on a feature [user story], iteration/sprint, and release
  2. Identify Non-Iteration/Sprint Artifacts – identify artifacts which cannot currently be done every iteration/sprint
  3. Capture Impediments – reflect on each artifact not currently done every iteration/sprint and identify the obstacle to its inclusion in an iteration/sprint deliverable
  4. Commitment – get a consensus on the Definition of Done; those items which are able to be done for a feature [user story] and iteration/sprint
Let’s assume you have your definition of ‘Done’ documented in a team charter or some other means and have it properly displayed on your scrum board for all to see.
 
Now What?
 
Complying With The Definition of 'Done'
 
In the sprint planning meeting, the Scrum Master should get the Scrum Team to re-confirm their commitment to the team’s definition of ‘Done’. This ensures that everyone on the Scrum Team, which may include new people to the team, is aware and understands what ‘Done’ means. It’s possible that the definition of ‘Done’ had changed during the last sprint’s Retrospective, maybe to tighten up on the quality goals. The Scrum Master goes through the definition of ‘Done’ with the Scrum Team and everyone re-affirms their commitment to it.
 
During the sprint, everyone on the Scrum Team is responsible for following and adhering to the definition of ‘Done’. It is the Scrum Master’s responsibility to ensure the other members of the Scrum Team (Developers and Product Owner) have been coached and trained to do this. If the Development Team and/or Product Owner are not following the definition of ‘Done’, we hold the Scrum Master accountable.
 
What should happen if the Scrum Team cannot follow one of their definitions of ‘Done’ to the letter? Let’s assume for the moment that a Scrum Team has the following definition of ‘Done’ for a user story:
  • Acceptance tests written covering all acceptance criteria.
  • Acceptance tests and regression tests run and pass.
  • Acceptance tests results reviewed by the Product Owner.
  • No known introduced bugs.
  • Product demo’ed with Product Owner.
  • Product documentation, technical and user, updated.
The Scrum Team works hard following the definition of ‘Done’ and is successful except that they’ve introduced some new bugs, violating the ‘no introduced bugs’ clause of the definition, and the sprint ends today. What should happen? This is where the Scrum Master might need to hold back the torrent of people (managers, sales, product management), all of whom want those stories now! Strictly speaking, the user story or stories that introduced the new bugs cannot be considered ‘Done’, and those stories are not demo’ed in the sprint review. However, during the sprint review when discussing what didn’t get ‘Done’, the offending user story bugs might be discussed, and stakeholders and Scrum Team alike may determine that the bugs are of little importance to the end-users. (Why this didn’t happen before the end of the sprint is another matter that the Scrum Team would analyse in the Retrospective.) The user story would be added to the next sprint to be integrated into the ‘Release’ branch, probably re-tested, and the insignificant bugs closed as ‘not a problem’. It’s most likely that the Product Owner will have the user story ‘Done’ within 24 hours of the sprint review and can then release it at their leisure. If the bug is significant but the stakeholders and Scrum Team determine that the bug is of little importance to the end-users now, the bug could become a new user story to be addressed later. (You'll note that this second scenario actually adds to technical debt.)
 
The whole point here is that the definition of ‘Done’ should stand uncompromised throughout the sprint. If there’s a problem with the definition of ‘Done’ that came out during the sprint, it should be addressed during the sprint Retrospective and any changes to the definition made and agreed to there. However, the Scrum Team should heed the words in the Scrum Guide which say, “As Scrum Teams mature, it is expected that their Definition of ‘Done’ will expand to include more stringent criteria for higher quality”. This means that the definition of 'Done' should be strengthened rather than weakened as time and sprints go by.

Sunday, June 26, 2011

Scrum Metrics

There are lots of metrics that could be collected to assess a software development team's competency, success, inventiveness, quality, and quantity of work. To get an idea of what metrics one could collect, take a look at the 185 practices in CMMI for Development. Agile, unlike CMMI, doesn't require "evidence" that engineering practices are being followed and therefore has few metrics that a Scrum Team may collect to measure the success of each sprint. Below are 9 metrics that a Scrum Team might consider using.
  1. Actual Stories Completed vs. Committed Stories
  2. Technical Debt Management
  3. Team Velocity
  4. Quality Delivered to Customer
  5. Team Enthusiasm
  6. Retrospective Process Improvement
  7. Communication
  8. Scrum Team's Adherence to Scrum Rules & Engineering Practices
  9. Development Team's Understanding of Sprint Scope and Goal
To answer the question of who should be collecting metrics and measuring the Scrum Team's success, consider who in Scrum is responsible for the team's success. In describing the role of ScrumMaster, the Scrum Guide states, "The ScrumMaster teaches the Scrum Team by coaching and by leading it to be more productive and produce higher quality products." Clearly it's the responsibility of the ScrumMaster to measure the success of the team if only to increase the team's productivity and product quality. The ScrumMaster could use a spider chart as shown below to track the Scrum Team.

Using a spider chart is an easy way for the ScrumMaster to track and compare results from sprint to sprint.
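A minimal sketch of how such a chart could be produced (assuming matplotlib is installed; the metric scores below are invented for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

metrics = ["Stories Done vs Committed", "Technical Debt", "Velocity",
           "Quality to Customer", "Enthusiasm", "Retrospective Improvement",
           "Communication", "Adherence to Scrum", "Scope & Goal"]
sprints = {"Sprint 8": [7, 6, 8, 7, 5, 6, 7, 8, 7],   # illustrative scores out of 10
           "Sprint 9": [8, 7, 8, 8, 6, 7, 7, 8, 8]}

angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
angles += angles[:1]                      # repeat the first angle to close the polygon

ax = plt.subplot(polar=True)
for label, scores in sprints.items():
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(metrics, fontsize=7)
ax.set_ylim(0, 10)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.show()
```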

Below are short descriptions of each metric, how it can be measured, and some of the issues that could lead to a low score. Any additional metrics or comments on these 9 would be most welcome.

1.  Actual Stories Completed vs. Committed Stories
This metric is used to measure the development team's ability to know and understand their capabilities. The measure is taken by comparing the number of stories committed to in sprint planning and the number of stories identified in the sprint review as completed. A low score may indicate any of the following may need special attention:
  • Team does not have a reference story to make relative estimates (see Team Velocity),
  • Not every Team member understands the reference story (see Team Velocity),
  • Product Owner isn't providing enough information to the development team (see Communication, Development Team's Understanding of Sprint Scope and Goal),
  • Requirements scope creep (see Development Team's Understanding of Sprint Scope and Goal),
  • Team is being disrupted (see Scrum Team's Adherence to Scrum Rules & Engineering Practices, Team Enthusiasm).
Even if the Team completes the stories they committed to, there are a few things the ScrumMaster should be looking for:
  • Team under-commits and works at a slower than 'normal' pace (see Team Velocity),
  • Team has one or more 'hero' (see Team Enthusiasm),
  • Story commitment is met but the product is 'buggy' (see Technical Debt Management, Quality Delivered to Customer).

2.  Technical Debt Management
This metric measures the team's overall technical indebtedness; known problems and issues being delivered at the end of the sprint. This is usually counted using bugs but could also be deliverables such as training material, user documentation, delivery media, and others. A low score may indicate any of the following may need special attention:
  • Customer is not considered during sprint (see Quality Delivered to Customer),
  • Weak or no 'Definition of Done' which includes zero introduced bugs (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Management interfering with the Team, forcing delivery before the Team is ready (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Team working multiple stories such that team compromises quality to complete stories as end of sprint nears (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Team is creating bugs that reflect their opinion on how things should work rather than listening to the Customer (see Scrum Team's Adherence to Scrum Rules & Engineering Practices, Quality Delivered to Customer).
Even if technical debt didn't increase, here are a few things the ScrumMaster should be looking for:
  • Team is not documenting problems found or fixed (see Communication, Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Team is not engaging with Customer/Product Owner during user acceptance testing (see Communication, Scrum Team's Adherence to Scrum Rules & Engineering Practices)

3.  Team Velocity
The velocity metric measures the consistency of the team's estimates from sprint to sprint. Feature story estimates are made relative to estimates of other feature stories, usually using story points. The measure is made by comparing the story points completed in this sprint with the points completed in the previous sprint, within a tolerance of +/- 10% (the tolerance used by the Nokia Test). A low score may indicate any of the following may need special attention:
  • Product Owner isn't providing enough information to the development team (see Communication, Development Team's Understanding of Sprint Scope and Goal),
  • Team size is changing between sprints (generally, the core team must be consistent; allowances should be made for absences e.g. vacation, sick),
  • Team doesn't understand the scope of work at the start of the sprint (see Development Team's Understanding of Sprint Scope and Goal, Communication, Actual Stories Completed vs. Committed Stories),
  • Team is being disrupted (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Team's reference feature stories are not applicable to the current release  (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Team is doing very short release cycles (< 3 sprints) or doing maintenance work (the Team might consider Kanban or XP over Scrum under these circumstances).
Even if the Team seems to have a consistent velocity, there are a few things the ScrumMaster should be looking for:
  • Team under-commits and works at a slower than 'normal' pace (see Actual Stories Completed vs. Committed Stories),
  • Team has one or more 'hero' (see Team Enthusiasm),
  • Velocity is consistent but the product is 'buggy' (see Technical Debt Management, Quality Delivered to Customer).
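A small sketch of that sprint-over-sprint comparison (the completed point totals below are invented):

```python
# Flag sprint-to-sprint changes in completed story points that exceed the
# +/- 10% tolerance mentioned above (the Nokia Test's tolerance).
completed_points = [20, 21, 22, 17, 22]   # points completed in sprints 1-5
tolerance = 0.10

for sprint, (previous, current) in enumerate(zip(completed_points,
                                                 completed_points[1:]), start=2):
    change = (current - previous) / previous
    flag = "  <-- worth a look in the retrospective" if abs(change) > tolerance else ""
    print(f"Sprint {sprint}: {previous} -> {current} points ({change:+.0%}){flag}")
```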

4.  Quality Delivered to Customer
In most companies, delivering a quality product to the customer and keeping the customer happy is your only reason for being. Scrum attempts to have the outcome of every sprint provide value to the customer, i.e. a 'potentially releasable piece of the product'. This is not necessarily a product that is released but a product that can be shown to the customers as a work in progress, to solicit the customers' comments, opinions, and suggestions, i.e. are we building the product the customer needs? This can best be measured by surveying the customers and stakeholders. Below is a spider chart from a survey of customers made after the Sprint Review.

The ScrumMaster can document all customer or stakeholder opinions using an easy to read spider chart.

A low score may indicate any of the following may need special attention:
  • Product Owner doesn't really understand what the customer wants and needs (see Communication),
  • The Product Owner is not adequately communicating the customer needs to the development team (see Communication, Development Team's Understanding of Sprint Scope and Goal),
  • Customers are not involved in the development of stories (see Communication),
  • Customer is not involved with defining story acceptance criteria (see Communication)
  • Bugs are delivered with the product (see Technical Debt Management).
Even if the customer is satisfied with the sprint product, there are a couple things the ScrumMaster should be looking for:
  • The product is 'buggy' (see Technical Debt Management),
  • Not all customers and stakeholders are invited, show up, or participate in the sprint review (see Scrum Team Adherence to Scrum Rules & Engineering Practices, Communication).

5.  Team Enthusiasm

Enthusiasm is a major component of a successful Scrum Team. If the team doesn't have it, no process or methodology is going to help. There are many early warning signs that, left unchecked, can lead to a loss of enthusiasm. It's up to the ScrumMaster to recognize these symptoms and take appropriate action for each particular circumstance if the ScrumMaster is to keep the Team productive and happy. Measuring team enthusiasm can be very subjective and is best accomplished through observation during the various sprint meetings: planning, review, and the daily stand-up. However, the simplest approach is to ask the Scrum Team directly: "Do you feel happy?" and "How motivated do you feel?" A low score may indicate any of the following may need special attention:
  • ScrumMaster is taking too long to remove impediments,
  • The number of impediments during the sprint was high,
  • The team is being interrupted by managers or the Product Owner (see Actual Stories Completed vs. Committed Stories),
  • Some team members can't contribute in certain product areas i.e. lack of cross-functional training,
  • Team members are working long hours i.e. not working at a sustainable pace (see Actual Stories Completed vs. Committed Stories),
  • Internal conflicts between Product Owner and Team,
  • Internal conflicts between Team members,
  • Team is repeating same mistakes (see Retrospective Process Improvement),
  • Team is continually grumbling, complaining, or bickering,
  • Team member(s) don't have passion for their work,
  • The Team is not being creative or innovative.

6.  Retrospective Process Improvement

The Retrospective Process Improvement metric measures the Scrum Team's ability to revise, within the Scrum process framework and practices, its development process to make it more effective and enjoyable for the next sprint. From the Scrum Guide, "The purpose of the [Sprint] Retrospective is to inspect how the last Sprint went in regards to people, relationships, process and tools. The inspection should identify and prioritize the major items that went well and those items that-if done differently-could make things even better. These include Scrum Team composition, meeting arrangements, tools, definition of 'done,' methods of communication, and processes for turning Product Backlog items into something 'done.'" This can be measured using the count of retrospective items identified, the number of retrospective items the Team had committed to addressing in the sprint, and the number of items worked/resolved by the end of the sprint. A low score may indicate any of the following:
  • Team doesn't identify any items for improvement due to feeling there's nothing they can do or it's out of their control i.e. Team not self-organizing and managing,
  • During sprint, Product Owner, ScrumMaster, or management discourages work on self-improvement at the expense of feature stories,
  • During the sprint, the Team discourages work on self-improvement at the expense of feature stories,
  • Team doesn't look inward at their own performance and environment during the retrospective,
  • Team is not acknowledging or addressing repeated mistakes (see Team Enthusiasm).

7.  Communication

The Communication metric is a subjective measure of how well the Team, Product Owner, ScrumMaster, Customers, and stakeholders are conducting open and honest communications. The ScrumMaster, while observing and listening to the Team, Product Owner, Customers, and other stakeholders throughout the sprint, will get indications and clues as to how well everyone is communicating. A low score may indicate any of the following:
  • The Customer is not actively involved with feature story development (see Quality Delivered  to Customer),
  • The Customer is not providing acceptance criteria for stories (see Quality Delivered  to Customer, Technical Debt Management),
  • The Team is not providing the acceptance tests to the Customer for review and comment (see Quality Delivered  to Customer, Technical Debt Management),
  • The Team and Customer are not running acceptance tests together (see Quality Delivered  to Customer, Technical Debt Management),
  • The Customer(s) and other stakeholders are not invited/present at the Sprint Review (see Quality Delivered  to Customer),
  • The ScrumMaster isn't ensuring Customers are surveyed after each Sprint Review (see Quality Delivered  to Customer),
  • The Product Owner isn't available some portion of each day for the scrum team to collaborate with (see Scrum Team Adherence to Scrum Rules & Engineering Practices),
  • The Product Owner stories do not address features from the Customers' perspective e.g. stories are implementation specific  (see Scrum Team Adherence to Scrum Rules & Engineering Practices, Quality Delivered  to Customer),
  • Product Owner isn't providing information on the Customer 'pain' or needs to the Team (see Actual Stories Completed vs. Committed Stories),
  • Team is not documenting problems found or fixed (see Technical Debt Management),
  • Product Owner doesn't really understand what the customer wants and needs (see Quality Delivered  to Customer),
  • The Product Owner is not adequately communicating the customer needs to the development team  (see Quality Delivered  to Customer, Actual Stories Completed vs. Committed Stories)
  • Team is not conducting daily meeting  (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Scrum Team is not conducting release planning   (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Scrum Team is not conducting sprint planning  (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Scrum Team is not conducting a sprint review  (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Scrum Team is not conducting a retrospective  (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Scrum Team is not visibly displaying release burndown chart  (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Scrum Team is not visibly displaying sprint burndown chart  (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Scrum Team is not visibly displaying stories and acceptance criteria  (see Scrum Team's Adherence to Scrum Rules & Engineering Practices),
  • Scrum Team is not visibly displaying non-functional requirements that apply to the entire sprint or release (see Scrum Team's Adherence to Scrum Rules & Engineering Practices).

8.  Scrum Team's Adherence to Scrum Rules & Engineering Practices

The rules for scrum are defined in the Scrum Guide by Ken Schwaber and Jeff Sutherland. Although scrum doesn't prescribe engineering practices as does XP, most companies will have several of these defined for software engineering projects. The ScrumMaster is responsible for holding the Scrum Team accountable to these rules and any engineering practices defined. This metric can be measured by counting the infractions that occur during each sprint. A low score may indicate any of the following:
  • Management or Product Owner interfering with the Team, forcing delivery before the Team is ready (see Technical Debt Management),
  • Weak or no 'Definition of Done' which includes zero introduced bugs (see Technical Debt Management),
  • Customer is not considered during sprint (see Technical Debt Management),
  • Team is creating bugs that reflect their opinion on how things should work rather than listening to the Customer (see Technical Debt Management),
  • Team is not documenting problems found or fixed (see Technical Debt Management)
  • Team is not engaging with Customer/Product Owner during user acceptance testing (see Technical Debt Management),
  • Team is being disrupted (see Actual Stories Completed vs. Committed Stories, Team Velocity),
  • ScrumMaster is not protecting Team from disruptions,
  • Team's reference feature stories are not applicable to the current release (see Team Velocity)
  • Not all customers and stakeholders are invited, show up, or participate in the sprint review (see Quality Delivered to Customer),
  • ScrumMaster is not ensuring that the Scrum Team adheres to Scrum values, practices, and rules,
  • ScrumMaster is not ensuring that the Scrum Team adheres to company, departmental, and Scrum Team engineering rules and practices,
  • ScrumMaster is not teaching the Scrum Team by coaching and by leading it to be more productive and produce higher quality products,
  • ScrumMaster is not helping the Scrum Team understand and use self-organization and cross-functionality,
  • ScrumMaster is not ensuring the Scrum Team has a workable 'Definition of Done' for stories,
  • ScrumMaster is not ensuring the Team has a daily meeting,
  • ScrumMaster is not ensuring the release burndown chart is prominently displayed,
  • ScrumMaster is not ensuring the sprint burndown chart is prominently displayed,
  • ScrumMaster is not ensuring the sprint stories and acceptance criteria are prominently displayed,
  • ScrumMaster is not ensuring sprint planning meeting is held,
  • ScrumMaster is not ensuring the sprint review meeting is held,
  • ScrumMaster is not ensuring the sprint retrospective meeting is held.

9.  Development Team's Understanding of Sprint Scope and Goal

The Team Understanding of Sprint Scope and Goal metric is a subjective measure of how well the Customer, Product Owner, and Team interact, understand, and focus on the sprint stories and goal. The sprint goal is broad and states the general intention of the sprint. The goal is usually an abstraction and is not testable as such, but it is aligned with the intended value to be delivered to the Customer at the end of the sprint. The sprint scope, or objective, is defined in the acceptance criteria of the stories. The stories and acceptance criteria are 'placeholders for a conversation' and will not be incredibly detailed. The Product Owner and Customer define the acceptance criteria, but the Product Owner alone defines the sprint goal. The ScrumMaster can use the scoring from "Actual Stories Completed vs. Committed Stories" and "Team Velocity" as indications of problems understanding the stories and goal, but this is best determined through day-to-day contact and interaction with the Scrum Team. A low score may indicate any of the following:
  • Product Owner isn't providing enough information to the development team i.e. needs of the Customer (see Actual Stories Completed vs. Committed Stories, Team Velocity, Communication),
  • Product Owner is not writing a sprint goal,
  • Product Owner doesn't understand what an incremental development approach means,
  • Requirements scope creep: Product Owner is adding to the scope of the stories or adding additional acceptance criteria that were not agreed to during planning (see Actual Stories Completed vs. Committed Stories, Team Velocity, Communication),
  • Team doesn't understand the scope of work at the start of the sprint (see Team Velocity, Communication).

Tuesday, May 24, 2011

Velocity as A KPI In Scrum

A key performance indicator (KPI) is a measure of performance. Such measures are commonly used to help a team or organization define and evaluate how successful it is, typically in terms of progress towards its long-term goals. Velocity for a Scrum team is a key planning tool that describes the quantity of work a team can be expected to deliver at the end of a sprint. From this velocity, the product owner can do release planning to predict when a feature or multiple features can be delivered to within, say, a month. Jeff Sutherland said in his blog, "Story Points: Why are they better than hours?":

"The management metric for project delivery needs to be a unit of production. ... The metric of important is the number of story points the team can deliver per unit of calendar time. The points per sprint is the velocity."

It makes sense that a product owner would use velocity as a means to predict the amount of work a team can deliver in a sprint, assuming the measure (i.e. the story point) remains constant and the core development team doesn't change (a minimal release-planning sketch follows the list below). However, velocity is not and should not be the only metric used to measure the success of a Scrum team. Some other measures might include:

  • Cycle Time (or Lead Time) - The time from when a customer request comes into a process to the time when it has actually been delivered to the customer. A good way to measure the whole process, not only development efficiency. With this, you can check how good your flow is.
  • Rework - Things which come back to the process after they have been declared done and delivered, i.e. were reviewed during the sprint review. Because these are defects such as bugs and misunderstandings of customer requirements, this is a measure of quality and communication.
  • Customer Satisfaction - This may be from the company's customer satisfaction surveys but should also include internal sources e.g. Product Support, Sales, Marketing, and Product Management.

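As promised above, here is a minimal release-planning sketch using velocity (the backlog size, velocity range, and sprint length are invented):

```python
import math

# Remaining release backlog and the team's recent velocity range.
remaining_backlog_points = 120
velocity_low, velocity_high = 18, 22      # slowest and fastest recent sprints
sprint_length_weeks = 2

sprints_best = math.ceil(remaining_backlog_points / velocity_high)
sprints_worst = math.ceil(remaining_backlog_points / velocity_low)
print(f"Expected delivery in {sprints_best}-{sprints_worst} sprints "
      f"({sprints_best * sprint_length_weeks}-{sprints_worst * sprint_length_weeks} weeks)")
```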
Starting Off With Velocity

When a new Scrum team first starts working together, or when a seasoned Scrum team begins a new project, velocity will most likely oscillate, sometimes wildly. This can be traced to the many 'unknowns' that exist. For a new Scrum team, team members will take some time to acclimate themselves to Scrum, user stories, understanding each other, new technologies, and accepting the reality that they can't do, nor are they expected to do, precision estimates. For an experienced Scrum team on a new project, the unknowns are usually related to technology and the vagueness of the customer requirements. In either of these cases, using velocity as an indicator of when the team becomes 'predictable' can be very useful, especially to the product owner but also to the team when it comes time to commit to specific user stories in a sprint.

After a few sprints the team's velocity should stabilize and trend upward over time. But what if it doesn't stabilize? Is it because the user stories are 'bad'? The team members are changing? Or is there some other 'smell' getting in the way? If the velocity of the Scrum team is fluctuating beyond what is expected for that particular team (I would suggest no more than 15% after 3-5 sprints), the team should use their sprint retrospective to do some analysis, figure out the root cause, and identify actionable improvement measures that they can implement in the next sprint.

I would think a flat velocity is better than one trending down, but unless there's no room for improvement, i.e. the team is perfect in every way possible, the velocity trend should be up over time, and this is probably how the velocity KPI should be implemented.

The Scrum team should be managing their velocity to ensure its accuracy throughout the project. The Scrum team (product owner, developers, testers, scrum master) might use Mike Cohn's Velocity Calculator Tool, which uses the binomial distribution to calculate a 90% confidence interval around the median of 'n' sprint velocity values. The target a Scrum team may wish to grow toward is for the deviation from the mean (average) sprint velocity not to exceed 10%. This is well in line with question 6 of the Nokia Test: estimate error < 10%.
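I haven't reproduced Mike Cohn's tool here, but a minimal sketch of the underlying idea, assuming the standard distribution-free confidence interval for a median based on binomial order statistics, might look like this (the velocity values are invented):

```python
from math import comb

def median_confidence_interval(velocities, confidence=0.90):
    """Distribution-free confidence interval for the median velocity.

    Uses binomial order statistics: the number of observations below the true
    median follows a Binomial(n, 0.5) distribution, which gives the coverage
    of the interval between symmetric order statistics.
    """
    xs = sorted(velocities)
    n = len(xs)
    low, high = xs[0], xs[-1]            # widest possible interval
    for k in range(1, n // 2 + 1):
        coverage = sum(comb(n, i) for i in range(k, n - k + 1)) / 2 ** n
        if coverage < confidence:
            break                        # narrowing further would lose coverage
        low, high = xs[k - 1], xs[n - k]
    return low, high

sprint_velocities = [19, 22, 17, 21, 23, 20, 18, 24]
print(median_confidence_interval(sprint_velocities))   # (18, 23)
```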


Resistance to using velocity as a KPI

As with any changes to the development team and product owners, expect some resistance to having Scrum team KPIs including one on velocity. Team members may feel having a KPI that is based on team and not individual performance puts their current situation at risk. The scrum master and managers need to ensure the Scrum team understands that the team as a whole is greater than the sum of its parts and team KPIs measure the strength of the team. Some things that can help the Scrum team accept Team KPIs and a velocity KPI in particular are:
  • Allow the Scrum team to set the velocity goals. Usually no specific target velocity is mentioned although there may be words to the effect that the velocity will gradually increase over time. By allowing the Scrum team to set a velocity goal, they'll feel more empowered and in control. For example, a Scrum team doing 18 +/- 10% story points per sprint may target 20 story points per sprint.
  • Allow the Scrum team to remove early sprint velocities. When a team first begins using Scrum or when a new project is introduced to the team, sprint velocities may fluctuate greatly. The Scrum team should have the opportunity to remove aberrations to help 'smooth' out their velocity chart.
  • Make it clear to each Scrum team that velocities are not comparable between teams. Velocity is very much a localized measure to a specific Scrum team. In addition to different team members with different team 'personalities', projects typically possess unique characteristics in terms of estimating techniques, detail process, technology, customer involvement, etc. As a result, this can make organization-wide analysis very inaccurate.

When is using velocity as a team KPI not advised?

You might want to reconsider using velocity as a KPI in the following circumstances:
  • XP: Unlike Extreme Programming (XP), Scrum teams do not allow changes into their sprints; XP welcomes changes to the current iteration, which makes using velocity as a team KPI in XP impractical. XP teams would use velocity at the project or release level but not as a metric from iteration to iteration.
  • When the Scrum team is doing maintenance work (bug fixing): Unless the Scrum team is quite small, the team will rarely work one bug at a time. Teams doing full time maintenance work should probably look at Kanban rather than Scrum although the daily stand-up meeting should still happen.
  • When dissimilar projects are very short: If the Scrum team is working on dissimilar projects, i.e. projects where technology or product domain are not related, and they are only 1 - 3 sprints long, it's not likely that the velocity would ever stabilize or that the velocity established on the previous project would be applicable. However, very short projects might indicate that the product owner or company doesn't have any long-term objectives, especially in highly volatile markets. If this is the case, maybe XP or even Kanban is a better choice.
  • When not everyone in the Scrum team is measured: If the scrum master, product owner, developers, and testers, all on one team, are not measured the same way, this can lead to either perceived or real conflicts of interest. Product owners write the user stories, developers and testers implement and test against the Scrum team's definition of done, and the scrum master is responsible for teaching the Scrum team by coaching and leading it to be more productive and produce higher quality products. All these factors affect velocity, and these people need to be equally accountable.

Monday, May 9, 2011

Acceptance Tests in Scrum

"Acceptance tests are the tests that define the business value each story must deliver."

-       Lisa Crispin & Janet Gregory, "Agile Testing: A Practical Guide For Testers and Agile Teams"

 "Acceptance testing is the process of verifying that stories were developed such that each works exactly the way the customer team expected it to work."

-       Mike Cohn, "User Stories Applied For Agile Software Development"

 Acceptance tests, derived from the User Stories’ acceptance criteria, serve three purposes:

1.    Acceptance tests are used by the product owner to know that the acceptance criteria have been met and the User Story is “Done” to their satisfaction.
2.    Acceptance tests help drive development in an “Acceptance Test Driven Development” environment.
3.    Acceptance tests serve as the source of product regression tests.

Acceptance criteria are a list of the specified indicators or measures used in assessing the ability of the product to perform its intended function and add real business value to the product. Acceptance tests are written based on the acceptance criteria and validate that the product delivers the business value, i.e. the functionality, intended by the User Story.
Acceptance test cases further serve as a source of product regression tests. It's to the company's advantage to have in place a collection of tests covering all functionality, and acceptance tests are the means to getting that coverage. Once User Stories are "Done" (acceptance testing being a significant contributor), the User Story is delivered to the "always releasable" branch of the configuration management system. Along with the delivery of the User Story, the acceptance tests are "delivered" to the configuration management system.
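As an illustration of the shape such a test can take (the user story, acceptance criterion, and the auth_service and mail_outbox test fixtures below are all hypothetical, not from any real product), a pytest-style acceptance test derived from the criterion "the account locks after five failed logins and the customer is notified" might look like:

```python
# Hypothetical acceptance test. auth_service and mail_outbox are assumed to be
# pytest fixtures provided by the team's own acceptance-test harness.
def test_account_locks_after_five_failed_logins(auth_service, mail_outbox):
    auth_service.register("pat@example.com", "correct-horse")

    # Five consecutive failed login attempts...
    for _ in range(5):
        assert not auth_service.login("pat@example.com", "wrong-password")

    # ...lock the account, even for the correct password...
    assert auth_service.is_locked("pat@example.com")
    assert not auth_service.login("pat@example.com", "correct-horse")

    # ...and the customer is told about the lock-out.
    assert any(message.to == "pat@example.com" and "locked" in message.subject.lower()
               for message in mail_outbox)
```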

Some of the issues that may arise when adopting acceptance testing as a measure of “Done” are:

1.    Developers on the Scrum Team feel it’s “the testers’” responsibility to write and run functional tests.

One of my favourite comments early in the company’s adoption of Scrum occurred when the ScrumMaster was saying unit testing wasn’t enough and the Scrum Team needed to validate the new functionality by running tests from the end-user’s perspective. One of the team members suggested that the company “hire more testers” if functional testing was required. This is not an uncommon reaction from developers especially when, prior to going Agile, developers were not held directly responsible for quality testing and the test team had been a separate entity in the organization.

To help the team recognize their testing responsibilities, the ScrumMaster began to specifically ask during the Daily Scrum meeting if functional testing had been done by the developer and one other member of the team before a requirement (the team wasn’t doing user stories yet) was considered “Done”. This made the Daily Scrum meeting slightly longer but it quickly sank into the team that they were responsible for functional testing. It wasn’t long before a new team member was prevented from completing a requirement because the team was concerned that the appropriate testing had not been carried out.

2.    When there is a “tester” in the Scrum Team, the tester is the only person writing and running Acceptance Tests.

In this scenario the Scrum Team has more or less continued the pre-agile practice of having a separate test team. This practice virtually guaranteed that functional testing would occur very late in the iteration, resulting more often than not in User Stories not being “Done” at the end of the sprint.

The behaviour Agile encourages is for the development team, as a whole, to feel responsible for the quality of the product and for meeting the User Stories’ acceptance criteria. Having a professional tester on the team when first adopting Scrum can make it too tempting for the team to heap all functional testing upon this individual. When I was in defence, test was a separate entity from development and this worked quite well owing to the detail of the well-documented software requirements. Agile doesn’t share the same enthusiasm for documenting every detail but rather prefers face-to-face communication. The best scenario is for the tester to share their test philosophy with the developers and help build their competencies for functional testing.

Eventually all professional testers were pulled out of the Scrum Teams to focus their efforts on testing the releases and creating automated functional tests for regression testing. The immediate result was that the Scrum Teams no longer felt they had “dedicated testers” on the team. However, during the sprint planning meeting the Scrum Team (developers and product owner) spent more time discussing how they would test the User Stories. During these discussions, acceptance criteria might be added, modified, or removed, but in the end the Scrum Team had a much clearer understanding of the User Stories.

Jeff Sutherland had this to say about the Nokia Test question on dedicated testers:

“Testing within a sprint - there must be testers on the team that complete testing at the feature level during the sprint, i.e. the acceptance tests defined by the Product Owner and enhanced by the testers are completed. Only stories with completed tests are DONE and count towards team velocity. On some teams developers test (but not the same developer that wrote the code). For other teams, it is essential to have professional testers when the consequences of failure are high - a production system for treating patients in a hospital, for example.”

3.    The User Stories are written at an implementation level e.g. As a product owner, I want the SQL database to have a new table with fields x and y.


The thing we did, short of sending the product owners on a User Story writing course, was to have the development teams set the expectations using the INVEST qualities of a User Story – Independent, Negotiable, Valuable, Estimable, Small, Testable (I believe these were first defined by Bill Wake). And it wasn’t only the product owners put on the spot; the development teams also needed to actively contribute to bettering the quality of User Stories.

4.    The Acceptance Tests are written after the code is completed and validate the result rather than the intent.

Following a “Test Last” approach is generally not a good choice. There are a couple of potential problems with testing last including:

·         Dropping acceptance tests during crunch time. This could occur at any time but is more likely to happen in a test-last scenario.
·         The tests may be biased toward the solution rather than testing the requirements.

The easiest way to avoid these pitfalls is to:

1.    Adopt a test-first approach – write acceptance tests based upon the acceptance criteria before any code is written. This will:

·         ensure the acceptance tests are written,
·         serve as an aid to design for the development team,
·         allow automated tests to be run daily with the daily build (there would be failures at first, but when the tests all pass, the User Story is complete), and
·         give the product owner plenty of opportunity to review and comment on the tests early in the User Story implementation.

2.    Get the product owner to review and approve the acceptance tests before writing any code. 

5.    Acceptance Tests are not automated.

It is sometimes difficult to convince the development team and product owner that automating acceptance tests (feature, function, or user tests) is worth the effort. This has been a common position taken by the Scrum Teams, and although there is evidence showing that running regression tests would have caught problems later caught by customers, the Scrum Teams have remained unconvinced. The most common excuse I’ve heard is that it is “too much work”. The reality is that teams do not re-run the manual product acceptance tests to ensure that previously implemented functionality still works, so maybe, the thinking goes, why automate tests that will never be run again?

Automated acceptance tests help increase quality by catching errors early in the development cycle. The product is improved by creating a "safety net" of tests which can be run during the daily build. The value of this is obvious, but what is not as obvious is that automated acceptance tests also reduce the time to market and cost of development by shortening the development time. This is accomplished by decreasing a developer's time in debugging loops by catching errors in the safety net of tests.
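A minimal sketch of wiring that safety net into a daily build might look like the script below (the tests/unit and tests/acceptance directories and the use of pytest are assumptions; substitute the team's own build and test tooling):

```python
# daily_build.py -- run the automated test suites as part of the daily build
# and fail the build if anything in the acceptance "safety net" regresses.
import subprocess
import sys

test_suites = [
    ["python", "-m", "pytest", "tests/unit"],        # fast feedback first
    ["python", "-m", "pytest", "tests/acceptance"],  # the safety net
]

for suite in test_suites:
    print("Running:", " ".join(suite))
    result = subprocess.run(suite)
    if result.returncode != 0:
        print("Daily build FAILED at:", " ".join(suite))
        sys.exit(result.returncode)

print("Daily build passed: no regressions caught in the safety net.")
```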

There are a couple of measures that could be taken to encourage Scrum Teams to develop automated acceptance tests, including:

·         Having a KPI that measures the percentage of automated tests per iteration.
·         Having the development team sit on the product support desk for two or three months doing root cause analysis on the customer issues. This may help them see where additional automated testing would be most helpful.

Once, when a delivery was made to a customer site and some fundamental functionality didn’t work, root cause analysis found that the problem had slipped through code reviews, unit tests and acceptance tests. The new functionality tested fine but in the process of implementing the new functionality, previously working functionality had unknowingly ceased working. Because any regression testing would have been manual and time to release was critical, no regression testing was identified. Had functional tests been automated as part of the daily build, no one would have been required to make the decision to skip regression testing and the problem would have been caught prior to release. 

Summary

Acceptance testing is a critical component to the definition of “Done” for Agile User Stories and is the product owners’ best means to ensure the desired functionality of the User Story is the development team’s primary focus. Acceptance testing, or more accurately, the writing of acceptance tests, can help the development team understand the software requirements of the User Story before any code is written. Acceptance tests also provide a legacy of regression tests that can be used to ensure the product hasn’t regressed in other areas when new User Stories are implemented.