Monday, January 19, 2009

No-Brainer Self-Management Practices in Agile Part 2


Whole Teams Improve Coordination and Communication
A whole team is a small group of people (no more than 12) who work together towards a common purpose, primarily spend their time as part of the team, and as a team have all of the skills they need in order to be self-sufficient. These skill sets may include server-side programmer, web designer, tester, technical writer, scrum master, product owner, etc. The whole team practice contributes to self-management by reducing the amount of coordination required in order to achieve a goal. For instance, instead of having to coordinate with a QA group in order to find somebody who can test a piece of software, having a QA person as part of the team means that there is always somebody familiar with the software available to test it.

Another benefit of having a whole team is that it makes resource imbalances more obvious. If the QA folks on the team can't keep up with the developers, then you either have too many developers on the team or too few QA folks. If your front-end developers can't keep up with the folks working on the back end, then you have a resource imbalance: you either need to add more folks with the right skill set, or you need to reduce the amount of work being done on the back end.

When you have a whole team, you spend less time waiting for other groups and bringing part-time participants up to speed, you lose less time to communication delays, and individuals spend less time multi-tasking between projects. Taken together, the benefits of whole teams can significantly reduce management complexity, which makes it easier for teams to self-manage.

Collocation Further Improves Coordination and Communication
Collocation is closely related to whole teams: it simply means having everybody on the team in close physical proximity to each other. This compounds the coordination benefits of whole teams.

If you have a large group of people who are not all near each other but are all working towards a single deliverable, collocation may seem impractical on the surface. However, collocation at the whole team level is more important than collocation of multiple teams. Focus on creating whole teams that can collocate rather than trying to collocate the whole group of people.

The challenge with both whole teams and collocation is that they often require relocating people, changing management structures, or both. However, the benefits of having a tightly coordinated team working towards a common goal far outweigh the difficulties associated with creating collocated whole teams.

6 comments:

Anonymous said...

The problem we are dealing with is whether to include our dedicated QA resources as part of the scrum team for the sprint, or to separate testing from development.

Including them in the team (which is what we've been doing so far) means that developers have to stop development "early" in order to guarantee QA have time to complete their testing tasks. This can compress development to between 65% and 75% of the time we assume we'd be allowed when someone says "4-week sprint".

The alternative, moving testing out of the sprint, is not finding much favour, however. It is feared a "them and us" mentality might result. Plus, if testers are raising defect reports concerning the previous sprint and the team has already started on the current one, what do you do with them? Putting them on the backlog for the sprint after next results in a two-month delay before sprint A's deliverable is actually up to quality (as the fixes won't be applied until the result of sprint C). Fixing bugs during the current sprint means a lot of unplanned work and can dramatically reduce velocity, meaning the agreed deliverables don't get delivered.

Any ideas? We're quite new to this (and we've very quickly wandered away from "pure scrum", which may not be helping)

Damon Poole said...

Rob,

This is a fairly common thing to run into. I'm going to make a couple of assumptions in my reply; if you provide more details, I'll be happy to do a follow-up.

It takes some time to really solve this problem, but there are really only two concepts involved. The first is to break your testing up into unit testing, story testing (aka acceptance testing), and all other testing. The "other" category includes things like usability, performance, stress, or anything else that cannot be done easily and automatically in a short period of time. The first two kinds of testing, unit and story, should be done within the iteration; the rest should be done when it makes sense, for instance after the last sprint for the release.
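
As a rough sketch of that three-way split, assuming a Python codebase tested with pytest (the pricing function and the marker names are invented for illustration, not something from your project):

    # test_pricing.py -- illustrative three-way split, assuming pytest.
    # Register the "story" and "slow" markers in pytest.ini to avoid warnings.
    import pytest

    def apply_discount(total, percent):
        # Toy function standing in for real application code.
        return total * (100 - percent) / 100

    # Unit test: written by the developer during the story, runs on every build.
    def test_discount_math():
        assert apply_discount(100.0, 10) == 90.0

    # Story (acceptance) test: automated by QA within the same iteration.
    @pytest.mark.story
    def test_customer_sees_discounted_total():
        assert apply_discount(200.0, 25) == 150.0

    # "Other" testing (performance, stress, etc.): still automated where
    # possible, but run when it makes sense, e.g. after the last sprint.
    @pytest.mark.slow
    def test_discounting_is_fast_enough():
        for _ in range(100_000):
            apply_discount(100.0, 10)

Running pytest -m "not slow" inside the iteration and pytest -m slow at release time keeps the split visible in the build itself.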

The other concept is "one piece flow." This is a concept from Lean, and Lean is really the backdrop for Scrum. In any case, the idea is that each story is done as if it were the only thing in the release. When development starts on a story, QA should be creating test cases for that story (story testing) at the same time. When development is done, QA should then automate the test cases and make sure they run.

During development, the developer should be creating unit tests.

So, from the perspective of a story, it is started and completely finished in a short period of time. QA should not be waiting until all stories are done. Conversely, developers should not have lots of stories in progress that all finish together, which keeps QA from getting involved until the end.
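
One lightweight way to make that hand-off visible, as a sketch (again assuming pytest; the story and the test name are invented):

    # Drafted by QA when development on the story starts; the skip is
    # removed and the body filled in once the story is code-complete.
    import pytest

    @pytest.mark.skip(reason="story 'password reset' still in development")
    def test_reset_link_expires_after_one_hour():
        ...

The test cases then exist from day one of the story, and un-skipping them becomes part of finishing it.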

You will know that you are succeeding when individual stories go from started to completely ready within days, with all unit and story tests written, automated, and passing.

If developers are not yet writing unit tests, I would highly recommend starting. There are lots of great resources out there on unit testing. It does take a while for it to become second nature, but it is absolutely worth the investment.
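
If it helps to see how small a starting point can be, here is a sketch of a first unit test with pytest (the slugify function is just a stand-in for whatever code you already have):

    # test_slugify.py -- run with "pytest"; everything here is illustrative.
    def slugify(title):
        # Stand-in for existing application code under test.
        return title.strip().lower().replace(" ", "-")

    def test_spaces_become_hyphens():
        assert slugify("Hello World") == "hello-world"

    def test_surrounding_whitespace_is_dropped():
        assert slugify("  Agile  ") == "agile"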

Anonymous said...

Hi, Damon,

Thanks for the info (sorry for the delay in replying).

We are doing unit tests for some of the development components, and the rest are introducing them gradually, so we're winning (slowly) there.

We also have testers scripting while developers are designing/writing tests/code. Our problem comes when the test team say "we can't accept another drop of the software, because we won't have time to re-test everything" - sometimes this is a pre-decided date (e.g. 3-5 days before the actual sprint end), sometimes it becomes an ad-hoc decision.

It sounds like we're almost there, but we need to push for more test automation (to reduce that lag) and to push things like performance and regression testing out of the sprint and do it less often, e.g. as part of a "release" sprint.

That's pretty much what I (and others) have been thinking so if that's what you're suggesting, then I'll just bask in a warm rosy glow of righteousness for a bit, if you don't mind :-)

Our product management team do like to get each "improved" release put on our demonstration systems whenever we complete a sprint, though, so there'll be some "managing up" to be done, I think!

Rob.

Damon Poole said...

Hi Rob,

I just noticed that you said "wandered away from pure scrum." What does that mean?

Glad to hear about the unit tests; that will definitely contribute more and more to the solution as the practice (hopefully) spreads.

I am definitely saying that more automation is good if possible. I don't know if this applies or not, but the more unit testing you do, the less integration and system testing you should need to do. That is, some integration and system testing may be made redundant by unit testing. If so, that would provide QA more bandwidth for automation. You may find something useful in "The Role of QA in Agile".

It sounds like what you are really running into here is the pressure being applied by product management. I don't mean that in a bad way; that pressure could actually have a lot of good effects on the system. It is just something that needs to be managed in one way or another.

That pressure could be just as beneficial as doing frequent releases. The reason I say beneficial is that a) you get more feedback via prod man and b) it forces thinking about how to satisfy the needs of both engineering and prod man. For instance, it can be used to motivate creative process improvements, increased automation, faster test machines, etc.

Balancing the needs of prod man and engineering can be accomplished in a couple of ways. One way is just focusing on the above. Another is managing when and where "slow" activities are done. For instance, perhaps you can move some activities outside of the sprint, but that doesn't necessarily mean that prod man can't use the results for demos.

Something that I recommend, and that has worked very well, is to break your testing regimen into two pieces. The first level of testing achieves a low bar, such as "good enough for prod man demos"; the second piece is everything else, which you can defer to later. That said, having that later testing done in parallel and as automated as possible is good.

For instance, if there is a standard demo, the reduced testing could be to just run through the standard demo with the caution that "anything outside of this path has not been tested." PM can then do additional testing if they want to show somebody something off the beaten path.
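
As a sketch of what that low bar might look like in the build (assuming pytest; the "demo" marker and the demo steps are invented):

    # The fast, low-bar tier: walk the standard demo path end to end.
    import pytest

    @pytest.mark.demo
    def test_standard_demo_path():
        cart = []                  # stand-in for "create account, open cart"
        cart.append("widget")      # stand-in for "add item to cart"
        assert cart == ["widget"]  # stand-in for "checkout shows the item"

Running pytest -m demo then gates a build for the demo systems, while pytest -m "not demo" covers everything else in parallel or later.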

In my experience, if you are really doing one piece flow, where stories are completed and then immediately tested, and you have good unit testing and good automated testing, the number of problems that further testing will find will be minimal, especially considering that the amount of work done in each sprint is small.

As for what to do about problems found after a sprint is started, I suggest "taking your medicine." That is, don't let concern about "reducing velocity" get in your way. Finding a defect and fixing it in the same sprint, even if that pushes other work out of the sprint, is no different than pushing that work into the next sprint anyway!

Following scrum, using good engineering practices, and doing continuous process improvement will do more to increase your velocity than work-arounds will. :-)

In summary, don't get a sunburn from all that well-deserved basking in righteousness, but do what you can for product management too. :-) Their instincts are also good.

İnanç Gümüş said...

Hello Damon,

"Something that I recommend and has worked very well is to break your testing regimen into two pieces. The first level of testing is to achieve a low bar, such as "good enough for prod man demos" and everything else. You can then defer the "everything else" to later. That said, having that later be done in parallel and as automated as possible is good."

Do you mean: create your tests first with happy paths for demos, and then wait for a release to add sad paths? Please correct me.

Damon Poole said...

Inanc,

I was referring to the timing of running existing tests. That is, run the tests which are along the most used paths first. That way, anything broken that affects lots of people (such as other developers) is found ASAP.

A lot of testing is around paths that are not exercised very often. That testing still needs to be run, but it doesn't need to be run first or even as often as the tests for the frequent paths.
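
In tooling terms, that ordering might look like this sketch (assuming pytest; the authentication code and the "rare_path" marker are invented):

    import pytest

    # Toy stand-in for real authentication code.
    VALID = {"alice": "correct-password"}

    def authenticate(user, password):
        return VALID.get(user) == password

    # Most-used path: run first and on every build, so breakage that
    # affects lots of people (such as other developers) is found ASAP.
    def test_login_with_valid_credentials():
        assert authenticate("alice", "correct-password")

    # Rarely exercised path: still run, just not first and not as often.
    @pytest.mark.rare_path
    def test_login_with_unknown_account():
        assert not authenticate("mallory", "guess")

A build could then run pytest -m "not rare_path" on every commit and the full suite nightly.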

Cheers,

Damon