Testing re-Revisited

Monday, November 26, 2012

Last week I went to SCNA: first time, but good conference overall. I may post separately about it, but no promises.

Anyway, Michael Feathers gave a talk titled Testing Revisited in the afternoon of the first day. The basic message of the talk was that we shouldn’t become hostages to our tests: they are just a means to an end, and at every step we should be able to evaluate the value they provide and even (GASP!) delete them when they are no longer needed. In principle, this is a valid message. When you’re new to something you tend to overdo it and stick to a newly learned principle or technique a bit too much. It may also be a matter of staying in the comfort zone. In fact, any team that successfully transitions to automated testing may be afraid of losing confidence in its code base by trimming the test suite.

Feathers’ motivation comes mainly from several teams reporting to him that, at a certain point, they felt their test base was slowing them down. That is to say, the developers had a clear impression that without tests, or with fewer tests, they would have moved faster. This is already controversial, but some of the elements Feathers presented are interesting nonetheless:

  • First of all, as a team we should always be able to evaluate the actual utility that every test brings to the table. Tests that used to serve a purpose but no longer do should be deleted from the suite. An example could be a slow, clumsy regression test originally written to reproduce a bug at a large scale: if the bug can no longer happen by design, there’s no reason to keep that test in the suite.
  • Secondly, we should be able to reason about the risk involved in removing any test, especially before scaling up a build system that cannot accommodate infinite growth. More importantly, by reasoning about the risk and the utility of every test we can decide to rewrite some of them, perhaps in a lighter form.

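A hypothetical sketch of the first point, with invented names: suppose a regression test was once written because a price lookup crashed on an empty product code. After a redesign that validates the code at construction time, that input is impossible by design, so the old regression test no longer covers the original bug and becomes a candidate for deletion.

```python
from dataclasses import dataclass

# Hypothetical redesign: Sku is validated at construction time,
# so an empty code can never reach lookup_price() anymore.
@dataclass(frozen=True)
class Sku:
    code: str

    def __post_init__(self):
        if not self.code:
            raise ValueError("SKU code must be non-empty")

def lookup_price(sku: Sku) -> int:
    # Toy price table; before the (hypothetical) redesign this function
    # took a raw string and crashed on "" -- the bug the old regression
    # test was written to reproduce.
    return {"A1": 100, "B2": 250}.get(sku.code, 0)

# The old regression test reproduced the empty-string crash. With the
# Sku type in place, the bad input can no longer be constructed, so this
# test now only re-checks Sku's own validation -- by Feathers' criterion,
# a candidate for deletion.
def test_lookup_price_rejects_empty_code():
    try:
        lookup_price(Sku(""))
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The point is not the specific types, but that the test’s utility should be re-evaluated once the design rules out the failure it guarded against.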
Let’s get to the point of this blog post then. Feathers said something very specific, which was: “maybe we rely too much on automated tests”.

I think this is a dangerous and ambiguous message. In my experience, when teams are slowed down by tests, the cause is always, absolutely always, no exceptions, the code base under test, not the tests themselves.

What follows is what I tweeted the day of the conference:

While I appreciate some elements of @mfeathers talk I disagree about the sentence: “maybe we rely too much on automated tests” #scna
12-11-09 5:45 PM
Teams who deliver good code and have good practices are the ones writing more tests and suffering less from them. /cc @mfeathers #scna
12-11-09 5:48 PM
Teams who suffer from legacy code, with devs not up to the par & suffering from tests are the ones who need them the most. @mfeathers #scna
12-11-09 5:48 PM
Going after tests seem to me going after the symptoms, not the cause, hence perpetuating the problem. @mfeathers #scna
12-11-09 5:48 PM

And then the day after:

@yehoram @mfeathers I think that the main problem is sending a somewhat mixed message to teams which don’t have consolidated practices yet
12-11-10 9:33 AM
@yehoram @mfeathers While it’s true that we should always evaluate the risk which is covered by tests and ultimately their actual utility
12-11-10 9:34 AM
@yehoram @mfeathers Team which work better are the ones having the least amount of problems with their tests.
12-11-10 9:35 AM

What I’m saying here is that, in my experience, the teams that are truly slowed down by their tests are in reality slowed down by the poor quality of the code under test, not by the tests themselves. In practice, I’ve seen tests become cumbersome and brittle because of the complexity and intricate dependencies of the code under test. This is easiest to notice with legacy code.

Some teams struggle, but they are able to navigate through these problems and radically change their systems. Other teams don’t seem able to take the bull by the horns, sometimes precisely because they focus too much on the tests and not enough on the cause of the pain: their code.

So in the end, while I appreciate Feathers’ point of view and his talk overall, I think this particular aspect can be misunderstood and taken as an excuse to test less, so that teams won’t be slowed down. Unfortunately, that is just an illusion, and when it backfires it hurts.