Posts Tagged ‘legacy’

Testing re-Revisited

Monday, November 26, 2012

Last week I went to SCNA: it was my first time there, and a good conference overall. I may post separately about it, but no promises.

Anyway, Michael Feathers gave a talk titled Testing Revisited in the afternoon of the first day. The basic message of the talk is that we shouldn’t become hostages to our tests: they are just a means to an end, and at every step we should be able to evaluate the value they provide and even (GASP!) delete them when they are no longer needed. In principle, this is a valid message. When you’re new to something you tend to overdo it and stick to a newly learned principle or technique a bit too much. It may also be a matter of staying in the comfort zone. In fact, any team that successfully transitions to automated testing may be afraid of losing confidence in its code base by reducing the test suite.

Feathers’ motivation comes mainly from the several teams that have reported to him that, at a certain point, they felt their test base was slowing them down. That is to say, the developers had the clear impression that without tests, or with fewer tests, they would have moved faster. This is already controversial, but some of the elements Feathers presented are interesting nonetheless:

  • First of all, as a team we should always be able to evaluate the actual utility that every test brings to the table. Tests that used to serve a purpose but no longer do should be deleted from the suite. An example could be a regression test originally written to reproduce a bug at a big scale, slow and clumsy: if the bug cannot happen anymore by design, then there’s no reason to keep that test in the suite.
  • Secondly, we should be able to reason about the risk involved in removing any test, especially before scaling a build system that cannot accommodate infinite growth. More importantly, by reasoning about the risk and the utility of every test, we can decide to rewrite some of them, maybe in a lighter way.

Let’s get to the point of this blog post then. Feathers said something very specific, which was: “maybe we rely too much on automated tests”.

I think that this is a dangerous and ambiguous message. In my experience, the only way to be slowed down by tests is always, absolutely always, no exceptions, because of the code base that is under test, not because of the tests themselves.

What follows is what I tweeted the day of the conference:

thinkingbox
While I appreciate some elements of @mfeathers talk I disagree about the sentence: “maybe we rely too much on automated tests” #scna
12-11-09 5:45 PM
thinkingbox
Teams who deliver good code and have good practices are the ones writing more tests and suffering less from them. /cc @mfeathers #scna
12-11-09 5:48 PM
thinkingbox
Teams who suffer from legacy code, with devs not up to the par & suffering from tests are the ones who need them the most. @mfeathers #scna
12-11-09 5:48 PM
thinkingbox
Going after tests seem to me going after the symptoms, not the cause, hence perpetuating the problem. @mfeathers #scna
12-11-09 5:48 PM

And then the day after:

thinkingbox
@yehoram @mfeathers I think that the main problem is sending a somewhat mixed message to teams which don’t have consolidated practices yet
12-11-10 9:33 AM
thinkingbox
@yehoram @mfeathers While it’s true that we should always evaluate the risk which is covered by tests and ultimately their actual utility
12-11-10 9:34 AM
thinkingbox
@yehoram @mfeathers Team which work better are the ones having the least amount of problems with their tests.
12-11-10 9:35 AM

What I’m saying here is that in my experience the teams that are truly slowed down by their tests are in reality slowed by the bad quality of the code under test, not the tests themselves. In practice, I’ve seen cumbersome and brittle tests due to the complexity and intricate dependencies of the code under test. This can be noticed more easily with legacy code.

Some teams struggle, but they are able to navigate through these problems and radically change their systems. Other teams don’t seem able to take the bull by the horns, sometimes because they focus too much on the tests and not enough on the cause of the pain: their code.

So in the end, while I appreciate Feathers’ point of view and his talk overall, I think that this particular aspect can be misunderstood and taken as an excuse to test less, so that teams won’t be slowed down. Unfortunately this is just an illusion, and when it backfires it hurts.

If it’s for test then it has to be good

Wednesday, March 25, 2009

You are dealing with some old code: it’s not too bad, you have definitely seen much worse. There’s a class within a home-grown framework that doesn’t allow you to test some application code. You spend some time understanding how to change things to make it more testable. After a while you come up with the idea of adding a method to that class solely for testing purposes – it’s not a fantastic thing, but the alternatives are frightening and this will make a lot of code much more testable. Because you are a good person you want to make it very clear that that code is intended to be used for testing only, not in real life:

public void setMickeyMouseForTest(Class cheeseClass,
                                  String cheeseName, Mouse mouse) {
    cheeseClass = resolveAssociatedClass(cheeseClass);

    if (getLocalFactory().getCheeseName(cheeseClass) != null)
        throw new RuntimeException("Use standard setMickeyMouse instead");

    mouses.put(cheeseName, mouse);
}

This method is similar to the real thing:

public void setMickeyMouse(Class cheeseClass, Mouse mouse);

Method setMickeyMouse determines cheeseName by calling getLocalFactory().getCheeseName(cheeseClass), while method setMickeyMouseForTest assumes that cheeseName is not set (even stronger: if it is not null, it throws an exception) and instead uses the parameter provided in the call.
In the end, the difference between the two lies in the way a certain condition is handled: the method written for testing purposes deals with an exceptional condition differently. You created that method exactly to reduce the amount of setup required for tests, so that it doesn’t matter if getLocalFactory().getCheeseName(cheeseClass) cannot provide a cheeseName.
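To make the contrast concrete, here is a minimal, self-contained sketch of the two behaviors. Only the method names and the "!= null" guard echo the original framework; the local factory is collapsed into a plain map, resolveAssociatedClass is omitted, and every other name here is an assumption of mine, not the real code:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: LocalFactory is reduced to a map, and the body of
// setMickeyMouse is inferred from the post's description, not copied from it.
class MouseRegistry {
    static class Mouse {}

    // Stand-in for getLocalFactory().getCheeseName(...)
    private final Map<Class<?>, String> factoryNames = new HashMap<>();
    private final Map<String, Mouse> mouses = new HashMap<>();

    void configureCheeseName(Class<?> cheeseClass, String name) {
        factoryNames.put(cheeseClass, name);
    }

    // The "real thing": derives cheeseName from the factory configuration.
    public void setMickeyMouse(Class<?> cheeseClass, Mouse mouse) {
        String cheeseName = factoryNames.get(cheeseClass);
        if (cheeseName == null)
            throw new IllegalStateException(
                "No cheese name configured for " + cheeseClass.getName());
        mouses.put(cheeseName, mouse);
    }

    // The test-only variant: the caller supplies cheeseName, and the factory
    // must NOT already know one -- note the guard is "!= null", not "== null".
    public void setMickeyMouseForTest(Class<?> cheeseClass,
                                      String cheeseName, Mouse mouse) {
        if (factoryNames.get(cheeseClass) != null)
            throw new RuntimeException("Use standard setMickeyMouse instead");
        mouses.put(cheeseName, mouse);
    }

    Mouse getMouse(String cheeseName) {
        return mouses.get(cheeseName);
    }
}
```

Side by side like this, the asymmetry is visible: one method requires the factory to know the name, the other requires it not to.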

So, the code is committed, lots of tests are now possible, the team celebrates your genius.

Some weeks later you’re pair programming with a colleague. You’re still dealing with the same framework, but this time it’s a different corner-case. After lots of mumbling, back and forth, and debugging sessions, you take a superficial look at the signature of that method, obviously without remembering what you did weeks before. You just vaguely remember that it was capable of dealing with a corner-case in a better way. And the method is called “…ForTest”, so it has to be good; after all, we are indeed testing.

It doesn’t work. An exception is thrown. The exception hints to “Use standard setMickeyMouse instead…“. Oh gosh, let’s take a look: oops, I know what it is, we’ve forgotten to set cheeseName for cheeseClass in the local factory. Let’s do that.

<type> <type> <type> <type> <type> <type> <type> <type> <type> <type> <type> <type> <type> <type> <type> <type>

Damn, it’s still not working. What is happening?!?

<debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug> <debug>

It’s very strange.

getLocalFactory().getCheeseName(cheeseClass) is not null, so how can it possibly say that it is null instead?!?

<more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug> <more debug>

Oh my gosh! The exception is actually thrown WHEN getLocalFactory().getCheeseName(cheeseClass) is not null, not vice versa!

What happened is that you and your colleague just wasted 20 minutes because you misread “!= null” as “== null”. What does that tell you?

It told me a few things:

  • Managing null values is always evil and error-prone. At the very least, checking for null disrupts the business logic and makes the code less readable, and ultimately more likely to break.
  • It’s too easy to misread a check based on != versus ==. Not only that: the fact that an exception is thrown when the expression evaluates to true suggests that the check verifies whether the value is null, because that is what you would typically do when adding a guard to your code. Which brings me to the next point.
  • Moreover, the intent of the code is not clear. It’s not enough to call a method “somethingMeaningfulForTest“: the suffix “ForTest” is misleading if the method is not truly applicable to every test. This is reinforced by the fact that the exception message is not communicative enough.
  • As usual, test code has to follow the same standard of quality as application code, which means readability, expressiveness, simplicity, lack of duplication. This is particularly important for accessory code that is written to support tests.
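As a sketch of the last two points, the guard could be extracted into a method whose name states the precondition, so there is no bare “!= null” at the call site to misread, and the exception message names what was violated instead of only pointing elsewhere. Apart from the method name and exception text echoed from the original, every identifier here is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the null check moves into a guard method whose name carries the
// intent. CheeseSetup, configure, etc. are stand-ins, not the real framework.
class CheeseSetup {
    private final Map<Class<?>, String> configuredNames = new HashMap<>();
    private final Map<String, Object> mouses = new HashMap<>();

    void configure(Class<?> cheeseClass, String name) {
        configuredNames.put(cheeseClass, name);
    }

    public void setMickeyMouseForTest(Class<?> cheeseClass,
                                      String cheeseName, Object mouse) {
        // Reading this line, the direction of the check is unambiguous.
        requireCheeseNameNotConfigured(cheeseClass);
        mouses.put(cheeseName, mouse);
    }

    // The extracted guard: one place to get the "!=" right, and a message
    // that states the precondition rather than just redirecting the caller.
    private void requireCheeseNameNotConfigured(Class<?> cheeseClass) {
        String configured = configuredNames.get(cheeseClass);
        if (configured != null)
            throw new IllegalStateException(
                "Precondition violated: cheese name '" + configured
                + "' is already configured for " + cheeseClass.getName()
                + "; use standard setMickeyMouse instead");
    }

    Object getMouse(String cheeseName) {
        return mouses.get(cheeseName);
    }
}
```

The extraction costs one small method and removes the exact misreading that burned those 20 minutes.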

So, in the end, how should that method be called? What about:

public void setMickeyMouseForTestWhenThereIsNoAvailableCheeseName(
    Class cheeseClass, String cheeseName, Mouse mouse);

The underlying precondition is that getLocalFactory().getCheeseName(cheeseClass) returns null; otherwise the method cannot execute properly. This could be made clearer with a checked exception thrown by the method or, even better, with a custom annotation marking the method with the defined precondition. In Java land this doesn’t go much beyond documentation purposes, for a variety of reasons:

  • First of all, there are limitations on the attributes that an annotation can declare: only primitives, String, Class, enums, other annotations, and arrays of these are allowed; not even boxed types like Integer can be used.
  • Second, and foremost, annotations can be queried to provide metadata about code, but they don’t really affect code execution. An annotation is useful only to the degree that it is recognized by the runner executing the code. In the end there is no native, effective support for Design by Contract in Java, and there probably never will be. A useful result can be achieved by combining AOP with annotations, but I have mixed feelings about it: I always have the impression that the mumbo jumbo typical of the supporting infrastructure is there just to circumvent a limitation that should be addressed by the language itself.
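To illustrate how far the annotation route goes, and where it stops, here is a sketch of a documentation-only precondition annotation. The annotation name and its attribute are inventions of mine; nothing enforces the stated condition unless some runner or aspect reads it reflectively:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation: a free-text attribute is about the best we can do,
// since annotation attributes are limited to primitives, String, Class, enums,
// other annotations, and arrays of these (an Integer attribute won't compile).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Precondition {
    String value(); // description only; the JVM never evaluates it
}

class AnnotatedSetup {
    @Precondition("getLocalFactory().getCheeseName(cheeseClass) must return null")
    void setMickeyMouseForTest() {
        // Body omitted. The annotation above is pure metadata: it documents
        // the contract, but only a tool reading it reflectively could act on it.
    }
}
```

Queried via reflection, the annotation yields its text and nothing more, which is exactly the “documentation purposes” limit described above.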

Or shouldn’t it?

