Posts Tagged ‘TDD’

Information dissonance and TDD

Tuesday, October 11, 2011

A colleague from Italy asked me for my opinion about this piece from TechCrunch, You’ve Got To Admit It’s Getting Better. It is a post related to the now infamous rant by Ryan Dahl of Node.js fame about the state of software in general, and about how different layers of code and components integrate in particular. Dahl is right to point out that, rather than a harmonious dance of well-balanced and fine-tuned libraries, we quite often work with layers upon layers of complexity and leaky abstractions that end up exposing, above all, an annoying, dissonant impedance mismatch. While I partly agree with Dahl, I have to admit that Jon Evans of TechCrunch also has a valid point: it is getting better; probably not enough, but better.

What has caught my attention most, though, is something else. I’m pretty sure that my friend asked me about this because of some unkind words for TDD. So let me set the record straight:

  • First of all, it’s just wrong to say that TDD is “the notion that a development team’s automated tests are even more important than the actual software they write, and should be written first” (Jon Evans himself): no TDD practitioner in his right mind would describe TDD that way. In particular, nobody would consider the tests more important than the actual software. So yes, we TDDers think that testing is important, because for us testing is first of all a design technique, and the tests are a nice byproduct of that technique, one that allows you to evolve your code without breaking existing behaviour in an uncontrolled way. But none of us would claim that the tests are more important, please! If you write about something that is clearly not your forte, at the very least you should not make such a gross mistake.
  • I don’t know where I’ve lived for the last 10 years, but TDD has never “seemed almost sacrosanct” (Jon Evans again). I’d argue that even today only a minority of developers actually test their code; TDD practitioners are then a minority of a minority. So please spare me the drama about the industry changing its mind about TDD.
  • With regard to the two linked posts, I have to confess that I have no idea who the authors are. That doesn’t necessarily mean anything per se, just that I have no prejudice about them: I’ve never read anything of theirs before.
    • Pieter Hintjens seems to recommend that, rather than testing your code, you should throw it out in the open and have people (users) test it for you. His main argument seems to be that, even if you think you know how to test your code, you don’t know whether your code is valid in the hands of users. Apart from the fact that it doesn’t seem a very professional approach, this is mostly nonsense to me: testing and validation are two different things. Sure, you need your software to do something useful for your users, otherwise it can be correct and yet serve no purpose at all. So, assuming that your software solves the right problem, how do you know that it solves it correctly? Testing to the rescue. In the end the author even admits that he uses TDD for his APIs…
      Having said that, remember that TDD is first of all about design and evolution. So I think this point is totally wrong.
    • Peter Sargeant rants about dogmas and about Agile Testing Experts recommending that you always write your tests first. The term “Agile Testing Expert” is funny enough already: there are no such people and there is no agile testing. Thank God at least there is one aspect of XP that the Agile crowd has not been able to jeopardize with their brand (oh, they tried, but they failed miserably: some people lump TDD, BDD, ATDD and other practices together under the umbrella of “Agile Testing”, but it seems quite fictitious to me and it has never been accepted in the community in that form). So, if you talk about Agile Testing Experts, I’m already suspicious. Once again, this post seems to miss the point of TDD: it’s about design and evolution; testing is a collateral benefit, an important one, but still collateral. Do I always write my code following TDD? No, I don’t. I rarely use TDD if I’m spiking or prototyping. I never use TDD for one-off pieces of code that won’t survive to see the light of a new day. I very rarely use TDD for pure UI code, especially when it’s about HTML, views, layout and CSS… (you get what I mean, I guess). And I never use TDD when I don’t know what I’m doing, when I’m exploring.
      Anyway, Peter Sargeant’s blog is ultimately about promoting test automation, so his rant is more about breaking away from dogmas and using your head to judge when to use a tool. It doesn’t seem to me to be such a big deal.

In conclusion: yes, things may be getting a little better, and no, I don’t think there is a whole movement against mainstreamed (!!) TDD in the software industry.


Guide to Test-Driven Development on ThinkCode.TV

Wednesday, April 21, 2010

Guide to Test-Driven Development

It is done: a few minutes ago the English catalog of ThinkCode.TV went live! We are 2 days behind our original plan (oh boy, you have no idea of the number of glitches, 3rd party service disruptions, personal tragedies and Murphy’s law manifestations we have run into in the last few weeks), but at this point it doesn’t matter anymore. What matters is that the catalog is out and we can finally get feedback from outside the Italian technical community.

While some videos are derived from our Italian catalog, translated and narrated by a native English speaker (David B. Harris, who happens to be a colleague, a good friend and an outstanding author as well), there is also some original content that is not available in the Italian catalog. For example, Renzo Borgatti has updated his screencast about MacRuby and HotCocoa to the latest version and David, as hinted above, has started his course about FOSS Document Management tools.

We are proud to offer subtitles for every video. I personally know English-speaking geeks who like subtitles anyway, at the very least because they can be searched and they provide an alternative way to browse a video. On the other hand, we are convinced that a lot of people who are not too comfortable with English-only content will benefit from being able to read the subtitles and catch even the tiniest detail.

With regard to my own material, the first lesson of the Guide to Test-Driven Development is available. I’m very excited to see what the reaction of the community will be. Only in the last few days have we started to see some excellent screencasts popping up here and there, but to my knowledge nothing yet resembles a complete approach to TDD like this course. The Italian version of the guide is one of the bestsellers of the Italian catalog and I hope it will do well in English too. There are currently 3 lessons available in Italian (with the 4th one coming very soon), so you can expect a steady flow of lessons.

There are a lot of other screencasts that I’d like to produce, touching mainly on topics like OOP, design, refactoring and testing. So many things to do, so little time. 🙂

Coming soon, apart from the continuation of the courses we have already started, we will have a great introduction to HTTP, an excellent course on Smalltalk and an interesting screencast covering how to upgrade a webapp to the upcoming Rails 3.0 release. So enjoy, give us feedback, and stay tuned!

If you like Python or are interested in it, please grab a “bite”

Monday, March 29, 2010

A-mazing Python: a guiding light in a labyrinth of languages

I’ve been extremely quiet in the last few months, but for a good reason: together with some friends I have founded ThinkCode Labs Inc., a Canadian startup whose first project is ThinkCode.TV. Our mission is to deliver high-quality technical screencasts, mostly about programming topics.

We launched on the Italian market in November 2009 and it has been quite a success and a hell of a ride! Now we are on the verge of launching in the English-speaking market as well: we will launch on April 19th.

I’m producing an 11-lesson course on TDD which is doing great with the Italian public. I touch on subjects like characterizing TDD’s microcycle, finding the next test, dependency injection & mock objects, responsive design, BDD, and dealing with aspects like legacy code and handling time & databases. I can’t wait to see what reaction we will get from English speakers.

In the meantime, we are offering a free appetizer for anybody interested: a fantastic 19-minute video about implementing a maze solver in Python. You can grab it here: please let us know what you think about it and, if you like it, by all means spread the news.

Using functors to handle test connections

Thursday, October 9, 2008

A few days ago I was working on some functional tests that have the following structure:

    public void testSomething() throws Exception {
        // 1. instantiate the system
        // 2. interact with it through the UI
        connectDB();
        // 3. assert the state of the system by examining the DB
        relinquishDB();

        // 4. interact a bit more through the UI
        connectDB();
        // 5. assert again the state of the system by examining the DB
        relinquishDB();

        // etc.
    }

Some notes about them:

  • Tests are implemented with JUnit 3.8.2, but this is irrelevant to what follows.
  • The application is a webapp implemented with a proprietary web framework.
  • Those tests are implemented running the servlet engine in process through ServletUnit.
  • The DB is interfaced through a proprietary framework and is accessed behind a mock JDBC driver that works by translating SQL statements in memory.
  • The state of the system is examined by fetching either an Entity or an EntityList, abstract classes that represent, respectively, a single record or a collection of records in a table.
  • Methods connectDB() and relinquishDB() are used to manage the test connection, which needs to be different from the one used by the application for structural reasons.

All the tests that follow the above structure inherit from an abstract TestCase which implements its own setUp and tearDown, treated as template methods whose behaviour can be refined further (for example, by choosing to install a different mock DB). connectDB() and relinquishDB() use connection parameters that could in principle be established differently for every test, but in reality are simply copied over from test to test.
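As a rough, self-contained sketch of that template-method structure (all class and hook names here are hypothetical, not taken from the actual code base, and a string trace stands in for the real mock DB wiring):

```java
// A minimal, self-contained sketch of the template-method setUp/tearDown
// structure described above. All names are hypothetical: the real base
// class extends JUnit 3.8's TestCase and talks to a mock DB.
abstract class AbstractDBTestCase {
    protected final StringBuilder trace = new StringBuilder();

    // Template methods: default behaviour that subclasses may refine,
    // e.g. to install a different mock DB.
    protected void setUp()    { trace.append("default-mock-db;"); }
    protected void tearDown() { trace.append("relinquish;"); }

    // Drives one test body between the hooks, as JUnit's runner would.
    public final String runTest(Runnable body) {
        setUp();
        try {
            body.run();
        } finally {
            tearDown();
        }
        return trace.toString();
    }
}

// A subclass refining the setUp hook to install a different mock DB.
class CustomMockDBTestCase extends AbstractDBTestCase {
    @Override
    protected void setUp() { trace.append("custom-mock-db;"); }
}

class TemplateMethodDemo {
    public static void main(String[] args) {
        String trace = new CustomMockDBTestCase().runTest(new Runnable() {
            public void run() { /* test body: UI interaction, assertions */ }
        });
        System.out.println(trace); // prints "custom-mock-db;relinquish;"
    }
}
```

The point of the pattern is that the fixture lifecycle is fixed in the base class while each hook stays open for refinement.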

While this structure makes sense, I find repeating connectDB() and relinquishDB() several times in a test extremely annoying. It would be nice if we could wrap some logic into a block that takes care of the repetition for us: calling connectDB() first, executing our code, and then calling relinquishDB() at the end. If I were programming in Ruby, that would be extremely easy to implement with proper blocks, wouldn’t it? Well, in Java it is not that difficult either, if you tolerate the syntactic noise of course. Let’s see how.

To start, we can define a Functor interface that can be used to implement a block without parameters:

public interface Functor<T> {
    T exec();
}

Now we need to wrap a bit of logic around it that takes care of connecting the DB and relinquishing it afterwards:

public class Fetcher {
    public static <T> T queryStateFor(Functor<T> f) {
        connectDB();
        try {
            return f.exec();
        } finally {
            relinquishDB();
        }
    }
    private static void connectDB() {
        // ...
    }
    private static void relinquishDB() {
        // ...
    }
}

The actual implementations of Fetcher.connectDB() and Fetcher.relinquishDB() are fairly trivial and in reality the methods are inlined: I’ve written them in a Compose Method style just to make them easier to understand and to relate Fetcher’s structure back to the test structure above.

The Fetcher can be used in the following way (assuming that it is imported statically in the test case):

    public void testSomething() throws Exception {
        // 1. instantiate the system
        // 2. interact with it through the UI
        final PurchaseOrder po = //...
        PurchaseOrderStatus status = queryStateFor(new Functor<PurchaseOrderStatus>() {
            public PurchaseOrderStatus exec() {
                return po.getLastStateChangeStatus();
            }
        });

        // etc.
    }

As you can see, this refactoring is not about saving keystrokes.

A developer asked me what kind of advantage this style achieves over the other. In general, I think that functors are a powerful tool, often underestimated in statically typed languages like Java. But these are the reasons why I think they are particularly relevant in the context of queries and tests:

  1. It avoids repeating connectDB(); … relinquishDB(); without introducing methods that would have to be rewritten for every test.
  2. It ensures that queries made for checking purposes connect to the proper DB and relinquish it at the end.
  3. It makes the test more readable without hiding relevant details (connecting to and relinquishing the DB are irrelevant details to the test; only the query matters in the eyes of the reader / maintainer).
  4. In general, it makes the test less dependent upon lower-level details whose relevance is marginal.
  5. Using a functor, the query can be executed as a generified method, so it definitely scores better in terms of type safety.
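As a small, self-contained illustration of the type-safety point (the names below are mine, for demonstration only): with the generified signature the caller gets a typed result with no cast, while an Object-returning variant would force a cast at every call site:

```java
// The generified method returns a typed result with no cast, while an
// Object-returning variant forces a cast with only a runtime check.
interface Functor<T> {
    T exec();
}

class TypedVsUntyped {
    // Generified: the compiler tracks the result type.
    static <T> T runTyped(Functor<T> f) {
        return f.exec();
    }

    // Pre-generics style: the caller must cast the result.
    static Object runUntyped(Functor<?> f) {
        return f.exec();
    }

    public static void main(String[] args) {
        Functor<Integer> answer = new Functor<Integer>() {
            public Integer exec() { return 42; }
        };
        Integer a = runTyped(answer);             // no cast needed
        Integer b = (Integer) runUntyped(answer); // cast required
        System.out.println(a + " " + b); // prints "42 42"
    }
}
```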

I’ve also thought about the disadvantages:

  1. The syntax looks a bit odd for the Java world, but it’s not an entire stranger either.
  2. The current functor implementation can handle only one result per call, but it can certainly be extended and, in general, calling several functors in a row doesn’t hurt significantly.
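For what it’s worth, the second limitation is easy to lift. Here is a hedged sketch (the Block interface and the trace field are my own additions for illustration, not part of the original code): an overload of queryStateFor that adapts a result-less block to the generic functor, so side-effecting checks reuse the same connect/relinquish wrapping:

```java
// An overload of queryStateFor that accepts a result-less block.
// Block and trace are hypothetical names added for this sketch.
interface Functor<T> {
    T exec();
}

interface Block {
    void exec();
}

class Fetcher {
    static final StringBuilder trace = new StringBuilder();

    static <T> T queryStateFor(Functor<T> f) {
        connectDB();
        try {
            return f.exec();
        } finally {
            relinquishDB();
        }
    }

    // The overload: adapt the void block to the generic functor.
    static void queryStateFor(final Block b) {
        queryStateFor(new Functor<Void>() {
            public Void exec() {
                b.exec();
                return null;
            }
        });
    }

    private static void connectDB()    { trace.append("connect;"); }
    private static void relinquishDB() { trace.append("relinquish;"); }
}

class FunctorExtensionDemo {
    public static void main(String[] args) {
        Fetcher.queryStateFor(new Block() {
            public void exec() { Fetcher.trace.append("check;"); }
        });
        System.out.println(Fetcher.trace); // prints "connect;check;relinquish;"
    }
}
```

The Void-typed adapter keeps a single code path for connecting and relinquishing, whatever the block returns.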

And you? What do you think about this way of handling a test connection to the DB?

PS: While I think that this style provides a better alternative to repeating calls to connectDB() and relinquishDB() all over the code, there’s actually a simpler way to solve the problem: the tests can be provided with their own connection, which has nothing to do with the connection handled by the application code running in the servlet engine (credits to Mark for recognizing it). Nevertheless, functors are a powerful tool typically neglected in the Java world and they definitely deserve more attention in situations like this one.
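To make that simpler alternative concrete, here is a minimal sketch with the fixture owning a dedicated connection; every name is hypothetical and the canned query stands in for the real mock JDBC layer:

```java
// The fixture owns a dedicated connection, opened in setUp and closed in
// tearDown, entirely separate from the connection used by application code
// in the servlet engine. All names here are hypothetical.
class TestConnection {
    private boolean open;
    void open()  { open = true; }
    void close() { open = false; }
    boolean isOpen() { return open; }
    // Canned result while the connection is open, null otherwise.
    String query(String sql) { return open ? "row-for:" + sql : null; }
}

abstract class ConnectionOwningTestCase {
    protected TestConnection testConnection;

    protected void setUp()    { testConnection = new TestConnection(); testConnection.open(); }
    protected void tearDown() { testConnection.close(); }
}

class OwnConnectionDemo {
    public static void main(String[] args) {
        ConnectionOwningTestCase tc = new ConnectionOwningTestCase() {};
        tc.setUp();
        // Tests query their own connection directly: no connect/relinquish dance.
        System.out.println(tc.testConnection.query("select status from po"));
        tc.tearDown();
    }
}
```

With this shape, the connect/relinquish pairing disappears from the test bodies entirely, because the connection’s lifetime coincides with the fixture’s.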

The consequences of shipping

Thursday, September 11, 2008

A friend of mine pointed me to one of Kent Beck’s latest mini-essays: Just Ship, Baby. While reading it I felt more and more surprised: I was hoping to find a controversial but sophisticated provocation directed at those who hide behind the process and fail to deliver because of it. That is quite common among teams approaching agile methods for the first time, which fail to understand and embrace the pragmatic essence of methods like XP. It’s the kind of error that happens frequently to teams coming from a waterfallish culture, and it’s always made in good faith. The rules are tools, and they change over time based on the circumstances.

Despite his good intentions, though, I think that Beck has failed to ship the right message, so to speak: the examples are quite misleading, because when I’m exploring a technology that I don’t know well enough, the right way to go is spiking and prototyping until I get the answer I’m looking for. Under these circumstances, it makes sense to apply TDD only at a later stage, typically once you have played enough with the technicalities of the specific API. Certainly, this shouldn’t happen any sooner than when you feel safe filling the gaps related to the expectations in your tests. So I’d say that Beck’s problem in this case has been VIOLATING what I consider one of the implicit rules of TDD: don’t write your tests until you have enough understanding of the modelling mechanisms of the API / framework / system you are dealing with. TDD can be used to explore those mechanisms, but that’s a completely different story.

There’s actually even more to it: if I’m just interested in evaluating the feasibility of something, then a throwaway prototype seems a better idea. When you have confirmed the feasibility, and only then, you make tabula rasa and develop your system applying TDD consistently. No shortcuts, but different approaches depending on your intent. I don’t think there’s anything new in this.

Going back to the original intent of Beck’s essay (as I understand it), I think the target has been missed: the issue of whether you follow the rules or you deliver the system has not been addressed. First of all, put like that, it’s a false dilemma. If there’s one thing that agile methods have taught me, it’s that delivering a system and/or completing a project are the result of trade-offs, continuous negotiation, constant planning throughout the entire project life, and reassessing the current status and goals based on feedback. But let’s put all those considerations aside for a while and consider a situation where a short-term deliverable can be achieved if some of the practices are relaxed. In this situation the team typically incurs some significant technical debt or, even worse, the external quality of the system is severely affected. What should you do? Well, it depends, and as usual it’s a judgement call.

There are situations in which it makes perfect sense and buys you the time to clean up the system immediately afterwards. I haven’t seen this happen many times, especially because it requires a perfectly healthy, open and honest environment, but I accept the idea that it can happen. On this subject, for example, there was a blog post by James Shore a couple of years ago that triggered an interesting and lengthy discussion on the XP mailing list.

There are other circumstances where the system is already so compromised that no more design debt can be tolerated, or where the lack of correctness of the software can cause financial loss (or worse). My experience is that this is much more likely to happen, and when it does its consequences can be devastating. Saying “Just ship, baby” can have catastrophic effects and is ultimately irresponsible.

It’s so annoying having to test all that code and fix all those bugs; real programmers write correct code and fix bugs instead of sleeping. That’s what we are paid for, isn’t it? I fear that Beck’s post can ultimately feed this despicable culture of Macho Programming.

Delivering software is not like winning a football game: it has consequences that need to be taken into account. In sports, when a game is over, it’s over: unless you have broken the law, taken dope or done other things that have nothing to do with sport itself, every game is self-contained.

Software development, though, is different. More often than not, the consequences of your actions come back at you, sometimes in very unpleasant and everlasting ways.