I thought that a small project does not need testing...

(...Or: You can't overestimate the importance of a testing infrastructure)

Over the last few weeks I had to write three small (1-2 days of work) applications: a command line utility, a Swing application that analyzes the complexity of Java methods, and a web page--a single .html file--that lets its user extract some information from a REST server (JavaScript, jQuery, Ajax).

I don't think that LOC is a good code metric, but just to give a feel for the size of these projects, their respective LOC counts (including blanks, comments, unit tests, and HTML) are 515, 1383, and 664.

Reflecting on these three applications I see a common theme: while I did unit test (and even TDD-ed) certain parts of the apps, I didn't put much emphasis on making the code testable. After all, these were very small apps that should not take very long to develop, so investing in testability seemed like a waste of time. I believed that the benefits of automated testing would not outweigh the initial cost of putting the required scaffolding in place.

I was wrong. Even in the web page of 664 lines (including plain HTML), I quickly got to a point where the behavior of the app was quite intricate. To make sure that new functionality did not break existing functionality, I found myself repeatedly rerunning a lengthy series of manual tests. And at the next round there was even more "existing functionality" to test...

The total testing effort is actually similar to the sum of a simple arithmetic series: Sn = 1 + 2 + 3 + ... + n, whose value grows roughly as n^2. This means that the time needed to add a new piece of functionality rises as the app grows. Eventually it reaches a point where the time to implement a feature is determined not by the complexity of the feature but by the complexity of the app. All features, big or small, take a long time to complete because the dominant cost is the testing, not the implementation.
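To put a number on it: the closed form of that series is Sn = n(n+1)/2, so after the tenth increment the accumulated manual re-testing effort is already about 55 "units" of work for only 10 units of new functionality.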

If I decide not to (manually) test every new increment of functionality, I run the risk of not detecting bugs the moment they are introduced, which makes them significantly more expensive to fix when they are eventually detected.

Of course, at the beginning everything looked fine. However, after just a few hours the signs of technical debt became evident: the code grew messy. I was afraid to refactor. I felt that I was coding in "extreme caution" mode. I was no longer in control of my code. I could not move it in the direction that I wanted.

The amazing thing is how short a distance you have to walk for this effect to kick in. It usually took me less than half a day to realize that manual testing was slowing me down.

Will I do things differently in the future? Yes. I will start with a testable skeleton of the app before adding any substantial behavior to it. The "start-from-a-skeleton" practice is already quite popular. The emphasis here is twofold:
  • It should be a testable skeleton. This will let you build testability into the system from the very start.
  • The extra cost of a testable skeleton pays off even in extra-small projects.
A thorough treatment of this topic is given in Chapter 4 of GOOS (Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce), which talks about "Kick-Starting the Test-Driven Cycle". In particular, the authors argue that the goal of every first iteration should be to "Test a Walking Skeleton". Go read it.
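To make this a bit more concrete, here is a minimal sketch of what such a testable skeleton might look like, assuming Java 8 and JUnit 4. The names (Analyzer, SourceProvider, report) are hypothetical and not taken from the actual apps; the point is only that the one external dependency sits behind an interface from the very first commit, so an end-to-end test can drive the whole slice.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

// Hypothetical walking skeleton: the thinnest end-to-end slice, with the
// single external dependency (where the sources come from) hidden behind
// an interface so a test can substitute it.
interface SourceProvider {
  List<String> sources();
}

class Analyzer {
  private final SourceProvider provider;

  Analyzer(SourceProvider provider) {
    this.provider = provider;
  }

  // For the skeleton the "analysis" is deliberately trivial: it only
  // reports how many sources it was given. Real behavior comes later.
  String report() {
    return "analyzed " + provider.sources().size() + " sources";
  }
}

public class AnalyzerTest {
  @Test
  public void reportsNumberOfSources() {
    // The test exercises the whole slice end to end, using a fake
    // provider instead of the file system or the network.
    Analyzer analyzer = new Analyzer(() -> Arrays.asList("A.java", "B.java"));
    assertEquals("analyzed 2 sources", analyzer.report());
  }
}

Once a skeleton like this is in place, every new feature starts from a seam that is already testable, and the cost of adding one more automated check stays roughly constant instead of growing with the app.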
