JUnit Rules!

Rules are a simple, yet amazingly powerful, mechanism introduced in JUnit version 4.7. They allow developers to easily customize JUnit's behavior by exposing meta information regarding the currently executing test. This post provides a straightforward example for writing a custom rule that augments JUnit with some useful functionality.

My subject class is IntSet: A set of integers implementing the standard operations of add(), remove(), contains(), clear() in O(1) time. To make this performance guarantee the set needs to know (in advance) the range of the values (min..max) and its size limit (number of elements that it will accommodate).

All in all, IntSet looks something like this:

public class IntSet {
    ... // Some private fields

    public IntSet(int limit, int min, int max) { ... }
    public int size() { ... }
    public boolean contains(int n) { ... }
    public void add(int n) { ... }
    public void remove(int n) { ... }
    public void clear() { ... }
}
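As an aside, one classic way to meet these O(1) guarantees is the sparse-set technique (due to Briggs and Torczon): a dense array holds the actual elements and a sparse array maps each value to its slot in the dense array. The sketch below is an assumption about how IntSet could be implemented, not the post's actual code:

```java
// A hedged sketch of IntSet via the sparse-set technique.
// Neither array needs to be initialized, which is what makes clear() O(1).
public class IntSet {
    private final int min;      // smallest storable value
    private final int[] sparse; // sparse[v - min] = index of v in dense
    private final int[] dense;  // the elements currently in the set
    private int size = 0;

    public IntSet(int limit, int min, int max) {
        this.min = min;
        this.sparse = new int[max - min + 1];
        this.dense = new int[limit];
    }

    public int size() { return size; }

    public boolean contains(int n) {
        int i = sparse[n - min];
        // Valid only if the slot points into the live prefix of dense
        // and the element stored there really is n.
        return i < size && dense[i] == n;
    }

    public void add(int n) {
        if (contains(n)) return;
        if (size == dense.length) // the error message here is an assumption
            throw new IllegalStateException("Cannot insert '" + n + "' - The set is full");
        dense[size] = n;
        sparse[n - min] = size;
        size++;
    }

    public void remove(int n) {
        if (!contains(n)) return;
        int i = sparse[n - min];
        int last = dense[size - 1]; // move the last element into the gap
        dense[i] = last;
        sparse[last - min] = i;
        size--;
    }

    public void clear() { size = 0; } // O(1): no array needs to be touched
}
```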

One of my unit tests specifies the behavior of IntSet when its size limit is reached. If I'm only interested in the type of the exception I can specify it via the expected attribute of the @Test annotation:

@Test(expected = IllegalStateException.class) // assuming IntSet throws IllegalStateException
public void shouldNotExceedCapacity() {
    IntSet s = new IntSet(2, -10, 100); // Set size limit to 2
    s.add(10);
    s.add(20);
    s.add(50); // Insertion of the 3rd element should fail
}

There are two drawbacks with this test. First, it only asserts the type of the exception; it does not check the exception's error message. Second, it does not assert that the exception was triggered by the last add() call. In other words, if we have a bug and the 2nd add() call fails - with the same type of exception - the test will still pass.

To overcome these limitations we want to check the error message of the thrown exception. Specifically, we want to verify that the execution of the method fires an exception whose error message is "Cannot insert '50' - The set is full". Clearly, the chances of such an exception being thrown by the 2nd call are pretty slim.

Extending JUnit in such a manner is pretty easy thanks to the rules mechanism:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

public class IntSet_Tests {

    @Retention(RetentionPolicy.RUNTIME) // make the annotation visible at runtime
    @interface Throwing {
        public String value();
    }

    @Rule
    public MethodRule mr = new MethodRule() {
        public Statement apply(final Statement base, FrameworkMethod m, Object o) {
            Throwing t = m.getAnnotation(Throwing.class);
            if (t == null)
                return base;

            final String message = t.value();
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    try {
                        base.evaluate(); // run the actual test method
                        fail("No exception was thrown");
                    } catch (AssertionError e) {
                        throw e;
                    } catch (Exception e) {
                        assertEquals("Incorrect exception message", message, e.getMessage());
                    }
                }
            };
        }
    };

    // All sorts of @Test methods ...

    // And now, a method that asserts the error message
    @Test
    @Throwing("Cannot insert '50' - The set is full")
    public void shouldNotExceedCapacity() {
        IntSet s = new IntSet(2, -10, 100);
        s.add(10);
        s.add(20);
        s.add(50);
    }
}

First we define a new annotation, @Throwing. Then we define a field annotated with @Rule to provide the custom handling of this annotation. Finally, we annotate the shouldNotExceedCapacity() method with a @Throwing("Cannot insert '50' - The set is full") annotation.

The mechanism works as follows: before each test method is run, JUnit creates a Statement object, which is merely a command object through which the actual method can be invoked. JUnit passes this object, along with a FrameworkMethod object (a wrapper of Java's Method) and the unit test instance, to all @Rule fields defined at the test class.

A @Rule field must be public and must implement the MethodRule interface (of course, you can instead extend one of several classes conveniently defined by JUnit). In the apply() method, above, we create a new Statement object that wraps the original one. The new evaluate() method runs the original test, fails if no exception was thrown, and otherwise checks that the exception's message matches the text specified by the @Throwing annotation attached to the method.

Obviously, there are other ways to do that. For instance, one can use the ExpectedException class (a predefined JUnit rule) to achieve a similar effect. The purpose of this post is to surface the (mighty) powers of JUnit meta programming.
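For comparison, here is a minimal sketch of that approach using the built-in ExpectedException rule. The nested IntSet stub and the IllegalStateException type are assumptions made solely to keep the example self-contained; they are not the post's actual class:

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class IntSet_AltTests {

    // Minimal stand-in for the post's IntSet, just enough to demo the rule
    // (the real class and its exception type are assumptions here).
    static class IntSet {
        private final int limit;
        private int size = 0;
        IntSet(int limit, int min, int max) { this.limit = limit; }
        void add(int n) {
            if (size == limit)
                throw new IllegalStateException("Cannot insert '" + n + "' - The set is full");
            size++;
        }
    }

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @Test
    public void shouldNotExceedCapacity() {
        thrown.expect(IllegalStateException.class);                   // assumed exception type
        thrown.expectMessage("Cannot insert '50' - The set is full"); // substring match

        IntSet s = new IntSet(2, -10, 100);
        s.add(10);
        s.add(20);
        s.add(50); // the 3rd insertion should fire the expected exception
    }
}
```

Unlike the @Throwing rule, the expectations here live inside the test body rather than in an annotation, so no custom meta programming is needed.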

Axiom: Instability

I have already blogged about the Axioms of programming: the fundamental rules that govern the development of every piece of (substantial) software. In this post I want to focus on the Instability Axiom:

The external behavior of a component will need to change over time

Here's a real story. I was once involved with a very small project, let's call it the PQR project: three people working part time for three weeks, putting together an HTTP server, a web client, and a GUI client - all very simple. During these three weeks we were also learning some new technologies, so actual coding time (of all developers combined) was about 20-25 days.

During these three weeks two important changes were applied to PQR's specification:
  • The technology with which the GUI client was implemented had to be changed. Instead of implementing it over Tech.A we had to switch to Tech.B.

  • The initial specs defined the data that should be persisted by the server. As we were playing with the intermediate versions of the project, we came to realize that a decent user experience requires persisting additional information.

The main point of this post is not whether/how we managed to support these changes. The point is that even in small projects specs are not stable. We were not successful in defining the project's goals for a three week period in a project which is as simple as industrial projects get. Of course, if the project were more complicated (more people, wider scope) then the instability would likely have been even higher.

This example indicates that a "fire and forget" type of development (AKA "divide and conquer"), where one breaks down the desired functionality into a few large pieces, assigns each piece to a programmer, and then lets each programmer work on his task in isolation from his peers (until an "integration" milestone approaches), is broken.

First, external forces will change the specs, thus affecting the assigned tasks. In the PQR project, the change in client technology was due to some external factors (business/marketing constraints). Even though the initial specs were examined and audited by several layers of approvals, no one had predicted this change.

Second, feedback from working early versions of the product (even with partial functionality) will change our understanding of the product and its desired capabilities. In PQR, the change regarding which-information-should-be-persisted was driven by experimentation with early versions.

If we had taken a Fire-and-Forget approach then our ability to respond to the first change would have been very limited, as every team member was in the middle of his large task. Also, by the time a first working version was available, very little time would have been left to implement significant changes.

Bottom line: Software is unstable. Breaking the effort into tiny tasks with frequent integration points (I am speaking about granularity of hours, not weeks) is an excellent way to cope with this inherent instability.