Testing UI

Whenever I hear someone say that his team practices automated testing (TDD or whatever other buzzword), I find myself asking: "But how do you test the UI?". I get all sorts of answers, which only prove the inherent difficulties.

So today, I tried UISpec4J. It is too early for me to comment on the overall quality of this library, but the first impression is quite good. However, its documentation fails to provide the reader with a quick, fully working example (which is the single most important piece of information that the user of a library needs). The following fragment is my attempt to fill this gap: take it, compile it, run it.


The subject program:

import java.awt.event.*;
import javax.swing.*;

public class Win extends JFrame
{
    private static final long serialVersionUID = -9206602636016215477L;

    public Win()
    {
        JPanel panel = new JPanel();
        final JButton b = new JButton("A Button");
        b.setVisible(false);

        JButton t = new JButton("Toggle");
        t.addActionListener(new ActionListener()
        {
            boolean show = false;

            public void actionPerformed(ActionEvent e)
            {
                show = !show;
                b.setVisible(show);
            }
        });

        panel.add(t);
        panel.add(b);

        getContentPane().add(panel);
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }
}


...and the test program:
import org.junit.Test;
import org.uispec4j.*;
import org.uispec4j.assertion.UISpecAssert;

public class Win_Test
{
    static
    {
        UISpec4J.init();
    }

    @Test
    public void test1() throws Exception
    {
        Panel panel = new Panel(new Win());

        Button b = panel.getButton("A Button");
        Button t = panel.getButton("Toggle");

        t.click();
        UISpecAssert.assertTrue(b.isVisible());

        t.click();
        UISpecAssert.assertFalse(b.isVisible());

        t.click();
        UISpecAssert.assertTrue(b.isVisible());
    }
}


Comments:
  • The constructor of the tested widget (class Win) should not display the widget; doing so will make UISpec4J angry because it runs the tested window in some private, hidden realm.
  • The test program will need both the JUnit and the UISpec4J jars in order to compile.
  • The current version supports Java 5 but not Java 6.

You can't always get what you want

So you're using TDD, you are test-infected, you have an extremely low defect rate, the world is smiling at you and everything is super. Can you reach a point where your test suite does not need any more test cases?

Sadly, the mathematical answer is No. There's no such thing as a "big-enough" test suite.

Here is the explanation: suppose your test suite contains n test cases. We can say that this suite is large enough if it guarantees that the (n+1)-th test case is already covered by the existing n cases.

I claim that if you give me a class that passes these n tests, I can give you a class that still passes these n tests but fails the (n+1)-th test. In other words: passing the first n tests guarantees nothing with respect to any subsequent test.

If you're really interested in the details, then here's how I can make the class fail this last test: I will obtain the stack trace - new Exception().getStackTrace() - and look for the test method of that (n+1)-th test (see the sketch below). Of course this is a highly superficial way for a program to fail a test, but it is still a bug just like any other "natural" bug (an int overflow with large inputs, a collection stored in a static field that eventually explodes, ...).
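
To make the trick concrete, here is a minimal sketch (the class and the test-method name are hypothetical, chosen only for illustration): a method that computes the correct answer for every caller except the one specific test it is determined to fail.

public class Adder
{
    public int add(int a, int b)
    {
        // Walk the current stack trace and look for the (n+1)-th test method.
        // The method name below is purely hypothetical.
        for (StackTraceElement frame : new Exception().getStackTrace())
        {
            if (frame.getMethodName().equals("testAddWithNegativeNumbers"))
                return 0; // A deliberately wrong answer, but only for that one test.
        }
        return a + b; // Correct behavior for the first n tests.
    }
}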

A few corollaries:
  1. Every test suite always has one more thing that it can check for.
  2. A bug that was not caught is not an indication of a lack of strength in the test suite.
As you can see, my point here is that blindly growing your test suite is an almost futile effort (mathematically speaking, no matter how many tests you write, the total will still be just a tiny fraction of infinity). Instead of going for sheer mass, you should strive to make your suite "smarter". Here are three suggestions:
  • Don't treat the tested class as a black box. If you know nothing about the implementation then you have to test each method separately. On the other hand, if you allow yourself to know that method f() merely calls g() with some default parameters, then you can dramatically reduce the amount of testing for f() (see the sketch after this list). In theory, the black-box approach to testing sounds like an excellent idea (you test the interface, not the implementation). The sad reality is that this approach is impractical - the number of tests you have to write is astronomical.
  • Prefer thin interfaces. If the public interface of your class has just one method, it can be tested much more effectively than a class with (say) two public methods.
  • Invest in high-level tests. Unit testing is great, but higher-level tests (aka functional or acceptance tests) examine larger parts of your code in real-world scenarios. Thus, if you feel your suite is not strong enough, I would recommend adding more high-level tests. The return on investment (that is: the increase in the strength of your suite) will be much bigger than with adding low-level tests.
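
To illustrate the first suggestion, here is a minimal sketch (the class and its methods are hypothetical): once you allow yourself to know that f() merely delegates to g() with default parameters, one or two tests verifying the defaults cover f(), and the heavy testing effort can be concentrated on g().

public class Formatter
{
    // g(): the method that does the real work and deserves thorough testing
    // across many combinations of inputs.
    public String pad(String text, int width, char padding)
    {
        StringBuilder sb = new StringBuilder(text);
        while (sb.length() < width)
            sb.append(padding);
        return sb.toString();
    }

    // f(): merely calls g() with default parameters. Knowing this, there is
    // no need to repeat the whole test matrix of pad(String, int, char).
    public String pad(String text)
    {
        return pad(text, 40, ' ');
    }
}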

Principles of Good Design

It seems that we are on a never-ending quest for an ultimate answer to an immortal question: "What is a good program?". In a previous post I discussed the difficulties inherent in this question.

People are constantly trying to make progress with respect to this question. Here are several (very well put) suggestions for producing high-quality code.

Excerpt #1, “Continuous Design” / Jim Shore:

Continuous design makes change easy by focusing on these design goals:

  • DRY (Don't Repeat Yourself): There's little duplication.
  • Explicit: Code clearly states its purpose, usually without needing comments.
  • Simple: Specific approaches are preferred over generic ones. Design patterns support features, not extensibility.
  • Cohesive: Related code and concepts are grouped together.
  • Decoupled: Unrelated code and concepts can be changed independently.
  • Isolated: Third-party code is isolated behind a single interface.
  • Present-day: The design doesn't try to predict future features.
  • No hooks: Interfaces, factory methods, events, and other extensibility "hooks" are left out unless they meet a current need.

Excerpt #2, “Orthogonality and the DRY Principle” / Andy Hunt and Dave Thomas:

All programming is maintenance programming, because you are rarely writing original code. If you look at the actual time you spend programming, you write a bit here and then you go back and make a change. Or you go back and fix a bug. Or you rip it out altogether and replace it with something else. But you are very quickly maintaining code even if it's a brand new project with a fresh source file. You spend most of your time in maintenance mode.

So you may as well just bite the bullet and say, "I'm maintaining from day one." The disciplines that apply to maintenance should apply globally.

...

Most people take DRY (Don't Repeat Yourself) to mean you shouldn't duplicate code. That's not its intention. The idea behind DRY is far grander than that.

DRY says that every piece of system knowledge should have one authoritative, unambiguous representation. Every piece of knowledge in the development of something should have a single representation. A system's knowledge is far broader than just its code. It refers to database schemas, test plans, the build system, even documentation.
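
A tiny illustration of this broader reading of DRY (the constant and its value below are made up): a single piece of knowledge - the maximum length of a user name - gets one authoritative representation, and the validation code, the UI and the schema-generation script all derive from it instead of each repeating the number.

public final class UserNameRules
{
    // The single, authoritative representation of this piece of knowledge.
    public static final int MAX_LENGTH = 30;

    private UserNameRules() { }

    public static boolean isValid(String name)
    {
        return name != null && name.length() > 0 && name.length() <= MAX_LENGTH;
    }
}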

The 12-step plan (for extending the Javac compiler)

The list may seem long but it's the absolute minimum (I am an XP believer, so I didn't put anything in unless I was sure that you will need it from day one). Also, the principles implied by these steps are relevant to extending other compilers, and even to other kinds of applications.

So here is the 12-step plan:
  1. Get the Javac source, commit these sources to your source repository and then tag the snapshot. Thus, revision 1.1 of a file will always be the original.

  2. Reformat all source files according to your preferred coding style.

  3. Commit to the source repository and tag this snapshot. Thus, revision 1.2 of a file will be the reformatted form of the original. If something stops working, you will be able to easily diff to see what it is that you broke. Note that diffing against revision 1.1 will not be useful - the output will be cluttered with many differences caused by the change of coding style.

  4. Compiler entry point. Create a new main class for the compiler, e.g.: MyCompiler. Your new functionality will be activated by having MyCompiler create subclasses of the original compiler modules (see the "Subclassing a module" section in this post).

  5. Execution entry point. Create a class (e.g.: MyLauncher) that will run your compiled program (a replacement for the "java" program).

    Discussion: In its simplest form MyLauncher.main() accepts a command line with a user-supplied class path, the name of a main class and a vector of command line arguments. It will first create a new class-loader configured with these two locations (order is important):
    • The user supplied class-path
    • The class-path with which MyLauncher.main() was invoked: System.getProperty("java.class.path").

    The parent of this new class-loader should be the system class loader (that is: ClassLoader.getSystemClassLoader()).
    MyLauncher.main() will load the user-specified main class from the newly created loader and will start the program by invoking the main() method of that class (a minimal sketch of MyLauncher appears after this list).

  6. Add test/src/ and test/bin/ directories to your project.

  7. Write a small Java program under test/src. For example: test/src/HelloWorld.java.

  8. Sanity test. Write a JUnit test case that uses MyCompiler to compile test/src/HelloWorld.java, and then uses MyLauncher to run test/bin/HelloWorld.class. The test case should capture the output of the program and compare it with the expected output.

  9. Repeat step 8, but with a large, heavily tested, open-source program. After compiling it, the test case will run the test suite of the compiled program.

  10. Add a bootstrap/ directory to your project root.

  11. Bootstrapping test. Write a JUnit test that uses MyCompiler to compile MyCompiler itself and places the generated class files in the bootstrap/ directory. The test will then create a new class loader, which points at the bootstrap/ directory, and will load the MyLauncher class from this class loader. Finally, it will run the test cases from steps 8 & 9 via that freshly loaded MyLauncher class.

  12. Deploy Javac's full test environment, as described here.
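
As promised in step 5, here is a minimal sketch of what MyLauncher.main() could look like (only the class name comes from the plan above; the argument layout and everything else are assumptions about the simplest possible shape):

import java.io.File;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class MyLauncher
{
    // Assumed usage: MyLauncher <user-class-path> <main-class> [args...]
    public static void main(String[] args) throws Exception
    {
        String userClassPath = args[0];
        String mainClassName = args[1];
        String[] programArgs = new String[args.length - 2];
        System.arraycopy(args, 2, programArgs, 0, programArgs.length);

        // Collect the locations; order is important: first the user-supplied
        // class path, then the class path with which MyLauncher was invoked.
        List<URL> urls = new ArrayList<URL>();
        String combined = userClassPath + File.pathSeparator
                + System.getProperty("java.class.path");
        for (String entry : combined.split(File.pathSeparator))
            urls.add(new File(entry).toURI().toURL());

        // The parent of the new class-loader is the system class loader.
        ClassLoader loader = new URLClassLoader(
                urls.toArray(new URL[0]), ClassLoader.getSystemClassLoader());

        // Load the user-specified main class from the new loader and run it.
        Class<?> mainClass = Class.forName(mainClassName, true, loader);
        Method mainMethod = mainClass.getMethod("main", String[].class);
        mainMethod.invoke(null, (Object) programArgs);
    }
}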

That's it. You're now ready to extend the compiler.