Top Five Bugs of My Career - Solved

Yes, I know. A long time has passed since the "top five bugs" post, leaving these bugs waiting to be resolved. (In my defense, this is not intentional: between my day job and writing my Ph.D. dissertation I have very little energy left for blogging.)

Anyway, the long wait is over. It is now solution time.

#5: Only guests are welcome - 2008, Javascript

The night before, I had done some refactoring. I had two methods, let's call them f1() and f2(), both taking two parameters and doing almost the same thing. So I consolidated them into one method, f(). I then searched all call sites of f1() and f2() and changed them to call f(). One of the f2() calls passed null as the second parameter; however, f() was built mainly on the f1() code, so it didn't know how to handle nulls.

This was frustrating because there were very few f2() calls and I went over them manually. Actually, at some point the null was right there in front of my eyes, but at that moment it didn't occur to me that f() cannot digest nulls. I hadn't bothered to write unit tests for the Javascript part of the project, so I had no way to verify my refactoring. #Lazy

#4: The unimaginably slow load - 2005, Java

When I wrote the code that saves the data structure into a file (the last step of the import), I chose the simplest thing that could possibly work: serialization. Later, I added publish/subscribe functionality to the data structure, thereby allowing GUI objects to respond to changes in the data.

Guess what happens when you serialize a data structure that has a reference to a listener that is an inner class of the app's top-level JFrame object: you serialize every piece of runtime data your program holds. Everything. Instead of saving a minimalistic, storage-oriented data structure you suddenly save caches, in-memory indexing tables, the works.

Unfortunately, all these objects were Serializable, so I didn't get any exceptions in the process, just a very slow save/load.
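
For completeness, the usual Java-side cure is to keep listeners out of the serialized graph, typically by marking the field transient. Here is a minimal sketch with made-up names (not the actual code of that tool):

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

class AnalysisModel implements Serializable {
    // The storage-oriented data we actually want in the file.
    private final List<String> classNames = new ArrayList<>();

    // Without 'transient', serialization follows this reference into the listener,
    // which (being an inner class of the JFrame) drags the entire GUI - caches,
    // in-memory indexes, the works - into the output file.
    private transient List<Runnable> listeners = new ArrayList<>();

    void addListener(Runnable listener) {
        if (listeners == null) {      // transient fields come back as null after deserialization
            listeners = new ArrayList<>();
        }
        listeners.add(listener);
    }
}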

#3: Modal smiles, Modeless cries - 2001, C

This is a C-specific issue. The error messages were stored in an array of structs, where each struct represents a single error. There was an upper limit on the number of errors, so I declared an array of 1,000 structs (well beyond the limit). In order to avoid the hassle of dynamic memory management this array was a local variable, stored on the stack.

After filling the array, the code pops up the modal window that shows the errors by issuing a DialogBox() call. This call returns only after the window is closed. Thus, as long as the window was open, execution was still sitting there in my function, and my array was safe on the stack.

A modeless window is different. The call that creates a modeless window, CreateDialog(), returns immediately while the window is still open. Execution then continues, reaches the end of my function and returns to the caller. At this point the stack is unwound and the array vanishes into thin air. When the user clicks on an error, the event handler tries to access the array (via a pointer) but the array is no longer there. It is an ex-array.

#2: The out-of-nowhere crash - 2002, C++

Again, we are in no-garbage-collection territory. In C++ you constantly write destructors that take care of releasing the resources the object acquired. This is a safe way to avoid leaks.

What happens when you have a bug in a destructor (e.g., a null-pointer dereference)? Right: the program crashes.

When are destructors of local variables called? At the end of the scope. Literally. At the curly brace character closing the scope.

So there you have it: the program crashes when it reaches the curly brace after the "b = true". The assignment itself went fine; the destructor of a local object going out of scope at that brace was the problem. My IDE back then was not smart enough to treat the closing brace as a statement. It treated it as part of the "b = true" assignment, thereby misleading me into thinking the assignment had failed.

#1: The memory monster - 2004, C#

Actually, this bug is very simple once you see the code. It is my #1 due to the long time it took to track it down. Let's take another look at the code:

for(int i = 0; i < rows.Length; ++i)
  if(rows[i].isOutOfView)
    rows.Remove(i);

If the isOutOfView property is true we remove the current element from the collection. Let's trace what happens at i=4. The if() is true, we call rows.Remove(4), we increase i's value to 5 and start the next pass. Alas, the rows.Remove(4) call moved every element one position towards the beginning: the element at position 5 is now at position 4. So at the next pass, when i=5, we examine the element that used to be at position 6.

Net result: we never examined the element that was originally at position 5. Thus, some of the elements that needed to be removed from the collection were not removed. Later, the isOutOfView property of these elements was reset back to false. Over time they accumulated, until the heap collapsed.
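
The usual fixes are either to iterate backwards or to let the collection do the filtering. Here is a sketch in Java rather than the original C# (assuming rows is a java.util.List of a hypothetical Row type; java.util.List has the same index-shifting behavior on removal):

import java.util.List;

class RowPruner {
    // Walking backwards means a removal never shifts an element we have not visited yet.
    static void pruneOutOfView(List<Row> rows) {
        for (int i = rows.size() - 1; i >= 0; --i) {
            if (rows.get(i).isOutOfView()) {
                rows.remove(i);
            }
        }
    }

    // On Java 8+, the one-liner alternative:
    //     rows.removeIf(Row::isOutOfView);

    interface Row {
        boolean isOutOfView();
    }
}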

That's it. Hope you enjoyed this account. Your own bug stories are happily welcomed.

Write-Only Fields?!

Java's MutableTreeNode offers a setUserObject(Object) method. Its natural counterpart, getUserObject(), is defined only by the implementing class, DefaultMutableTreeNode.

This means that whenever your logic needs to work with the data associated with a tree node, it needs to downcast not to the interface but - wait for it - to the actual implementation. This runs against a fundamental engineering principle that Java itself often promotes: program to interfaces, not implementations.
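
A tiny sketch of what this forces you to write (the helper is hypothetical, but the cast is dictated by the real API):

import javax.swing.tree.DefaultMutableTreeNode;
import javax.swing.tree.MutableTreeNode;

class UserObjects {
    static Object userObjectOf(MutableTreeNode node) {
        // setUserObject(Object) lives on the MutableTreeNode interface, but there is no
        // getUserObject() there, so reading the value back requires a downcast to the
        // concrete implementation.
        return ((DefaultMutableTreeNode) node).getUserObject();
    }
}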

Moreover, offering a setter without a getter suggests that someone at Sun thought it might be useful to have a variable that can be written to but never read. This makes as much sense as placing 'something' in a safe-deposit box and then throwing away all the keys. Why would you want to do that? You might just as well throw away that 'something'. Either way, you'll never see it again.

Done Ranting.

Top Five Bugs of My Career

This time, I thought I'd take a different angle. Instead of reasoning about software, I'll just write about specific acts of programming in specific programs. More precisely, I want to write about specific bugs that I had the dubious pleasure of tracking down.

I took my first full-time programming job ten years ago, almost to the day. During this period I introduced and solved many bugs (hopefully the difference between #introduced and #solved is not very large). My criterion for choosing which bugs made it into this list is memorability - I chose the bugs that I remember most clearly. When discussing bugs, memorability is a key factor: the more painful the bug, the deeper the mark it leaves in your memory. Thus, in some sense, I am about to list my most painful bugs from the last decade.

To make things interesting, this post will mostly disclose only the descriptions of the bugs; the causes/solutions will be disclosed in the next post. This will give your programmer's mind something to chew on for a few days. For the same reason I also occasionally omitted pieces of information whenever I felt that their inclusion in the bug description would make the solution too obvious.

#5: Only guests are welcome - 2008, Javascript

So my partner and I are about to demo a web app that we had quickly prototyped in a couple of weeks. It is an on-line discussion system. Buzzwords used: database, web server, Ajax, CSS. We also had a decent suite of tests.

It is 8:45am. The demo will start in 15 minutes. I am doing one last quick run of the demo. I log in as an anonymous guest. I browse through the pages and everything is fine. I try to log in as a user. Name: "Noga" (that's my dog's name). Password: "hithere". Oops. No response. Not even a "login failed" message.

I know that login is doing some Ajax-ing, so I quickly try to bypass the Ajax mechanism by manually forming a login URL containing my credentials. It works. So now I know that the defect is Ajax-related, but I don't have the time to rewrite the login page so that it doesn't use Ajax, let alone test it.

8:55. The Anxiety-meter is screaming. We come up with a cunning solution. We start the demo with an already logged-in user (by secretly entering the login URL before the demo starts). We walk through all the pages. Then we log out and show that everything works for a guest user. Luckily, we are not asked to go back to a non-guest user. We are saved.


#4: The unimaginably slow load - 2005, Java

I am working on a Java mass analysis tool. It is a program that digests jar files and classifies the classes therein based on a set of rules. The program is a console app which supports operations such as these:
  • import: import a new jar file into the system and give it the specified name
  • load: load a previously imported jar-file into the memory
  • classify: run the classification rules on all currently loaded jars and save the results to the specified .csv file
The import command scans the jar file and transforms it into my own data structure, which is optimized for the type of analysis I need to do. The data structure is then saved to the file system. The load command reads the saved file back into memory.

For a few days I am working on adding new classification rules. When developing new rules I tend to work with a set of small jars. This minimizes the classification time, thus shortening the feedback cycle. From time to time I find myself also changing the implementation of the underlying data structure (fixing bugs, adding new operations needed by the rules, etc.).

At some point I decide that my new rules are complete and I put them to work on several real inputs: large zip files with > 100K classes. I notice that both the import and load commands run much slower than before. In fact, they run so slowly that they wipe out all the benefits of having my own optimized data structures.

#3: Modal smiles, Modeless cries - 2001, C

A Win32 content-editing app. Users can add, edit, search for, browse through, and delete records. There is also a Record Checker mechanism that scans all records and detects all sorts of irregularities such as broken cross-record links, duplicate names, etc.

The output of the checker is a list of error messages. The messages are displayed in a new window. Clicking on a message opens the corresponding record in the main window.

Originally the error window was a modal window: it disabled the main window. Clicking on an error closed the output window and reactivated the main window. We then decided that it made much more sense to make this a modeless window. I implemented this change and was astonished to see that the click functionality stopped working. Clicking on an error (in the modeless window) produced an access violation and a total eclipse of the app. Remember, we are speaking about the C programming language, so expecting something as fancy as a stack trace is out of the question.


#2: The out-of-nowhere crash - 2002, C++

A C++ Win32 app. I am initiating a shotgun surgery that will take a few days to complete. Back then I was unaware of refactoring/unit testing (the whole company relied on manual testing), so instead of taking baby steps, I did a massive cross-code revolution.

Having battled numerous compilation and linking errors, I am finally in a position where I can run the code. The thing immediately crashes. It does not even show the window. Again, the tools that we used didn't support stack traces. I had no idea where to start.


For those unfamiliar with Win32, here's some background. In a Win32 app all GUI events of a window arrive at a single callback function that takes four parameters: hwnd - a handle to the originating widget, msg - an int specifying the event type, wparam & lparam - two ints providing event-specific data.

Typically, the body of such callback functions was a long switch (on the msg parameter) with a case for each of the events that the program was interested in.


In this particular program the message-handling switch block was especially long. The GUI was quite complicated and there were numerous events (~ 200) that had to be listened to. The callback function was more than a thousand lines long.

First, I tried to apply reasoning. I made educated guesses regarding which events were likely to be the ones causing the crash. After several hours of unsuccessful guesses I switched to a more brutal approach: I commented out the whole switch block. This made the crash disappear but eradicated every bit of functionality that the program had. Then I uncommented half of the cases inside this switch block. The crash didn't appear and some functionality came back. This meant that the crash was due to the code that was still commented out.

I continued the comment/uncomment game using a binary-search strategy. Quite quickly I zeroed in on the problematic message. I placed a breakpoint and started stepping through/into the code. This particular case invoked code in other functions. One of them looked like this:

bool b = false;
if(...) {
  // many lines
  b = true;
}


I started debugging this code. When I stepped over the b = true statement the program crashed. This puzzled me. b is a local variable. It is stack allocated. How can an assignment to it fail?

#1: The memory monster - 2004, C#

I joined a small team working on a C# GUI app that was due to be released soon. We had a customer already using an early-access version of the product in return for doing beta testing. The #1 item on our to-do list was a report from this customer saying that the program becomes non-responsive after running for several hours. This is a serious defect, a real showstopper. As you can imagine, we never managed to reproduce the problem on our machines.

The release date got nearer and we still had no clue regarding the cause of this mysterious defect. Having nothing better to do, we kept working on other items from our to-do list, which was quite pathetic as we knew we would not be able to release the software with this defect.

At some point I decided to start fresh. I made the assumption that the defect was some sort of a leak.

Side note: Programmers often believe that in a garbage-collected environment memory leaks cannot occur. That's not true. A garbage collector (GC) will find all unreachable objects and will reclaim as many of them as possible. This does not mean that it will reclaim all unreachable objects. Many GC algorithms leave some of the garbage floating around for the next collection cycle. Moreover, a GC will consider something as garbage only if it is no longer reachable from your code. Thus, if your program maintains references to objects that are no longer needed, these objects will be considered, by the GC, as non-garbage. This will turn the program into a memory-consuming monster.

Such a leak often happens if you have some (software) cache in your code. The cache will keep references to objects - thereby preventing them from being collected - even if the application code no longer references them. Thus, if you implement a cache you must always implement some cleanup strategy.
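
For illustration, here is a minimal Java sketch of one such cleanup strategy - a size-bounded, LRU-evicting cache built on LinkedHashMap (the original app was C#, and the names here are mine):

import java.util.LinkedHashMap;
import java.util.Map;

// A cache that never evicts is a leak with a good reputation. Capping its size is the
// simplest cleanup strategy: once the map grows past MAX_ENTRIES, the least recently
// used entry is dropped and becomes eligible for garbage collection.
class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_ENTRIES = 10_000;

    BoundedCache() {
        super(16, 0.75f, true);   // access-order iteration, so eviction is LRU
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > MAX_ENTRIES;
    }
}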


I left the program running on my machine over the weekend, hoping it would help me spot the leak. Sadly, when I came back to check on it, it was running smoothly. Disappointed, I sat down with the customer's contact person, trying to understand how the program was being used. This conversation made me realize that the #1 thing the beta users were doing much more than we developers were is - wait for it - scrolling.

Ctrl+Alt+Delete -> Task Manager. I fired up the app, opened a data file, grabbed the scrollbar knob and started dragging it up and down. Looking at the Task Manager window I could see the Mem Usage value climbing. Slowly, but steadily. After a few minutes memory usage exceeded the physical memory, the operating system started swapping and the program practically halted. This was awesome. I had managed to reproduce the bug.

I opened the code that handled scrolling events (this was a custom widget with a custom data model that we developed). My eyes zeroed in on this loop:


for(int i = 0; i < rows.Length; ++i)
  if(rows[i].isOutOfView)
    rows.Remove(i);


Got it? Great.
Otherwise, wait for the next post...

(To be concluded)

Hackernews discussion

The #1 Software Development Post for Sep.'09

Best Dundies Ever - Pam (The Office)

Actually, I believe that on September 9th I read the best post for the whole year. Since I can't wait till December 31st, I decided to celebrate earlier, by handing out the "Best September Post" award. I am referring to Stephan Schmidt's The high cost of overhead when working in parallel piece.

Stephan's post argues that working on several projects in parallel is much less efficient than working on one project at a time. He shows this by calculating the time spent on status meetings throughout the whole period. Typically, the scheduling of such meetings is based on calendar events (weekly, bi-weekly, monthly, etc.) and not on measured progress (because measuring productivity is hard). Thus, when working in parallel you will have more meetings for the same amount of project progress. For example, two projects run in parallel over twenty weeks accumulate forty weekly status meetings, while the same two projects run back-to-back, ten weeks each, accumulate only twenty. As a direct result, you end up with a higher meetings/real-work ratio.

Stephan's bottom line, that working in parallel is less efficient than working sequentially, may seem quite trivial. Most developers already know that context-switching between projects incurs significant costs in time and mental energy. They know they are more productive when they can concentrate on one goal. They know that working on several projects is not unlike juggling: you get to touch each one, but only for a very short time.

So, how come a post with a trivial conclusion is my #1 post this month? It's all about the argument. Stephan did a brilliant job of taking a well-known phenomenon and providing a solid explanation for it. This is remarkable because most of the practices in the field of software engineering have only vague explanations, which often leads to endless, pointless debates.

Even unit testing, which is clearly one of the most effective practices, has no real explanation. We do unit testing because it works, but there is no explanation of why testing your program against very few inputs dramatically improves its correctness with respect to an infinite space of "real world" inputs.

Stephan's post managed to do it. It supplies a rock-solid, indisputable, easy-to-digest explanation of a software development phenomenon. That is not a trivial thing. Go read it.

My #1 Testing Pattern

Sex... to save the friendship - Jerry Seinfeld (The Mango)

This pattern is not the one that I most frequently use. Far from it. It is just the one that I like the most, because it delivers a sense of reassurance and peace.

So, sometimes you lose your confidence. You no longer believe that your test suite is doing its work. Lost faith threatens to bring down the whole development cycle: you cannot refactor/change/add because a Green bar will no longer indicate "no bugs".

This problem often happens after a long refactoring session. When you're finally done you run the tests expecting a Red - you're sure you broke something. To your surprise you get Green. You now have doubts. Is the Green really Green?

Whatever the reason for the distrust, this is a very dangerous situation.

Luckily, there is a simple solution: Start breaking your code.

Pick a random point in your code and change it. Return null from a method. Throw an exception. Comment out an assignment to a field. Whatever. Just break the code. Now run the tests again. If your tests are OK you will get a Red bar. This will raise your confidence level: the test suite just showed you that things are under control. You should now undo the bug you just introduced and repeat the process with a different program-breaking change. The more you repeat it, the more confidence you will gain.

On the other hand, if you get Green after breaking the code, then your distrust is justified. You do have a problem. You should write more tests, starting - wait for it - right now.
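
To make the "break it on purpose" step concrete, here is a hedged Java illustration (the Account class is made up): introduce a deliberate bug, run the suite, demand a Red bar, then undo the change.

class Account {
    private long balanceInCents;

    void deposit(long cents) {
        // Original line:  balanceInCents += cents;
        // Deliberately broken for one test run. If the suite stays Green with this
        // line in place, nothing is really testing deposits - write that test now.
        balanceInCents += 0;
    }

    long balance() {
        return balanceInCents;
    }
}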

4 Reasons Why Plugin Architectures are Awesome

It seems that we are surrounded by plugin-based programs. Examples? Firefox, Eclipse, JEdit, FogBugz - the list is long. This post enumerates the reasons why plugins are so great. Only the first point is obvious. The others are subtle but just as important. Personally, I like reason #2 the most.

1. Happier Users

With plugins, users are not limited to the functionality built into the base product. They can enhance the product/customize it to their needs by installing third-party plugins. This is the obvious reason. In fact, many people think this is the only reason.

2. Happier Developers(!)

Plugin-based programs are usually very easy to debug. Let me tell you how a typical debugging cycle works in such programs: you start (as always) by writing a test that captures the bug. Then you gradually remove plugins from the program until you arrive at the minimal program that still produces the bug. If you do it via a binary search (over the set of plugins) the process converges quite quickly.

Usually, you end up with a very short list of plugins. A few educated guesses will help you spot the bad plugin right away. Even if you're out of guesses, you're still in a very good position: the amount of code that needs to be scanned has been dramatically reduced.

Compare that to a program that is not plugin-based. Such a program cannot function if any of its parts is removed. Thus, you have no systematic way to reduce the amount of code involved with a bug.

3. Plugins are Agile

Breaking up a user request (a story, or even a theme/epic) into tasks is usually a breeze in a plugin-based program. You simply break the story up functionality-wise and implement each piece as a separate plugin. This is much easier than breaking a story into implementation-oriented tasks. Bottom line: less iteration-planning overhead.

4. Easy Collaboration

A real story: a few months ago my team was asked to assist another team with their product. That product had plugin support, so we simply implemented our additions as plugins. We were not afraid of breaking things because we had no access to the source code of the core (big plus). Also, the core was written in C++ but the plugin system was in Java (second big plus).


Those are my top four reasons. As you can see, only the first point is user-oriented. The other three are all developer-oriented. Surprising, isn't it?

Why Aircraft Carriers are not Agile

In Podcast #31 of Reversim, one of the podcasters (I think it is Ori) mentions this anecdote:
I once had the chance to talk with a consultant. When I asked him what he thinks about agile he said: 'Have you ever seen an aircraft carrier that was developed using an agile methodology?'

(Disclaimer: (1) The anecdote is stated in a somewhat vague manner. I took the liberty of rephrasing it into a concrete statement. Hopefully I got it right. (2) I am not saying that either Ori or Ran is in favor of the statement. They just mentioned it.)

So, the topic of this post is this simple question:
Why are aircraft carriers not built using an agile methodology?

And my answer is this:
Because the engineers can know up front that it will float


That's it. That's the answer. That's the point I am making. The remainder of this post is just the rationale.

An aircraft carrier is one of the most complicated machines ever built by man. Still, there is one perspective in which the complexity of an aircraft carrier is a mere fraction of the complexity of almost any computer program. I am speaking about the ability to reason about the machine from its blueprints.

If you take the blueprints of an aircraft carrier you can determine whether the thing will float or sink. It is a simple matter of volume vs. weight. Of course, calculating the volume or the weight from the blueprints is not always a trivial task, but it is still doable. In fact, one can use a computer to do this. Bottom line: the blueprints give a precise answer to the most fundamental question about any vessel: will it float?
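
For the record, the "volume vs. weight" calculation is just Archimedes' principle; in my own notation (nothing taken from an actual blueprint), it boils down to a single inequality:

\text{floats} \iff W < \rho_{\text{water}} \, g \, V_{\text{hull}}

where W is the ship's weight, V_hull is the volume of the hull up to the deck line, and ρ_water is the density of water - all quantities that can be computed from the blueprints.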

In computer programs things are different. Even if you take the actual source code you will not be able to get a precise answer to the fundamental question of programs: will it compute the correct output? It is not a difficult question to answer. It is an impossible question to answer. All the computers in the land will not help you answer it. That is what Church and Turing proved in the 1930s (I know I mention these guys in almost every post - but that's how I see things: all difficulties in software construction are due to the properties proven by these two).

Note that we are speaking about the source code. The blueprints (UML diagrams, design documents, whatever) provide even less information about the program than the source code. Therefore, they cannot provide any better answer.

Under these circumstances it is not surprising that agile methodologies, which promote a trial-and-error type of process, are on the rise. When you don't know how a machine will behave unless you build it and run it, the only viable option is to build it, run it, see what it does and make the necessary adjustments.

Going back to the original question, let's run a little thought experiment. Suppose we live in a universe where there is no practical way to determine whether a thing floats or sinks. There are some vague heuristics for building vessels that float, but every now and then ships sink as soon as they leave the dock. So, in such a universe,

would you volunteer to go aboard an aircraft carrier that has been carefully planned but has never left the dock?


Software Design Comes to Hollywood

Don't let yourself get attached to anything you are not willing to walk out on in 30 seconds flat if you feel the heat around the corner - Heat (1995)


So Gregor Hohpe is seeing design patterns at Starbucks. As for me, I'm seeing software practices (I am not using the term "principles" because of this) in movies. The quote in the subtitle is probably my favorite. It is taken from Michael Mann's Heat, where Neil McCauley (Robert De Niro) says it twice. Neil uses this line to explain how professional criminals like him should treat the relationships that tie them to real, honest life.

I think this applies to programmers as well. We should be willing to walk out on any piece of code the minute we realize it has become a liability. Every line/method/class is a viable candidate for eradication as part of a wide-scope refactoring. In other words:

Don't let yourself write any code fragment you are not willing to walk out on in 30 seconds flat if you feel the technical debt payment coming around the corner

Note that this applies not only to source code. If you are doing too much up-front design you are likely to get attached to diagrams/blueprints/specs. Neal Ford calls it irrational artifact attachment.

Individuals who do not follow Neil McCauley's discipline get attached to their past work. They think that some piece of code is so well written or so central to the inner workings of the program that they only allow small patches to be applied to it. They never get to see that what is really needed is a substantial refactoring. This often leads to code that is a big mess of hacks piled up on top of something they are not emotionally ready to get rid of.

There was only one principle and that was principle-22

I think the title pretty much summarizes my approach towards software design. If I put it bluntly, then I'd say that

the only universal principle in software is the one saying that there are no universal principles in software.

Hmmm.

Maybe this was a bit exaggerated. So let me rephrase:
there are very few universal principles in software. The vast majority of principles are not universal.


A careful examination of "truths" about software would reveal that many of them are no more than rules of thumb whose applicability is limited. Let's take the Law of Demeter: a principle that is based on a clear distinction between good and bad. In theory, this well-defined order seems like an excellent vehicle for taming the usually chaotic structure of software. In practice, this principle often leads to questionable consequences. It is rare to see it widely applied outside of toy examples.

[Following Yardena's comment, the OCP example was rewritten as follows]

Another example is the Open-Closed Principle: a class should be open for extension (i.e., subclassing) but closed for modification (i.e., changes to its source code). Here's how Robert C. Martin summarizes this principle:
... you should design modules that never change. When requirements change, you extend the behavior of such modules by adding new code, not by changing old code that already works
Got it? Good modules never change. The principle suggests that one can foresee the future well enough to develop a class to a point where its source code will never need to change. Ever. I bet that at least 50% of all programmers do not buy this premise. Refactoring, which has gained immense popularity as a highly effective development technique, is based on the completely opposite premise.
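
To be fair to the principle, here is the textbook picture it has in mind, sketched in Java with an example of my own (not taken from Martin's text): new requirements arrive as new subclasses, and existing code stays untouched.

interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

// "Extension without modification": adding Rectangle later requires no edits to Circle,
// nor to any code that works only against the Shape interface.
class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

The catch, of course, is the premise: this only stays "closed" as long as the Shape abstraction was guessed right up front.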

How about the Interface Segregation Principle? This allegedly universal principle has zero relevance in dynamically typed languages, which have no notion of interfaces.

These are just three examples off the top of my head. Similar loopholes can be found in almost every software principle.

The funny thing is that we already know that it is practically impossible to reason about the important aspects of software. Turing and Church had already proven it. Yet, we (hopelessly) keep on trying to formulate "truths" about something that we can hardly reason about.

I think it is much more about psychology than anything else. We want to conquer new frontiers. It is in our genes. Our attempts to formulate software principles are all about taming the wild beast called "software" and making it pleasant and manageable. Sorry, it will not work. There are all sorts of notions that are beyond our scope of reasoning. Software is one of them. We'd better accept this fact.

The Story of Waterfront

I think the tipping point for me, with respect to Clojure, was a conversation I had with Alex Buckley during OOPSLA'08. While speaking about JVM support for dynamic languages, Alex mentioned Clojure and Rich Hickey's talk that was part of Lisp50@OOPSLA. This conversation gave me the motivation to delve into Clojure.

In a nutshell Clojure is a Lisp-dialect that runs on the JVM. Being Lispish it offers a flexible and concise notation. Being a JVM language it allows complete and straightforward interoperability with Java. In other words: In order to write serious Clojure programs you only need to learn language constructs. You don't need to get acquainted with new libraries.

Clojure comes with a REPL which lets you quickly evaluate Clojure expressions. In order to utilize the benefits of a REPL your programming work-flow should be different. Instead of the write-compile-run (or TDD's test-write-compile) rhythm, you need to switch to a more constructive mode: you write an isolated small expression, you run it, and then you add it into the program. Here's how Steve Yegge describes it:

You're writing a function, you're building it on the fly. And when it works [for that mock data], you're like, "Oh yeah, it works!" You don't run it through a compiler. You copy it and paste it into your unit test suite. That's one unit test, right? And you copy it into your code, ok, this is your function.

So you're actually proving to yourself that the thing works by construction. Proooof by construction.


After adopting this work-flow I noticed that I constantly copy code from my text editor into the REPL. It didn't take long for this sequence to start annoying me. I wanted a tighter integration. I wanted an Editor which is also a REPL. I envisioned an editor where I can right click on a piece of text, choose "Eval" and immediately see the results in an "Output" window.

This vision was (and still is) the seed of Waterfront.

I used Clojure itself as the implementation language (what better way is there to learn a language than to use it for something serious?). In fact, once I had the basic functionality working, I used Waterfront for developing Waterfront. These two choices helped me discover all sorts of features that Clojure developers need. Here are a few examples:

First, I needed parenthesis matching. It's a must if you have a Lisp-like syntax. I wanted to be able to see the doc of Clojure functions. When manipulating Java objects I needed to see the list of methods/fields/constructors of classes. I also noticed that when I write proxies (using Clojure's proxy function), I constantly switch back and forth between Waterfront and the Javadoc of the proxied class, copying the names of the methods I want to override.

I tried to address these needs as best I could. Currently, Waterfront supports the following features:
  • CTRL+E: Eval current selection, or the whole file if the selection is empty

  • Edit -> Eval as you type: When turned on (default) periodically evaluates your code. Boosts productivity as many errors are detected on the spot.

  • Syntax and Evaluation errors are displayed on: (1) The Problems window; (2) The line-number panel, as red markers.

  • Source -> Generate -> Proxy: Generates a proxy for the given list of super-types, with stub implementations for all abstract methods.

  • F1: Doc or Reflection
    Shows the doc (as per Clojure's (doc x) function) of the identifier under the caret.
    Shows the synopsis of a Java class if there is a class symbol under the caret (e.g.: java.awt.Color).

  • CTRL+Space: Token-based auto completion.

  • Full parenthesis matching.

  • An extensible plugin architecture.

  • Eval-ed code can inspect/mutate Waterfront by accessing the *app* variable. For instance, if you eval this expression, ((*app* :change) :font-name "Arial"), you will choose "Arial" as the UI font.

  • Eval-ed code can inspect the currently edited Clojure program. For instance, if you eval this expression, ((*app* :visit) #(when (= (str (first %1)) "cons") (println %1))), the output window will show all calls, made by your code, to the cons function.

  • Other goodies such as undo/redo, toggle comment, recently opened files, indent/unindent, Tab is *always* two spaces, ...



In terms of implementation, Waterfront is based on the context pattern. It allows event handlers to communicate in a functional (side-effect free) manner. On top of this there is a plugin-loader mechanism which loads the plugins that are specified in Waterfront's configuration file. This means that functionality can be easily added or removed (extremely useful when debugging!).

The overall response was warm. I plan to move Waterfront into clojure-contrib in order to provide Clojure newbies with a lightweight IDE to ease the transition into the language.

I am almost done telling the story of Waterfront. The only thing missing is the answer to this question: why Waterfront?

Here's the answer: back in the 90's there was this TV show called Homicide: Life on the Street, about a homicide unit in Baltimore's police department. One of the story lines - one that runs throughout all the seasons - is about a Baltimore bar called "The Waterfront", which is owned and operated by three Baltimore cops: Det. Meldrick Lewis, Det. John Munch, and Det. Tim Bayliss. Needless to say, I like this show very much.

My Whereabouts this Winter

A lot of things have happened in the last couple of months. First, I finished school. I have practically finished my Ph.D., though I still need to work on a few things in the dissertation. Consequently, I started working for a big multi-national corporation. I got acquainted with MDD tools, software product lines and multi-core issues. In between, I discovered Clojure and initiated Waterfront, an open-source, lightweight IDE for Clojure, written in Clojure.

I announced Waterfront on the Clojure group this Tuesday, and the response was very warm. A by-product of the Waterfront effort was the formulation of the Application Context Pattern, a pattern that resolves the tension between GUI programming and functional programming.

All this resulted in a lot of subjects that I'd like to blog about. The problem, as always, is time. My plan for future posts is to start by writing a bit about Waterfront, and then to fully describe the Application Context Pattern. I hope to find the time slots for that.