4 Reasons Why Plugin Architectures are Awesome

It seems that we are surrounded by plugin-based programs. Examples? Firefox, Eclipse, JEdit, FogBugz; the list is long. This post enumerates the reasons why plugins are so great. Only the first point is obvious. The others are subtle but just as important. Personally, I like reason #2 the most.

1. Happier Users

With plugins, users are not limited to the functionality built into the base product. They can enhance the product/customize it to their needs by installing third-party plugins. This is the obvious reason. In fact, many people think this is the only reason.

2. Happier Developers(!)

Plugin-based programs are usually very easy to debug. Let me tell you how a typical debugging cycle works in such programs: you start (as always) by writing a test that captures the bug. Then you gradually remove plugins from the program until you arrive at the minimal program that still produces the bug. If you do it via a binary search (over the set of plugins), the process converges quite quickly.

Usually, you end up with a very short list of plugins. A few educated guesses will help you spot the bad plugin right away. Even if you're out of guesses, you're still in a very good position: the amount of code that needs to be scanned has been dramatically reduced.

Compare that to a program that is not plugin-based. Such a program cannot function if any of its parts is removed, so you have no systematic way to reduce the amount of code involved with a bug.
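Here is a minimal sketch of that bisection step in Java. The Plugin handle and the bugReproduces predicate are hypothetical stand-ins for whatever your host application actually provides, and the sketch assumes a single plugin is responsible for the failure.

    import java.util.List;
    import java.util.function.Predicate;

    public class PluginBisector {

        // Hypothetical plugin handle; in a real host this would be the plugin descriptor.
        interface Plugin { String name(); }

        // Narrows the plugin list down to a single culprit. 'bugReproduces' should report
        // whether the failing test still fails when only the given plugins are loaded.
        static Plugin findCulprit(List<Plugin> plugins, Predicate<List<Plugin>> bugReproduces) {
            while (plugins.size() > 1) {
                int mid = plugins.size() / 2;
                List<Plugin> firstHalf = plugins.subList(0, mid);
                // If the bug survives with only the first half loaded, the culprit is in there;
                // otherwise it must be in the second half (single-culprit assumption).
                plugins = bugReproduces.test(firstHalf)
                        ? firstHalf
                        : plugins.subList(mid, plugins.size());
            }
            return plugins.get(0);
        }
    }

In practice the culprit may be an interaction between two or more plugins, in which case you fall back to the gradual removal described above.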

3. Plugins are Agile

Breaking up a user request (a story, or even a theme/epic) into tasks is usually a breeze in a plugin-based program. You simply break the story up functionality-wise and implement each piece as a separate plugin. This is much easier than breaking a story into implementation-oriented tasks. Bottom line: less overhead in iteration planning.

4. Easy Collaboration

A real story: a few months ago my team was asked to assist another team with their product. That product had plugin support, so we simply implemented our additions as plugins. We were not afraid of breaking things because we had no access to the source code of the core (big plus). Also, the core was written in C++ but the plugin system was in Java (second big plus).


Those are my top four reasons. As you can see, only the first point is user-oriented; the other three are developer-oriented. Surprising, isn't it?

Why Aircraft Carriers are not Agile

In Podcast #31 of Reversim, one of the podcasters (I think it was Ori) mentions this anecdote:
I once had the chance to talk with a consultant. When I asked him what he thinks about agile he said: 'Have you ever seen an aircraft carrier that was developed using an agile methodology?'

(Disclaimer: (1) The anecdote is stated in a somewhat vague manner. I took the liberty of rephrasing it into a concrete statement; hopefully I got it right. (2) I am not saying that either Ori or Ran is in favor of the statement. They just mentioned it.)

So, the topic of this post is this simple question:
Why aren't aircraft carriers built using an agile methodology?

And my answer is this:
Because the engineers can know up front that it will float


That's it. That's the answer. That's the point I am making. The remainder of this post is just the rationale.

An aircraft carrier is one of the most complicated machines ever built by man. Still, there is one perspective from which the complexity of an aircraft carrier is a mere fraction of the complexity of almost any computer program. I am speaking about the ability to reason about the machine from its blueprints.

If you take the blueprints of an aircraft carrier, you can determine whether the thing will float or sink. It is a simple matter of volume vs. weight. Of course, calculating the volume or the weight from the blueprints is not always a trivial task, but it is still doable. In fact, one can use a computer to do this. Bottom line: the blueprints give a precise answer to the most fundamental question about any vessel: will it float?
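To make the point concrete, here is the kind of check a computer could run, based on Archimedes' principle: a hull floats if the water it can displace weighs more than the ship itself. The figures are made up for illustration; they are not real carrier numbers.

    public class FloatCheck {

        static final double SEAWATER_DENSITY_KG_PER_M3 = 1025.0;

        // Floats if the ship's mass is less than the mass of seawater its hull can displace.
        static boolean floats(double hullVolumeM3, double shipMassKg) {
            double maxDisplacedMassKg = hullVolumeM3 * SEAWATER_DENSITY_KG_PER_M3;
            return shipMassKg < maxDisplacedMassKg;
        }

        public static void main(String[] args) {
            // Hypothetical figures read off a blueprint: 250,000 m^3 of hull, 100,000 tonnes of ship.
            System.out.println(floats(250_000.0, 100_000_000.0)); // prints true
        }
    }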

In computer programs things are different. Even if you take the actual source code, you will not be able to get a precise answer to the fundamental question of programs: will it compute the correct output? It is not merely a difficult question to answer; it is an impossible one. All the computers in the land will not help you answer it. That is what Church and Turing proved in the 1930s (I know I mention these guys in almost every post - but that's how I see things: all difficulties in software construction are due to the properties proven by these two).

Note that we are speaking about the source code. The blueprints (UML diagrams, design documents, whatever) provide even less information about the program than the source code, so they cannot provide any better answer.

Under these circumstances it is not surprising that agile methodologies, which promote a trial-and-error type of process, are on the rise. When you don't know how a machine will behave unless you build it and run it, the only viable option is to build it, run it, see what it does, and make the necessary adjustments.

Going back to the original question, let's try this little thought experiment. Suppose we live in a universe where there is no practical way to determine whether a thing floats or sinks. There are some vague heuristics for building vessels that float, but every now and then ships sink as soon as they leave the dock. In such a universe,

would you volunteer to go aboard an aircraft carrier that has been carefully planned but has never left the dock?


Software Design Comes to Hollywood

Don't let yourself get attached to anything you are not willing to walk out on in 30 seconds flat if you feel the heat around the corner - Heat (1995)


So Gregor Hohpe is seeing design patterns in Starbucks. As for me, I'm seeing software practices (I am not using the term "Principles" because of this) in movies. The quote in the subtitle is probably my favorite. It is taken from Michael Mann's Heat, where Neil McCauley (Robert De Niro) says it twice. Neil uses this line to explain how professional criminals like him should treat the relationships that tie them to the real, honest life.

I think this applies to programmers as well. We should be willing to walk out on any piece of code the minute we realize it has become a liability. Every line/method/class is a viable candidate for eradication as part of a wide-scope refactoring. In other words:

Don't let yourself write any code fragment you are not willing to walk out on in 30 seconds flat if you feel the technical debt payment coming around the corner

Note that this applies not only to source code. If you are doing too much up-front design you are likely to get attached to diagrams/blueprints/specs. Neal Ford calls this irrational artifact attachment.

Individuals who do not follow Neil McCauley's discipline get attached to their past work. They think that some piece of code is so well written, or so central to the inner workings of the program, that they only allow small patches to be applied to it. They never see that what is really needed is a substantial refactoring. This often leads to code that is a big mess of hacks piled on top of something they are not emotionally ready to get rid of.

There was only one principle and that was principle-22

I think the title pretty much summarizes my approach towards software design. If I put it bluntly, I'd say that

the only universal principle in software is the one saying that there are no universal principles in software.

Hmmm.

Maybe this was a bit exaggerated. So let me rephrase:
there are very few universal principles in software. The vast majority of principles are not universal.


A careful examination of "truths" about software reveals that many of them are no more than rules of thumb whose applicability is limited. Let's take the Law of Demeter: a principle that is based on a clear distinction between good and bad. In theory, this well-defined order seems like an excellent vehicle for taming the usually chaotic structure of software. In practice, the principle often leads to questionable consequences. It is rare to see it widely used outside of toy examples.
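For readers who haven't met it, here is roughly the shape of code the Law of Demeter objects to. The Customer/Wallet names are the standard illustration, not anything from a real codebase.

    class Wallet {
        private double balance = 100.0;
        void deduct(double amount) { balance -= amount; }
    }

    class Customer {
        private final Wallet wallet = new Wallet();
        Wallet getWallet() { return wallet; }            // exposes an internal collaborator
        void pay(double amount) { wallet.deduct(amount); }
    }

    class Checkout {
        void chargeViolatingDemeter(Customer customer, double amount) {
            // Reaches through Customer into its Wallet: talking to a "stranger".
            customer.getWallet().deduct(amount);
        }

        void chargeFollowingDemeter(Customer customer, double amount) {
            // Talks only to the immediate collaborator.
            customer.pay(amount);
        }
    }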

[Following Yardena's comment, the OCP example was rewritten as follows]

Another example is the Open-Closed Principle: a class should be open for extension (i.e., subclassing) but closed for modification (i.e., manipulation of the source code). Here's how Robert C. Martin summarizes this principle:
... you should design modules that never change. When requirements change, you extend the behavior of such modules by adding new code, not by changing old code that already works
Got it? Good modules never change. The principle suggests that one can foresee the future to such an extent that one can develop a class up to the point where its source code will never need to change. Ever. I bet that at least 50% of all programmers do not buy this premise. Refactoring, which has gained immense popularity as a highly effective development technique, is based on the completely opposite premise.
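For reference, here is the kind of design the principle asks for, using the usual textbook shapes rather than anything from Martin's own text: new behavior arrives as a new implementation, while existing code stays untouched.

    interface Shape {
        double area();
    }

    class Rectangle implements Shape {
        private final double width, height;
        Rectangle(double width, double height) { this.width = width; this.height = height; }
        public double area() { return width * height; }
    }

    // Supporting circles requires no edit to Rectangle or to any code that works with Shape.
    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

The catch, of course, is that this only works along the axes of change you anticipated when you carved out the Shape interface in the first place.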

How about the Interface Segregation Principle? This allegedly universal principle has zero relevance in dynamically typed languages, which have no notion of interfaces.

These are just three examples off the top of my head. Similar loopholes can be found in almost every software principle.

The funny thing is that we already know that it is practically impossible to reason about the important aspects of software; Turing and Church proved it long ago. Yet we (hopelessly) keep on trying to formulate "truths" about something we can hardly reason about.

I think it is much more about psychology than anything else. We want to conquer new frontiers; it is in our genes. Our attempts to formulate software principles are all about taming the wild beast called "software" and making it pleasant and manageable. Sorry, it will not work. There are all sorts of notions that are beyond our scope of reasoning. Software is one of them. We'd better accept this fact.