Q: Is a cucumber greener or longer?
A: Greener. It is also green sideways.
Most programs can be viewed from several different perspectives. If we take a web application built around the MVC architecture, we can view the program as an assembly of three subsystems: the model, the view, and the controller. At the same time, we can also see the program as an assembly of pieces of code, each one dedicated to a web page.
This dichotomy is not uncommon. In every program one can see several orthogonal decompositions. Although the decomposition into packages/classes is very dominant, other conceptual decompositions are just as viable.
For the rest of this post I'll concentrate on two decompositions: the architectural decomposition sees the code as a collection of subsystems, such as data access, user interface, security, and persistence. The functional decomposition sees the program as a collection of features (user-visible functionality) such as login, register, delete, submit, search, ...
We can make the reasonable assumption that in most programs the number of features is far greater than the number of subsystems. Also, to keep things simple, I will assume that each feature "touches" each subsystem.
(Although it is very difficult to provide an algorithm that identifies all features or all subsystems in a program, most programmers can usually recognize, by intuition, both the architectural and the functional decompositions of their program.)
The big question is this: what's the easiest way to develop a program? Is it feature by feature, or subsystem by subsystem?
This question may seem to make as much sense as asking whether a cucumber is greener or longer. If we have 40 features, 4 subsystems, and every feature contributes 100 lines of code to each subsystem, then it does not matter whether our work is split subsystem-wise or feature-wise. Either way the total effort will be 40 times 4 times 100 = 16,000.
I think that the effort model employed in the last paragraph is flawed. I believe that the effort required to complete a programming task is greater than the combined effort of completing two smaller tasks that add up to the bigger one. In computer science lingo, this means that:
the effort function is super-linear
If f is a super-linear function, then f(a+b) > f(a) + f(b). An obvious example is sorting: sorting an array of 1,000 elements is more work than sorting 10 arrays of 100 elements each. I believe that the development effort function is super-linear. In fact, it seems closer to a quadratic function than to a linear one.
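The sorting example can be made concrete with a quick back-of-the-envelope sketch, assuming the usual n·log₂(n) estimate for comparison-based sorting (the constant factors don't matter for the comparison):

```python
import math

def comparisons(n):
    # Rough estimate of comparisons for a comparison-based sort of n elements.
    return n * math.log2(n)

one_big = comparisons(1000)        # one array of 1,000 elements
ten_small = 10 * comparisons(100)  # ten arrays of 100 elements each

print(round(one_big))    # 9966
print(round(ten_small))  # 6644
```

The single big sort costs roughly 50% more comparisons than the ten small ones, even though both handle the same 1,000 elements: that is exactly the f(a+b) > f(a) + f(b) shape.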
This means that it will take less effort to complete a program if your programming tasks follow features (many small tasks) rather than subsystems (few large tasks). The project will seem harder if it is developed according to the architectural decomposition.
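Plugging the earlier numbers (40 features, 4 subsystems, 100 LoC per feature per subsystem) into a hypothetical quadratic effort model shows how large the gap gets. The quadratic function here is my illustrative assumption, not a measured law:

```python
def effort(loc):
    # Hypothetical model: effort grows with the square of a task's size in LoC.
    return loc ** 2

FEATURES, SUBSYSTEMS, LOC_PER_CELL = 40, 4, 100

# Feature-wise: 40 tasks, each cutting across all 4 subsystems (400 LoC each).
feature_wise = FEATURES * effort(SUBSYSTEMS * LOC_PER_CELL)
# Subsystem-wise: 4 tasks, each spanning all 40 features (4,000 LoC each).
subsystem_wise = SUBSYSTEMS * effort(FEATURES * LOC_PER_CELL)

print(feature_wise)    # 6400000
print(subsystem_wise)  # 64000000
```

Under a purely quadratic model the subsystem-wise split costs ten times more, and the ratio is exactly features/subsystems, so the more features outnumber subsystems, the stronger the effect.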
The reasoning suggested here is not a mathematical proof. It is merely a phenomenon I have witnessed. Nonetheless, the effort-is-super-linear thesis provides an explanation for issues such as the success of TDD, the small-tasks practice of agile methodologies, the effectiveness of REPL tools, and more. So, although we cannot prove this thesis, it may very well be one of the axioms of programming.
Features in the same system, even ones that seem unrelated, might share a significant amount of code. On the other hand, features that look tightly coupled may actually share almost nothing.
So the LoC cost of feature F1 might be 100 and the LoC cost of F2 might be 100 as well, but since they share 50 LoC, the aggregate cost is only 150 LoC.
The equivalent in sorting might be a known sequence that repeats in some of the 10 arrays of 100. If you know, or can identify quickly enough, the identical sequences in the combined array of 1,000 elements, then you can reduce the sorting time with a quicksort-like algorithm that is aware of the repeated sequences.
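The shared-code accounting in this comment can be sketched by modeling each feature's cost as a set of lines (the `line_N` labels are purely hypothetical):

```python
# Model each feature's cost as the set of lines it needs.
f1 = {f"line_{i}" for i in range(100)}      # F1: 100 LoC (lines 0..99)
f2 = {f"line_{i}" for i in range(50, 150)}  # F2: 100 LoC, 50 shared with F1

print(len(f1), len(f2))  # 100 100
print(len(f1 & f2))      # 50  -- the shared lines
print(len(f1 | f2))      # 150 -- aggregate cost with sharing
```

The aggregate is the size of the union, not the sum of the sizes, which is why sharing pulls the combined cost below 200.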
Eishay Smith
December 10, 2008 at 9:58 PM
That is a very good point. It is the dual side of the claim that within a given subsystem the code deals with the same issues (only JDBC, only UI, etc.), so that reuse opportunities are greater with architectural decomposition.
Your claim argues that similar opportunities also occur with functional decomposition. I totally agree. I wish we had some statistics about it.
Unknown
December 11, 2008 at 12:14 PM
I think you are asking the wrong question.
The question you should ask is: which way will have the better ROI in real life?
And a possible answer goes something like this:
Let's say you have 4 subsystems, each of which will take about 3 months, so you dedicate a year to the product.
Starting out, everything goes almost OK. You finish the first subsystem on time, and the 2nd and 3rd with a minimal two-week delay each.
You start working on the last one and, holy shit, you find out that you need 4 months to complete it!
The result of this common scenario is that at the end of the year you get only half the system done: since you finished only half of the last subsystem, half of the features are not completely done. And yet you missed by only 2 out of 12 months.
Doing it feature by feature, with the same numbers, would mean you end up with about 80% of the system done!
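The commenter's arithmetic can be sketched under simple assumptions (work proceeds at a uniform pace, and the delays are exactly as stated); the specific numbers below just restate the scenario, not measured data:

```python
MONTHS_AVAILABLE = 12
# Actual duration per subsystem (months): first on time, the next two slip
# two weeks each, and the last turns out to need four months.
actual = [3.0, 3.5, 3.5, 4.0]
total_work = sum(actual)  # 14 months of real work behind a 12-month plan

# Subsystem-wise: count subsystems fully completed within the year.
# Since every feature touches every subsystem, a half-finished last
# subsystem leaves no feature usable end to end.
done_subsystems = 0
elapsed = 0.0
for d in actual:
    if elapsed + d <= MONTHS_AVAILABLE:
        elapsed += d
        done_subsystems += 1

# Feature-wise: features are finished one after another, so the same
# slippage yields a steady stream of completed, shippable features.
features_done = MONTHS_AVAILABLE / total_work

print(done_subsystems)          # 3
print(round(features_done, 2))  # 0.86
```

So subsystem-wise you finish 3 of 4 subsystems with nothing shippable, while feature-wise the same year of work leaves roughly 80-85% of the features completely done, which matches the commenter's "about 80%" estimate.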
Lior Friedman:
December 11, 2008 at 3:20 PM
As I wrote this post I thought of mentioning this angle. I left it out just to keep the post short. So, thanks for bringing this up... :)