I argue that the fundamental difference is in mechanisms for reuse.
If you were to write the simplest possible program that computes n!, your code would look like this:
if(n == 0)
return 1;
if(n == 1)
return 1;
if(n == 2)
return 2;
if(n == 3)
return 6;
if(n == 4)
return 1*2*3*4;
if(n == 5)
return 1*2*3*4*5;
...
This is very simple. If you write programs like this, you are essentially using your computer as a calculator: anyone can do it, because you specify the output for every possible input, and the code is also very easy to understand. However, it would take many days to write a program (in this style) that computes n! for every n between 0 and 10000. Moreover, if you need to change something (for example, compute (n-1)! rather than n!), you have to make many changes, which again takes a lot of time.
Fundamentally, the power of a computer program over a calculator comes from loops. We can take a sequence of computations and execute it repeatedly, many times over:
int r = 1;
for(int i = 1; i <= n; ++i)
r *= i;
return r;
Wow! We wrote only 4 lines of code and yet we now have a program that computes n! for practically every value of n. (Clearly, this also implies the notion of a variable; otherwise the body of the loop would compute the exact same value every time.) The downside is that this program is a bit more complicated: it requires some skill to write and to understand the code.
This loop is an example of a very simple reuse mechanism: instead of duplicating the computation of n! for every possible value of n, we repeatedly use the expression r *= i (with different values in the r and i variables) so that it computes the correct result for us. This illustrates the basic property of code reuse: write once, use many times.
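To make this concrete, trace the loop for n = 4: after i = 1, r is 1; after i = 2, r is 2; after i = 3, r is 6; and after i = 4, r is 24, which is indeed 4!. One single statement, reused four times, produced four different intermediate results.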
(Side note: the full explanation of the importance of loops goes back to the seminal works of Turing and Church: a programming model cannot be Turing-complete if it does not support loops or recursion.)
Of course, there are many other mechanisms for reuse: functions, inheritance, mixins, traits, genericity, and the like. As soon as you start using them, the complexity of the code rises. With functions you can rewrite the factorial program as follows:
function factorial(n) {
if (n <= 1)
return 1;
else
return n * factorial(n-1);
}
Now the code is more elegant (that is: we reuse a bigger part of it), but again it becomes more difficult to write and to read. Someone who has just started learning programming will probably be more comfortable with the loop version than with this recursive one.
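Functions also buy us a kind of reuse that the loop alone does not: once factorial is written, any other computation can call it. As a minimal sketch (the choose function here is my own illustration, not part of the original argument):
function choose(n, k) {
// "n choose k" = n! / (k! * (n-k)!); we reuse factorial three
// times instead of writing three separate loops.
return factorial(n) / (factorial(k) * factorial(n - k));
}
The body of factorial was written once, yet it now serves every place that needs it.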
If we move on to inheritance, we are in object-oriented (OO) waters. To explain OO-based reuse we need to understand objects, methods, and overriding, which is an even greater step in terms of complexity.
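To give a taste of it, here is a minimal sketch of reuse through overriding (the Shape/Circle/Square names are illustrative assumptions, not taken from the earlier examples):
class Shape {
// describe is written once and reused by every subclass;
// only area varies, through overriding.
describe() { return "area = " + this.area(); }
}
class Circle extends Shape {
constructor(r) { super(); this.r = r; }
area() { return Math.PI * this.r * this.r; }
}
class Square extends Shape {
constructor(s) { super(); this.s = s; }
area() { return this.s * this.s; }
}
Notice how much machinery (classes, constructors, this, dynamic dispatch) is needed before the reuse even becomes visible.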
My point is this: most of the complexity of programming comes from the need to reuse code, but we cannot get rid of this complexity because without reuse our programming language is reduced to a simple calculator.
This is something to remember when you think about visual programming environments and other tools that are supposed to let non-programmers write programs: many of these tools support only a simple form of reuse, so heavy use of them leads to the familiar problems of duplicated code. If you try to improve such a tool by introducing reuse mechanisms, it rapidly becomes so complicated that it eventually requires programming-like skills to use, which defeats its own purpose.