V. Time Analysis
Written by Abhishek Mishra

There is always more than one way to solve a problem. While developing computer programs, we often face the classic dilemma of choosing between different algorithms, and one of the most important points to consider is the time taken by each algorithm.
We need to examine the following:

1. Does the algorithm run fast enough?
2. What happens to the time taken by the program if the input size increases?

Measuring the actual time taken by the program is ruled out, because the actual time depends on many extraneous factors such as processor speed, the number of other tasks being performed by the processor, etc.
So what do we do?
We estimate of course.

This is how the pros do it:

Time analysis counts the number of basic operations performed by each algorithm. A basic operation here is the “small step” in the program. Here are some examples of basic operations:

• An arithmetic operation (e.g., +, *).
• An assignment
• A test (e.g., x == 0)
• A write (of a primitive type)

#### Don’t forget the ‘Other factors’:

A program could be independent of the input size altogether.
For example, it could just print the input multiplied by a constant as output. Thus, Number of operations = 2, which is a constant.
Example 1:

```
public class ConstN {
    public static void main(String[] args) {
        // Print the input multiplied by a constant (the 10 is arbitrary):
        // one multiplication and one write, i.e. 2 operations,
        // regardless of the input's value.
        int input = Integer.parseInt(args[0]);
        System.out.println(input * 10);
    }
}
```

For most programs, however, the number of operations depends on the input size. For example, a program that sorts a list of data is faster with a smaller list. Thus the time taken can be written as an expression of the input size. For example, if we are calculating the average of ‘n’ numbers, we add the ‘n’ numbers, assign the result to ‘sum’, divide ‘sum’ by ‘n’, and assign the result to ‘average’. Counting each arithmetic operation as an ‘operation’, this involves n - 1 additions, an assignment, a division, and another assignment: a total of n + 2 operations.
i.e. Number of operations = n + 2, which is a linear function.
Example 2:

```
…
public static void main(String[] args) {
    // n - 1 additions and an assignment:
    sum = m1 + m2 + …. + mn;
    // a division and another assignment:
    average = sum / n;
}
…
```

Yet another program might involve a nested loop, with each loop executing ‘n’ times, where ‘n’ is the input size. Here, Number of operations = n², which is quadratic. If there are additional methods or statements in the program, the number of operations could be something like a * n² + b * n + c.
Example 3:

```
…
public static void main(String[] args) {
    …
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            // do stuff
        }
    }
}
…
```

When we consider the complexity of a method, we don’t really care about the exact number of operations that are performed; instead, we care about how the number of operations relates to the problem size. If the problem size doubles, does the number of operations stay the same? Double? Increase in some other way? Furthermore, we are usually interested in the worst case: what is the maximum number of operations that might be performed for a given problem size? Thus, while analyzing a program involving an if-else statement, where a different number of operations may be executed each time the program is run, we are most interested in the worst case, the path that performs the most operations.
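As a sketch of this idea (the method and data below are hypothetical, not from the chapter), consider a linear search: one branch lets the method exit early, but the worst case, a missing element, forces all n comparisons, so the analysis reports O(n):

```java
public class WorstCase {
    // Returns true if target occurs in data.
    // Best case: the first element matches (1 comparison).
    // Worst case: target is absent, so all n elements are
    // compared -- the worst-case analysis gives O(n).
    static boolean contains(int[] data, int target) {
        for (int value : data) {
            if (value == target) {
                return true;   // early exit: fewer operations
            }
        }
        return false;          // worst case: n comparisons
    }

    public static void main(String[] args) {
        int[] data = {4, 8, 15, 16, 23, 42};
        System.out.println(contains(data, 4));   // best case
        System.out.println(contains(data, 99));  // worst case
    }
}
```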

#### Big ‘O’ – A standard to measure the time:

As the expressions get more complex, we no longer want to know them exactly. All we want to know is how efficient the algorithm is: very efficient, efficient, not bad, not so efficient, the worst, etc. This is where big O comes in handy. Big O notation is shorthand for denoting an algorithm’s efficiency. To achieve this simplicity, we ignore all terms but the largest in the expression.
Thus, if Number of operations = a * n² + b * n + c, the largest term is a * n², and the algorithm is said to be “big O of n²”, or O(n²). Such an algorithm is called quadratic.
If the largest term in the formula is a constant times n, then the algorithm is said to be “big O of n”, or O(n). The algorithm is called linear.

#### Does it give the true picture?

Big-O expressions are crude and may not give exact information. For example, big-O analysis cannot distinguish between a running time of 20n + 5 and a running time of 200n + 20. But it certainly helps: if we have two algorithms with different big-O times, then with sufficiently large input, the algorithm with the better big-O time will perform better. Take a look at these graphs:

The time taken by each of the algorithms will be in the following order:
log(n) < n < n*log(n) < n²
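To make the ordering concrete, here is a small sketch (not part of the original chapter) that tabulates the four functions for growing n; `Math.log` is the natural logarithm, but the ordering holds for any base:

```java
public class Growth {
    public static void main(String[] args) {
        // Print log(n), n, n*log(n), and n^2 side by side
        // so their relative growth rates are visible.
        System.out.printf("%8s %12s %12s %14s %16s%n",
                "n", "log n", "n", "n log n", "n^2");
        for (int n = 10; n <= 100000; n *= 10) {
            System.out.printf("%8d %12.1f %12d %14.1f %16d%n",
                    n, Math.log(n), n, n * Math.log(n), (long) n * n);
        }
    }
}
```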

#### Iteration vs. Recursion:

Iteration and recursion are two different ways to perform repetitive calculations. Suppose we want to find out the value of N factorial. We could do this using loops such as this:

```
...
int result = 1;   // must be initialized to 1
for (int i = 1; i <= N; i++) {
    result = result * i;
}
...
```
This is the traditional iterative way.
We could also define a more elegant algorithm in which a method calls itself recursively.
```
....
int getFactorial(int N) {
    int result;
    if (N <= 1)       // base case; N <= 1 also covers N == 0
        result = 1;
    else
        result = N * getFactorial(N - 1);
    return result;
}
....
```
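The two fragments above can be combined into a single runnable sketch (the class and method names here are illustrative assumptions):

```java
public class Factorial {
    // Iterative version: a loop multiplies up from 1 to N.
    static int iterative(int N) {
        int result = 1;
        for (int i = 1; i <= N; i++) {
            result = result * i;
        }
        return result;
    }

    // Recursive version: the method calls itself on N - 1
    // until the base case is reached.
    static int recursive(int N) {
        if (N <= 1) {
            return 1;                  // base case
        }
        return N * recursive(N - 1);
    }

    public static void main(String[] args) {
        System.out.println(iterative(5));  // 120
        System.out.println(recursive(5));  // 120
    }
}
```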
Both iteration and recursion are based on a control structure: iteration uses loops such as for, while, and do/while; recursion uses a selection structure such as if, if/else, or switch. Both involve a termination test to end the repetition: iteration ends when the loop condition fails, and recursion terminates when a base case is reached. The beauty of a recursive algorithm is very well demonstrated by the following example:

#### The Fibonacci Series:

The Fibonacci series begins with 1 and 1 and has the property that each subsequent Fibonacci number is the sum of the previous two Fibonacci numbers.
1, 1, 2, 3, 5, 8, 13, 21.....
The series occurs frequently in nature. The ratio of successive Fibonacci numbers converges on a constant value of 1.618..., which is called the golden ratio. It is an important ratio used by architects and designers. The algorithm to calculate Fibonacci numbers can be written recursively or iteratively. Try the following programs.
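The listings for the two programs are not reproduced in this copy; below is a minimal sketch of each. The method name “getFib” comes from the surrounding discussion; the class names and everything else are assumptions:

```java
// Program 1: recursive Fibonacci.
public class FibRecursive {
    static long getFib(int n) {
        if (n == 1 || n == 2) {
            return 1;                          // base cases
        }
        return getFib(n - 1) + getFib(n - 2);  // two recursive calls
    }

    public static void main(String[] args) {
        System.out.println(getFib(10));  // 55
    }
}

// Program 2: iterative Fibonacci.
class FibIterative {
    static long getFib(int n) {
        long previous = 1, current = 1;
        for (int i = 3; i <= n; i++) {
            long next = previous + current;
            previous = current;
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        System.out.println(getFib(10));  // 55
    }
}
```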

The first program uses the recursive algorithm, while the second uses iteration. The recursive method invokes itself repeatedly, resulting in a huge overhead of method calls. Each recursive call causes another copy of the method’s variables to be created, and each invocation of the “getFib” method results in two more recursive calls to “getFib”. The number of calls grows exponentially: calculating the Fibonacci value of 30 requires 2,692,537 calls to “getFib”. The difference in calculation times between the two programs becomes clearly visible with an input of 40 or more.
If performance is a concern, then we should definitely avoid using such recursive algorithms in our programs. But when we develop a large and complex software project, good software engineering, with modular programs that are easy to understand and maintain, might be more important. Thus a programmer has to choose judiciously between performance and other software engineering concerns.

Take the Quiz for This Chapter