We often hear about fast and efficient algorithms, but how are speed and efficiency actually measured? By the time it takes to complete a task, in seconds or minutes? The answer is no!

A program may run slower on my computer than on yours because my machine is older, or because I am running many other programs at the same time. So the fact that the program runs slower on my computer does not mean your algorithm is more efficient. When comparing two algorithms for a given task, we therefore cannot rely on the time taken to execute that task.

To solve this problem, the Big O Notation concept was introduced to define and measure the efficiency of an algorithm. Instead of time, it relies on the number of steps the algorithm needs to perform a task. Let's take a look at the following examples.

The first example:

```java
private int getLastElement(int[] numbers) {
    return numbers[numbers.length - 1];
}
```

The steps in this method are:

- Get the length attribute of the numbers array (1)
- Compute numbers.length - 1 (2)
- Using the index from step (2), get the value at that index in the numbers array (3)
- Finally, return the result (4)

So our algorithm always goes through exactly these four steps, no matter how much data the numbers array contains!
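As a quick sketch (the class name and the test arrays here are my own), we can see that the method touches only the last element, whatever the array size:

```java
// A minimal sketch: getLastElement performs the same four steps
// whether the array holds 3 elements or 3 million.
public class ConstantSteps {
    static int getLastElement(int[] numbers) {
        // length (1), subtract (2), index (3), return (4)
        return numbers[numbers.length - 1];
    }

    public static void main(String[] args) {
        int[] small = {1, 2, 3};
        int[] large = new int[3_000_000];
        large[large.length - 1] = 42;

        System.out.println(getLastElement(small)); // 3
        System.out.println(getLastElement(large)); // 42
    }
}
```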

Using the function notation we learned in high school, we can express the algorithm as follows:

```
f(n) = 4
```

where n is the amount of data in our array.

Now let’s consider the second example:

```java
public static int sum(int[] numbers) {
    int sum = 0;
    for (int i = 0; i < numbers.length; i++) {
        sum = sum + numbers[i];
    }
    return sum;
}
```

With this example, we can determine the number of steps required for this algorithm as follows:

- Outside the for loop, we have three steps:
  - Initialize the sum variable with the value 0
  - Initialize the variable i with the value 0
  - Return the value of the sum variable

- In the for loop, for each element of the numbers array, we have the following steps:
  - Get the length attribute of the numbers array (1)
  - Compare the result of step (1) with the variable i (2)
  - Get the value of the numbers array at index i (3)
  - Add the value from step (3) to the sum variable (4)
  - Assign the result of step (4) to the sum variable (5)
  - Increase the value of the variable i by 1 (6)

So inside the for loop we perform six steps for every element of the numbers array. If n is the number of elements in the numbers array, that gives 6n steps.

In total, the algorithm in this method takes 6n + 3 steps. As a function, we can express this as:

```
f(n) = 6n + 3
```
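To make the counting concrete, here is a sketch (the class name is my own) that instruments sum with a step counter, using exactly the bookkeeping above: three steps outside the loop plus six steps per element that the loop body processes:

```java
// A sketch that counts steps the same way the article does,
// yielding 6n + 3 for an array of n elements.
public class StepCount {
    static long steps;

    static int sum(int[] numbers) {
        steps = 0;
        int sum = 0;                // (outside) initialize sum
        steps++;
        int i = 0;                  // (outside) initialize i
        steps++;
        for (; i < numbers.length; ) {
            steps += 2;             // get length (1), compare with i (2)
            int value = numbers[i]; // read numbers[i] (3)
            steps++;
            sum = sum + value;      // add (4), then assign (5)
            steps += 2;
            i++;                    // increment i (6)
            steps++;
        }
        steps++;                    // (outside) return sum
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {4, 8, 15, 16, 23};
        System.out.println(sum(data));                    // 66
        System.out.println(steps == 6 * data.length + 3); // true
    }
}
```

Note that the counter adds the length/compare steps only on iterations that execute the body, matching the article's tally; counting the final failed loop test as well would give 6n + 5, which changes nothing about the conclusion below.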

As you can see, in the first example the number of steps does not depend on the amount of input data at all. In the second example, the larger the input, the more steps we need to take.

Now suppose there is another function g(n) such that, for every n greater than or equal to some value n0, the value of g(n) multiplied by a constant c0 is always greater than or equal to the value of f(n). Then g(n) is called an upper bound of f(n), and we say that f(n) is Big O of g(n), written f(n) = O(g(n)).

In the second example, we can take g(n) = n as an upper bound of f(n) = 6n + 3, so f(n) = O(g(n)). Choose n0 = 3 and c0 = 7. At n = 3:

f(3) = 6 * 3 + 3 = 21

c0 * g(3) = 7 * 3 = 21

and for every n >= 3, the value of f(n) is always less than or equal to the value of g(n) multiplied by the constant c0.

We can define Big O Notation as follows:

```
f(n) = O(g(n)) when there exist n0 > 0 and c0 > 0 such that:
f(n) <= c0 * g(n) for all n >= n0.
```
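We can check this definition numerically for the second example (the class name and the range tested are my own choices): with f(n) = 6n + 3, g(n) = n, c0 = 7, and n0 = 3, the inequality holds for every n we try, while just below n0 it fails:

```java
// A sketch verifying f(n) <= c0 * g(n) for all n0 <= n <= upTo,
// with f(n) = 6n + 3 and g(n) = n.
public class BigOCheck {
    static long f(long n) { return 6 * n + 3; }
    static long g(long n) { return n; }

    static boolean holds(long c0, long n0, long upTo) {
        for (long n = n0; n <= upTo; n++) {
            if (f(n) > c0 * g(n)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds(7, 3, 1_000_000)); // true
        // Below n0 the bound can fail: f(2) = 15 > 7 * 2 = 14.
        System.out.println(f(2) > 7 * g(2));        // true
    }
}
```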

In the first example, f(n) is Big O of a constant, written O(1).

In the second example, f(n) is Big O of n, written O(n).

Thus, Big O Notation is a concept that describes the scalability of an algorithm based on the number of steps it performs. It lets us predict how an algorithm will behave as its input grows, and it allows us to estimate the worst case through the algorithm's upper bound.