Understanding Big O Notation: A Comprehensive Guide for Developers

Mazen Mohamed (dreamecho100)

Tags: big-o-notation, time-complexity, space-complexity, algorithms, data-structure, performance, optimization, computer-science, typescript

Welcome! In this post, we'll dive into the important concepts of Big O notation: how it's used in programming and how to calculate an algorithm's complexity. We'll also discuss why constants are dropped in Big O calculations and why the worst-case scenario matters. By the end of this post, you'll have a solid understanding of Big O notation and how to apply it in your own coding projects.

What is Big O notation in programming?

Big O is a way to measure how long an algorithm or program takes to run. It helps us understand how the running time of a program increases as the input size grows.

In computer science, Big O notation describes the complexity of an algorithm: the amount of time or space it requires to run. It is often used to compare the efficiency of different algorithms and to analyze the performance of an algorithm in the worst-case scenario.

More formally, Big O notation is a mathematical way of measuring how the cost of an algorithm grows: it describes how the running time or space required by an algorithm grows as the input size increases. For example, if an algorithm has a running time of O(n), its running time grows linearly with the size of the input. On the other hand, if an algorithm has a running time of O(n^2), its running time grows quadratically with the size of the input.
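To make the difference concrete, here is a minimal sketch (my own illustration, not part of the original examples; the counter functions are hypothetical helpers) that counts how many basic operations a linear and a quadratic loop perform for the same n:

```typescript
// Counts one operation per element: O(n).
function countLinearOps(n: number): number {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    ops++;
  }
  return ops;
}

// Counts one operation per pair of elements: O(n^2).
function countQuadraticOps(n: number): number {
  let ops = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      ops++;
    }
  }
  return ops;
}

console.log(countLinearOps(1_000));    // 1000
console.log(countQuadraticOps(1_000)); // 1000000
```

For n = 1,000, the linear version performs a thousand operations while the quadratic version performs a million, and the gap only widens as n grows.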

Why do we use Big O?

It gives us a rough estimate of the resources (such as time or memory) required to execute an algorithm, so it helps us determine the time and space complexity of an algorithm.

It helps us understand how well a program will scale as the input size grows, so it gives us a way to measure how efficient an algorithm is.

By analyzing the Big O complexity of an algorithm, we can:

  • Determine the best approach to solving a problem.
  • Make informed decisions about trade-offs between different algorithms.
  • Compare the efficiency of different algorithms and choose the most suitable one for a given problem.
  • Optimize the performance of programs when working with large datasets or when performance is a critical factor.

We can use Big O notation to analyze the performance of functions and identify potential bottlenecks. For example, a function with a time complexity of O(n) will take longer to run as the input size increases, while a function with a time complexity of O(1) will take the same amount of time regardless of the input size. By understanding the Big O complexity of our algorithms, we can make informed decisions about which approaches are the most efficient and scalable for our specific use cases.
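As a quick illustration of spotting such a bottleneck, here is a small sketch (my own example, using only standard built-ins) that contrasts an O(n) membership check on an array with an O(1)-on-average membership check on a Set:

```typescript
const items = Array.from({ length: 100_000 }, (_, i) => i);

// O(n) per lookup: Array.prototype.includes scans the array element by element.
function hasValueInArray(arr: number[], value: number): boolean {
  return arr.includes(value);
}

// O(1) average per lookup: Set is backed by a hash table.
const itemSet = new Set(items);
function hasValueInSet(set: Set<number>, value: number): boolean {
  return set.has(value);
}

console.log(hasValueInArray(items, 99_999)); // true, after scanning up to n elements
console.log(hasValueInSet(itemSet, 99_999)); // true, in roughly constant time
```

The difference is invisible for a single lookup but becomes a real bottleneck when the lookup runs inside another loop.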

Example of Big O notation (time complexity)

Here are some examples of Big O notation in TypeScript:

```typescript
// O(1) - Constant time complexity
function getFirstElement(array: number[]) {
  return array[0];
}

// O(n) - Linear time complexity
function findElement(array: number[], element: number) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === element) {
      return true;
    }
  }
  return false;
}

// O(n^2) - Quadratic time complexity
function findDuplicates(array: number[]) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length; j++) {
      if (i !== j && array[i] === array[j]) {
        console.log(array[i]);
      }
    }
  }
}

// O(log n) - Logarithmic time complexity
function binarySearch(array: number[], element: number) {
  let left = 0;
  let right = array.length - 1;
  while (left <= right) {
    let mid = Math.floor((left + right) / 2);
    if (array[mid] === element) {
      return mid;
    } else if (array[mid] < element) {
      left = mid + 1;
    } else {
      right = mid - 1;
    }
  }
  return -1;
}
```

In the above example:

  • The [ "getFirstElement" ] function has a time complexity of O(1), because it only performs a single operation (accessing the first element of the array).
  • The [ "findElement" ] function has a time complexity of O(n), because it performs a single operation (comparing each element to the given element) for each element in the array.
  • The [ "findDuplicates" ] function has a time complexity of O(n^2), because it performs a nested loop, with the outer loop running n times and the inner loop running n times for each iteration of the outer loop.
  • The [ "binarySearch" ] function has a time complexity of O(log n), because it reduces the search area by half with each iteration of the loop.

We'll see more examples as we go, but first, let's have a quick refresher on... log.

What is log?

In mathematics and programming, log is a function that represents the power to which a base must be raised to produce a certain number. For example, the log base 2 of 8 is 3, because 2 to the power of 3 is 8. In programming, log is often used to calculate the complexity of algorithms, especially when it comes to searching and sorting algorithms.

In the context of calculating Big O notation, log is often used to represent the time complexity of algorithms that involve searching or dividing a data set into smaller pieces. For example, the time complexity of a binary search algorithm is typically represented as O(log n), because the algorithm reduces the size of the data set by half on each iteration, resulting in a logarithmic growth rate.
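To see why halving leads to log n, here is a small sketch (my own illustration): the number of times n can be halved before reaching 1 is roughly log base 2 of n, which is exactly the maximum number of iterations binary search performs:

```typescript
// Counts how many times n can be halved before reaching 1.
// For n = 8 this returns 3, matching log2(8) = 3.
function halvingSteps(n: number): number {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

console.log(halvingSteps(8));    // 3
console.log(Math.log2(8));       // 3
console.log(halvingSteps(1024)); // 10
```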

In addition to being used to represent the time complexity of algorithms, log is also used in other areas of computer science and mathematics, such as calculating the space complexity of algorithms, representing the growth rate of certain functions, and solving equations. It is a fundamental concept in mathematics and has many practical applications in programming and other fields.

Important concepts for Big O notation

Some important concepts of Big O notation include:

  • Time complexity: This refers to the number of operations an algorithm performs as a function of the size of its input data. For example, an algorithm that performs a linear search on an array of size n would be O(n).

  • Space complexity: This refers to the amount of memory an algorithm consumes as a function of the size of its input data. For example, an algorithm that uses a recursive function to compute the nth Fibonacci number would have O(n) space complexity (see the sketch after this list).

  • Asymptotic complexity: This refers to the behavior of an algorithm as the size of its input data becomes very large. For example, an algorithm that has O(n^2) time complexity would have a slower running time than an algorithm with O(n) time complexity when the size of the input data is large.

  • Best-case, worst-case, and average-case complexity: These refer to the performance of an algorithm under different circumstances. For example, quicksort has a best-case complexity of O(n log n) when the pivots split the data evenly, a worst-case complexity of O(n^2) when the pivots are consistently poor (such as a first-element pivot on already-sorted data), and an average-case complexity of O(n log n) on randomly ordered input.
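To illustrate the space-complexity point above, here is a minimal sketch of the recursive Fibonacci function mentioned in the list. Even though it allocates no arrays, the call stack grows to depth n, so its space complexity is O(n):

```typescript
// Recursive Fibonacci: the deepest chain of pending calls is
// fib(n) -> fib(n - 1) -> ... -> fib(1), so the call stack uses O(n) space.
// (Its time complexity is far worse: O(2^n) without memoization.)
function fib(n: number): number {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55
```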

The simplest tricks to find the Big O complexity of an algorithm

To find the complexity of an algorithm using Big O notation, you can use the following simple tricks:

  • Count the number of basic operations (e.g. assignments, comparisons, etc.) that are performed as the size of the input data increases.

  • Identify the portion of the code that takes the most time to execute (usually the loops, rather than individual assignments or comparisons), and consider only that part when determining the complexity.

  • Ignore constants and lower order terms, as they are not significant when the size of the input data gets very large.

More examples of Big O notation (time complexity)

```typescript
function findLargestNumber(arr: number[]) {
  // Start from the first element rather than 0 so the function also
  // works for arrays that contain only negative numbers.
  let largest = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > largest) {
      largest = arr[i];
    }
  }
  return largest;
}
```

In this example, the time complexity of the algorithm is O(n), where n is the size of the input array.

This is because the number of basic operations performed (i.e. the comparison in the if statement) is directly proportional to the size of the input array.

```typescript
function example(n: number) {
  // The first for loop has a complexity of O(n)
  for (let i = 0; i < n; i++) {
    console.log(i);
  }

  // The second for loop has a complexity of O(n)
  for (let j = 0; j < n; j++) {
    console.log(j);
  }
}
```

In this example, the complexity of each for loop is O(n), because the number of iterations is directly proportional to the value of n: if n increases by 1, each loop performs one more iteration.

The overall complexity of this function is O(2n), which can be simplified to O(n). This means that the time complexity of this function is directly proportional to the value of n, and as n increases, the time it takes to execute the function will also increase linearly.

```typescript
function bubbleSort(arr: number[]) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        let temp = arr[j];
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
}
```

In this example, we implemented bubble sort, which is an example of an O(n^2) sorting algorithm. It has two nested loops, both of which iterate over (up to) all n elements of the input array, so the complexity is O(n^2).

```typescript
function quickSort(arr: number[]): number[] {
  if (arr.length <= 1) {
    return arr;
  }

  const pivot = arr[0];
  const left: number[] = [];
  const right: number[] = [];

  for (let i = 1; i < arr.length; i++) {
    if (arr[i] < pivot) {
      left.push(arr[i]);
    } else {
      right.push(arr[i]);
    }
  }

  return [...quickSort(left), pivot, ...quickSort(right)];
}
```

In this example, we implemented quicksort, which is an example of an O(n log n) sorting algorithm (on average). When the pivot splits the array roughly in half at each step, the recursion is O(log n) levels deep, and each level does O(n) work to partition the elements, so the overall complexity is O(n log n).

Quicksort is a divide-and-conquer sorting algorithm that has an average-case time complexity of O(n log n). It works by selecting a pivot element from the array and partitioning the other elements into two subarrays, according to whether they are less than or greater than the pivot. The subarrays are then sorted recursively, and the results are combined into a sorted array.

Why constants matter in practice but are dropped when calculating Big O notation

In Big O notation, the goal is to determine the overall growth rate of an algorithm as the input size increases.

Constants are not generally considered when analyzing the growth rate because they don't affect the overall trend. For example, if an algorithm has a time complexity of O(n + 2) and another algorithm has a time complexity of O(n + 200), both algorithms have a linear time complexity of O(n). The constants 2 and 200 do not affect the trend of the growth rate, which is why they are dropped in Big O notation.

In practice, however, constants can be important because they can have a significant impact on the performance of an algorithm. Of the two algorithms above, the one with a time complexity of O(n + 2) may be noticeably faster in practice than the one with O(n + 200), because it has a lower constant factor. Therefore, it is important to consider constants in practice, even though they are ignored when analyzing the overall growth rate of an algorithm in Big O notation.

Consider the following two implementations of quicksort in TypeScript:

```typescript
// Implementation 1
function quicksort(arr: number[]): number[] {
  if (arr.length <= 1) return arr;
  const pivot = arr[0];
  const left: number[] = [];
  const right: number[] = [];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] < pivot) left.push(arr[i]);
    else right.push(arr[i]);
  }
  return quicksort(left).concat(pivot, quicksort(right));
}
```

```typescript
// Implementation 2
function quicksort(arr: number[]): number[] {
  if (arr.length <= 1) return arr;
  const pivotIndex = Math.floor(arr.length / 2);
  const pivot = arr[pivotIndex];
  const left: number[] = [];
  const right: number[] = [];
  for (let i = 0; i < arr.length; i++) {
    if (i === pivotIndex) continue;
    if (arr[i] < pivot) left.push(arr[i]);
    else right.push(arr[i]);
  }
  return quicksort(left).concat(pivot, quicksort(right));
}
```

Both implementations have the same average time complexity of O(n log n), but the second may be faster in practice on some inputs: it picks the middle element as the pivot instead of the first, which avoids the consistently unbalanced partitions (and the resulting O(n^2) behavior) that the first implementation produces on already-sorted input. This is just one example of how implementation details and constant factors can affect the performance of an algorithm, even if they are ignored in the theoretical analysis.

This time complexity is considered to be very efficient, as it is much faster than other sorting algorithms such as bubble sort or insertion sort, which have time complexities of O(n^2). In fact, Quicksort is often used as the default sorting algorithm in many programming languages and libraries because of its efficiency and ease of implementation.

One reason why constants are ignored in the calculation of Big O notation is that they have little impact on the overall complexity of the algorithm. For example, in the Quicksort algorithm, the constant time it takes to divide the input into left and right arrays and to combine them back together at the end is insignificant compared to the time it takes to sort the input. Therefore, these constants are usually dropped or ignored when analyzing the complexity of the algorithm.

Another reason why constants are ignored in the calculation of Big O notation is that they can vary significantly depending on the specific implementation of the algorithm. For example, the constants in the Quicksort algorithm may be different if the pivot element is chosen differently (e.g. by selecting the first element in the array instead of the last element) or if the arrays are implemented differently (e.g. using a linked list instead of an array). These variations in constants can make it difficult to accurately compare the complexities of different algorithms, so they are typically dropped or ignored in the calculation of Big O notation.

Let's say we have the following function in TypeScript:

```typescript
function constantExample(n: number) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += i;
  }
  return sum;
}
```

This function has a time complexity of O(n), because the number of operations (in this case, adding each value of i to the sum) grows linearly with the size of the input (n). Now, let's say we add a constant operation to the function:

```typescript
function constantExample(n: number) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += i;
  }
  for (let i = 0; i < 100; i++) {
    sum *= 2;
  }
  return sum;
}
```

This function still has a time complexity of O(n), even though we added a constant operation (the second for loop). This is because the constant operation (performing 100 multiplications) is not relevant to the overall growth of the algorithm. It will always take the same amount of time, regardless of the size of the input. Therefore, we drop the constant when calculating the complexity of the algorithm.

What do we mean when we say we consider the worst case when calculating Big O complexity?

In the context of Big O notation, "worst case" refers to the scenario in which an algorithm performs the most number of operations, or takes the longest amount of time, to complete. This is typically used as a measure of an algorithm's performance because it represents the upper bound of the time or space required by the algorithm.

For example, consider a simple sorting algorithm that repeatedly iterates through an array of numbers and swaps any adjacent elements that are out of order. In the worst-case scenario, the array is sorted in reverse order, and the algorithm must perform on the order of n * (n - 1) / 2 swaps to sort it, where n is the length of the array. The complexity of this algorithm in the worst case is therefore O(n^2), since the number of operations grows quadratically with the size of the input.

Here is an example of this sorting algorithm in TypeScript:

```typescript
function sort(arr: number[]) {
  for (let i = 0; i < arr.length - 1; i++) {
    for (let j = 0; j < arr.length - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        // Swap elements
        let temp = arr[j];
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
}
```

In this example, the outer loop iterates n - 1 times, and the inner loop iterates n - i - 1 times on each iteration of the outer loop, for a total of n * (n - 1) / 2 comparisons in the worst case. This results in a worst-case complexity of O(n^2).

What do we mean when we say growth with respect to input when calculating Big O complexity?

In the context of Big O notation, "growth with respect to input" refers to how the complexity of an algorithm or function scales as the input size increases. For example, consider the following function in Typescript that calculates the sum of all elements in an array:

```typescript
function sum(arr: number[]) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}
```

The complexity of this function is O(n), where n is the size of the input array. This means that as the size of the input array increases, the amount of time and resources required to execute the function also increases linearly. This is because the function needs to loop through the entire array once in order to calculate the sum, and the larger the array, the more times the loop will iterate.

By contrast, consider the following function that calculates the sum of all elements in an array and also squares each element before adding it to the total:

```typescript
function sumAndSquare(arr: number[]) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i] ** 2;
  }
  return total;
}
```

The complexity of this function is still O(n), because the loop still needs to iterate through the entire array once in order to calculate the sum. The additional operation of squaring each element adds only a constant amount of work to each iteration of the loop: it changes the constant factor, but not how the running time grows as the size of the input array increases. This is why constants are typically dropped when calculating the complexity of an algorithm using Big O notation.
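As a closing illustration (my own sketch, not from the examples above), we can observe linear growth empirically. This assumes a runtime where performance.now() is available (modern browsers, or recent Node.js); exact timings will vary by machine, but doubling n should roughly double the runtime of an O(n) function like sum:

```typescript
function sum(arr: number[]): number {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

// Time sum() on inputs of increasing size; an O(n) function should
// scale roughly linearly, so each run takes about twice as long as the last.
for (const n of [1_000_000, 2_000_000, 4_000_000]) {
  const arr = Array.from({ length: n }, (_, i) => i);
  const start = performance.now();
  sum(arr);
  console.log(`n = ${n}: ${(performance.now() - start).toFixed(2)} ms`);
}
```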