Common Time Complexities 🎯

beginner
7 min

Welcome to a deep dive into the world of Time Complexities! This lesson will help you understand the most common time complexities and their significance in the field of algorithms and data structures.

Understanding Time Complexity 📝

Time Complexity refers to the amount of time an algorithm takes to run as a function of the size of the input. In other words, it helps us predict the efficiency of an algorithm as the input grows.
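
To make this concrete, the hypothetical helper below (an illustration, not part of the lesson) tabulates rough step counts for the complexities covered in the rest of this lesson:

```python
import math

def steps(n):
    """Approximate step counts for common complexities at input size n."""
    return {
        "O(1)": 1,                          # constant: one step no matter the size
        "O(log n)": math.ceil(math.log2(n)),  # halving the problem each step
        "O(n)": n,                          # one step per element
        "O(n^2)": n * n,                    # one step per pair of elements
    }

# For n = 1024, the gap is already dramatic:
# steps(1024) -> {'O(1)': 1, 'O(log n)': 10, 'O(n)': 1024, 'O(n^2)': 1048576}
```

Notice how quickly the columns diverge as n grows; that divergence is what time complexity lets us predict.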

Basic Time Complexities 💡

Let's get acquainted with the basic time complexities:

  1. O(1): An algorithm with a constant time complexity performs the same number of operations, regardless of the size of the input. It's like fetching an item from an array using a fixed index.
```python
def get_item_at_index(arr, index):
    return arr[index]  # O(1): a single array access, regardless of input size
```
  2. O(log n): Logarithmic time complexity describes an algorithm that cuts the problem size in half at each step, as in binary search. Its running time grows far more slowly than linear time, making it efficient even for large data sets.
```python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:          # the search range halves each iteration: O(log n)
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
```

Linear Time Complexities 💡

  1. O(n): Linear time complexity algorithms perform a number of operations proportional to the size of the input. A simple example is iterating through an array or a list once.
```python
def find_sum(arr):
    total = 0
    for num in arr:  # one pass over the input: O(n)
        total += num
    return total
```

Quadratic Time Complexities 💡

  1. O(n²): Quadratic time complexity algorithms perform a number of operations proportional to the square of the input size, typically via nested loops over the same data. They become inefficient for large data sets. A classic example is a naive duplicate check that compares every pair of elements.
```python
def has_duplicate(arr):
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):  # nested loops visit every pair: O(n²)
            if arr[i] == arr[j]:
                return True
    return False
```

Advanced Time Complexities 💡

  1. O(2ⁿ): Exponential time complexity algorithms roughly double their number of operations each time the input grows by one element. They become impractical beyond small inputs and should be avoided whenever a polynomial-time alternative exists.
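A textbook illustration (not one of the lesson's own examples) is the naive recursive Fibonacci function: each call spawns two more, so the call tree roughly doubles with every increment of n.

```python
def fib(n):
    # Naive recursion: fib(n) calls fib(n-1) and fib(n-2),
    # producing on the order of 2^n calls in total.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

# fib(10) -> 55, but fib(50) computed this way would take far too long.
```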

  2. O(n!): Factorial time complexity algorithms grow even faster than exponential ones. An input of size n requires on the order of n! operations, making them impractical for all but the smallest data sets.
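As an illustrative sketch, brute-forcing the traveling salesman problem tries every ordering of the cities, and there are (n − 1)! such orderings. The `shortest_route` helper below is a hypothetical example, not from the lesson:

```python
from itertools import permutations

def shortest_route(dist):
    """Brute-force shortest round trip over a distance matrix: O(n!)."""
    n = len(dist)
    best = float("inf")
    # Fix city 0 as the start and try every ordering of the rest.
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        cost = sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))
        best = min(best, cost)
    return best
```

Even at n = 12 this already means trying almost 40 million routes, which is why factorial-time algorithms only work on tiny inputs.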

Wrapping Up 💡

Understanding time complexities is crucial in optimizing your algorithms and making them more efficient. Remember, an efficient algorithm not only solves problems faster but also uses fewer system resources.

Quick Quiz
Question 1 of 1

What is the time complexity of the `binary_search` function?

Happy coding! 🌟