Big O Complexity Analysis

What is Big O?

Big O notation describes an upper bound on how an algorithm's runtime or space requirements grow as input size increases. It is most commonly quoted for worst-case performance.

Complexity Classes

| Notation   | Name         | Example           | n=10 | n=100 | n=1000  |
|------------|--------------|-------------------|------|-------|---------|
| O(1)       | Constant     | Array access      | 1    | 1     | 1       |
| O(log n)   | Logarithmic  | Binary search     | 3    | 7     | 10      |
| O(n)       | Linear       | Array iteration   | 10   | 100   | 1000    |
| O(n log n) | Linearithmic | Merge sort        | 30   | 664   | 9966    |
| O(n²)      | Quadratic    | Bubble sort       | 100  | 10000 | 1000000 |
| O(2ⁿ)      | Exponential  | Fibonacci (naive) | 1024 | ~10³⁰ | ~10³⁰¹  |

Code Examples

O(1) - Constant Time

Runtime doesn't depend on input size

def get_first(arr):
    return arr[0] if arr else None  # O(1)

def hash_lookup(table, key):  # `table` avoids shadowing the built-in `dict`
    return table.get(key)  # O(1) average

O(log n) - Logarithmic Time

The key insight: Algorithm divides the problem in half (or some constant fraction) with each step.

What Does "log n" Actually Mean?

Logarithm (log) is the inverse of exponentiation:

If 2³ = 8, then log₂(8) = 3
If 2¹⁰ = 1024, then log₂(1024) = 10

log₂(n) asks: "How many times must I divide n by 2 to reach 1?"

Examples:
  n = 8:    8 → 4 → 2 → 1  (3 divisions) = log₂(8) = 3
  n = 16:   16 → 8 → 4 → 2 → 1  (4 divisions) = log₂(16) = 4
  n = 1024: 1024 → 512 → 256 → 128 → 64 → 32 → 16 → 8 → 4 → 2 → 1
            (10 divisions) = log₂(1024) = 10
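The halving process above can be checked directly with a few lines of Python (the helper name `halvings` is illustrative):

```python
def halvings(n):
    """Count how many times n can be halved (integer division) before reaching 1."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

# halvings(8) == 3, halvings(16) == 4, halvings(1024) == 10
```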

Why Binary Search is O(log n)

How it works: Each comparison eliminates half the remaining elements.

Search for 42 in [1, 5, 12, 23, 42, 56, 78, 91, 105, 120]:

Step 1: Array size = 10, check middle (42 vs 42) → Found!
  (But worst case, we don't find it immediately)

Worst case (searching for 120):
Step 1: Size=10, mid=42, target > 42, eliminate left half → Size=5
Step 2: Size=5, mid=91, target > 91, eliminate left half → Size=2
Step 3: Size=2, mid=105, target > 105, eliminate left half → Size=1
Step 4: Size=1, check 120 → Found!

For n=10 elements: 4 steps maximum (⌈log₂(10)⌉ = 4)
For n=1,000,000: 20 steps maximum (⌈log₂(1,000,000)⌉ = 20)

This is why binary search is incredibly efficient - doubling input size only adds ONE more step!

The Math Behind It

After k steps of halving:
  Remaining elements = n / 2^k

When we're down to 1 element:
  1 = n / 2^k
  2^k = n
  k = log₂(n)

Therefore: Binary search takes log₂(n) steps

Code Example

def binary_search(sorted_arr, target):
    """
    How it works: Repeatedly divide search space in half

    Each iteration:
    - Check middle element
    - If target is smaller, discard right half
    - If target is larger, discard left half
    - If equal, found!

    Halving n elements log₂(n) times reduces to 1 element
    """
    left, right = 0, len(sorted_arr) - 1

    while left <= right:
        mid = (left + right) // 2

        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            left = mid + 1  # Eliminate left half
        else:
            right = mid - 1  # Eliminate right half

    return -1  # Not found

# Example trace for array [1, 3, 5, 7, 9, 11, 13, 15], target=13:
# Step 1: left=0, right=7, mid=3, arr[3]=7, 13>7, left=4
# Step 2: left=4, right=7, mid=5, arr[5]=11, 13>11, left=6
# Step 3: left=6, right=7, mid=6, arr[6]=13, Found at index 6!
# 3 steps for 8 elements (log₂(8) = 3)

Other O(log n) Examples
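Beyond binary search, the same halving pattern shows up in balanced BST lookups, heap push/pop, and fast exponentiation by squaring. A minimal sketch of the last one (the `power` helper is illustrative, not a library function):

```python
def power(base, exp):
    """Compute base**exp in O(log exp) multiplications by repeated squaring.

    Each iteration halves exp, so the loop runs about log2(exp) times.
    """
    result = 1
    while exp > 0:
        if exp & 1:          # Lowest bit set: multiply this power of base in
            result *= base
        base *= base         # Square the base
        exp >>= 1            # Halve the exponent
    return result

# power(2, 10) == 1024 after only 4 loop iterations, not 10 multiplications
```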

Common Misconception

❌ Wrong: "O(log n) means the algorithm uses logarithms"
✓ Correct: "O(log n) means the algorithm repeatedly divides the problem size"

The logarithm is just mathematical notation describing the growth rate. The algorithm itself doesn't need to calculate logs - it just needs to eliminate a constant fraction of possibilities each step.

Why O(log n) is Powerful

Comparison for large inputs:

n = 1,000,000 (1 million)
  O(n):     1,000,000 operations (linear scan)
  O(log n): 20 operations (binary search)
  50,000x faster!

n = 1,000,000,000 (1 billion)
  O(n):     1,000,000,000 operations
  O(log n): 30 operations
  33 million times faster!

This is why databases use B-trees (log n search) instead of scanning tables.
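On a smaller scale, Python's standard-library bisect module gives the same O(log n) lookup over a sorted list; a quick sketch of the contrast:

```python
import bisect

sorted_data = list(range(1_000_000))
target = 876_543

# O(n): linear scan checks elements one by one
linear_index = sorted_data.index(target)

# O(log n): binary search via the standard library (~20 comparisons)
log_index = bisect.bisect_left(sorted_data, target)

assert linear_index == log_index == 876_543
```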

Interview Tip

Question: "How do you recognize an O(log n) algorithm?"
Answer: Look for these patterns:
  - The search space is cut in half (or by any constant fraction) each iteration
  - A loop counter doubles or halves each pass (i *= 2 or n //= 2)
  - Each step descends one level of a balanced tree
  - Recursion makes a single call on half of the input

O(n) - Linear Time

Must check every element

def find_max(arr):
    max_val = arr[0]
    for num in arr:  # O(n)
        if num > max_val:
            max_val = num
    return max_val

O(n log n) - Linearithmic

Efficient sorting algorithms

from heapq import merge  # standard-library merge of two sorted iterables

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # Divide: O(log n) levels
    right = merge_sort(arr[mid:])
    return list(merge(left, right))    # Merge: O(n) per level
    # Total: O(n log n)

O(n²) - Quadratic Time

Nested loops over input

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):                 # O(n) passes
        for j in range(n - 1 - i):     # O(n); last i elements already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    # Total: O(n²)

Space Complexity

| Algorithm                  | Time           | Space    |
|----------------------------|----------------|----------|
| Binary Search (iterative)  | O(log n)       | O(1)     |
| Binary Search (recursive)  | O(log n)       | O(log n) |
| Merge Sort                 | O(n log n)     | O(n)     |
| Quick Sort                 | O(n log n) avg | O(log n) |
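The O(log n) space for recursive binary search comes from the call stack: one frame per halving. A sketch that makes the recursion depth visible (the `depth` parameter is illustrative):

```python
def binary_search_recursive(arr, target, left=0, right=None, depth=0):
    """Recursive binary search; returns (index, recursion depth).

    Each call halves the range, so depth stays around log2(len(arr)) frames.
    """
    if right is None:
        right = len(arr) - 1
    if left > right:
        return -1, depth
    mid = (left + right) // 2
    if arr[mid] == target:
        return mid, depth
    if arr[mid] < target:
        return binary_search_recursive(arr, target, mid + 1, right, depth + 1)
    return binary_search_recursive(arr, target, left, mid - 1, depth + 1)

# For 1024 elements the stack stays near log2(1024) = 10 frames deep
idx, depth = binary_search_recursive(list(range(1024)), 777)
```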