O(1) — Constant Time
Execution time stays the same regardless of input size. The number of operations is fixed at some constant k.
Formula
T(n) = k (k is a constant)
→ Whether n is 1 or 1 billion, only k operations
- Array index access: arr[i] (direct address calculation)
- Hash table insert/lookup (without collisions)
- Stack push / pop operations
- Linked list head node access
Array index access is O(1) because: memory address = base_address + index × element_size — a single arithmetic operation.
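The O(1) operations listed above can be sketched in Python (the data here is purely illustrative):

```python
# Each of these takes the same time whether the container
# holds ten elements or ten million.
arr = list(range(1_000_000))
table = {i: i * i for i in range(1_000)}
stack = []

x = arr[42]        # direct index: base_address + 42 × element_size
v = table.get(7)   # hash lookup: O(1) average, assuming no collisions
stack.append(x)    # push
top = stack.pop()  # pop
```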
O(log n) — Logarithmic Time
Each step halves the search space. Even with n = 1 billion, only about 30 iterations are needed.
Derivation
After k steps: n × (1/2)ᵏ = 1
→ k = log₂ n
- Binary Search
- AVL Tree, Red-Black Tree search/insert
- Exponentiation by squaring (divide & conquer)
- B-Tree operations (database indexes)
For n = 1,000,000,000: log₂(10⁹) ≈ 30. Only 30 comparisons to find one element among a billion!
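The halving pattern is easiest to see in binary search, the first example above. A minimal sketch:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent. O(log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the search space each iteration
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1
```

On a billion-element sorted array the loop body runs at most about 30 times, matching the log₂(10⁹) ≈ 30 figure above.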
O(n) — Linear Time
Traverses data from start to end once. A single loop executing n times — the most intuitive pattern.
Derivation
for i in range(n):      # runs n times
    single_operation()
→ T(n) = c × n = O(n)
- Sequential search in unsorted array
- Linked list traversal and search
- Computing sum/max of an entire array
- String search and comparison
O(n) is often the "best we can do." If every element must be examined at least once, O(n) becomes the lower bound.
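A concrete instance of the "must examine every element" lower bound, computing a maximum (function name illustrative):

```python
def linear_max(arr):
    """Find the maximum by examining every element once. O(n)."""
    best = arr[0]
    for x in arr[1:]:      # n - 1 comparisons: skipping any element could miss the max
        if x > best:
            best = x
    return best
```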
O(n log n) — Linearithmic Time
Divides data into log n levels and processes all n elements at each level. The theoretical lower bound for comparison-based sorting.
Derivation (Merge Sort)
Recursion depth: log₂ n levels
Merge cost per level: O(n)
→ Total: O(n × log n)
- Merge Sort — guaranteed O(n log n)
- Heap Sort
- Quick Sort — average case
- Tim Sort (Python, Java default)
Information-theoretic proof: sorting n elements by comparison requires at least log₂(n!) ≈ n log n comparisons. This is the theoretical lower bound.
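The approximation log₂(n!) ≈ n log₂ n can be checked numerically (the choice of n = 1024 is arbitrary):

```python
import math

n = 1024
exact = math.log2(math.factorial(n))   # log2(n!): minimum comparisons needed
approx = n * math.log2(n)              # the n log n bound quoted above
ratio = exact / approx                 # approaches 1 as n grows (Stirling)
```

By Stirling's formula, log₂(n!) = n log₂ n − n/ln 2 + O(log n), so the ratio is slightly below 1 for any finite n.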
O(n²) — Quadratic Time
Nested loops are the classic cause: outer loop n × inner loop n = n² operations total.
Derivation
for i in range(n):        # n times
    for j in range(n):    # n times each
        operation()
→ T(n) = n × n = n²
- Bubble Sort, Selection Sort, Insertion Sort
- Comparing all pairs in two arrays (brute force)
- 2D matrix traversal (n×n)
- Graph adjacency matrix search
At n = 100,000: operations = 10,000,000,000 (10 billion). At 10⁹ ops/sec, that's 10+ seconds → timeout in most competitive programming!
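A typical brute-force all-pairs example (name and problem chosen for illustration):

```python
def count_pairs_with_sum(arr, target):
    """Check every pair of elements: n(n-1)/2 comparisons → O(n²)."""
    count = 0
    for i in range(len(arr)):                # outer loop: n iterations
        for j in range(i + 1, len(arr)):     # inner loop: quadratic total
            if arr[i] + arr[j] == target:
                count += 1
    return count
```

At n = 100,000 this runs about 5 × 10⁹ comparisons — the same order of magnitude as the timeout estimate above.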
O(2ⁿ) — Exponential Time
Each step doubles the number of branches. The recursion tree grows exponentially.
Fibonacci (naive recursion)
fib(n) = fib(n-1) + fib(n-2)
Call tree nodes = O(2ⁿ) (actual growth ≈ φⁿ ≈ 1.618ⁿ)
fib(50) → ~40 billion calls!
- Naive recursive Fibonacci
- Power set enumeration: all subsets of n elements
- Knapsack problem (brute force)
- Complete chess game tree search
Solution: Memoization or dynamic programming can reduce Fibonacci from O(2ⁿ) to O(n). Caching repeated computations is the key.
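The exponential-vs-linear contrast can be sketched side by side (here using Python's built-in `lru_cache` for memoization):

```python
from functools import lru_cache

def fib_naive(n):
    """Exponential: the call tree branches twice at every level."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized: each subproblem is computed once → O(n)."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

`fib_memo(50)` returns instantly, while `fib_naive(50)` would make tens of billions of calls.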
O(n!) — Factorial Time
Enumerates all permutations of n elements. First position: n choices × second: (n-1) × ... × last: 1 = n! total.
Growth Rate
n=5 → 120
n=10 → 3,628,800
n=15 → 1,307,674,368,000 (1.3T)
n=20 → 2.43×10¹⁸
Real Time (10⁹ ops/sec)
n=12 → ~0.5 seconds (limit)
n=13 → ~6.2 seconds (timeout)
n=20 → ~77 years
n=25 → ~490 million years
n=30 → ~8×10¹⁵ years (far exceeds the age of the universe)
- Traveling Salesman (brute force)
- All permutations of n numbers
- All seating arrangements for n people
- Brute force cryptographic decryption
At n=20, no modern supercomputer can exhaustively search within any reasonable time. For NP-hard problems like TSP, approximation algorithms, heuristics, or DP+bitmask (reducing to O(n²·2ⁿ)) are the practical approaches.
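A toy brute-force TSP over a distance matrix, enumerating all (n−1)! tours with `itertools.permutations` (only feasible for small n):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try every tour over a symmetric distance matrix: O(n!)."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + perm + (0,)            # close the cycle
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, cost)
    return best
```

Already at n = 13 this enumerates over 479 million tours; the DP+bitmask approach mentioned above trades this for O(n²·2ⁿ).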
Big-O Simplification Rules
Rule 1: Keep only the highest-order term
T(n) = 5n² + 3n + 100
→ O(n²) (drop constants and lower terms)
Rule 2: Ignore constant coefficients
T(n) = 50n
→ O(n) (ignore coefficient 50)
Rule 3: Sequential execution → Addition
O(n) + O(n²) = O(n²)
(the larger complexity dominates)
Rule 4: Nesting → Multiplication
for i in range(n):
for j in range(m):
→ O(n × m)
Rule 5: Distinguish Best / Average / Worst case
Quick Sort: Best O(n log n) / Average O(n log n) / Worst O(n²) (already-sorted input where the pivot is always the min/max)
→ In competitive programming, worst case is typically the standard for evaluation.
Detailed Algorithm Descriptions
Bubble Sort
How it works: Repeatedly steps through the list, compares adjacent elements, and swaps them if they're in the wrong order. Largest elements "bubble up" to the end.
✓ Advantages
- Simple to understand and implement
- O(1) space — in-place sorting
- Stable sorting algorithm
- Adaptive: O(n) when nearly sorted
✗ Disadvantages
- Very inefficient for large datasets
- O(n²) makes it impractical for n > 1,000
- Many unnecessary comparisons
- Poor cache performance
Best for: teaching purposes, very small datasets (n < 10), or nearly sorted arrays.
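A minimal in-place sketch with the early-exit optimization that makes it adaptive:

```python
def bubble_sort(arr):
    """In-place, stable bubble sort. Early exit gives O(n) on sorted input."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):           # the largest element bubbles to the end
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                      # no swaps → already sorted, stop
            break
    return arr
```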
Selection Sort
How it works: Divides array into sorted and unsorted portions. Repeatedly finds the minimum element from the unsorted portion and places it at the beginning.
✓ Advantages
- Simple implementation
- O(1) space complexity
- Minimizes number of swaps: O(n)
- Performance doesn't depend on initial order
✗ Disadvantages
- Always O(n²), even for sorted arrays
- Not stable
- Not adaptive
- Poor performance on large lists
Best for: when memory writes are expensive (minimizes swaps), small datasets.
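A minimal sketch; note that at most n−1 swaps occur, which is the algorithm's main selling point:

```python
def selection_sort(arr):
    """In-place selection sort: always O(n²) comparisons, at most n-1 swaps."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):        # scan the unsorted portion for the minimum
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:                 # one swap per pass, at most
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```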
Insertion Sort
How it works: Builds the final sorted array one item at a time. Picks each element and inserts it into its correct position in the sorted portion.
✓ Advantages
- Efficient for small datasets
- Adaptive: O(n) for nearly sorted data
- Stable sorting algorithm
- Online: can sort as data arrives
✗ Disadvantages
- O(n²) worst and average case
- Inefficient for large datasets
- Many shift operations
Best for: small datasets (n < 50), nearly sorted data, online sorting, or as part of hybrid algorithms like Timsort.
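A minimal sketch of the shift-and-insert loop:

```python
def insertion_sort(arr):
    """In-place, stable insertion sort. O(n) when the input is nearly sorted."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:   # shift larger elements one slot right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                 # drop key into its place in the sorted prefix
    return arr
```

On an already-sorted array the while-loop never runs, which is exactly the adaptive O(n) behavior noted above.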
Merge Sort
How it works: Divide-and-conquer algorithm. Recursively divides array into halves until single elements, then merges them back in sorted order.
✓ Advantages
- Guaranteed O(n log n) in all cases
- Stable sorting algorithm
- Predictable performance
- Good for linked lists; parallelizable
✗ Disadvantages
- Requires O(n) extra space
- Not in-place
- Slower than quicksort in practice
- Not adaptive
Best for: large datasets requiring guaranteed O(n log n), linked lists, external sorting, stability-critical scenarios.
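A sketch of the recursive split-and-merge (out-of-place, using the O(n) auxiliary space noted above):

```python
def merge_sort(arr):
    """Stable divide-and-conquer sort: O(n log n) in all cases."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])         # recursion depth: log2(n) levels
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge cost per level: O(n)
        if left[i] <= right[j]:          # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```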
Quick Sort
How it works: Picks a 'pivot' element, partitions array so elements smaller than pivot come before it and larger elements come after, then recursively sorts the partitions.
✓ Advantages
- Average O(n log n) performance
- In-place: O(log n) space for recursion
- Cache-friendly (good locality)
- Fastest in practice for random data
✗ Disadvantages
- Worst case O(n²) for sorted data
- Not stable
- Pivot selection affects performance
- Recursive: stack overflow risk
Best for: general-purpose sorting for large random datasets. Used in most standard libraries with optimizations.
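A readable out-of-place sketch; real library implementations use the in-place partition described above, but the structure is the same. A random pivot sidesteps the sorted-input worst case:

```python
import random

def quick_sort(arr):
    """Average O(n log n). Random pivot avoids the O(n²) sorted-input case."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]       # partition around the pivot
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)
```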
Heap Sort
How it works: Builds a max-heap from the data, then repeatedly extracts the maximum element and rebuilds the heap.
✓ Advantages
- Guaranteed O(n log n) worst case
- In-place: O(1) extra space
- No worst-case degradation
- Good for memory-constrained systems
✗ Disadvantages
- Not stable
- Poor cache locality
- Slower than quicksort in practice
- Complex implementation
Best for: O(n log n) worst-case guarantee with O(1) space, embedded systems, priority queues.
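A minimal in-place sketch built around a sift-down helper:

```python
def heap_sort(arr):
    """In-place heap sort: build a max-heap, then extract the max n times."""
    def sift_down(end, root):
        while (child := 2 * root + 1) < end:
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1                    # pick the larger child
            if arr[root] >= arr[child]:
                return                        # heap property restored
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):       # heapify bottom-up: O(n)
        sift_down(n, i)
    for end in range(n - 1, 0, -1):           # move the max to the end, re-sift
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(end, 0)
    return arr
```

The scattered parent/child jumps inside `sift_down` are the source of the poor cache locality noted above.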
Counting Sort
How it works: Non-comparison sort. Counts occurrences of each distinct element, then uses arithmetic to calculate positions.
✓ Advantages
- Linear time: O(n+k)
- Stable sorting algorithm
- Very fast for small range k
- Simple implementation
✗ Disadvantages
- Requires O(k) extra space
- Only works with integers in limited range
- Impractical if k >> n
- Not in-place
Best for: integers in a small known range (e.g., grades 0-100, ages 0-120), when k = O(n).
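A simplified sketch for non-negative integers (the stable variant used inside radix sort additionally builds prefix sums of the counts to compute output positions):

```python
def counting_sort(arr, k):
    """Sort non-negative integers in the range [0, k]. O(n + k) time, O(k) space."""
    counts = [0] * (k + 1)
    for x in arr:                        # count occurrences of each value
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):   # emit each value c times, in order
        out.extend([value] * c)
    return out
```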
Radix Sort
How it works: Non-comparison sort. Processes integers digit by digit (from least significant to most), using counting sort as a subroutine.
✓ Advantages
- Linear time when d is constant: O(d×n)
- Stable sorting algorithm
- Excellent for fixed-length integers
- Can outperform O(n log n) algorithms
✗ Disadvantages
- Only works with integers/strings
- Requires O(n+k) extra space
- Sensitive to d (number of digits)
- Not efficient for variable-length keys
Best for: large sets of integers or fixed-length strings (IP addresses, phone numbers, dates).
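A minimal LSD sketch for non-negative integers; each digit pass is a stable bucket distribution, which is why the final result is sorted:

```python
def radix_sort(arr, base=10):
    """LSD radix sort for non-negative integers: O(d × (n + base))."""
    if not arr:
        return arr
    max_val = max(arr)
    exp = 1
    while max_val // exp > 0:                     # one stable pass per digit
        buckets = [[] for _ in range(base)]
        for x in arr:
            buckets[(x // exp) % base].append(x)  # current digit picks the bucket
        arr = [x for b in buckets for x in b]     # concatenate preserves order
        exp *= base
    return arr
```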
Bucket Sort
How it works: Distributes elements into a number of buckets. Each bucket is then sorted individually (often using insertion sort or another algorithm).
✓ Advantages
- Average O(n+k) when data is uniformly distributed
- Stable sorting algorithm
- Can be very fast with good distribution
- Parallelizable across buckets
✗ Disadvantages
- Worst case O(n²) with poor distribution
- Requires O(n+k) extra space
- Performance depends on distribution
- Need to choose bucket count carefully
Best for: uniformly distributed data over a known range, floating-point numbers in [0, 1).
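A minimal sketch for floats in [0, 1); Python's built-in `sorted` stands in for the per-bucket sort (insertion sort is typical in practice, since uniform distribution keeps buckets small):

```python
def bucket_sort(arr, num_buckets=10):
    """For uniform floats in [0, 1): distribute, sort each bucket, concatenate."""
    buckets = [[] for _ in range(num_buckets)]
    for x in arr:
        buckets[int(x * num_buckets)].append(x)  # value maps directly to a bucket
    result = []
    for b in buckets:
        result.extend(sorted(b))                 # each bucket sorted independently
    return result
```

If all elements land in one bucket (badly skewed data), this degrades to the per-bucket sort's complexity, the O(n²) worst case noted above.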