Algorithm Complexity and Big O Notation
Understanding how algorithms perform as input sizes grow is one of the most important skills in A-Level Computer Science. Algorithm complexity analysis allows us to predict how an algorithm will behave for large datasets and to compare competing solutions objectively, without having to run them on specific hardware.
Why Complexity Matters
Imagine you have two sorting algorithms. On a list of 10 items, both finish in under a millisecond. But on a list of 10 million items, one finishes in 10 seconds while the other takes 3 hours. The difference is algorithmic complexity — the mathematical relationship between the size of the input and the number of operations the algorithm performs.
In industry and in exams, you need to be able to:
- Classify algorithms by their time complexity
- Compare algorithms using Big O notation
- Identify the best-case, worst-case, and average-case behaviour of an algorithm
- Recognise common complexity classes from code or pseudocode
Time Complexity vs Space Complexity
| Type | What it measures | Example |
|---|---|---|
| Time complexity | The number of basic operations as a function of input size n | How many comparisons a sorting algorithm makes |
| Space complexity | The amount of extra memory required as a function of input size n | How much additional storage merge sort uses for temporary arrays |
At A-Level, the focus is primarily on time complexity, but you should be aware that some algorithms trade time for space (and vice versa).
Big O Notation
Big O notation describes an upper bound on an algorithm's growth rate. In exams it is normally used to express the worst-case scenario — the maximum number of operations as a function of the input size n.
We write it as O(f(n)), where f(n) is a mathematical function of n.
Key Rules for Determining Big O
- Drop constant factors. O(3n) simplifies to O(n). Constants do not affect growth rate.
- Drop lower-order terms. O(n² + n) simplifies to O(n²). For large n, the n² term dominates.
- Focus on the term that grows fastest as n increases.
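The simplification rules above can be seen numerically. This short Python sketch (the cost function 3n² + 5n + 2 is an invented example, not from the specification) shows that as n grows, the ratio of the exact operation count to the dominant n² term settles towards a constant — exactly the constant that Big O discards:

```python
def exact_ops(n):
    # Hypothetical algorithm costing 3n^2 + 5n + 2 basic operations.
    return 3 * n**2 + 5 * n + 2

for n in [10, 100, 10_000]:
    ratio = exact_ops(n) / n**2
    print(n, ratio)  # ratio approaches the constant 3 as n grows
```

For n = 10,000 the ratio is already 3.0005, so the lower-order terms contribute almost nothing.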
Common Complexity Classes
The following table summarises the complexity classes you must know:
| Big O | Name | Example Algorithm | Growth |
|---|---|---|---|
| O(1) | Constant | Accessing an array element by index | Does not grow with n |
| O(log n) | Logarithmic | Binary search | Grows very slowly |
| O(n) | Linear | Linear search | Grows proportionally to n |
| O(n log n) | Linearithmic | Merge sort, quick sort (average) | Grows slightly faster than linear |
| O(n²) | Quadratic | Bubble sort, insertion sort (worst case) | Grows rapidly |
| O(2ⁿ) | Exponential | Brute-force subset generation | Doubles with each additional element |
| O(n!) | Factorial | Brute-force travelling salesman | Becomes impractical extremely quickly |
Growth Rate Comparison
Consider how these classes scale as n increases:
| n | O(1) | O(log n) | O(n) | O(n log n) | O(n²) | O(2ⁿ) |
|---|---|---|---|---|---|---|
| 1 | 1 | 0 | 1 | 0 | 1 | 2 |
| 10 | 1 | 3.3 | 10 | 33 | 100 | 1,024 |
| 100 | 1 | 6.6 | 100 | 664 | 10,000 | 1.27 x 10³⁰ |
| 1,000 | 1 | 10 | 1,000 | 10,000 | 1,000,000 | — |
| 1,000,000 | 1 | 20 | 1,000,000 | 20,000,000 | 10¹² | — |
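You can reproduce a row of the table above with a few lines of Python (the function name `growth_row` is my own; the table's logarithms are base 2):

```python
import math

def growth_row(n):
    # Evaluate each complexity class at a given input size n.
    return {
        "O(1)": 1,
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
        "O(2^n)": 2 ** n,
    }

print(growth_row(10))  # matches the n = 10 row: 1, 3.3, 10, 33, 100, 1024
```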
Exam Tip: You must be able to state the Big O complexity for every algorithm on the specification. Examiners frequently ask you to "state the time complexity" or "explain why algorithm X is more efficient than algorithm Y for large datasets."
Determining Complexity from Pseudocode
O(1) — Constant Time
```
FUNCTION getFirst(arr)
    RETURN arr[0]
END FUNCTION
```
No matter how large the array, this always performs exactly one operation.
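A direct Python translation of the pseudocode makes the point concrete — one array access, regardless of the list's length:

```python
def get_first(arr):
    # A single index access: O(1), independent of len(arr).
    return arr[0]
```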
O(n) — Linear Time
```
FUNCTION sumAll(arr, n)
    total = 0
    FOR i = 0 TO n - 1
        total = total + arr[i]
    END FOR
    RETURN total
END FUNCTION
```
The loop runs n times, so the number of operations grows linearly with n.
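In Python the same algorithm looks like this (iterating directly over the list rather than by index, which is idiomatic but identical in cost):

```python
def sum_all(arr):
    total = 0
    for value in arr:   # loop body executes len(arr) times: O(n)
        total += value
    return total
```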
O(n²) — Quadratic Time
```
FUNCTION printPairs(arr, n)
    FOR i = 0 TO n - 1
        FOR j = 0 TO n - 1
            OUTPUT arr[i], arr[j]
        END FOR
    END FOR
END FUNCTION
```
The outer loop runs n times and, for each iteration, the inner loop also runs n times. Total operations: n x n = n².
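A Python version that counts the pairs instead of printing them (so the n² behaviour is easy to verify):

```python
def count_pairs(arr):
    # Visits every (x, y) pair: n * n = n^2 iterations for a list of length n.
    count = 0
    for x in arr:
        for y in arr:
            count += 1
    return count
```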
Nested Loops — General Rule
- One loop over n → O(n)
- Two nested loops over n → O(n²)
- Three nested loops over n → O(n³)
However, not all nested loops are O(n²). A loop whose variable halves on each iteration (as in binary search) contributes only O(log n): on its own it is O(log n), and nested inside a loop that runs n times it gives O(n log n).
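A minimal sketch of the halving case (`halving_ops` is an illustrative name, not a specification algorithm): the outer loop runs n times, the inner loop halves its counter each pass and so runs about log₂ n times, giving roughly n log₂ n operations in total.

```python
def halving_ops(n):
    # Outer loop: n iterations. Inner loop: j halves each pass,
    # so about log2(n) iterations. Total work: roughly n * log2(n).
    ops = 0
    for i in range(n):
        j = n
        while j > 1:
            j //= 2
            ops += 1
    return ops

print(halving_ops(8))  # 8 outer iterations x 3 halvings = 24
```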
Best Case, Worst Case, and Average Case
| Case | Meaning | Example (Linear Search) |
|---|---|---|
| Best case | The minimum number of operations | Target is the first element: O(1) |
| Worst case | The maximum number of operations | Target is last or not present: O(n) |
| Average case | Expected operations over all possible inputs | On average, checks n/2 elements: O(n) |
Big O typically refers to the worst case unless stated otherwise. This gives us a guarantee: the algorithm will never perform worse than the stated complexity.
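All three cases for linear search can be observed by counting comparisons directly. This Python sketch (the helper name and return format are my own) returns the index found alongside the comparison count:

```python
def linear_search_comparisons(arr, target):
    # Returns (index or None, number of comparisons made).
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return None, comparisons

data = [4, 8, 15, 16, 23, 42]
print(linear_search_comparisons(data, 4))    # best case: 1 comparison
print(linear_search_comparisons(data, 42))   # worst case, found last: 6
print(linear_search_comparisons(data, 99))   # worst case, not present: 6
```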
Exam Tip: When a question says "state the time complexity", give the worst-case Big O unless the question specifies otherwise. If asked for best case, make sure you explain the scenario that produces it.
Worked Example
Question: An algorithm contains the following pseudocode. State its time complexity and justify your answer.
```
FOR i = 0 TO n - 1
    FOR j = i + 1 TO n - 1
        IF arr[j] < arr[i] THEN
            SWAP arr[i], arr[j]
        END IF
    END FOR
END FOR
```
Answer: This is a selection-style (exchange) sort. The outer loop runs n times. The inner loop runs (n-1) + (n-2) + ... + 1 = n(n-1)/2 times in total, which expands to (n² - n)/2. Dropping the constant factor of 1/2 and the lower-order n term, the time complexity is O(n²).
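The worked example translates directly into Python. Counting the comparisons confirms the n(n-1)/2 total derived above:

```python
def exchange_sort(arr):
    # Python translation of the pseudocode above; returns the sorted
    # list and the number of comparisons made, which is n(n-1)/2.
    arr = list(arr)            # copy so the caller's list is untouched
    comparisons = 0
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
            if arr[j] < arr[i]:
                arr[i], arr[j] = arr[j], arr[i]
    return arr, comparisons

result, comps = exchange_sort([5, 3, 1, 4, 2])
print(result, comps)  # comps = 5 * 4 / 2 = 10
```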
Summary
- Big O notation describes the growth rate of an algorithm's time (or space) requirements as input size increases.
- Always simplify by dropping constants and lower-order terms.
- Common classes from fastest to slowest: O(1), O(log n), O(n), O(n log n), O(n²), O(2ⁿ), O(n!).
- Identify complexity by counting nested loops and how the iteration variable changes.
- Big O normally refers to the worst case unless stated otherwise.