AQA A-Level Computer Science: Algorithms and Theory of Computation
Algorithms and the theory of computation sit at the heart of AQA A-Level Computer Science. These topics demand more than surface-level memorisation -- you need to understand how algorithms work step by step, why certain approaches are more efficient than others, and how theoretical models define the limits of what computers can and cannot do.
This guide covers the key algorithms and theory of computation content from the AQA specification, explains how these topics are assessed across both papers and the NEA, and provides the detail you need to answer exam questions with confidence.
How the AQA A-Level Is Structured
Before diving into the content, it is important to understand where algorithms and theory of computation appear in the qualification.
Paper 1: On-screen exam
- 2 hours 30 minutes, 100 marks (40% of A-Level)
- Tests programming skills, computational thinking, and problem-solving
- You will write and analyse code on screen, including implementations of standard algorithms
- Expect questions that ask you to trace, modify, or write algorithms in your chosen programming language
Paper 2: Written exam
- 2 hours 30 minutes, 100 marks (40% of A-Level)
- Tests theoretical knowledge across the full specification
- Questions on theory of computation -- finite state machines, regular expressions, Turing machines, BNF, and problem classification -- appear here
- You may also be asked to describe algorithms in prose or trace them on paper
Non-Exam Assessment (NEA)
- Programming project, 75 marks (20% of A-Level)
- Provides an opportunity to implement algorithms from this topic area in a practical context
Algorithms crop up across both papers. Paper 1 tends to focus on implementation and analysis through code, while Paper 2 tests your understanding of theoretical foundations and your ability to describe algorithmic processes without a computer.
Graph Traversal
Graphs are fundamental data structures that model relationships between objects. AQA expects you to know two standard graph traversal algorithms: depth-first search and breadth-first search.
Depth-First Search (DFS)
Depth-first search explores as far along a branch as possible before backtracking. It uses a stack (or recursion, which implicitly uses the call stack) to keep track of which nodes to visit next.
How it works:
- Start at the chosen source node, mark it as visited, and push it onto the stack.
- Look at the top node on the stack. Visit an unvisited adjacent node, mark it as visited, and push it onto the stack.
- If the top node has no unvisited adjacent nodes, pop it from the stack.
- Repeat until the stack is empty.
DFS is well suited to problems where you need to explore all possible paths, such as solving a maze or detecting cycles in a graph. Its memory use grows with the depth of the search rather than the width of the frontier, so it typically uses less memory than BFS on wide, bushy graphs.
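The steps above can be sketched in Python with an explicit stack. The adjacency-list representation and node names here are illustrative, not from the AQA specification:

```python
def dfs(graph, start):
    """Iterative depth-first search using an explicit stack.
    graph: adjacency list as a dict mapping node -> list of neighbours."""
    visited = [start]
    stack = [start]
    while stack:
        node = stack[-1]  # look at the top node on the stack
        # find an unvisited neighbour of the top node
        next_node = next((n for n in graph[node] if n not in visited), None)
        if next_node is None:
            stack.pop()                # dead end: backtrack
        else:
            visited.append(next_node)  # mark as visited and push
            stack.append(next_node)
    return visited

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
```

Note how the traversal reaches D (the deepest node along the first branch) before backtracking to visit C.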
Breadth-First Search (BFS)
Breadth-first search explores all neighbours of the current node before moving to the next level. It uses a queue to manage the order of traversal.
How it works:
- Start at the source node, mark it as visited, and enqueue it.
- Dequeue the front node. Visit all its unvisited adjacent nodes, mark each as visited, and enqueue them.
- Repeat until the queue is empty.
BFS is the natural choice when you need the shortest path in an unweighted graph, because it visits nodes in order of increasing distance from the source. It is also used in network broadcasting and finding connected components.
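For comparison, a BFS sketch on the same kind of adjacency list, using a queue instead of a stack (the graph shown is an illustrative example):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search: visits nodes level by level using a queue."""
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()  # dequeue the front node
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)  # mark as visited when enqueued
                queue.append(neighbour)
    return visited

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```

Here both of A's neighbours (B and C) are visited before the deeper node D, in contrast to the DFS order.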
Exam tip: When asked to compare DFS and BFS, focus on the data structure used (stack vs queue), the order nodes are visited, and the suitability for different types of problem. Always state that BFS finds the shortest path in unweighted graphs -- this is a commonly tested point.
Tree Traversal
Trees are a special case of graphs: they are connected and contain no cycles. AQA requires you to know three methods for traversing binary trees.
Pre-order Traversal
Visit the root first, then recursively traverse the left subtree, then the right subtree. Pre-order traversal is useful for creating a copy of the tree or producing a prefix expression from an expression tree.
In-order Traversal
Recursively traverse the left subtree, then visit the root, then recursively traverse the right subtree. For a binary search tree, in-order traversal visits nodes in ascending sorted order -- a point that examiners frequently test.
Post-order Traversal
Recursively traverse the left subtree, then the right subtree, then visit the root. Post-order traversal is used when deleting a tree (children before parent) and for producing postfix expressions.
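All three traversals can be written as short recursive functions. The Node class and the example tree below are illustrative:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def pre_order(n):   # root, left, right
    return [n.value] + pre_order(n.left) + pre_order(n.right) if n else []

def in_order(n):    # left, root, right
    return in_order(n.left) + [n.value] + in_order(n.right) if n else []

def post_order(n):  # left, right, root
    return post_order(n.left) + post_order(n.right) + [n.value] if n else []

# A small binary search tree:     4
#                                / \
#                               2   6
#                              / \ / \
#                             1  3 5  7
tree = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
print(pre_order(tree))   # [4, 2, 1, 3, 6, 5, 7]
print(in_order(tree))    # [1, 2, 3, 4, 5, 6, 7] -- sorted, as expected for a BST
print(post_order(tree))  # [1, 3, 2, 5, 7, 6, 4]
```

The in-order result coming out sorted is exactly the BST property examiners like to test.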
When answering exam questions on tree traversal, draw the tree if one is not provided and work through the traversal methodically, writing out the order of nodes visited at each step.
Searching Algorithms
Linear Search
Linear search checks each element of a data structure in sequence until the target is found or the end is reached. It works on both sorted and unsorted data. Its time complexity is O(n) in the worst case.
Binary Search
Binary search works only on sorted data. It repeatedly divides the search space in half by comparing the target with the middle element. If the target is less than the middle element, search the left half; if greater, search the right half. Its time complexity is O(log n), making it dramatically faster than linear search on large datasets.
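A minimal iterative sketch of binary search (the data list is an illustrative example):

```python
def binary_search(items, target):
    """Binary search on a sorted list; returns the index of target, or -1."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1    # target must be in the right half
        else:
            high = mid - 1   # target must be in the left half
    return -1

data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(data, 23))  # 5
print(binary_search(data, 7))   # -1 (not present)
```

Each iteration halves the search space, which is where the O(log n) complexity comes from.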
Binary Tree Search
A binary tree search traverses a binary search tree by comparing the target value with the current node. If the target is smaller, move to the left child; if larger, move to the right child. In a balanced tree this gives O(log n) performance, but in the worst case (a degenerate tree that resembles a linked list) it degrades to O(n).
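The same comparison logic can be sketched recursively; here the tree is represented as nested (value, left, right) tuples, an illustrative encoding rather than a prescribed one:

```python
def bst_search(node, target):
    """Search a binary search tree given as nested (value, left, right) tuples.
    Returns True if the target is present."""
    if node is None:
        return False
    value, left, right = node
    if target == value:
        return True
    elif target < value:
        return bst_search(left, target)   # smaller values live in the left subtree
    else:
        return bst_search(right, target)  # larger values live in the right subtree

tree = (8, (3, (1, None, None), (6, None, None)), (10, None, (14, None, None)))
print(bst_search(tree, 6))   # True
print(bst_search(tree, 7))   # False
```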
Sorting Algorithms
Sorting is one of the most heavily examined algorithm topics. You must know bubble sort, merge sort, and quick sort -- how each works, their time complexities, and their relative strengths.
Bubble Sort
Bubble sort repeatedly passes through the list, comparing adjacent elements and swapping them if they are in the wrong order. After each pass, the largest unsorted element "bubbles" to its correct position.
Step-by-step:
- Compare the first two elements. Swap if the first is greater than the second.
- Move to the next pair and repeat.
- After one full pass, the largest element is in its correct position at the end.
- Repeat for the remaining unsorted portion.
- Stop when a complete pass is made with no swaps.
- Time complexity: O(n^2) in the worst and average case; O(n) in the best case (already sorted, with an early exit optimisation).
- Space complexity: O(1) -- it sorts in place.
- Bubble sort is simple to implement but inefficient on large datasets.
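The steps above, including the early-exit optimisation, can be sketched as:

```python
def bubble_sort(items):
    """Bubble sort with the early-exit optimisation."""
    items = list(items)              # sort a copy, leaving the input unchanged
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):   # the last i elements are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:              # a full pass with no swaps: already sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

The `swapped` flag is what gives the O(n) best case on already-sorted input.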
Merge Sort
Merge sort is a divide and conquer algorithm. It splits the list into smaller sublists, sorts them, and then merges them back together.
Step-by-step:
- If the list has one element (or is empty), it is already sorted -- return it.
- Split the list into two roughly equal halves.
- Recursively apply merge sort to each half.
- Merge the two sorted halves by comparing elements from the front of each and placing the smaller one into the result list.
- Time complexity: O(n log n) in all cases -- worst, average, and best.
- Space complexity: O(n) -- it requires additional space for the temporary sublists during merging.
- Merge sort is stable (it preserves the relative order of equal elements) and consistently efficient, but uses more memory than in-place algorithms.
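A compact sketch of the split-and-merge steps described above:

```python
def merge_sort(items):
    if len(items) <= 1:              # base case: a list of 0 or 1 elements is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # recursively sort each half
    right = merge_sort(items[mid:])
    # merge: repeatedly take the smaller front element of the two halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # append whatever remains of either half

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

The temporary `merged` list is the extra O(n) space the algorithm requires.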
Quick Sort
Quick sort is another divide and conquer algorithm. It selects a pivot element and partitions the list so that all elements less than the pivot are on one side and all elements greater are on the other.
Step-by-step:
- Choose a pivot element (commonly the first, last, or middle element).
- Partition the list: move elements smaller than the pivot to the left and elements larger to the right.
- Recursively apply quick sort to the left and right partitions.
- Time complexity: O(n log n) on average; O(n^2) in the worst case (when the pivot is consistently the smallest or largest element).
- Space complexity: O(log n) for the recursive call stack in the average case.
- Quick sort is generally faster than merge sort in practice due to better cache performance and lower constant factors, despite its worse worst-case complexity.
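A sketch of the pivot-and-partition idea, using the middle element as the pivot. This version builds new lists for clarity rather than partitioning in place, so its space use is higher than the in-place variant described above:

```python
def quick_sort(items):
    """Quick sort with a middle-element pivot (not in place, for readability)."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]    # elements smaller than the pivot
    equal = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]   # elements larger than the pivot
    return quick_sort(left) + equal + quick_sort(right)

print(quick_sort([10, 80, 30, 90, 40, 50, 70]))  # [10, 30, 40, 50, 70, 80, 90]
```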
Comparing the Three
| Algorithm | Best Case | Average Case | Worst Case | Space | Stable? |
|---|---|---|---|---|---|
| Bubble sort | O(n) | O(n^2) | O(n^2) | O(1) | Yes |
| Merge sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes |
| Quick sort | O(n log n) | O(n log n) | O(n^2) | O(log n) | No |
In exam questions that ask you to recommend a sorting algorithm, consider the size of the dataset, whether memory is limited, and whether a stable sort is required.
Dijkstra's Shortest Path Algorithm
Dijkstra's algorithm finds the shortest path from a single source node to all other nodes in a weighted graph with non-negative edge weights.
How it works:
- Assign a tentative distance of zero to the source node and infinity to all others.
- Set the source node as the current node. Mark all nodes as unvisited.
- For the current node, consider all unvisited neighbours. Calculate their tentative distance through the current node. If this is less than their currently recorded distance, update it.
- Once all neighbours have been considered, mark the current node as visited. A visited node will not be checked again.
- Select the unvisited node with the smallest tentative distance as the new current node.
- Repeat until all nodes have been visited or the target node has been visited.
Dijkstra's algorithm has a time complexity of O(V^2) with a simple implementation, or O((V + E) log V) with a priority queue, where V is the number of vertices and E is the number of edges.
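The priority-queue version can be sketched with Python's heapq module. The weighted graph below is an illustrative example:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-path algorithm with a priority queue.
    graph: dict mapping node -> list of (neighbour, weight) pairs."""
    distances = {node: float("inf") for node in graph}
    distances[source] = 0
    visited = set()
    pq = [(0, source)]                 # (tentative distance, node)
    while pq:
        dist, node = heapq.heappop(pq)  # unvisited node with smallest distance
        if node in visited:
            continue
        visited.add(node)               # a visited node is never checked again
        for neighbour, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances[neighbour]:   # shorter route found: update
                distances[neighbour] = new_dist
                heapq.heappush(pq, (new_dist, neighbour))
    return distances

graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("C", 5), ("D", 10)],
    "C": [("E", 3)],
    "D": [("F", 11)],
    "E": [("D", 4)],
    "F": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 4, 'C': 2, 'D': 9, 'E': 5, 'F': 20}
```

The `distances` dictionary plays the role of the distance table you would update by hand when tracing the algorithm on paper.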
Exam tip: Be prepared to trace Dijkstra's algorithm on a given graph, updating a distance table at each step. Show your working clearly -- marks are awarded for the process, not just the final answer.
A* Algorithm
The A* algorithm extends Dijkstra's approach by adding a heuristic -- an estimate of the remaining distance to the goal. At each step, A* selects the node that minimises the sum of the distance from the source and the heuristic estimate to the destination. This makes A* more efficient than Dijkstra's algorithm for single-target shortest path problems, as it avoids exploring paths that are unlikely to lead to the goal. A* is widely used in game development and route-planning applications. While it is not a core exam focus for AQA, understanding how heuristics improve path-finding demonstrates higher-level analysis.
Big-O Notation
Big-O notation describes the upper bound of an algorithm's time (or space) complexity as the input size grows. It abstracts away constant factors and lower-order terms to focus on how performance scales.
Common complexities you must know:
- O(1) -- Constant time. Performance does not depend on input size. Example: accessing an element by index in an array.
- O(log n) -- Logarithmic time. The problem is halved at each step. Example: binary search.
- O(n) -- Linear time. Performance scales directly with input size. Example: linear search.
- O(n log n) -- Linearithmic time. Efficient sorting algorithms. Example: merge sort.
- O(n^2) -- Quadratic time. Nested iterations over the data. Example: bubble sort.
- O(2^n) -- Exponential time. Doubling with each additional input. Example: brute-force solutions to certain combinatorial problems.
When comparing algorithms, always reference Big-O notation. Examiners expect you to justify algorithm choices by citing complexity classes.
Theory of Computation
The theory of computation topics are assessed primarily on Paper 2. They test your understanding of abstract computational models and formal languages.
Finite State Machines
A finite state machine (FSM) is an abstract model of computation with a finite number of states, transitions between those states triggered by inputs, and designated start and accept states.
Moore Machines produce output based on the current state only. Each state has an associated output. The output changes only when the machine transitions to a different state.
Mealy Machines produce output based on the current state and the current input. Outputs are associated with transitions rather than states, meaning a Mealy machine can produce output without changing state.
Both types of FSM are equivalent in computational power -- any Moore machine can be converted to a Mealy machine and vice versa -- but Mealy machines often require fewer states to achieve the same behaviour.
In exam questions, you may be asked to draw a state transition diagram, complete a state transition table, or trace the behaviour of an FSM for a given input string. Practice all three representations.
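A transition table is easy to trace in code as well as on paper. The sketch below models an FSM that accepts binary strings containing an even number of 1s (an illustrative machine, not one from the specification):

```python
def run_fsm(transitions, start, accept, input_string):
    """Trace a finite state machine over an input string.
    transitions: dict mapping (state, symbol) -> next state."""
    state = start
    for symbol in input_string:
        if (state, symbol) not in transitions:
            return False               # no transition defined: reject
        state = transitions[(state, symbol)]
    return state in accept             # accept iff we finish in an accept state

# FSM accepting binary strings with an even number of 1s
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(run_fsm(transitions, "even", {"even"}, "1011"))  # False (three 1s)
print(run_fsm(transitions, "even", {"even"}, "1001"))  # True  (two 1s)
```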
Regular Expressions and Regular Languages
A regular language is any language that can be recognised by a finite state machine. Regular expressions provide a concise way to describe regular languages using a combination of:
- Concatenation -- placing symbols in sequence (e.g., ab matches the string "ab")
- Alternation -- choice between options (e.g., a|b matches "a" or "b")
- Kleene star -- zero or more repetitions (e.g., a* matches "", "a", "aa", "aaa", and so on)
Regular expressions are used in programming for pattern matching, input validation, and text processing. In the exam, you may be asked to write a regular expression for a given language or determine what language a given regular expression describes.
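You can check your answers to such questions with Python's re module. The pattern a(b|c)* below combines all three operators from the list above:

```python
import re

# a(b|c)* : an 'a' followed by zero or more 'b's or 'c's
# ^ and $ anchor the pattern so the whole string must match
pattern = re.compile(r"^a(b|c)*$")

print(bool(pattern.match("a")))     # True  (Kleene star allows zero repetitions)
print(bool(pattern.match("abcb")))  # True
print(bool(pattern.match("bca")))   # False (must start with 'a')
```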
Backus-Naur Form (BNF) and Syntax Diagrams
Backus-Naur Form is a notation for defining the syntax of formal languages, including programming languages. A BNF grammar consists of production rules that define how valid strings can be constructed.
A typical BNF rule looks like:
<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<integer> ::= <digit> | <digit><integer>
The left-hand side is a non-terminal symbol (enclosed in angle brackets), and the right-hand side shows what it can be replaced with. The pipe symbol | indicates alternatives.
Syntax diagrams (also called railroad diagrams) represent the same information visually. Each path through the diagram represents a valid sequence of symbols. They are easier to read for some people but harder to write in an exam setting.
BNF and syntax diagrams are used to define grammars that go beyond what regular expressions can describe -- specifically, context-free grammars. You should be able to read BNF rules, write simple BNF definitions, and interpret syntax diagrams.
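The recursive structure of the <integer> rule above translates directly into a recursive recogniser. This sketch is one way to check a string against those two production rules:

```python
def is_digit(s):
    """<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9"""
    return len(s) == 1 and s in "0123456789"

def is_integer(s):
    """<integer> ::= <digit> | <digit><integer>, applied recursively."""
    if is_digit(s):                  # first alternative: a single digit
        return True
    # second alternative: a digit followed by an integer
    return len(s) > 1 and is_digit(s[0]) and is_integer(s[1:])

print(is_integer("2024"))  # True
print(is_integer("20a4"))  # False
print(is_integer(""))      # False (an integer needs at least one digit)
```

Notice how each alternative in the BNF rule becomes one branch of the function, which is the core idea behind recursive-descent parsing.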
Turing Machines
A Turing machine is a theoretical model of computation consisting of:
- An infinite tape divided into cells, each containing a symbol
- A read/write head that can move left or right along the tape
- A state register storing the current state
- A transition function that determines the next action based on the current state and the symbol being read
Turing machines are far more powerful than finite state machines. They can simulate any algorithm that can be computed -- which leads to their central importance in theoretical computer science.
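To make the model concrete, here is a minimal sketch of a Turing machine simulator. The transition rules define an illustrative machine that flips every bit of a binary string and halts when it reads a blank:

```python
def run_turing_machine(rules, tape, start_state, halt_state, blank="_"):
    """Simulate a Turing machine.
    rules: dict mapping (state, symbol) -> (new_state, write_symbol, move),
    where move is 'L' or 'R'."""
    tape = list(tape)
    head, state = 0, start_state
    while state != halt_state:
        # read the symbol under the head (blank beyond the written tape)
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = rules[(state, symbol)]   # apply transition function
        if 0 <= head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# A machine that flips every bit, then halts at the first blank
rules = {
    ("flip", "0"): ("flip", "1", "R"),
    ("flip", "1"): ("flip", "0", "R"),
    ("flip", "_"): ("halt", "_", "R"),
}
print(run_turing_machine(rules, "1011", "flip", "halt"))  # 0100
```

Each rule is one row of the machine's transition function: given the current state and the symbol under the head, it says what to write, which way to move, and which state to enter next.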
The Universal Turing Machine is a Turing machine that can simulate any other Turing machine. It takes as input the description of another Turing machine and its input data, then simulates the behaviour of that machine. The universal Turing machine is the theoretical foundation of the modern stored-program computer -- the idea that a single general-purpose machine can execute any program.
The Church-Turing Thesis states that anything that is intuitively computable can be computed by a Turing machine. This is a thesis rather than a theorem because it cannot be formally proved -- it connects an informal concept ("what can be computed") with a formal model (the Turing machine). No counterexample has ever been found, and the thesis is widely accepted in computer science.
Classification of Problems
Understanding the limits of computation is a key part of the AQA specification.
Tractable vs Intractable Problems
- A tractable problem can be solved in polynomial time -- that is, its time complexity is O(n^k) for some constant k. These problems are considered practically solvable, even for large inputs.
- An intractable problem has no known polynomial-time solution. The best known algorithms run in exponential or factorial time, making them impractical for large inputs. The travelling salesman problem is a classic example.
Decidable vs Undecidable Problems
- A decidable problem is one for which an algorithm exists that will always terminate and give a correct yes/no answer for any input.
- An undecidable problem is one for which no such algorithm can exist. No matter how clever the approach, there will always be inputs for which the algorithm either gives the wrong answer or fails to terminate.
The Halting Problem
The halting problem is the most famous undecidable problem. It asks: given a description of a program and an input, will the program eventually halt (stop running) or will it run forever?
Alan Turing proved in 1936 that no general algorithm can solve the halting problem for all possible program-input pairs. The proof works by contradiction -- assuming such an algorithm exists leads to a logical paradox. This result has profound implications: it means there are fundamental limits to what computers can determine about their own behaviour.
In exam questions, you do not need to reproduce the full proof, but you must be able to explain what the halting problem is, state that it is undecidable, and explain the significance of this result for the limits of computation.
How These Topics Appear in the Exam
Paper 1 will test your ability to implement and analyse algorithms. You may be asked to:
- Write code for a sorting or searching algorithm
- Trace an algorithm and state the output for a given input
- Modify an algorithm to meet a new requirement
- Analyse the time complexity of a given piece of code using Big-O notation
- Implement graph or tree traversal in your chosen programming language
Paper 2 will test your theoretical understanding. Expect questions that ask you to:
- Draw or interpret finite state machine diagrams
- Write or interpret regular expressions
- Define a grammar using BNF notation or interpret a syntax diagram
- Describe the operation of a Turing machine for a given input
- Explain the significance of the universal Turing machine and the Church-Turing thesis
- Classify problems as tractable/intractable or decidable/undecidable
- Explain the halting problem and its implications
- Trace Dijkstra's algorithm on a weighted graph and record intermediate states
For both papers, precise technical vocabulary matters. Use terms like "time complexity", "non-deterministic", "transition function", and "production rule" where appropriate -- examiners reward accurate use of terminology.
Prepare with LearningBro
Strengthen your understanding of these topics with targeted practice questions and worked examples on LearningBro:
- AQA A-Level Computer Science: Algorithms -- covering graph traversal, searching, sorting, Dijkstra's algorithm, and Big-O analysis
- AQA A-Level Computer Science: Theory of Computation -- covering finite state machines, regular expressions, BNF, Turing machines, and the classification of computational problems
Consistent practice with exam-style questions is the most effective way to build the confidence and fluency you need for both Paper 1 and Paper 2. Start revising today.