No measurement is ever exactly correct. Every reading in a physics lab — a length, a time, a voltage — sits somewhere within a band around the true value, and the width of that band is determined by the errors affecting the measurement. Understanding the two great families of error, systematic and random, is the first step towards designing better experiments, interpreting graphical data correctly, and arguing convincingly that an experimental result either does or does not agree with a theoretical prediction.
This lesson defines systematic and random error, gives the canonical examples (zero error, parallax, calibration drift, thermal noise, reaction-time scatter), and shows the techniques used to reduce each kind. The distinction is not just academic — exam papers regularly ask "suggest one modification that would reduce systematic error" and the answer depends on identifying the error type correctly.
Spec mapping: This lesson addresses content from AQA 7408 §3.1 on the distinction between systematic and random errors in experimental measurements, the identification of common sources, and methods to reduce each (refer to the official AQA specification document for exact wording).
Synoptic links:
- Required practicals (all twelve): every practical write-up demands explicit treatment of which errors are systematic and which random, and what was done to reduce each. Markschemes regularly award 2-4 marks for this analysis.
- Radioactivity (§3.8): background-radiation correction is a textbook example of removing a systematic error; counting-statistics scatter is the canonical random error.
- Electricity (§3.5): voltmeter loading is a systematic error that affects every potential-difference measurement; thermistor temperature drift is a slower systematic effect.
Key definition. A systematic error causes a measurement to be consistently shifted in one direction — all readings are too high, or all too low, by an amount that does not decrease when you repeat the measurement.
Key definition. A random error causes individual measurements to scatter on either side of the true value. Repeats lie above and below the mean in roughly equal numbers, and the scatter can be characterised statistically.
The crucial behavioural difference is how each responds to repetition: random scatter averages away as you take more readings, while a systematic shift survives any number of repeats. The diagram below maps the common sources of each type:
```mermaid
graph TD
ERR[Measurement error] --> SYS[Systematic error]
ERR --> RAN[Random error]
SYS --> ZERO[Zero error<br/>instrument reads non-zero<br/>at zero input]
SYS --> PAR[Parallax<br/>line of sight not<br/>perpendicular to scale]
SYS --> CAL[Calibration drift<br/>instrument has aged<br/>or been mis-calibrated]
SYS --> METH[Method error<br/>neglected effect<br/>e.g. air resistance]
RAN --> REA[Reaction-time scatter<br/>human timing]
RAN --> RES[Resolution limit<br/>last digit fluctuates]
RAN --> THER[Thermal / electrical noise<br/>in sensitive instruments]
RAN --> ENV[Environmental fluctuation<br/>temperature, draughts]
style SYS fill:#e74c3c,color:#fff
style RAN fill:#27ae60,color:#fff
```
The diagram captures the typical AQA exam categorisation — most exam errors fall into one of these eight cells.
A zero error occurs when an instrument reads a non-zero value when the true input is zero. A common case is a digital balance showing "0.05 g" before any mass is placed on it: every subsequent reading is 0.05 g too high. The correction is to tare the balance (return it to true zero) before use, or to subtract the zero offset from every reading.
Spring balances whose spring has been permanently stretched show a positive zero error; micrometers and Vernier callipers can have zero errors of either sign — positive (the jaws read +0.02 mm when closed) or negative (they read −0.01 mm).
When a scale and an indicator (a pointer, a meniscus, a thread) are at different depths, looking at them from an angle gives a reading shifted to one side. The classic example is reading a mercury thermometer from below or above eye level. The correction is to position the eye directly perpendicular to the scale, often using a mirror strip mounted behind the indicator.
An instrument calibrated correctly at the factory may drift over time — voltmeter resistors heat up and change value, balance pivots accumulate dust, thermistors age. The correction is to calibrate the instrument before use against a known standard (e.g. checking a thermometer against the ice point and the steam point, or checking a voltmeter against a calibrated reference cell).
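The ice-point/steam-point check can be turned into a numerical correction. A minimal sketch, assuming the calibration error varies linearly between the two fixed points (the function name and the example readings are illustrative, not from any standard library):

```python
def calibrate_linear(reading, ice_reading, steam_reading):
    """Map a raw thermometer reading onto the true Celsius scale,
    assuming the scale error is linear between the two fixed points
    (0 degrees C in melting ice, 100 degrees C in steam)."""
    return (reading - ice_reading) / (steam_reading - ice_reading) * 100.0

# A mis-calibrated thermometer shows 1.0 in melting ice and 99.0 in steam,
# so its scale is compressed; a mid-scale raw reading of 50.0 corrects to 50.0
# only by coincidence of symmetry, while the fixed points map to 0 and 100.
t_mid = calibrate_linear(50.0, 1.0, 99.0)
t_ice = calibrate_linear(1.0, 1.0, 99.0)     # 0.0
t_steam = calibrate_linear(99.0, 1.0, 99.0)  # 100.0
```

Two fixed points are the minimum needed to correct both a zero offset and a scale (gain) error; a single-point check can only reveal the offset.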
The experimental method itself can introduce a systematic shift. Examples:
| Source | Reduction strategy |
|---|---|
| Zero error | Tare / zero the instrument; check zero reading and subtract from data |
| Parallax | Eye perpendicular; use mirror strip behind pointer |
| Calibration | Calibrate against known standard before use |
| Method error | Re-design experiment to remove the neglected effect (e.g. evacuated chamber to remove air resistance) |
| Voltmeter loading | Use a high-resistance digital voltmeter; or use a potentiometer |
The general principle is that systematic errors cannot be reduced by averaging — they must be diagnosed and corrected at source.
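A minimal simulation makes the principle concrete. This is an illustrative Python sketch (the true value, offset and noise scale are invented numbers): averaging ten thousand readings removes essentially all the random scatter, yet the mean still lands on the true value plus the systematic offset.

```python
import random

random.seed(1)
TRUE_VALUE = 2.00   # the quantity being measured (invented)
OFFSET = 0.05       # systematic (zero) error: every reading shifted up
NOISE = 0.02        # scale of the random scatter on a single reading

# Each simulated reading = true value + fixed offset + random scatter.
readings = [TRUE_VALUE + OFFSET + random.gauss(0, NOISE) for _ in range(10_000)]
mean = sum(readings) / len(readings)

# The mean converges on TRUE_VALUE + OFFSET, not TRUE_VALUE:
# averaging has removed the scatter but left the systematic shift intact.
```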
When a human is involved in starting or stopping a timer (e.g. timing a pendulum oscillation, a trolley down a ramp), the moment at which the button is pressed has a scatter of about ±0.1–0.2 s. This affects single-period measurements heavily but is reduced by timing many oscillations and dividing: if you time 20 oscillations of a pendulum, the reaction-time scatter is the same ±0.2 s, but the period is found by dividing the total by 20, so the contribution to the period is ±0.01 s.
The least significant digit on a digital instrument reflects the fact that the true value lies somewhere within the smallest resolution interval. A digital voltmeter reading "2.34 V" with a resolution of 0.01 V actually means "the voltage lies somewhere between 2.335 V and 2.345 V", and if the true value sits near a boundary of that interval the last digit flickers between adjacent readings (2.33 and 2.34, say) as you watch.
Sensitive electronic instruments (oscilloscopes, lock-in amplifiers, photodiodes) pick up thermal noise from their internal components and electromagnetic interference from the environment. This shows up as a scatter on the readout. The correction is to average over many cycles, to shield the instrument, and to use lock-in detection for periodic signals.
Temperature, pressure, humidity and air currents all change slowly during a lab session. Resistivity measurements drift as the wire heats up; capacitance changes with humidity; sensitive balances are disturbed by draughts. Random variation in these factors causes random scatter in repeated readings.
| Source | Reduction strategy |
|---|---|
| Reaction time | Time many oscillations and divide; use light gates or motion sensor |
| Resolution | Use a higher-resolution instrument (e.g. micrometer instead of ruler) |
| Thermal/electronic noise | Average over many readings; shield from interference |
| Environmental | Carry out experiments in temperature-controlled rooms |
| In general | Take more repeats — uncertainty in the mean ∝ 1/√N |
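The 1/√N behaviour in the last row can be demonstrated with a short Monte Carlo sketch (the scatter value, seed and sample counts are illustrative): quadrupling the number of readings roughly halves the empirical uncertainty in the mean.

```python
import random
import statistics

random.seed(0)
SIGMA = 0.2  # scatter of a single reading (illustrative)

def sem_estimate(n, trials=2000):
    """Empirical standard deviation of the mean of n readings,
    estimated by repeating the n-reading experiment many times."""
    means = [statistics.fmean(random.gauss(0, SIGMA) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Going from 4 readings to 16 readings (4x more) should roughly
# halve the uncertainty in the mean, since sqrt(16/4) = 2.
s4 = sem_estimate(4)
s16 = sem_estimate(16)
ratio = s4 / s16   # expected ~ 2
```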
A useful examiner-style question is to look at a set of data and identify which errors are dominant.
Worked example. A student measures the time for a pendulum to complete 1 oscillation, taking 10 readings:
1.91, 1.89, 1.93, 2.05, 1.92, 1.90, 1.94, 1.88, 1.91, 1.92 (all in s).
The accepted value (calculated from T = 2π√(L/g) with L = 1.00 m) is 2.01 s.
The mean of the readings is 1.93 s. The scatter (range divided by 2) is about ±0.09 s — this is the random-error contribution.
The mean is 0.08 s lower than the accepted value. This is a systematic shift — possibly the student started timing slightly after the bob crossed the fiducial mark, or measured the string length from the top of the bob rather than its centre. The discrepancy cannot be reduced by taking more readings; it must be diagnosed.
The 2.05 s reading is anomalous — it lies well clear of the other nine readings, more than 2.5 standard deviations from the mean. It should be excluded from the analysis, or the measurement repeated; excluding it also shrinks the half-range scatter estimate to about ±0.03 s.
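The arithmetic of this worked example can be checked in a few lines of Python (the anomaly is removed here by a simple value comparison, not a formal outlier test):

```python
readings = [1.91, 1.89, 1.93, 2.05, 1.92, 1.90, 1.94, 1.88, 1.91, 1.92]  # s

mean = sum(readings) / len(readings)              # 1.925 s, reported as 1.93 s
half_range = (max(readings) - min(readings)) / 2  # 0.085 s, reported as +/-0.09 s

# Excluding the 2.05 s anomaly tightens the scatter considerably:
kept = [r for r in readings if r != 2.05]
mean_kept = sum(kept) / len(kept)                 # ~1.91 s
half_range_kept = (max(kept) - min(kept)) / 2     # 0.03 s
```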
The standard error of the mean of N independent measurements decreases as 1/√N.
Worked example. A student times the period of a pendulum and the scatter on single-oscillation timings is ±0.20 s. How many oscillations should be timed so that the timing contribution to the period uncertainty is below ±0.01 s?
The ±0.20 s scatter is a property of the start and stop button presses, so it is the same whether one oscillation or fifty are timed. Timing N oscillations and dividing by N therefore gives a period uncertainty of 0.20/N s. We need 0.20/N ≤ 0.01, i.e. N ≥ 20.
Timing 20 or more oscillations and dividing reduces the reaction-time contribution to ±0.01 s or better — about 0.5% of a typical ~2 s pendulum period — and is the standard technique. (This is different from the 1/√N reduction that applies to averaging independent repeats of the whole experiment — here we are exploiting the fact that timing more cycles dilutes a single timing error.)
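The two scalings can be compared directly. A short sketch using the numbers from the worked example (the 1/√N comparison is the averaging-of-independent-repeats alternative, added for contrast):

```python
import math

single_scatter = 0.20   # s: reaction-time scatter on one timed interval
target = 0.01           # s: required uncertainty in the period

# Timing N oscillations and dividing by N dilutes the single timing
# error by a factor N, so N >= single_scatter / target is needed.
n_oscillations = math.ceil(single_scatter / target)        # 20

# Averaging N independent single-oscillation timings only improves
# the mean as 1/sqrt(N), so the same target would need N >= (ratio)^2.
n_repeats = math.ceil((single_scatter / target) ** 2)      # 400
```

The dilution trick is twenty times cheaper than brute-force repetition here, which is why "time at least 20 oscillations" is the standard mark-scheme answer.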
A common A-Level required-practical investigation is the determination of Young's modulus E for a wire by measuring its extension under known load. The formula is E = (FL₀) / (A × Δx), where F is the applied force, L₀ the original length, A the cross-sectional area and Δx the extension. The four measured inputs each carry their own dominant error type. Diagnosing which is systematic and which is random — and prescribing the correct mitigation for each — is exactly the AO2/AO3 move examiners want to see in the practical write-up. The table below works through each measurement in turn.
| Measurement | Dominant error and type | Why | Mitigation |
|---|---|---|---|
| Wire length L₀ (≈ 2 m, metre rule) | Parallax when reading the ends against the rule — systematic. | The eye is not perpendicular to the scale, so every reading shifts in the same direction. Repetition does not remove it. | Mount a mirror strip behind the rule and align the eye so the pointer overlaps its reflection. Alternatively measure with a tape measure pulled taut. |
| Wire diameter d (≈ 0.3 mm, micrometer) | Wire non-circularity / variation along length — random. | Real wire is slightly oval and has small variations in diameter along its length, so single-point readings scatter. | Take at least 5 diameter readings at different positions and orientations; take the mean. Use a digital micrometer for higher resolution. |
| Applied mass m (load hanging from wire) | Slotted-mass tolerance — systematic. | Slotted masses are typically machined to ±2% but a given set is consistently a little high or a little low; the error does not change between trials. | Calibrate the masses on a high-resolution balance before use, and apply the correction to F = mg. Use a small-tolerance laboratory mass set. |
| Extension Δx (≈ 1 mm, travelling microscope) | Reading scatter + thermal drift of wire length — random plus slow systematic. | The travelling microscope's vernier can be read to ±0.01 mm but the wire itself elongates slowly as it heats from the lab lights; small environmental fluctuations add scatter. | Repeat the extension reading at fixed times after loading (e.g. 30 s) to control the thermal drift; take 3 readings of each Δx and average; shield the wire from direct light. |
The full uncertainty in E then combines these as %ΔE = %ΔF + %ΔL₀ + 2 × %Δd + %Δ(Δx), with the diameter term doubled because A ∝ d². For typical A-Level data the diameter term is usually the dominant contributor — a 2% uncertainty on d becomes a 4% contribution to E, often outweighing all other terms combined.
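The percentage-uncertainty budget is easy to script. A sketch with invented but plausible input percentages (only the 2% diameter figure comes from the text above):

```python
# Illustrative percentage uncertainties for the four measured inputs.
pct_F  = 0.5   # % uncertainty in applied force (assumed)
pct_L0 = 0.2   # % uncertainty in original length (assumed)
pct_d  = 2.0   # % uncertainty in wire diameter (from the text)
pct_dx = 1.0   # % uncertainty in extension (assumed)

# %dE = %dF + %dL0 + 2*%dd + %d(dx); diameter doubled because A is
# proportional to d squared.
pct_E = pct_F + pct_L0 + 2 * pct_d + pct_dx   # 5.7 %

# The diameter term alone contributes over half of the total budget,
# which is why improving the diameter measurement pays off most.
diameter_share = 2 * pct_d / pct_E
```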
The wider pedagogical point — and the move that earns AO3 marks in practical write-ups — is that the mitigation strategy must match the error type. Averaging removes random scatter but is useless against the slotted-mass systematic; calibration corrects the systematic but does not address the diameter variation. Naming the type and prescribing the matched remedy is what distinguishes a top-band practical write-up from a generic "take more readings and average" answer.
Specimen question modelled on the AQA paper format. Total: 9 marks.
"In an experiment to determine the acceleration due to gravity, a student drops a steel ball from a height of 1.50 m and uses a stopwatch to time the fall.
(a) Identify one systematic error that could affect the measurement and explain how it could be reduced. (3 marks)
(b) Identify one random error that could affect the measurement and explain how it could be reduced. (3 marks)
(c) The student wishes to reduce the total uncertainty in their value of g. State whether taking more repeated readings would reduce the systematic error, the random error, or both, and justify your answer. (3 marks)"
| Part | AO | Marks | What is rewarded |
|---|---|---|---|
| (a) | AO1 + AO2 | 3 | Naming a systematic error; explaining why it is systematic; stating a reduction method |
| (b) | AO1 + AO2 | 3 | Naming a random error; explaining why it is random; stating a reduction method |
| (c) | AO3 | 3 | Distinguishing the behaviour of the two error types under repetition |