Every measurement in physics is imperfect. No matter how careful you are, no matter how expensive your apparatus, the value you record is never exactly the "true" value of the quantity being measured. Physicists are professional realists about this: they divide the discrepancy between "measured" and "true" into two distinct categories — systematic errors and random errors — and treat each very differently.
OCR's A-Level Physics A specification requires you to: define systematic and random errors; distinguish them; give examples; understand how each can be detected and reduced; and carry these ideas forward into uncertainty propagation (Lessons 6–8). This lesson lays the conceptual groundwork.
Before we classify errors, it is essential to get the vocabulary right. OCR is strict about these definitions.
Note that "error" in physics does not mean "mistake". A measurement subject to a systematic error is not a blunder; it is a measurement carrying a bias that can, in principle, be identified and corrected.
Random errors are fluctuations that cause repeated measurements to scatter around a central value. They are inherent to the measurement process and arise from sources such as human reaction time, small environmental fluctuations (draughts, temperature changes, vibrations), and judging a reading at the limit of an instrument's resolution.
Because random errors scatter both above and below the true value, they can be reduced by averaging. If you take N repeated measurements, the standard error in the mean is the standard deviation of the individual measurements divided by √N. Quadrupling the number of measurements therefore halves the random uncertainty; doubling it reduces the uncertainty only by a factor of √2 ≈ 1.4.
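The 1/√N behaviour can be checked directly. A minimal sketch (the `sem` helper and the simulated "true" value of 14.6 s with 0.15 s scatter are illustrative assumptions, not part of the specification):

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is repeatable

def sem(readings):
    """Standard error in the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(readings) / len(readings) ** 0.5

def take_readings(n):
    """Simulate n stopwatch readings: true value 14.6 s plus random scatter."""
    return [random.gauss(14.6, 0.15) for _ in range(n)]

# Each quadrupling of N should roughly halve the standard error.
for n in (10, 40, 160):
    print(f"N = {n:3d}: standard error ≈ {sem(take_readings(n)):.3f} s")
```

The printed errors shrink roughly as 1/√N; they only *roughly* halve at each step, because the standard deviation is itself estimated from random data.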
Imagine timing 10 oscillations of a pendulum with a hand-operated stopwatch. You might get:
14.5, 14.8, 14.3, 14.7, 14.6, 14.4, 14.8, 14.5, 14.6, 14.5 seconds
The values scatter around a mean of 14.57 s. No single reading equals the "true" time for 10 oscillations, but the mean is a much better estimate than any individual reading (dividing by 10 then gives a period of 1.457 s). This is the classic signature of random error — the scatter averages away with repetition.
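Working through the arithmetic for these readings (the variable names are ours, and the `statistics` module is one convenient way to do it):

```python
import statistics

# The ten stopwatch readings (time for 10 oscillations, in seconds)
times = [14.5, 14.8, 14.3, 14.7, 14.6, 14.4, 14.8, 14.5, 14.6, 14.5]

mean = statistics.mean(times)        # 14.57 s
spread = statistics.stdev(times)     # sample standard deviation of one reading
sem = spread / len(times) ** 0.5     # standard error in the mean

print(f"mean = {mean:.2f} s, standard error in the mean = {sem:.2f} s")
```

The standard error comes out around 0.05 s — roughly three times smaller than the ~0.16 s scatter of a single reading, exactly the √10 improvement averaging buys you.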
Exam Tip: When asked how to reduce random error, the magic words are "repeat and take the mean" and "use an instrument of higher resolution" or "use automated timing such as light gates". These phrases appear verbatim in many mark schemes.
Systematic errors are biases that shift every measurement in the same direction by the same (or a similar) amount. They arise from problems with the apparatus or method, not from the random limits of reading. Typical sources include zero errors (an instrument that reads a non-zero value when it should read zero), calibration errors (a wrongly marked or drifting scale), parallax from always viewing a scale from the same angle, and flaws in the method such as ignoring a background reading.
Systematic errors cannot be reduced by averaging. Repeating a measurement ten times with a faulty ammeter simply gives ten identically-biased readings. The mean is no closer to the truth than a single reading.
Suppose you measure the length of a pencil with a plastic ruler. The "0" mark on the ruler is 1 mm inside the physical edge of the ruler, due to wear. Every measurement you take will be 1 mm too long. Ten measurements averaged will still be 1 mm too long. No amount of repetition helps.
An analogue ammeter reads +0.02 A when disconnected. You measure a current and get 0.48 A. The true value is 0.48 − 0.02 = 0.46 A. The systematic error is +0.02 A, and subtracting it yields the correct answer.
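Putting the two previous points together in one sketch (the repeated readings are hypothetical; only the +0.02 A zero error comes from the example above): averaging the raw readings leaves the bias intact, while subtracting the known zero error removes it.

```python
# Hypothetical readings from an ammeter with a known +0.02 A zero error
zero_error = 0.02
readings = [0.48, 0.47, 0.49, 0.48, 0.48]

# Averaging the raw readings does NOT remove the systematic error:
raw_mean = sum(readings) / len(readings)        # still 0.02 A too high

# Subtracting the known zero error corrects every reading:
corrected = [r - zero_error for r in readings]
corrected_mean = sum(corrected) / len(corrected)

print(f"raw mean = {raw_mean:.3f} A, corrected mean = {corrected_mean:.3f} A")
```

The raw mean is 0.480 A, biased by exactly the zero error; the corrected mean is 0.460 A, matching the single-reading correction in the text.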
Imagine shooting arrows at a target. You fire five arrows.
```mermaid
graph LR
    A[Small random, small systematic<br/>arrows clustered at centre<br/>ACCURATE and PRECISE]
    B[Small random, large systematic<br/>arrows clustered but off-centre<br/>PRECISE but NOT ACCURATE]
    C[Large random, small systematic<br/>arrows scattered around centre<br/>ACCURATE on average, NOT PRECISE]
    D[Large random, large systematic<br/>arrows scattered and off-centre<br/>NEITHER ACCURATE NOR PRECISE]
```
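The archery picture can be made quantitative: the mean landing point's offset from the centre measures the systematic error (accuracy), while the spread of the shots measures the random error (precision). A sketch, where the shot coordinates and the threshold of 1 unit are illustrative choices of ours:

```python
import statistics

def describe(shots):
    """Classify a set of (x, y) landing points relative to the centre (0, 0).

    Offset of the mean point ~ systematic error; spread ~ random error.
    The threshold of 1 unit for each is an arbitrary illustrative choice.
    """
    xs = [x for x, y in shots]
    ys = [y for x, y in shots]
    offset = (statistics.mean(xs) ** 2 + statistics.mean(ys) ** 2) ** 0.5
    spread = statistics.stdev(xs) + statistics.stdev(ys)
    accurate = "ACCURATE" if offset < 1 else "NOT ACCURATE"
    precise = "PRECISE" if spread < 1 else "NOT PRECISE"
    return f"{accurate}, {precise}"

# Case B: tightly clustered, but well off-centre
print(describe([(5.0, 5.1), (5.1, 5.0), (4.9, 5.2), (5.0, 4.9), (5.1, 5.1)]))
```

For the clustered-but-off-centre shots this reports "NOT ACCURATE, PRECISE" — a large systematic error with a small random error, exactly case B in the diagram.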