Error refers to the uncertainty in a measurement that nothing can be done about. A metre rule might measure only to the nearest millimetre. If a measurement is repeated, the lengths obtained may differ, and none of the measurements can be preferred over the others. Although it is not possible to eliminate such errors, they can be characterized using standard methods.
Classification of Error
Generally, errors can be divided into two broad and rough but useful classes: systematic and random.
Systematic errors are errors which tend to shift all measurements in a systematic way, so that their mean value is displaced. This may be due to such things as incorrect calibration of equipment, consistently improper use of equipment, or failure to properly account for some effect. In a sense, a systematic error is rather like a blunder, and large systematic errors can and must be eliminated in a good experiment. Random errors, on the other hand, will always be present, no matter how carefully the equipment is calibrated and used.
Random errors are errors which fluctuate from one measurement to the next. They yield results distributed about some mean value. They can occur for a variety of reasons.

They may occur due to lack of sensitivity: for a sufficiently small change, an instrument may not be able to respond to it or to indicate it, or the observer may not be able to discern it.

They may occur due to noise. There may be extraneous disturbances which cannot be taken into account.

They may be due to imprecise definition of the quantity being measured.

They may also occur due to statistical processes such as the roll of dice.
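As a sketch of how random errors are characterized in practice, the following hypothetical example (the true value 10.0 and spread 0.5 are made up for illustration) simulates repeated measurements and summarizes them by their mean and standard deviation:

```python
import random
import statistics

# Simulate 1000 repeated measurements of a quantity whose true value
# is 10.0, each corrupted by Gaussian random error (sigma = 0.5).
# These numbers are purely illustrative.
random.seed(42)
measurements = [random.gauss(10.0, 0.5) for _ in range(1000)]

mean = statistics.mean(measurements)     # estimate of the true value
spread = statistics.stdev(measurements)  # characterizes the random error
print(f"mean = {mean:.3f}, stdev = {spread:.3f}")
```

The individual readings fluctuate, but the mean converges toward the true value, while the standard deviation estimates the typical size of the random error.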
Propagation of Errors
Frequently, the result of an experiment will not be measured directly. Rather, it will be calculated from several measured physical quantities (each of which has a mean value and an error). What is the resulting error in the final result of such an experiment?
For instance, what is the error in $Z = A + B$, where $A$ and $B$ are two measured quantities with errors $\Delta A$ and $\Delta B$ respectively?
A first thought might be that the error in $Z$ would be just the sum of the errors in $A$ and $B$. This assumes that, when combined, the errors in $A$ and $B$ have the same sign and maximum magnitude; that is, that they always combine in the worst possible way. This could only happen if the errors in the two variables were perfectly correlated (i.e., if the two variables were not really independent).
If the variables are independent, then sometimes the error in one variable will happen to cancel out some of the error in the other, and so, on the average, the error in $Z$ will be less than the sum of the errors in its parts. A reasonable way to take this into account is to treat the perturbations in $Z$ produced by perturbations in its parts as if they were "perpendicular" and added according to the Pythagorean theorem:

$$\Delta Z = \sqrt{(\Delta A)^2 + (\Delta B)^2}.$$
That is, if $A = 100 \pm 3$ and $B = 6 \pm 4$, then $Z = 106 \pm 5$, since $\sqrt{3^2 + 4^2} = 5$.
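A minimal numerical sketch, using the numbers above, contrasting worst-case addition of the errors with addition in quadrature:

```python
import math

# Errors in A and B from the example: A = 100 +/- 3, B = 6 +/- 4.
dA, dB = 3.0, 4.0

# Worst case: errors perfectly correlated, so they simply add.
worst_case = dA + dB

# Independent errors add in quadrature (Pythagorean theorem).
dZ = math.sqrt(dA**2 + dB**2)

print(f"quadrature: {dZ}, worst case: {worst_case}")
```

The quadrature sum (5.0) is smaller than the worst-case sum (7.0), reflecting the partial cancellation of independent errors.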
This idea can be used to derive a general rule. Suppose there are two measurements, $A$ and $B$, and the final result is $Z = F(A, B)$ for some function $F$. If $A$ is perturbed by $\Delta A$, then $Z$ will be perturbed by

$$\left(\frac{\partial F}{\partial A}\right) \Delta A,$$
where $\partial F/\partial A$ is the derivative of $F$ with respect to $A$ with $B$ held constant. Similarly, the perturbation in $Z$ due to a perturbation in $B$ is

$$\left(\frac{\partial F}{\partial B}\right) \Delta B.$$
Combining these by the Pythagorean theorem yields

$$\Delta Z = \sqrt{\left(\frac{\partial F}{\partial A}\right)^{2} (\Delta A)^2 + \left(\frac{\partial F}{\partial B}\right)^{2} (\Delta B)^2},$$
For $Z = A + B$, $\partial F/\partial A = 1$ and $\partial F/\partial B = 1$, so $\Delta Z = \sqrt{(\Delta A)^2 + (\Delta B)^2}$, and this gives the same result as before. Similarly, if $Z = A - B$, then $\partial F/\partial A = 1$ and $\partial F/\partial B = -1$, so

$$\Delta Z = \sqrt{(\Delta A)^2 + (\Delta B)^2},$$
which also gives the same result. Errors combine in the same way for both addition and subtraction. However, if $Z = AB$, then $\partial F/\partial A = B$ and $\partial F/\partial B = A$, so

$$\Delta Z = \sqrt{B^2 (\Delta A)^2 + A^2 (\Delta B)^2},$$

and dividing through by $Z = AB$,

$$\frac{\Delta Z}{Z} = \sqrt{\left(\frac{\Delta A}{A}\right)^{2} + \left(\frac{\Delta B}{B}\right)^{2}},$$
or: the fractional error in $Z$ is the square root of the sum of the squares of the fractional errors in its parts. (You should be able to verify that the result is the same for division as it is for multiplication.) For example, with $A = 100 \pm 3$ and $B = 6 \pm 4$ as before, $Z = AB = 600$ and $\Delta Z / Z = \sqrt{(3/100)^2 + (4/6)^2} \approx 0.67$, so $Z = 600 \pm 400$.
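The multiplication rule is easy to check numerically; a sketch with the same illustrative numbers:

```python
import math

A, dA = 100.0, 3.0
B, dB = 6.0, 4.0

Z = A * B
# For multiplication, fractional errors add in quadrature.
frac_dZ = math.sqrt((dA / A) ** 2 + (dB / B) ** 2)
dZ = frac_dZ * Z

# The result is dominated by the large fractional error in B.
print(f"Z = {Z:.0f} +/- {dZ:.0f}")
```

Note that the combined fractional error (about 0.67) is barely larger than the fractional error in $B$ alone (4/6), since the 3% error in $A$ contributes almost nothing in quadrature.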
It should be noted that the above applies only when the two measured quantities are independent of each other. It does not apply when, for example, one physical quantity is measured and what is required is its square. If $Z = A^2$, then the perturbation in $Z$ due to a perturbation in $A$ is

$$\Delta Z = 2A \, \Delta A,$$

so the fractional error in $Z$ is twice the fractional error in $A$, not $\sqrt{2}$ times it, as treating the two factors of $A \cdot A$ as independent would wrongly suggest.
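A short sketch of why $Z = A^2$ differs from a product of two independent quantities (the value $A = 10.0 \pm 0.1$ is made up for illustration):

```python
import math

A, dA = 10.0, 0.1

Z = A ** 2
# Correct rule for Z = A**2: the two factors are perfectly correlated,
# so the fractional error doubles.
frac_correct = 2 * dA / A

# Wrong: treating the two factors of A*A as if they were independent
# would add the fractional errors in quadrature instead.
frac_naive = math.sqrt(2) * (dA / A)

print(f"correct: {frac_correct:.4f}, naive: {frac_naive:.4f}")
```

The correlated case always gives the larger error: correlated perturbations cannot cancel each other, so the quadrature rule would underestimate the uncertainty here.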