The precision is just how many digits are carried through the calculations. Single precision is fine here because the end result will be rounded to a whole number anyway (i.e. if you wanted to figure out how many feet of lawn edging to buy to go around a tree and its root system, multiplying the diameter by 3.14 will give you the same answer as getting a scientific calculator and using pi to 11 or 12 decimal places, since you'd round up and add a bit anyway).
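A rough sketch in C of that idea (the 4-foot diameter is just a made-up number for illustration): the single- and double-precision answers differ only far past the point you care about, and both round up to the same number of feet.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Hypothetical tree-plus-roots diameter of about 4 feet. */
    float  diameter_f = 4.0f;
    double diameter_d = 4.0;

    /* Single precision with a short value of pi. */
    float  edging_f = diameter_f * 3.14f;
    /* Double precision with pi carried to many more digits. */
    double edging_d = diameter_d * 3.14159265358979;

    printf("single precision: %.7f feet\n", (double)edging_f);
    printf("double precision: %.15f feet\n", edging_d);

    /* Either way you round up (and add a bit) before buying. */
    printf("rounded up: %.0f feet\n", ceil(edging_d));
    return 0;
}
```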
If you were doing a long series of calculations and the intermediate results were not going to be rounded off, then it could be necessary to carry more digits (or, in programming terms, to use double precision), because the small rounding error in each step accumulates over the whole series.
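To see that accumulation, here is a small sketch (again in C, just an illustration) that adds 0.1 a million times in both precisions; neither can store 0.1 exactly, but the single-precision total drifts visibly from the exact 100,000 while the double-precision total stays essentially right.

```c
#include <stdio.h>

int main(void) {
    float  sum_f = 0.0f;
    double sum_d = 0.0;

    /* Repeat the same tiny addition a million times so the
       representation error has a chance to pile up. */
    for (int i = 0; i < 1000000; i++) {
        sum_f += 0.1f;
        sum_d += 0.1;
    }

    printf("single precision: %f\n", (double)sum_f); /* noticeably off from 100000 */
    printf("double precision: %f\n", sum_d);         /* essentially 100000 */
    return 0;
}
```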