The term standard deviation is almost always associated with the statistical process of observing the characteristics of a data set. In layman's terms, it is the average distance of the data values from the mean of the group (Niles 1). Put in a more specific illustration, it is the "average of the averages" of the data. In most cases, the standard deviation is considered an appropriate measure of risk. This concept is one of the simplest forms of statistical measurement, yet it can readily give the user a good picture of how the data values behave.
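The definition above can be sketched in a few lines of Python. This is a minimal illustration using made-up values (the numbers are not from the text): the standard deviation is computed directly from its definition as the square root of the average squared distance from the mean, then cross-checked against the standard library.

```python
import statistics

# Hypothetical sample values, for illustration only.
data = [4.0, 5.0, 6.0, 5.5, 4.5]

mean = sum(data) / len(data)                              # population mean
variance = sum((x - mean) ** 2 for x in data) / len(data)  # mean squared distance
std_dev = variance ** 0.5                                  # population standard deviation

# The standard library's statistics.pstdev computes the same quantity.
assert abs(std_dev - statistics.pstdev(data)) < 1e-9
print(round(std_dev, 4))
```

Note that this is the population standard deviation (dividing by `len(data)`); when the data are only a sample of a larger population, `statistics.stdev`, which divides by `len(data) - 1`, is the usual choice.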
Why is this so? The first advantage of such a measurement is that a user can readily tell whether the values in the data set lie far from the true mean of the observations. If the standard deviation is small, the average distance of the individual values from the actual population mean is very short, meaning the values cluster tightly around it. In that case, one can already conclude that the data show little dispersion, which is ideal. Therefore, using the same population for other statistical analyses carries a very low risk of producing erroneous results.
On the other hand, if the standard deviation is large, then most probably some data values lie very far from the population mean, and the risk of errors in further analyses is correspondingly high. Other statistical methods for measuring risk can be complicated; this is especially true for regression, logistic regression, and mean comparison. At its very simple level, however, the standard deviation can already signal risk because of its straightforwardness and its ability to provide ready results.
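The contrast drawn in the last two paragraphs can be made concrete with a small sketch. The two data sets below are invented for illustration: both have exactly the same mean, yet their standard deviations differ sharply, and it is the large standard deviation of the second set that flags the higher risk of individual values falling far from the mean.

```python
import statistics

# Two hypothetical data sets with the same mean but different spreads
# (values are illustrative, not taken from the text).
tight = [9.8, 10.0, 10.2, 9.9, 10.1]   # values cluster near the mean
wide = [2.0, 18.0, 10.0, 1.0, 19.0]    # values scattered far from the mean

for name, values in [("tight", tight), ("wide", wide)]:
    m = statistics.mean(values)
    sd = statistics.pstdev(values)
    print(f"{name}: mean={m:.1f}, std dev={sd:.4f}")
```

Both sets report a mean of 10.0, so the mean alone cannot distinguish them; only the standard deviation reveals that conclusions drawn from the second set are far riskier.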