Confidence Intervals
It is becoming more common, when reporting statistical measurements, to use confidence intervals to describe the result. A range of values is given such that, with some stated likelihood (often chosen as 95%), the true value lies within that range. For simple results, reporting a confidence interval plays much the same role as reporting a mean ± SD; for a 95% interval it is roughly the mean ± twice the standard error of the mean. The approach becomes particularly useful when we want to report "no difference" in correct statistical fashion. Here, the proper description is that the true difference lies in an interval smaller than some value "x" at the stated level of confidence.
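As a concrete illustration, here is a minimal Python sketch of forming a 95% confidence interval for a mean from a sample: the interval is the sample mean ± a t critical value times the standard error. The data values are made up for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of repeated measurements (illustrative values only).
data = np.array([9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3])

n = len(data)
mean = data.mean()
sem = stats.sem(data)                  # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"mean = {mean:.3f}, 95% CI = ({lower:.3f}, {upper:.3f})")
```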
In sum, for hypothesis testing we want the following:
- Null hypothesis: e.g., mean = 0 (no difference)
- Alternative hypothesis: e.g., mean = measured value "m"
- Alpha value (chosen in advance): e.g., alpha = 0.05
- Power (chosen in advance, in conjunction with a sample-size calculation so that we have enough data points): e.g., power = 0.8
- P value: computed from the data (a sketch pulling these ingredients together follows this list)
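A minimal sketch of these ingredients in Python, assuming scipy and statsmodels are available; the effect size and the two samples are made-up illustrative values, not the author's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

alpha = 0.05        # chosen in advance
power = 0.80        # chosen in advance
effect_size = 0.8   # assumed standardized effect size (Cohen's d) for planning

# Sample-size calculation: points per group needed to reach the desired power.
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha, power=power)
print(f"planned sample size per group: {np.ceil(n_per_group):.0f}")

# After collecting data (illustrative values), compute the P value with a t-test.
group_a = np.array([10.1, 9.8, 10.4, 10.0, 9.9, 10.3, 10.2, 9.7])
group_b = np.array([10.9, 10.6, 11.1, 10.8, 10.7, 11.0, 10.5, 10.9])
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"P = {p_value:.4f}; significant at alpha = {alpha}: {p_value < alpha}")
```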
We can claim a significant difference, that is, reject the null hypothesis in favor of the alternative, if P is less than alpha. If we are reporting a value, we can give it as a confidence interval; if we are claiming "no difference," we can report that the difference, if any, lies within an interval smaller than some computed value.
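For the "no difference" case, a sketch of the corresponding report: form a confidence interval on the difference itself and state the bound "x" that the interval stays within. A paired-difference design is assumed here, with made-up values.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements of the same samples by two methods.
method_1 = np.array([5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0])
method_2 = np.array([5.0, 5.0, 5.2, 5.1, 5.1, 4.9, 5.2, 5.0])

diff = method_1 - method_2
n = len(diff)
sem = stats.sem(diff)
t_crit = stats.t.ppf(0.975, df=n - 1)

lower = diff.mean() - t_crit * sem
upper = diff.mean() + t_crit * sem
x = max(abs(lower), abs(upper))  # the bound "x" referred to in the text

print(f"95% CI for the difference: ({lower:.3f}, {upper:.3f})")
print(f"with 95% confidence, the true difference is smaller than {x:.3f}")
```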