
Error in estimates

The estimated standard error of a mean $\bar{x}$ based on $n$ independent data points $x_1, x_2, \ldots, x_n$ is
\begin{displaymath}
\sqrt{s^2 / n} \qquad (1)
\end{displaymath}
where $s^2 = \sum_{i = 1}^{n} (x_i - \bar{x})^2 / (n - 1)$ and $\bar{x} = \sum_{i = 1}^{n} x_i / n$.
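
As a quick illustration of formula (1), the following minimal Python sketch computes $\bar{x}$, $s^2$, and the estimated standard error for a small sample; the data values are hypothetical, made up for the example.

\begin{verbatim}
import numpy as np

def estimated_se(x):
    """Estimated standard error of the mean, sqrt(s^2 / n),
    where s^2 is the sample variance with divisor (n - 1)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.sum() / n                      # \bar{x} = sum(x_i) / n
    s2 = ((x - xbar) ** 2).sum() / (n - 1)  # s^2 = sum((x_i - \bar{x})^2) / (n - 1)
    return np.sqrt(s2 / n)

# Hypothetical data, made up for the example
x = [3.2, 4.1, 2.8, 5.0, 3.7, 4.4, 3.9, 4.8]
print(estimated_se(x))   # same value as np.std(x, ddof=1) / np.sqrt(len(x))
\end{verbatim}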

Under quite general conditions on F, the distribution of $\bar{x}$ will be approximately normal as n gets large, which we can write as
\begin{displaymath}
\bar{x} \sim N(\mu_F, \sigma_F^2 / n) \qquad (2)
\end{displaymath}

where the first and second terms in the parentheses are the mean and the variance of the distribution, respectively.

Roughly speaking, we expect $\bar{x}$ to be less than one standard error (s.e.) away from $\mu_F$ about 68% of the time, and less than two s.e. away from $\mu_F$ about 95% of the time (see Figure 1).
  

Figure 1. Expectation of $x$: $\mu_F = \mbox{E}_F(x)$; variance of $x$: $\sigma_F^2 = \mbox{var}_F(x) = \mbox{E}_F[(x - \mu_F)^2]$; variance of $\bar{x}$: $\sigma_F^2 / n$.
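
The coverage statement above can be checked by simulation. The sketch below (not part of the original text) assumes a particular $F$, here a normal distribution with made-up values $\mu_F = 10$, $\sigma_F = 2$, and $n = 30$, draws many samples of size $n$, and counts how often $\bar{x}$ falls within one and two standard errors of $\mu_F$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu_F, sigma_F, n = 10.0, 2.0, 30      # assumed true mean, sd, and sample size
se = sigma_F / np.sqrt(n)             # standard error of the sample mean under F

# Draw 100,000 samples of size n from F and keep each sample mean
xbar = rng.normal(mu_F, sigma_F, size=(100_000, n)).mean(axis=1)

print(np.mean(np.abs(xbar - mu_F) < se))      # about 0.68
print(np.mean(np.abs(xbar - mu_F) < 2 * se))  # about 0.95
\end{verbatim}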

Unfortunately, for most statistical estimators other than the mean there is no formula like (1) to provide estimated standard errors. Recently, numerical techniques have been developed to obtain these estimates. In the following paragraphs we explore three of these techniques.
