
Arithmetic Mean

For a Continuous Distribution function, the arithmetic mean of the population, denoted $\mu$, $\bar x$, $\left\langle{x}\right\rangle{}$, or $A(x)$, is given by

\begin{displaymath}
\mu = \left\langle{f(x)}\right\rangle{} \equiv \int_{-\infty}^\infty P(x)f(x)\,dx,
\end{displaymath} (1)

where $\left\langle{x}\right\rangle{}$ is the Expectation Value. For a Discrete Distribution,
\begin{displaymath}
\mu = \left\langle{f(x)}\right\rangle{} \equiv {\sum_{n=0}^N P(x_n)f(x_n)\over \sum_{n=0}^N P(x_n)} = \sum_{n=0}^N P(x_n)f(x_n),
\end{displaymath} (2)

where the last equality follows because the probabilities $P(x_n)$ sum to unity.
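Eq. (2) can be checked numerically. The following sketch computes the probability-weighted mean of a function over a discrete distribution; the fair-die example is an illustrative assumption, not part of the original text.

```python
# Illustration of Eq. (2): the mean of f(x) under a discrete distribution
# is the probability-weighted sum of f(x_n), normalized by the total mass.

def discrete_mean(P, f, xs):
    """Return sum_n P(x_n) f(x_n) / sum_n P(x_n)."""
    num = sum(P(x) * f(x) for x in xs)
    den = sum(P(x) for x in xs)
    return num / den

# Assumed example: a fair six-sided die, P(x) = 1/6 for x = 1..6.
xs = range(1, 7)
P = lambda x: 1 / 6

print(discrete_mean(P, lambda x: x, xs))      # mean of x: 3.5
print(discrete_mean(P, lambda x: x * x, xs))  # mean of x^2: 91/6
```

Because the $P(x_n)$ already sum to unity here, the normalizing denominator has no effect, mirroring the final equality in Eq. (2).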

The population mean satisfies
\begin{displaymath}
\left\langle{f(x)+g(x)}\right\rangle{} = \left\langle{f(x)}\right\rangle{} +\left\langle{g(x)}\right\rangle{}
\end{displaymath} (3)

\begin{displaymath}
\left\langle{cf(x)}\right\rangle{} = c\left\langle{f(x)}\right\rangle{},
\end{displaymath} (4)

\begin{displaymath}
\left\langle{f(x)g(y)}\right\rangle{} = \left\langle{f(x)}\right\rangle{}\left\langle{g(y)}\right\rangle{}
\end{displaymath} (5)

if $x$ and $y$ are Independent Statistics. The ``sample mean,'' which is the mean estimated from a statistical sample, is an Unbiased Estimator for the population mean.

For small samples, the mean is more efficient than the Median and approximately $\pi/2$ less subject to sampling fluctuation (Kenney and Keeping 1962, p. 211). A general expression which often holds approximately is

\begin{displaymath}
\mathop{\rm mean}-\mathop{\rm mode} \approx 3(\mathop{\rm mean}-\mathop{\rm median}).
\end{displaymath} (6)

Given a set of samples $\{x_i\}$, the arithmetic mean is

\begin{displaymath}
A(x)\equiv \bar x \equiv \mu \equiv \left\langle{x}\right\rangle{} = {1\over N}\sum_{i=1}^N x_i.
\end{displaymath} (7)
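Eq. (7) translates directly into code. A minimal sketch, with assumed example data, cross-checked against the standard library:

```python
from statistics import mean

# Eq. (7): the arithmetic mean of N samples is their sum divided by N.
def arithmetic_mean(samples):
    return sum(samples) / len(samples)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # assumed example data
print(arithmetic_mean(data))                 # 5.0
print(arithmetic_mean(data) == mean(data))   # True
```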

Hoehn and Niven (1985) show that
\begin{displaymath}
A(a_1+c, a_2+c, \ldots, a_n+c)=c+A(a_1, a_2, \ldots, a_n)
\end{displaymath} (8)

for any Positive constant $c$. The arithmetic mean satisfies
\begin{displaymath}
A\geq G\geq H,
\end{displaymath} (9)

where $G$ is the Geometric Mean and $H$ is the Harmonic Mean (Hardy et al. 1952; Mitrinovic 1970; Beckenbach and Bellman 1983; Bullen et al. 1988; Mitrinovic et al. 1993; Alzer 1996). This can be shown as follows. For $a,b>0$,
\begin{displaymath}
\left({{1\over\sqrt{a}}-{1\over\sqrt{b}}}\right)^2\geq 0
\end{displaymath} (10)

\begin{displaymath}
{1\over a}-{2\over\sqrt{ab}}+{1\over b}\geq 0
\end{displaymath} (11)

\begin{displaymath}
{1\over a}+{1\over b} \geq {2\over\sqrt{ab}}
\end{displaymath} (12)

\begin{displaymath}
\sqrt{ab}\geq {2\over{1\over a}+{1\over b}}
\end{displaymath} (13)

\begin{displaymath}
G\geq H,
\end{displaymath} (14)

with equality Iff $b=a$. To show the second part of the inequality,
\begin{displaymath}
(\sqrt{a}-\sqrt{b}\,)^2=a-2\sqrt{ab}+b\geq 0
\end{displaymath} (15)

\begin{displaymath}
{a+b\over 2}\geq\sqrt{ab}
\end{displaymath} (16)

\begin{displaymath}
A\geq G,
\end{displaymath} (17)

with equality Iff $a=b$. Combining (14) and (17) then gives (9).
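The chain $A\geq G\geq H$ of Eq. (9) is easy to check numerically. A minimal sketch, with assumed test values:

```python
import math

# Numerical illustration of Eq. (9), A >= G >= H, for a pair of
# positive numbers; equality holds iff a = b.
def means(a, b):
    A = (a + b) / 2           # arithmetic mean
    G = math.sqrt(a * b)      # geometric mean, Eq. (16)
    H = 2 / (1 / a + 1 / b)   # harmonic mean, Eq. (13)
    return A, G, H

A, G, H = means(2.0, 8.0)
print(A, G, H)  # 5.0 4.0 3.2
assert A >= G >= H

# Equality case a = b: all three means coincide.
print(means(4.0, 4.0))  # (4.0, 4.0, 4.0)
```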

Given $N$ independent random Gaussian Distributed variates $x_i$, each with population mean $\mu_i = \mu$ and Variance ${\sigma_i}^2 = \sigma^2$,

\begin{displaymath}
\bar x \equiv {1\over N} \sum_{i=1}^N x_i
\end{displaymath} (18)

\begin{displaymath}
\left\langle{\bar x}\right\rangle{} = {1\over N}\left\langle{\sum_{i=1}^N x_i}\right\rangle{} = {1\over N} \sum_{i=1}^N \left\langle{x_i}\right\rangle{} = {1\over N} \sum_{i=1}^N \mu = {1\over N}(N\mu) = \mu,
\end{displaymath} (19)

so the sample mean is an Unbiased Estimator of population mean. However, the distribution of $\bar x$ depends on the sample size. For large samples, $\bar x$ is approximately Normal. For small samples, Student's t-Distribution should be used.

The Variance of the sample mean, on the other hand, is the same for any underlying distribution with variance $\sigma^2$:

\begin{displaymath}
\mathop{\rm var}\nolimits (\bar x) = \mathop{\rm var}\nolimits \left({{1\over N} \sum_{i=1}^N x_i}\right) = {1\over N^2} \mathop{\rm var}\nolimits \left({\sum_{i=1}^N x_i}\right) = {1\over N^2} \sum_{i=1}^N \mathop{\rm var}\nolimits (x_i) = {1\over N^2} \sum_{i=1}^N \sigma^2 = {\sigma^2\over N}.
\end{displaymath} (20)
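Eqs. (19) and (20) can be verified by simulation. A minimal Monte Carlo sketch; the values of $\mu$, $\sigma$, $N$, and the number of trials are illustrative assumptions:

```python
import random

# Monte Carlo check of Eqs. (19) and (20): for N independent Gaussian
# variates with mean mu and variance sigma^2, the sample mean has
# expectation mu and variance sigma^2 / N.
random.seed(0)
mu, sigma, N, trials = 1.0, 2.0, 16, 20000

means = []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(N)]
    means.append(sum(sample) / N)

mean_of_means = sum(means) / trials
var_of_means = sum((m - mean_of_means) ** 2 for m in means) / trials
print(mean_of_means)  # close to mu = 1.0
print(var_of_means)   # close to sigma^2 / N = 0.25
```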

From k-Statistic for a Gaussian Distribution, the Unbiased Estimator for the Variance is given by
\begin{displaymath}
\sigma^2 = {N\over N-1} s^2,
\end{displaymath} (21)

where

\begin{displaymath}
s^2\equiv {1\over N} \sum_{i=1}^N (x_i-\bar x)^2,
\end{displaymath} (22)

so

\begin{displaymath}
\mathop{\rm var}\nolimits (\bar x) = {s^2\over N-1}.
\end{displaymath} (23)

The Square Root of this,
\begin{displaymath}
\sigma_{\bar x} = {s\over \sqrt{N-1}},
\end{displaymath} (24)

is called the Standard Error.
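Eqs. (22)-(24) combine into a short computation of the Standard Error from a sample. A minimal sketch, with assumed example data:

```python
import math

# Eqs. (22)-(24): s^2 = (1/N) sum (x_i - xbar)^2, and the standard
# error of the mean is s / sqrt(N - 1).
def standard_error(samples):
    N = len(samples)
    xbar = sum(samples) / N
    s2 = sum((x - xbar) ** 2 for x in samples) / N  # Eq. (22)
    return math.sqrt(s2 / (N - 1))                  # Eq. (24)

data = [4.0, 8.0, 6.0, 5.0, 7.0]  # assumed example data, xbar = 6, s^2 = 2
print(standard_error(data))       # sqrt(2) / 2, approximately 0.7071
```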
\begin{displaymath}
\mathop{\rm var}\nolimits (\bar x)\equiv \left\langle{\bar x^2}\right\rangle{}-\left\langle{\bar x}\right\rangle{}^2,
\end{displaymath} (25)

so

\begin{displaymath}
\left\langle{\bar x^2}\right\rangle{} = \mathop{\rm var}\nolimits (\bar x)+\left\langle{\bar x}\right\rangle{}^2 = {\sigma^2\over N}+\mu^2.
\end{displaymath} (26)

See also Arithmetic-Geometric Mean, Arithmetic-Harmonic Mean, Carleman's Inequality, Cumulant, Generalized Mean, Geometric Mean, Harmonic Mean, Harmonic-Geometric Mean, Kurtosis, Mean, Mean Deviation, Median (Statistics), Mode, Moment, Quadratic Mean, Root-Mean-Square, Sample Variance, Skewness, Standard Deviation, Trimean, Variance


Abramowitz, M. and Stegun, C. A. (Eds.). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing. New York: Dover, p. 10, 1972.

Alzer, H. ``A Proof of the Arithmetic Mean-Geometric Mean Inequality.'' Amer. Math. Monthly 103, 585, 1996.

Beckenbach, E. F. and Bellman, R. Inequalities. New York: Springer-Verlag, 1983.

Beyer, W. H. CRC Standard Mathematical Tables, 28th ed. Boca Raton, FL: CRC Press, p. 471, 1987.

Bullen, P. S.; Mitrinovic, D. S.; and Vasic, P. M. Means & Their Inequalities. Dordrecht, Netherlands: Reidel, 1988.

Hardy, G. H.; Littlewood, J. E.; and Pólya, G. Inequalities. Cambridge, England: Cambridge University Press, 1952.

Hoehn, L. and Niven, I. ``Averages on the Move.'' Math. Mag. 58, 151-156, 1985.

Kenney, J. F. and Keeping, E. S. Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, 1962.

Mitrinovic, D. S.; Pecaric, J. E.; and Fink, A. M. Classical and New Inequalities in Analysis. Dordrecht, Netherlands: Kluwer, 1993.

Vasic, P. M. and Mitrinovic, D. S. Analytic Inequalities. New York: Springer-Verlag, 1970.


© 1996-9 Eric W. Weisstein