### Calculating the Variance with the `VAR()` Function


When computing the variability of a set of values, one straightforward approach would be to calculate how much each value deviates from the mean. You could then add those differences and divide by the number of values in the sample to get what might be called the “average difference.” The problem, however, is that, by definition of the arithmetic mean, adding the differences (some of which are positive and some of which are negative) gives the result `0`. To solve this problem, you need to add the absolute values of the deviations and then divide by the sample size. This is what statisticians call the average deviation.
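The cancellation and the fix described above can be sketched in a few lines of plain Python (the sample values are made up for illustration):

```python
values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(values)
mean = sum(values) / n  # arithmetic mean of the sample

# The raw deviations cancel out: by definition of the mean,
# their sum is 0 (up to floating-point rounding).
raw_sum = sum(v - mean for v in values)

# Taking absolute values before averaging avoids the cancellation;
# this is the average deviation the text describes.
avg_deviation = sum(abs(v - mean) for v in values) / n
```

For this sample the mean is 5.0, the raw deviations sum to 0, and the average deviation works out to 1.5.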

Unfortunately, even this simple approach is problematic because (for highly technical reasons) mathematicians tend to shudder at equations that require absolute values. To get around this, they instead square each deviation from the mean, which always yields a positive number. They then sum these squares and divide by the number of values to get the variance.