Important Assumptions in ANOVA

The ANOVA method rests on some very important assumptions.  First, as with the t tests, there is an assumption of normality: the groups must be normally distributed on the "dependent" variable.  ANOVA is relatively robust to violations of this assumption, provided the violations are not too severe.  The best way to assess deviation from normality is simply to examine a histogram for each group.  The issue of normality has been covered well in previous courses.
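If you would like to try this histogram check outside of SPSS, here is a minimal Python sketch.  The scores and group names are made up for illustration only:

```python
import matplotlib.pyplot as plt

# Made-up scores, one list per group (placeholder names).
groups = {"Group 1": [12, 15, 14, 10, 13, 18, 16, 11, 14, 15],
          "Group 2": [22, 25, 19, 24, 21, 23, 26, 20, 22, 24],
          "Group 3": [31, 29, 35, 30, 33, 28, 32, 34, 29, 31]}

# One histogram per group: look for marked skew or heavy tails.
fig, axes = plt.subplots(1, len(groups), figsize=(12, 3), sharey=True)
for ax, (name, scores) in zip(axes, groups.items()):
    ax.hist(scores, bins=6)
    ax.set_title(name)
    ax.set_xlabel("Dependent variable")
plt.tight_layout()
plt.show()
```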

A more vexing problem in practice is the assumption of equal variances.  There is often evidence that this assumption is violated (remember that the standard deviation squared is the variance, so large differences in standard deviations among groups are evidence of a problem).  In fact, the major statistical software packages provide tests for this assumption; the most common are the Bartlett-Box test, Hartley's test, and Levene's test.  Calculating these statistics is beyond the scope of the course, but interpretation is relatively straightforward.  For these tests, the null hypothesis is that the variances of the individual cells are equal.  When the null is rejected, the variances are judged unequal, and the assumption is not met for ANOVA.  However, these tests tend to be very powerful, especially when sample size is large, so a violation is probably not cause for real concern until the null is rejected at less than the .001 level (sig or p < .001).
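To see what these tests look like outside of SPSS, the sketch below uses SciPy's built-in Levene and Bartlett tests on made-up data.  The interpretation is exactly as described above: the null hypothesis is that the group variances are equal.

```python
from scipy import stats

# Made-up scores for three groups; note the very different spreads.
g1 = [12, 15, 14, 10, 13, 18, 16]
g2 = [22, 45, 9, 54, 21, 3, 66]
g3 = [31, 29, 35, 30, 33, 28, 32]

# Null hypothesis for both tests: the group variances are equal.
levene_stat, levene_p = stats.levene(g1, g2, g3)
bartlett_stat, bartlett_p = stats.bartlett(g1, g2, g3)

print(f"Levene:   statistic = {levene_stat:.3f}, p = {levene_p:.4f}")
print(f"Bartlett: statistic = {bartlett_stat:.3f}, p = {bartlett_p:.4f}")

# Per the rule of thumb above, treat the assumption as violated
# only when p < .001, especially with large samples.
```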

When variances among groups are unequal, they are referred to as heterogeneous, and the homogeneity of variance assumption is said to be violated.  When this is the case, there are ways of correcting the analysis.  We will now consider two approaches for testing the omnibus null when the homogeneity of variance assumption is untenable.  Both tests modify the calculation of MSbg and/or MSwg and, most importantly, adjust the critical value of F by adjusting the df for MSwg.

These methods are mathematically tedious, but not incomprehensible (make sure you understand the difference: a "tedious" task would take a whole day and leave you bored or angry; an "incomprehensible" one you could not do even if you had the time).  Make sure you know what you would have to do to perform these methods, even though you do not plan to (there are ways of testing to see if you have done this!).  There is nothing in the formulas below you have not seen before; it is just n (group size), N (total sample size), s (standard deviation), a (number of groups), and so on.

The first method is the Brown-Forsythe method (Brown & Forsythe, 1974).  In this method, MSwg is modified to yield a special F statistic (F*).  The F* value is then evaluated at a special denominator df value (df*):

$$F^{*} = \frac{\sum_{j=1}^{a} n_j (\bar{Y}_j - \bar{Y})^2}{\sum_{j=1}^{a} \left(1 - n_j/N\right) s_j^2}$$

$$\frac{1}{df^{*}} = \sum_{j=1}^{a} \frac{c_j^2}{n_j - 1}, \qquad c_j = \frac{\left(1 - n_j/N\right) s_j^2}{\sum_{k=1}^{a} \left(1 - n_k/N\right) s_k^2}$$

where $\bar{Y}_j$, $s_j^2$, and $n_j$ are the mean, variance, and size of group j, $\bar{Y}$ is the grand mean, and $F^{*}$ is evaluated with $a - 1$ numerator df and $df^{*}$ denominator df.
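To make the arithmetic concrete, here is a minimal NumPy/SciPy sketch of the F* computation above.  It is a teaching illustration, not SPSS's implementation; the function name is ours, and the input is one array of scores per group.

```python
import numpy as np
from scipy import stats

def brown_forsythe_fstar(*groups):
    """Sketch of the Brown-Forsythe F* computation described above."""
    a = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    N = n.sum()
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])  # s_j^2
    grand_mean = np.concatenate(groups).mean()

    # Numerator: between-groups sum of squares.
    ss_bg = np.sum(n * (means - grand_mean) ** 2)
    # Denominator: group variances weighted by (1 - n_j / N).
    denom = np.sum((1 - n / N) * variances)
    f_star = ss_bg / denom

    # Satterthwaite-type denominator df*.
    c = (1 - n / N) * variances / denom
    df_star = 1.0 / np.sum(c ** 2 / (n - 1))

    # Evaluate F* with a - 1 numerator df and df* denominator df.
    p_value = stats.f.sf(f_star, a - 1, df_star)
    return f_star, df_star, p_value
```

Note that stats.f.sf accepts non-integer df, so df* need not be rounded.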

The second is the Welch method (Welch, 1951).   Here, a special statistic named W is computed, and it is evaluated against the F distribution at a specially computed denominator df value:

$$w_j = \frac{n_j}{s_j^2}, \qquad \bar{Y}' = \frac{\sum_{j=1}^{a} w_j \bar{Y}_j}{\sum_{j=1}^{a} w_j}$$

$$W = \frac{\dfrac{1}{a - 1} \sum_{j=1}^{a} w_j (\bar{Y}_j - \bar{Y}')^2}{1 + \dfrac{2(a - 2)}{a^2 - 1} \sum_{j=1}^{a} \dfrac{\left(1 - w_j / \sum_k w_k\right)^2}{n_j - 1}}$$

$$df = \frac{a^2 - 1}{3 \sum_{j=1}^{a} \dfrac{\left(1 - w_j / \sum_k w_k\right)^2}{n_j - 1}}$$

where $W$ is referred to the F distribution with $a - 1$ numerator df and the $df$ above as denominator df.
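Here is a matching sketch of the W computation, under the same assumptions as the F* sketch above (our function name, one array of scores per group):

```python
import numpy as np
from scipy import stats

def welch_anova_w(*groups):
    """Sketch of the Welch W computation described above."""
    a = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])  # s_j^2

    w = n / variances                          # weights w_j = n_j / s_j^2
    y_bar = np.sum(w * means) / np.sum(w)      # weighted grand mean Y-bar'

    # The summation term shared by W's denominator and the df formula.
    term = np.sum((1 - w / w.sum()) ** 2 / (n - 1))

    W = (np.sum(w * (means - y_bar) ** 2) / (a - 1)) / \
        (1 + (2 * (a - 2) / (a ** 2 - 1)) * term)
    df_denom = (a ** 2 - 1) / (3 * term)

    # W is referred to the F distribution with a - 1 and df_denom df.
    p_value = stats.f.sf(W, a - 1, df_denom)
    return W, df_denom, p_value
```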


Post Hoc Analyses with Heterogeneous Variances

The real problem with unequal variances comes with the post hoc tests.  If you have been using SPSS, you have seen some of these tests on the post hoc menu: the Games-Howell procedure (Games & Howell, 1976) and Dunnett's T3 and C procedures (Dunnett, 1980).  Each of these tests corrects for unequal variances with adjustments to the formulas and/or the df values for the test statistic.  Notice that the df correction in all three methods is exactly the same:

$$df = \frac{\left(\dfrac{s_j^2}{n_j} + \dfrac{s_k^2}{n_k}\right)^2}{\dfrac{(s_j^2 / n_j)^2}{n_j - 1} + \dfrac{(s_k^2 / n_k)^2}{n_k - 1}}$$

for the comparison of groups j and k.  This is the same Satterthwaite-type adjustment used in the unequal-variances t test.
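As an illustration, here is a small sketch (our function name, made-up data) that computes this common df correction for a single pair of groups.  The critical values used by each post hoc procedure are a separate matter.

```python
import numpy as np

def pairwise_df(group_j, group_k):
    """Satterthwaite-type df for one pairwise comparison (formula above)."""
    n_j, n_k = len(group_j), len(group_k)
    q_j = np.var(group_j, ddof=1) / n_j   # s_j^2 / n_j
    q_k = np.var(group_k, ddof=1) / n_k   # s_k^2 / n_k
    return (q_j + q_k) ** 2 / (q_j ** 2 / (n_j - 1) + q_k ** 2 / (n_k - 1))

# Example with made-up scores: a pair with very unequal spreads
# yields a df well below (n_j - 1) + (n_k - 1).
print(pairwise_df([12, 15, 14, 10, 13, 18, 16],
                  [22, 45, 9, 54, 21, 3, 66]))
```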

References

Brown, M.B. & Forsythe, A.B. (1974).  The ANOVA and multiple comparisons for data with heterogeneous variances.  Biometrics, 30, 719-724.

Dunnett, C.W. (1980).  Pairwise multiple comparisons in the unequal variance case.  Journal of the American Statistical Association, 75, 796-800.

Games, P.A. & Howell, J.F. (1976).  Pairwise multiple comparison procedures with unequal N's and/or variances: A Monte Carlo study.  Journal of Educational Statistics, 1, 113-125.

Maxwell, S.E. & Delaney, H.D. (2004).  Designing experiments and analyzing data: A model comparison perspective.  Mahwah, N.J.: Lawrence Erlbaum.

Welch, B. L. (1951).  On the comparison of several mean values.  Biometrika, 38, 330-336.