MANOVA and MANCOVA

Multivariate analysis of variance (MANOVA) and multivariate analysis of covariance (MANCOVA) are used to test the statistical significance of the effect of one or more independent variables on a set of two or more dependent variables (after controlling for continuous covariates in the case of MANCOVA). MANOVA and MANCOVA are extensions of ANOVA and ANCOVA. The major difference is that ANOVA/ANCOVA evaluates mean differences on a single dependent (criterion) variable, while MANOVA/MANCOVA evaluates mean differences on two or more dependent (criterion) variables simultaneously.

The unique aspect of MANOVA/MANCOVA is the variate (supervariable), a linear combination of the dependent variables, Y*, that optimally combines the multiple DVs into a single value that maximizes the differences across groups. In other words, a new DV (the variate, supervariable, or linear combination of DVs) is created, and an ANOVA is then performed on this newly created DV (Y*). Note that in a factorial design (more than one IV), a different linear combination of the DVs is created separately for each main effect and each interaction effect.
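
To make the idea of the variate concrete, here is a minimal sketch of a one-way MANOVA in Python using statsmodels. The data frame, group labels, and DV names (group, y1, y2) are invented for illustration; MANOVA internally forms the linear combination of y1 and y2 that best separates the groups and then tests that separation.

```python
# A minimal one-way MANOVA sketch (hypothetical data: one IV "group",
# two DVs "y1" and "y2").
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 30  # observations per group
data = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], n),
    "y1": np.concatenate([rng.normal(m, 1.0, n) for m in (0.0, 0.5, 1.0)]),
    "y2": np.concatenate([rng.normal(m, 1.0, n) for m in (0.0, 0.3, 0.8)]),
})

# The multivariate test is carried out on the variate Y* (the optimal linear
# combination of y1 and y2), not on y1 and y2 separately.
fit = MANOVA.from_formula("y1 + y2 ~ group", data=data)
print(fit.mv_test())  # Wilks' Lambda, Pillai's trace, Hotelling's trace, Roy's root
```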

Why Use MANOVA?

  1. Researchers are usually interested in evaluating mean differences on several criterion variables rather than on a single criterion variable. Even if the researcher is interested only in the differences on each variable individually, MANOVA may still be the optimal technique. In this case, MANOVA is used to hold the overall alpha level at the desired level (usually .05), while the researcher is interested only in the separate univariate analyses that may subsequently be performed (see the sketch after this list).
  2. If the researcher wants to investigate the relationships among the variables instead of looking at each of them separately. In other words, when the researcher wants to evaluate the mean differences on all of the dependent variables simultaneously, while controlling for the intercorrelations among them.
  3. While MANOVA may provide a more useful and valid means of analyzing the data, this is not always the case, and there are situations in which MANOVA is unnecessary. First, if the dependent variables are uncorrelated, there is little advantage to using MANOVA; indeed, with uncorrelated criteria and a relatively small sample size, MANOVA may have less statistical power than separate ANOVAs. Second, the results from a MANOVA may be more complex and difficult to interpret than those from separate ANOVAs. Although this complexity may accurately reflect the phenomena under study, multivariate statistics can be harder to understand and therefore make interpretation more difficult. The opposite can also be true: MANOVA may sometimes simplify the data and make them more understandable.
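
As a rough illustration of the alpha-inflation argument in point 1, the short sketch below computes the family-wise Type I error rate when several univariate tests are each run at α = .05. For simplicity it assumes independent tests; with correlated DVs the inflation is smaller but still present.

```python
# Family-wise Type I error when k separate univariate ANOVAs are each
# run at alpha = .05 (worst case: independent tests).
alpha = 0.05
for k in (1, 2, 4, 8):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k} DVs tested separately -> family-wise alpha ~ {familywise:.3f}")
# With 4 DVs, the chance of at least one false positive is about .185,
# which is one reason to run a single multivariate test first.
```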

Assumptions of MANOVA

  • Independence of observations
  • Reliability of continuous variables
  • Multivariate normality (MVN) – MVN is assumed but is often hard to assess directly. Univariate normality does not guarantee multivariate normality; however, if all variables meet the univariate normality requirement, departures from multivariate normality are usually inconsequential. As usual, with larger samples the central limit theorem makes the normality assumption less of a concern.
  • Linearity among all pairs of DVs – departures from linearity reduce power because the linear combination of the DVs no longer maximizes the differences between the groups.
  • Absence of multicollinearity and singularity among the dependent variables.
  • Equality of variance-covariance matrices – the variance-covariance matrices should be equal across all groups (a non-significant result on Box’s M test; this is the multivariate counterpart of Levene’s test).
  • In sum, for the multivariate test procedures used with MANOVA to be valid:
    •  Observations must be independent.
    •  Variance-covariance matrices must be equal (or comparable) for all groups.
    •  Variables are reliable (e.g., Cronbach’s α > .80, preferably .90).
    •  DVs must have a multivariate normal distribution (MVN).
  • Homogeneity (equality) of covariance matrices is the multivariate version of homogeneity of variance (cf. Levene’s test for equal error variances in ANOVA). The assumption is that the variance-covariance matrix in each cell of the design is sampled from the same population (the null hypothesis), so that the matrices can reasonably be pooled together to create a single error term.
  • If sample sizes are equal in each cell, MANOVA has been shown to be robust to violations of this assumption even when Box’s M test is significant, so Box’s M test can be ignored. Box’s M test is also highly sensitive to departures from multivariate normality and to large sample sizes, which further limits its usefulness.

  • If sample sizes are unequal, one can evaluate Box’s M test at a more stringent alpha (α = .001). If it is significant (p < .001), the homogeneity of covariance matrices cannot be assumed to hold and the test is questionable.
    • If the cells with larger samples have larger variances, the test is more likely to be robust to Type I error.
    • If the cells with fewer cases have larger variances, only null hypotheses that are retained can be trusted with confidence; rejecting them is questionable. In such a case, use a more stringent criterion for the subsequent MANOVA/MANCOVA statistical test (e.g., use Pillai’s criterion instead of Wilks’ Lambda; Olson, 1979).
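
Most statistics packages report Box’s M for you, but for readers who want to see what is being computed, here is a minimal sketch using the standard chi-square approximation to Box’s M. The function name, the data layout (one array of DV scores per cell), and the example data are assumptions made for illustration.

```python
# A minimal sketch of Box's M test for equality of covariance matrices,
# using the standard chi-square approximation. "groups" is a list of
# (n_i x p) NumPy arrays, one per cell of the design (hypothetical layout).
import numpy as np
from scipy import stats

def box_m(groups):
    k = len(groups)                                   # number of cells
    p = groups[0].shape[1]                            # number of DVs
    ns = np.array([g.shape[0] for g in groups])       # cell sizes
    N = ns.sum()
    covs = [np.cov(g, rowvar=False) for g in groups]  # unbiased S_i per cell
    # Pooled (error) covariance matrix, weighted by within-cell df
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - k)
    # Box's M statistic
    M = (N - k) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))
    # Scaling factor for the chi-square approximation
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (k - 1))) * (
        np.sum(1.0 / (ns - 1)) - 1.0 / (N - k))
    chi2 = M * (1 - c)
    dof = p * (p + 1) * (k - 1) / 2
    return chi2, dof, stats.chi2.sf(chi2, dof)

# Example with fabricated, unequal-n cells; evaluate at alpha = .001.
rng = np.random.default_rng(0)
cells = [rng.normal(size=(40, 3)), rng.normal(size=(25, 3)), rng.normal(size=(30, 3))]
chi2, dof, p_value = box_m(cells)
print(f"Box's M chi-square({dof:.0f}) = {chi2:.2f}, p = {p_value:.3f}")
```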

The effect of violating the assumptions:

MANOVA Test Statistics

  • Most MANOVA packages report several approximate multivariate tests. The four most widely used statistics for assessing the significance of group differences on the set of dependent variables are listed below; all four are computed from the between-groups (hypothesis) and within-groups (error) sums-of-squares-and-cross-products (SSCP) matrices, as sketched after this list:
    • Roy’s Largest Root
    • Wilks’ Lambda
    • Pillai’s Criterion
    • Hotelling’s Trace
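
All four statistics are functions of the eigenvalues of the product of the inverted error SSCP matrix and the hypothesis SSCP matrix. The sketch below computes them directly for a one-way design; the function name and the data layout (one array of DV scores per group) are hypothetical.

```python
# A minimal sketch of the four MANOVA test statistics for a one-way design,
# computed from the eigenvalues of E^-1 H. "groups" is a list of (n_i x p)
# NumPy arrays of DV scores, one per group (hypothetical layout).
import numpy as np

def manova_statistics(groups):
    X = np.vstack(groups)
    grand_mean = X.mean(axis=0)
    # Between-groups (hypothesis) SSCP matrix H
    H = sum(g.shape[0] * np.outer(g.mean(axis=0) - grand_mean,
                                  g.mean(axis=0) - grand_mean) for g in groups)
    # Within-groups (error) SSCP matrix E
    E = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
    # Nonzero eigenvalues of E^-1 H (at most min(p, k - 1) of them)
    eigvals = np.linalg.eigvals(np.linalg.solve(E, H)).real
    eigvals = np.sort(eigvals[eigvals > 1e-10])[::-1]
    return {
        "Wilks' Lambda": np.prod(1.0 / (1.0 + eigvals)),
        "Pillai's trace": np.sum(eigvals / (1.0 + eigvals)),
        "Hotelling's trace": np.sum(eigvals),
        "Roy's largest root": eigvals[0],  # some packages report lambda/(1 + lambda)
    }
```

Smaller values of Wilks’ Lambda and larger values of the other three indicate stronger group separation; the approximate F values reported by software are derived from these quantities.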

MANOVA Test Statistics – What to Use?

  • When there is only one factor with two levels, Wilks’ Lambda, Pillai’s trace, Hotelling’s trace, and Roy’s largest root are equivalent: there is only one discriminant variate, so each statistic is a function of the same quantity. The associated F values are essentially the same, and the decision regarding whether the effect is significant will be identical (a quick demonstration follows this list).
  • When there is more than one degree of freedom for the effect, some researchers prefer Pillai’s trace, but most rely on Wilks’ Lambda, Hotelling’s trace, or Roy’s largest root.
  • As sample sizes decrease, cell n’s become unequal, and/or the assumption of homogeneity of variance-covariance matrices is violated, Pillai’s criterion is the more robust choice.
  • In general, all four tests are relatively robust to violations of multivariate normality.
  • Here are two suggestions:
    • Roy’s largest root is not robust when the homogeneity of covariance matrices assumption is untenable (Stevens, 1979).
    • When sample sizes are equal, Pillai’s trace is the most robust to violations of assumptions (Bray & Maxwell, 1985).
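
As a quick check of the first point above, the sketch below runs a MANOVA with a single two-level factor on invented data. Because there is only one discriminant variate in this case, the printed table shows the same F and p-value for all four statistics.

```python
# With one two-level factor there is a single discriminant variate, so all
# four multivariate statistics lead to the same F and p-value (invented data).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 25
two_group = pd.DataFrame({
    "group": np.repeat(["control", "treatment"], n),
    "y1": np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(0.6, 1.0, n)]),
    "y2": np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(0.4, 1.0, n)]),
})
print(MANOVA.from_formula("y1 + y2 ~ group", data=two_group).mv_test())
```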

References:

Bray, J. H., & Maxwell, S. E. (1985). Multivariate analysis of variance (Sage University Paper Series on Quantitative Applications in the Social Sciences, No. 07-054). Newbury Park, CA: Sage.

Stevens, J. P. (1979). Comment on Olson: choosing a test statistic in multivariate analysis of variance. Psychological Bulletin, 86, 355-360.

Weinfurt, K. P. (1995). Multivariate analysis of variance. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics (pp. 245–276). American Psychological Association.

Useful Resources: