Linear Models and Analysis of
Variance:
CONCEPTS, MODELS, AND APPLICATIONS
Volume II
First Edition
David W. Stockburger
Southwest Missouri State
University
© Copyright 1993
TABLE OF CONTENTS
Title Page
EXPERIMENTAL DESIGNS.............................................................................. 132
Notation.................................................................................................... 133
Kinds of Factors........................................................................................ 134
Treatment...................................................................................... 134
Group Factors............................................................................... 135
Trials Factors................................................................................ 135
Blocking........................................................................................ 135
Unit Factors.................................................................................. 137
Error Factors................................................................................. 137
Fixed and Random Factors........................................................................ 137
Fixed Factors................................................................................ 137
Random Factors............................................................................ 138
Relationships Between Factors................................................................... 139
Crossed........................................................................................ 139
Nested.......................................................................................... 140
An Example Design................................................................................... 142
A Second Example Design......................................................................... 144
A Third Example Design............................................................................ 146
Determining the Number of Subjects and Measures per Subject................. 148
Setting up the Data Matrix......................................................................... 148
A Note of Caution..................................................................................... 149
One Between Group ANOVA.............................................................................. 150
Why Multiple Comparisons Using t-tests is NOT the Analysis of Choice..... 150
The Bottom Line – Results and Interpretation of ANOVA.......................... 151
HYPOTHESIS TESTING THEORY UNDERLYING ANOVA.............. 153
The Sampling Distribution Reviewed.............................................. 153
Two Ways of Estimating the Population Parameter σ_{X}²................... 154
The F-ratio and F-distribution.................................................................... 157
Nonsignificant and Significant F-ratios....................................................... 159
Similarity of ANOVA and t-test................................................................. 162
EXAMPLE OF A NONSIGNIFICANT ONE-WAY ANOVA.............. 164
EXAMPLE OF A SIGNIFICANT ONE-WAY ANOVA........................ 164
USING MANOVA.................................................................................. 164
The Data....................................................................................... 165
Example Output............................................................................. 166
Dot Notation............................................................................................. 168
POST-HOC Tests of Significance.............................................................. 170
Example SPSS Program using MANOVA................................................. 173
Interpretation of Output............................................................................ 174
Graphs of Means....................................................................................... 175
The ANOVA Summary Table................................................................... 176
Main Effects.................................................................................. 177
Simple Main Effects....................................................................... 178
Interaction Effects.......................................................................... 178
Example Data Sets, Means, and Summary Tables...................................... 180
No Significant Effects..................................................................... 180
Main Effect of A............................................................................ 181
Main Effect of B............................................................................ 182
AB Interaction............................................................................... 183
Main Effects of A and B................................................................ 184
Main Effect of A, AB Interaction ................................................... 185
Main Effect of B, AB Interaction.................................................... 186
Main Effects of A and B, AB Interaction........................................ 187
No Significant Effects..................................................................... 188
Dot Notation Revisited.............................................................................. 189
Nested Two Factor Between Groups Designs B(A)............................................... 192
The Design................................................................................................ 192
The Data................................................................................................... 192
SPSS commands....................................................................................... 193
The Analysis.............................................................................................. 193
The Table of Means....................................................................... 194
Graphs.......................................................................................... 194
The ANOVA Table....................................................................... 195
Interpretation of Output............................................................................. 195
Similarities to the A X B Analysis............................................................... 195
Contrasts, Special and Otherwise........................................................................... 197
Definition................................................................................................... 197
Sets of Contrasts....................................................................................... 198
Orthogonal Contrasts..................................................................... 198
Non-orthogonal Contrasts............................................................. 199
Sets of Orthogonal Contrasts......................................................... 199
Finding Sets of Orthogonal Contrasts............................................ 199
The Data................................................................................................... 201
SPSS commands....................................................................................... 202
The Analysis.............................................................................................. 202
The Table of Means....................................................................... 202
The ANOVA table........................................................................ 203
Interpretation of Output............................................................................. 203
Constants.................................................................................................. 203
Contrasts, Designs, and Effects.................................................................. 205
Non-Orthogonal Contrasts........................................................................ 208
Smaller than Total Sum of Squares................................................. 208
Larger than Total Sum of Squares.................................................. 209
Standard Types of Orthogonal Contrasts.................................................... 209
DIFFERENCE.............................................................................. 210
SIMPLE....................................................................................... 210
POLYNOMIAL........................................................................... 210
Conclusion................................................................................................ 213
ANOVA and Multiple Regression.......................................................................... 214
ONE FACTOR ANOVA......................................................................... 214
ANOVA and Multiple Regression.................................................. 214
Example........................................................................................ 215
Example Using Contrasts............................................................... 216
Dummy Coding............................................................................. 216
ANOVA, Revisited....................................................................... 218
TWO FACTOR ANOVA........................................................................ 219
Example........................................................................................ 220
Example Using Contrasts............................................................... 220
Regression Analysis using Dummy Coding...................................... 221
Conclusion................................................................................................ 223
Unequal Cell Frequencies...................................................................................... 225
Equal Cell Frequency – Independence of Effects......................................... 225
Unequal Cell Frequency – Dependent Effects............................................. 226
Solutions for Dealing with Dependent Effects.............................................. 229
UNEQUAL CELL SIZES FROM A MULTIPLE REGRESSION VIEWPOINT........ 231
REGRESSION ANALYSIS OF UNEQUAL N ANOVA........................ 234
RECOMMENDATIONS......................................................................... 237
Subjects Crossed With Treatments S X A............................................................. 238
The Design................................................................................................ 238
The Data................................................................................................... 239
SPSS commands....................................................................................... 239
The Correlation Matrix.................................................................. 240
The Table of Means....................................................................... 240
Graphs.......................................................................................... 241
The ANOVA Table....................................................................... 241
Interpretation of Output................................................................. 242
Additional Assumptions for Univariate S X A Designs................................ 244
SS, MS, and Expected Mean Squares (EMS)................................ 245
Subjects Crossed With Two Treatments S X A X B............................................. 249
The Design................................................................................................ 249
The Data................................................................................................... 249
SPSS commands....................................................................................... 250
The Correlation Matrix.................................................................. 251
The Table of Means....................................................................... 251
Graphs.......................................................................................... 253
The ANOVA Table....................................................................... 253
Interpretation of Output................................................................. 253
Additional Assumptions for Univariate S X A X B Designs......................... 258
SS, MS, and Expected Mean Squares (EMS)................................ 258
Mixed Designs – S ( A ) X B................................................................................. 261
The Design................................................................................................ 261
The Data................................................................................................... 261
SPSS commands....................................................................................... 262
The Table of Means....................................................................... 263
Graphs.......................................................................................... 264
Interpretation of Output................................................................. 264
Expected Mean Squares (EMS).................................................... 267
Three Factor ANOVA.......................................................................................... 268
Effects....................................................................................................... 268
Main Effects.................................................................................. 269
Two-Way Interactions................................................................... 269
Three-Way Interaction................................................................... 271
Additional Examples.................................................................................. 272
All Effects Significant..................................................................... 272
Example 3 – B, AC, and BC......................................................... 274
Two More Examples..................................................................... 275
Expected Mean Squares............................................................................ 276
Tests of Significance.................................................................................. 279
Error Terms................................................................................... 279
SPSS Output................................................................................. 279
Examples................................................................................................... 279
S ( A X B X C)............................................................................. 279
S ( A X B ) X C............................................................................ 280
S ( A ) X B X C............................................................................ 280
BIBLIOGRAPHY................................................................................................. 287
INDEX................................................................................................................. 289
Chapter 8

EXPERIMENTAL DESIGNS
Experimental design refers to the manner in which the experiment was set up. Experimental design includes the way the treatments were administered to subjects, how subjects were grouped for analysis, and how the treatments and grouping were combined.
In ANOVA there is a single dependent variable or score. In Psychology the dependent measure is
usually some measure of behavior. If
more than one measure of behavior is taken, multivariate analysis of variance,
or MANOVA, may be the appropriate analysis.
Because the ANOVA model breaks the score into component parts, or effects, which sum to the total score, one must assume the interval property
of measurement for this variable. Since
in real life the interval property is never really met, one must be satisfied
that at least an approximation of an interval scale exists for the dependent
variable. To the extent that this
assumption is unwarranted, the ANOVA hypothesis testing procedure will not
work.
In ANOVA there is at least one independent variable or factor. There are different kinds of factors: treatment, trial, blocking, and group. Each will be discussed in the following
section. All factors, however, have
some finite number of different levels.
Each level differs from the others in either some quality or quantity. The only restriction on the number of
levels is that there are fewer levels than scores, although in practice one
seldom sees more than ten levels in a factor unless the data set is very large. It is not necessary that the independent
variables or factors be measured on an interval scale. If the factors are measured on an
(approximate) interval scale, then some flexibility in analysis is gained. The continued popularity of ANOVA can
partially be explained by the lack of the necessity of the interval assumption
for the factors.
Notation
Every writer of an introductory, intermediate, or advanced
statistics text has his or her own pet notational system. I have taught using a number of different
systems and have unabashedly borrowed the one to be described below from Lee
(1975). In my opinion it is the easiest
for students to grasp.
The dependent variable or score will be symbolized by the
letter X. Subscripts (usually multiple)
will be tagged on this letter to differentiate the different scores. For example, to designate a single score
from a group of scores a single subscript would be necessary and the symbol X_{s}
could be used. In this case X_{1}
would indicate the first subject, X_{2} the second, X_{3} the
third, and so forth.
When it is desired to indicate a single score belonging to a
given combination of factors, multiple subscripts must be used. For example, X_{abs} would describe
a given score for a combination of a and b.
Thus, X_{236} would describe the sixth score when a=2 and
b=3. Another example, X_{413},
would describe the third score when a=4 and b=1.
Bolded capital letters will be used to symbolize
factors. Example factors are A, B,
C, ..., Z. Some factor
names are reserved for special factors.
For example, S will always refer to the subject factor, E will always be the error factor, and
G will be the group factor.
Small letters with a numerical subscript are used to
indicate specific levels of a factor.
For example c_{1} will indicate the first level of factor C,
while c_{c} will indicate an arbitrary, unspecified level of factor C. The number of
levels of a factor are given by the unbolded capital letter of that
factor. For example there are 1, 2,
..., C levels of factor C.
In an example experiment, let X, the score, be the dollar
amount after playing Windows^{TM} Solitaire for an hour. In this experiment the independent variable
(factor) is the amount of practice, called factor A. Let nine subjects each participate in one of
four (A=4) levels of training. The
first level, a_{1}, consists of no practice, a_{2} = one hour
of practice, a_{3} = five hours of practice, and a_{4} = twenty
hours of practice. A given score
(dollar amount) would be symbolized by X_{as}, where X_{35}
would be the fifth subject in the group that received five hours of practice.
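The subscript notation maps directly onto a lookup table keyed by (level, subject), which may help when moving from a design to a data file. A minimal Python sketch, assuming nine subjects per practice level; the dollar amounts are invented for illustration:

```python
# X_as for the solitaire experiment: factor A (practice) has A = 4 levels,
# with an assumed nine subjects per level.  A dictionary keyed by (a, s)
# mirrors the subscript notation.
scores = {(a, s): 0.0 for a in range(1, 5) for s in range(1, 10)}

# X_35: the fifth subject in the group with five hours of practice (a = 3);
# the dollar amount assigned here is invented.
scores[(3, 5)] = 1.25

print(len(scores))     # 4 levels x 9 subjects = 36 scores
print(scores[(3, 5)])  # 1.25
```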
Kinds of Factors
Treatment
Treatments will be defined as quantitatively or
qualitatively different levels of experience.
For example, in an experiment on the effects of caffeine, the treatment
levels might be exposure to different amounts of caffeine, from none to .0375
milligrams. In a very simple experiment
there are two levels of treatment, none, called the control condition, and
some, called the experimental condition.
Treatment factors are usually the main focus of the
experiment. A treatment factor is
characterized by the following two attributes (Lee, 1975):
1. An investigator could assign any of his
experimental subjects to any one of the levels of the factor.
2. The different levels of the factor
consist of explicitly distinguishable stimuli or situations in the environment
of the subject.
In the solitaire example, practice time would be a treatment
factor if the experimenter controlled the amount of time that the
subject practiced. If subjects came to
the experiment having already practiced a given amount, then the experimenter
could not arbitrarily or randomly assign that subject to a given practice
level. In that case the factor would no
longer be considered a treatment factor.
In an experiment where subjects are run in groups, it
sometimes is valuable to treat each group as a separate level of a factor. There might be, for example, an obnoxious
subject who affects the scores of all other subjects in that group. In this case the second attribute would not
hold and the factor would be called a group factor.
Group Factors
As described above, a group factor is one in which the
subjects are arbitrarily assigned to a given group which differs from other
groups only in that different subjects are assigned to it. If each group had some type of
distinguishing feature, other than the subjects assigned to it, then it would
no longer be considered as a group factor.
If a group factor exists in an experimental design, it will be
symbolized by G.
Trials Factors
If each subject is scored more than once under the same
condition and the separate scores are included in the analysis, then a trials
factor exists. If the different scores
for a subject are found under different levels of a treatment, then the factor would
be called a treatment factor rather than a trials factor. Trials factors will be denoted by T.
Trials factors are useful in examining practice or fatigue
effects. Any change in scores over time
may be attributed to having previously experienced similar conditions.
Blocking
If subjects are grouped according to some preexisting subject similarity, then that grouping is
called a blocking factor. The
experimenter has no choice but to assign the subject to one or the other of the
levels of a blocking factor. For
example, gender (sex) is often used as a blocking factor. A subject enters the experiment as either a
male or female and the experimenter may not arbitrarily (randomly) assign that
individual to one gender or the other.
Because the experimenter has no control over the assignment
of subjects to a blocking factor, causal inference is made much more
difficult. For example, if in the
solitaire experiment, the practice factor was based on a preexisting
condition, then any differences between the groups may be due either to
practice or to the fact that some subjects liked to play solitaire, were
better at the game and thus practiced more.
Since the subjects are self-selected, it is not possible to attribute
the differences between groups to practice, enjoyment of the game, natural skill
in playing the game, or some other reason.
It is possible, however, to say that the groups differed.
Even though causal inference is not possible, blocking factors can be useful. A factor which accounts for differences in the scores adds power to the experiment.
That is, a blocking factor which explains some of the differences
between scores may make it more likely to find treatment effects. For example, if males and females performed significantly differently in the
solitaire experiment, it might be useful to include sex as a blocking factor
because differences due to gender would be included in the error variance
otherwise.
In other cases blocking factors are interesting in their own
right. It may be interesting to know
that freshmen, sophomores, juniors, and seniors differ in attitude toward
university authority, even though causal inferences may not be made.
In some cases the preexisting condition is quantitative, as
in an IQ score or weight. In these cases
it is possible to use a median split where the scores above the median are placed
in one group and the scores below the median are placed in another. Variations of this procedure divide the
scores into three, four, or more approximately equal sized groups. Such procedures are not recommended as there
are better ways of handling such data (Edwards, 1985).
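As a concrete illustration of the median-split procedure (which, again, is not recommended; better alternatives exist, Edwards, 1985), a short Python sketch with invented IQ scores:

```python
from statistics import median

iq = [95, 102, 88, 110, 121, 99, 105, 130, 92, 108]  # invented scores

m = median(iq)                      # 103.5 for these data
high = [x for x in iq if x > m]     # block 1: above the median
low = [x for x in iq if x <= m]     # block 2: at or below the median

print(m, sorted(low), sorted(high))
```

Note how the procedure discards information: a subject at 105 and a subject at 130 land in the same block, which is one reason the approach is discouraged.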
Unit Factors
The unit factor is the entity from which a score is
taken. In experimental psychology, the
unit factor is usually a subject (human or animal), although classrooms,
dormitories, or other units may serve the same function. In this text, the unit factor will be
designated as S, with the understanding that it might be some other type
of unit than subject.
Error Factors
The error factor, designated as E, is not a factor in
the sense of the previous factors and is not included in the experimental
design. It is necessary for future
theoretical development.
Fixed and Random Factors
Each factor in the design must be classified as either a
fixed or random factor. This is
necessary in order to find the correct error term for each effect. The MANOVA program in SPSS does not require
that the user designate the type for each factor. If the user is willing to accept the program defaults, which are
correct in most cases, no problem is encountered. There are situations, however, where the program defaults are
incorrect and additional coding is necessary to do the correct hypothesis
tests.
Fixed Factors
A factor is fixed if (Lee, 1975)
1. The results of the factor generalize only
to the levels that were included in the experimental design. The experimenter may wish to generalize to
other levels not included in the factor, but it is done at his or her own
peril.
2. Any procedure is allowable to select
the levels of the factor.
3. If the experiment were replicated, the
same levels of that factor would be included in the new experiment.
Random Factors
A factor is random if
1. The results of the factor generalize to
both levels that were included in the factor and levels which were not. The experimenter wishes to generalize to a
larger population of possible factor levels.
2. The levels of the factor used in the
experiment were selected by a random procedure.
3. If the experiment were replicated,
different levels of that factor would be included in the new experiment.
In many cases an exact determination of whether a factor is
fixed or random is not possible. In
general, the subjects (S) and groups (G) factors will always be random factors and all other factors will be considered fixed. The default designation of MANOVA will set
the subjects factor as random and all other factors as fixed.
Some reflection on the assumption of a random selection of
subjects may cause the experimenter to question whether it is in fact a random
factor. Suppose, as often happens,
subjects volunteered to participate in the experiment. In this case the assumptions underlying the
ANOVA are violated, but the procedure is used anyway. Seldom, if ever, will all the assumptions necessary to do an
ANOVA be completely satisfied. The
experimenter must examine how badly the assumptions were violated and then make
a decision as to whether or not the ANOVA is useful.
In general, when in doubt as to whether a factor is fixed or
random, consider it fixed. One should
never have so much doubt, however, as to consider the subjects factor as a
fixed factor.
Relationships Between Factors
The following two relationships between factors describe a
large number of useful designs. Not all
possible experimental designs fit neatly into categories described by the
following two relationships, but most do.
Crossed
When two factors are crossed, each level of each factor
appears with each level of the other factor.
A crossing relationship is indicated by an "X".
For example, consider two factors, A and B,
where A is gender (a_{1} = Females, a_{2} = Males) and B
is practice (b_{1} = none, b_{2} = one hour, b_{3} =
five hours, and b_{4} = twenty hours).
If gender was crossed with practice, A X B, then both
males and females would participate in all four levels of practice. There would be eight groups of subjects
including: ab_{11}, females who
had no practice, ab_{12}, females who had one hour of practice, and so
forth to ab_{24}, males who practiced twenty hours. An additional factor may be added to the
design, say handedness (C), where c_{1} = right handed and c_{2}
= left handed. If the design of the
experiment was A X B X C, then there would be sixteen groups, including abc_{232}, left-handed males who practiced five hours.
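Because crossing pairs every level of each factor with every level of the others, the cells of a fully crossed design are simply the Cartesian product of the factor levels. A minimal Python sketch using the gender, practice, and handedness factors above (the numeric level codes follow the text):

```python
from itertools import product

A = (1, 2)          # gender: a1 = female, a2 = male
B = (1, 2, 3, 4)    # practice: none, one, five, twenty hours
C = (1, 2)          # handedness: c1 = right, c2 = left

# Each cell of the A X B X C design is one triple from the product.
cells = list(product(A, B, C))
print(len(cells))          # 2 x 4 x 2 = 16 groups
print((2, 3, 1) in cells)  # the cell for males (a2), five hours (b3), right handed (c1)
```

This enumeration is only a bookkeeping device, but it makes the "sixteen groups" count mechanical rather than something to tally by hand.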
If subjects (S) are crossed with treatments (A),
S X A, each subject sees each level of the treatment conditions. In a very simple experiment such as the
effects of caffeine on alertness (A), each subject would be exposed to
both a caffeine condition (a_{1}) and a no caffeine condition (a_{2}). For example, using the members of a
statistics class as subjects, the experiment might be conducted as
follows. On the first day of the
experiment the class is divided in half with one half of the class getting
coffee with caffeine and the other half getting coffee without caffeine. A measure of alertness is taken for each
individual, such as the number of yawns during the class period. On the second day the conditions are
reversed, that is, the individuals who received coffee with caffeine are now
given coffee without and vice versa.
The distinguishing feature of crossing subjects with
treatments is that each subject will have more than one score. This feature is sometimes used in referring
to this class of designs as repeated measures designs. The effect also occurs within each subject,
thus these designs are sometimes referred to as within subjects
designs.
Crossing subjects with treatments has two advantages. One, they generally require fewer subjects,
because each subject is used a number of times in the experiment. Two, they are more likely to result in a
significant effect, given the effects are real. This is because the effects of individual differences between subjects are partitioned out of the error term.
Crossing subjects with treatments also has
disadvantages. One, the experimenter
must be concerned about carryover effects.
For example, individuals not used to caffeine may still feel the effects
of caffeine on the second day, when they did not receive the drug. Two, the first measurements taken may
influence the second. For example, if
the measurement of interest was score on a statistics test, taking the test
once may influence performance the second time the test is taken. Three, the assumptions necessary when more than two treatment levels are employed in a design crossing subjects with treatments may be restrictive.
When a factor is a blocking factor, it is not possible to
cross that factor with subjects. It is
difficult to find subjects for an S X A design where A is
gender. I generally will take points
off if a student attempts such a design.
Nested
Factor B is said to be nested within factor A if each
meaningful level of factor B occurs in conjunction with only one level
of A. This relationship is
symbolized as B(A), and is read as "B nested within
A". Note that B(A)
is considerably different from A(B). In the latter, each meaningful level of A would occur in
one and only one level of B.
These types of designs are also
designated as hierarchical designs in some textbooks.
A B(A) design occurs, for example, when the first three levels of factor B (b_{1}, b_{2}, and b_{3}) appear only under level a_{1} of factor A and the next three levels of B (b_{4}, b_{5}, and b_{6}) appear only under level a_{2} of factor A. Depending upon the labelling scheme, b_{4}, b_{5}, and b_{6} may also be called b_{1}, b_{2}, and b_{3}, respectively. It is understood by the design designation that the b_{1} occurring under a_{1} is different from the b_{1} occurring under a_{2}.
Nested or hierarchical designs can appear because many aspects of society are organized hierarchically. For example, within the university, classes (sections) are nested within courses, courses are nested within departments, departments within colleges, and colleges within the university.
In experimental research it is also possible to nest
treatment conditions within other treatment conditions. For example, suppose a researcher was
interested in the effect of diet on health in hamsters. One factor (A) might be a high
cholesterol (a_{1}) or low cholesterol (a_{2}) diet. A second factor (B) might be type of
food, peanut butter (b_{1}), cheese (b_{2}), red meat (b_{3}),
chicken (b_{4}), fish (b_{5}), or vegetables (b_{6}). Because type of food may be categorized as
being either high or low in cholesterol, a B(A) experimental
design would result. Chicken, fish, and
vegetables would be relabelled as b_{1}, b_{2}, and b_{3},
respectively, but it would be clear from the experimental design specification
that peanut butter and chicken, cheese and fish, and red meat and vegetables
were qualitatively different, even though they share the same labels.
While any factor may possibly be nested within any other
factor, the critical nesting relationship is with respect to subjects. If S is nested within some
combination of other factors, then each subject appears under one, and only
one, combination of the factors within which it is nested. These effects are often called the Between
Subjects effects. If S is
crossed with some combination of other factors, then each subject sees all
combinations of the factors with which it is crossed. These effects are referred to as Within Subjects effects.
As mentioned earlier subjects are necessarily nested within
blocking factors. Subjects are
necessarily nested within the effects of gender and current religious
preference, for example.
Treatment factors, however, may be nested within or crossed with
subjects. The effect of caffeine on
alertness could be studied by dividing the subjects into two groups, with one
group receiving a beverage with caffeine and one group not. This design would nest subjects within caffeine and be specified as
S(A), or simply A, as the S is often dropped when
the design is completely between subjects.
If subjects appeared under both caffeine conditions,
receiving caffeine on one day and no caffeine on the other, then subjects would
be crossed with caffeine. The design
would be specified as S X A.
In this case the S would remain in the design.
An Example
Design
A psychologist (McGuire, 1993) was interested in studying
adults' memory for medical information presented by a videotape. She included one hundred and four
participants, of whom sixty-seven
ranged in age from 18 to 44 years and thirty-seven ranged in age from 60
to 82 years. Participants were randomly
assigned to one of two conditions, either an organized presentation condition
or an unorganized presentation
condition. Following observation of the
videotape, each participant completed an initial recall sequence consisting of
free-recall and probed recall retrieval tasks. A probed recall is like a multiple-choice test and a free-recall
is like an essay test. Following a
one-week interval, participants completed the recall sequence again.
This experimental design provides four factors in addition
to subjects (S). The age factor
(A) has two levels, a_{1}=young and a_{2}=old, and would
necessarily be a blocking factor. The
type of videotape factor (B) would be a treatment factor and would
consist of two levels, b_{1}=organized and b_{2}=unorganized. The recall method factor (C) would be
a form of trials factor and would have two levels, c_{1}=free-recall and
c_{2}=probed recall. The fourth
factor (D) would be another trials factor, where d_{1}=immediate
and d_{2}=one week delay.
Each level of B appears with each level of A,
thus A is crossed with B.
Since each subject appears in one and only one combination of A
and B, subjects are nested within A X B. That is, each subject is either young or old
and sees either an organized or unorganized videotape. The design notation thus far would be S
( A X B ).
Each type of recall (C) was done by each subject at
both immediate and delayed intervals (D). Thus subjects would be crossed with recall method and
interval. The complete design
specification would be S ( A X B ) X C X D. In words this design would be subjects
nested within A and B and crossed with C and D.
In preparation for entering the data into a data file, the
design could be viewed in a different perspective. Listing each subject as a row and each measure as a column, the
design would appear as follows:
                                 Immediate        Week Later
Age    Videotape     Subject     Free  Probed     Free  Probed
Young  Organized     S_{1}
                     S_{2}
                     ...
       Unorganized   S_{1}
                     ...
Old    Organized     S_{1}
                     S_{2}
                     ...
       Unorganized   S_{1}
                     ...
In this design,
two classification variables would be needed: one to classify each subject as either young or
old, and one to document which type of videotape the subject saw. In addition to the classification variables,
each subject would require four variables to record the two types of measures
taken at the two different times.
A score taken
from the design presented above could be represented as X_{abscd}. For example, the immediate probed test score
taken from the third subject in the old group who viewed an organized videotape
would be X_{21321}.
A
Second Example Design
The Lombard
effect is a phenomenon in which a speaker or singer involuntarily raises his or
her vocal intensity in the presence of high levels of sound. In a study of the Lombard effect in choral
singing (modified from Tonkinson, 1990), twenty-seven subjects, some
experienced choral singers and some not,
were asked to sing the national anthem along with a choir heard through
headphones. The performances were
recorded and vocal intensity readings from three selected places in the song
were obtained from a graphic level recorder chart. Each subject sang the song four times: with no accompaniment, or with a soft, medium, or loud choir accompaniment. After some brief instructions to resist
increasing vocal intensity as the choir increased, each subject again sang the
national anthem four times with the four different accompaniments. The order of accompaniments was
counterbalanced over subjects.
In this design,
there would be four factors in addition to subjects. Subjects would be nested within experience level (A), with
a_{1}=inexperienced and a_{2}=experienced choral singers. This factor would be a blocking
factor. Subjects would be crossed with
instructions (B), where b_{1}=no instructions and b_{2}=resist
Lombard effect. In addition, subjects
would be crossed with accompaniment (C) and place in song (D).
The accompaniment factor would include four levels, c_{1}=soft, c_{2}=medium,
c_{3}=loud, and c_{4}=none.
This factor would be considered a treatment factor. The place in song factor could be considered
a trials factor and would have three levels.
The
experimental design could be written as S ( A ) X B X C
X D. In words, subjects were
nested within experience level and crossed with instructions, accompaniment,
and place in song. In this design, one
variable would be needed for the classification of each subject and twenty-four
variables would be needed for each subject, one for each combination of
instructions, accompaniment, and place in song. The design could be written:
              No Instructions                       Resist Lombard Effect
          Soft    Medium  Loud    None          Soft    Medium  Loud    None
Exp  S    1 2 3   1 2 3   1 2 3   1 2 3         1 2 3   1 2 3   1 2 3   1 2 3
1    1
1    2
...
2    1
2    2
...
A
Third Example Design
From the
Springfield News-Leader, March 1, 1993:
Images of
beauty, such as those shown by Sports Illustrated's annual swimsuit issue, are
harmful to the self-esteem of all women and contribute to the number of eating
disorder cases in the U. S., says a St. Louis professor who researches women's
health issues.
In a recent
study at Washington University, two groups of women, one with bulimia and one
without, watched videotapes of SI models in swimsuits.
Afterwards,
both groups reported a more negative self-image than they did before watching
the tape, describing themselves as "feeling fat and flabby" and
"feeling a great need to diet."
The experiment
described above has a number of inadequacies, the lack of control conditions
being the most obvious. The original
authors, unnamed in the article, may have designed a much better experiment
than is described in the popular press.
In any case, this experiment will now be expanded to illustrate a
complex experimental design.
The dependent
measure, apparently a rating of "feeling fat and flabby" and
"feeling a great need to diet", will be retained. In addition, two neutral questions will be
added, say "feeling anxious" and "feeling good about the
environment." These four
statements will be rated by all subjects, thus subjects will be crossed with
ratings. The first two statements deal
with body image and diet and the last two do not, thus they will form a factor
in the design (called D). Since
the statements within each level of D share no similarity across
levels of D, these statements (A) are nested within D. For
example, the rating of "feeling a great need to diet" and
"feeling good about the environment" share no qualitative
relationship. At this point the design
may be specified as S X A(D).
Suppose the
researcher runs the subjects in groups of six to conserve time and effort, thus
creating a groups (G) factor.
In addition to the two groups, with bulimia and without (B),
suppose the subjects viewed one of the following videotapes (V): SI models, Rosanne Barr, or a show about the
seals of the great northwest. Assuming
that all the subjects in each level of group either had bulimia or did not,
then the design could be specified as S(G(B X V)).
The factor B
is crossed with V because each level of B appears with each level
of V. That is, subjects with and
without bulimia viewed all three videotapes. Because each group viewed only a single videotape and was composed
of subjects either with bulimia or without, the groups factor is nested within
the cross of B and V.
Because subjects appeared in only one group, subjects are nested within
groups.
Combining the
between subjects effects, S(G(B X V)), and the
within subjects effects, A(D), yields the complete design
specification S(G(B X V)) X A(D).
Determining
the Number of Subjects and Measures per Subject
It is important
to be able to determine the number of subjects and the number of measures per
subject for practical reasons, namely, is the experiment feasible? After listening to a student propose an
experiment and a little figuring, I remarked "according to my
calculations, you should be able to complete the experiment sometime near the
middle of the next century." If an
experimenter is limited in the time a subject is available, then the number of
measures per subject is another important consideration.
To determine
the number of subjects, multiply the number of levels of the between subjects
factors together. In the previous
example, S = 6 because the subjects were run in groups of six. Let G=4, so that there are four groups of six subjects
for each combination of bulimia and videotape.
Since there were two levels of bulimia, B=2, and three levels of
videotape, V=3. Since the design is S(G(B
X V)), the total number of subjects needed would be S * G * B * V
or 6*4*2*3 or 144. Since half of the
subjects must have bulimia, the question of whether or not 72 subjects with
bulimia are available must be asked before the experiment proceeds.
To find the
number of measures per subject, multiply the number of levels of the within
subjects factors together. In the
previous example A(D), where A=2 and D=2, there would be A * D or
2 * 2 or 4 measures per subject.
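The two counting rules above can be sketched in a few lines of code, using the level counts from the bulimia example:

```python
# Between-subjects factors for S(G(B X V)): S subjects per group,
# G groups per bulimia-by-videotape combination, B bulimia levels, V videotapes
S, G, B, V = 6, 4, 2, 3
total_subjects = S * G * B * V

# Within-subjects factors for A(D): A statements per type, D statement types
A, D = 2, 2
measures_per_subject = A * D

print(total_subjects, measures_per_subject)  # 144 4
```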
Setting
up the Data Matrix
        Columns
        1   2   ...   C
Rows 1
     2
     .
     R
A few rules
simplify setting up the data matrix.
First, each subject appears on a single row of the data matrix. Second, each measure or combination of
within subjects factors appears in a column of data. Third, each subject must be identified as to the combination of
between subjects factors in which he or she appears.
In the previous
example, since there would be 144 subjects in the experiment, there would be
144 rows of data. Each subject would be
identified as to the level of G, B, and V to which she belonged.
For example, a subject who appeared under g_{3} of b_{1}
and v_{2} would be labelled as 3 1 2.
Since there are four measures per subject, these would appear as columns
in addition to the identifiers. An
example data matrix might appear as follows:

1 1 1 3 5 4 3
1 1 1 2 5 5 3
1 1 1 5 5 5 4
1 1 1 3 2 1 3
1 1 1 2 5 3 1
1 1 1 3 5 4 3
2 1 1 5 4 5 5
...
4 2 3 3 5 5 4

In this example, the level of G is in the first column, B in the second,
and V in the third. The four
combinations of within subjects factors appear next as ad_{11}, ad_{21},
ad_{12}, and ad_{22}.
A Note
of Caution
It is fairly
easy to design complex experiments.
Running the experiments and interpreting the results are a different
matter. Many complex experiments are
never completed because of such difficulties.
This is from personal experience.
Chapter
9
One Between
Group ANOVA
Why Multiple Comparisons Using
t-tests is NOT the Analysis of Choice
Suppose a researcher has performed a study of
the effectiveness of various methods of individual therapy. The methods used were: Reality Therapy, Behavior Therapy,
Psychoanalysis, Gestalt Therapy, and, of course, a control group. Twenty patients were randomly assigned to
each group. At the conclusion of the
study, changes in self-concept were found for each patient. The purpose of the study was to determine if
one method was more or less effective than the other methods.
At the conclusion of the experiment the
researcher organizes the collected data in the following manner:

Group   Therapy Method     X̄       S_X     S_X²
1       Reality            20.53   3.45    11.9025
2       Behavior           16.32   2.98     8.8804
3       Psychoanalysis     10.39   5.89    35.7604
4       Gestalt            24.65   7.56    57.1536
5       Control            10.56   5.75    33.0625
The researcher wishes to compare the means of
the groups with each other to decide about the effectiveness of the
therapy.
One method of performing this analysis is by
doing all possible t-tests, called multiple t-tests. That is, Reality Therapy is first compared with Behavior Therapy,
then Psychoanalysis, then Gestalt Therapy, then the Control Group. Behavior Therapy is then individually
compared with the last three groups, and so on. Using this procedure there would be ten different t-tests
performed. Therein lies the difficulty
with multiple t-tests.
First, because the number of t-tests
increases geometrically as a function of the number of groups, analysis becomes
cognitively difficult somewhere in the neighborhood of seven different
tests. An analysis of variance
organizes and directs the analysis, allowing easier interpretation of results.
Secondly, by doing a greater number of
analyses, the probability of committing at least one Type I error somewhere in
the analysis greatly increases. The
probability of committing at least one Type I error in an analysis is called
the experimentwise error rate. The researcher may desire to perform fewer
hypothesis tests in order to reduce the experimentwise error
rate. The ANOVA procedure performs this
function.
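The growth of the experimentwise error rate can be illustrated with a small calculation. Treating the ten t-tests as independent (an approximation; the tests share data and are not truly independent):

```python
alpha = 0.05  # per-test Type I error rate
k = 10        # all pairwise t-tests among five groups: 5 * 4 / 2

# probability of at least one Type I error somewhere in the analysis
experimentwise = 1 - (1 - alpha) ** k
print(round(experimentwise, 4))  # 0.4013
```

With ten tests at α=.05, the chance of at least one false rejection is roughly 40 percent.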
The Bottom Line: Results and
Interpretation of ANOVA
Results of an ANOVA are usually presented in
an ANOVA table. This table
contains columns labelled
"Source", "SS" (Sum of Squares), "df" (degrees of
freedom), "MS" (mean square), "F" (the F-ratio), and
"p" (also labelled prob, probability, sig., or
sig. of F). The only columns that
are critical for interpretation are the first and the last; the others are used
mainly for intermediate computational purposes.
An example of an ANOVA table appears below:

Source     SS          df    MS         F       sig of F
BETWEEN    5212.960     4    1303.240   4.354   .0108
WITHIN     5986.400    20     299.320
TOTAL     11199.360    24
The row labelled "BETWEEN" under
"Source", having a probability value associated with it, is the only
one of any great importance at this time.
The other rows are used mainly for computational purposes. The researcher then would most probably
first look at the value ".0108" located under the "sig of
F" column.
Of all the information presented in the ANOVA
table, the major interest of the researcher will most likely be focused on the
value located in the "sig of F." column. If the number (or numbers) found in this column is (are) less
than the critical value (α) set by the experimenter, then the effect is said to be
significant. Since this value is
usually set at .05, any value less than this will result in significant
effects, while any value greater than this value will result in nonsignificant
effects.
If the effects are found to be significant
using the above procedure, it implies that the means differ more than would be
expected by chance alone. In terms of
the above experiment, it would mean that the treatments were not equally
effective. This table does not tell the
researcher anything about what the effects were, just that there most likely
were real effects.
If the effects are found to be
nonsignificant, then the differences between the means are not great enough to
allow the researcher to say that they are different. In that case no further interpretation is attempted.
When the effects are significant, the means
must then be examined in order to determine the nature of the effects. There are procedures called "post-hoc
tests" to assist the researcher in this task, but often the analysis is
fairly evident simply by looking at the size of the various means. For example, in the preceding analysis
Gestalt and Reality Therapy were the most effective in terms of mean
improvement.
In the case of significant effects, a
graphical presentation of the means can sometimes assist in analysis. For example, in the preceding analysis, the
graph of mean values would appear as follows:
HYPOTHESIS TESTING THEORY UNDERLYING
ANOVA
The Sampling
Distribution Reviewed
In order to explain why the above procedure
may be used to simultaneously analyze a number of means, the following presents
the theory on ANOVA in relation to the hypothesis testing approach discussed in
earlier chapters.
First, a review of the sampling distribution
is necessary. If you have difficulty
with this summary, please go back and read the more detailed chapter on the sampling
distribution.
A sample is a finite number (N) of
scores. Sample statistics are numbers
which describe the sample. Example
statistics are the mean (X̄), mode (M_{o}), median (M_{d}), and
standard deviation (s_{X}).
Probability models exist in a theoretical
world where complete information is unavailable. As such, they can never be known except in the mind of the
mathematical statistician. If an
infinite number of infinitely precise scores were taken, the resulting distribution
would be a probability model of the population. Population models are characterized by parameters. Two common parameters are µ_{X} and σ_{X}.
Sample statistics are used as estimators of
the corresponding parameters in the population model. For example, the mean and standard deviation of the sample are
used as estimates of the corresponding population parameters µ_{X} and σ_{X}.
The sampling distribution is a distribution
of a sample statistic. It is a model of
a distribution of scores, like the population distribution, except that the
scores are not raw scores, but statistics.
It is a thought experiment; "what would the world be like if a
person repeatedly took samples of size N from the population distribution and
computed a particular statistic each time?" The resulting distribution of statistics is called the sampling
distribution of that statistic.
The sampling distribution of the mean is a special case of a sampling
distribution. It is a distribution of
sample means, described with the parameters µ_{X̄} and σ_{X̄}. These parameters are closely related to the
parameters of the population distribution, the relationship being described by
the CENTRAL LIMIT THEOREM. The
CENTRAL LIMIT THEOREM essentially states that the mean of the sampling
distribution of the mean (µ_{X̄}) equals the mean of the population (µ_{X})
and that the standard error of the mean (σ_{X̄}) equals the
standard deviation of the population (σ_{X}) divided by the
square root of N. These relationships
may be summarized as follows:
µ_{X̄} = µ_{X}          σ_{X̄} = σ_{X} / √N
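The Central Limit Theorem relationship can be checked with a small simulation: draw repeated samples of size N from a population and compare the standard deviation of the resulting sample means with σ_X / √N. This is an illustrative sketch; the population values are arbitrary:

```python
import random
from statistics import mean, pstdev

random.seed(42)
# an arbitrary simulated population of scores
population = [random.gauss(100, 15) for _ in range(50000)]

N = 25
# repeatedly take a sample of size N and record its mean
sample_means = [mean(random.sample(population, N)) for _ in range(2000)]

observed_se = pstdev(sample_means)            # spread of the sample means
predicted_se = pstdev(population) / N ** 0.5  # sigma_X / sqrt(N)
print(round(observed_se, 2), round(predicted_se, 2))
```

The two printed values agree closely, as the theorem predicts.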
Two Ways of Estimating the
Population Parameter σ_{X}²
When the data have been collected from more
than one sample, there exist two independent methods of estimating the
population parameter σ_{X}², called respectively the between and the
within method. The collected data are
usually first described with sample statistics as demonstrated in the following
example:
Group   Therapy Method     X̄       S_X     S_X²
1       Reality            20.53   3.45    11.9025
2       Behavior           16.32   2.98     8.8804
3       Psychoanalysis     10.39   5.89    35.7604
4       Gestalt            24.65   7.56    57.1536
5       Control            10.56   5.75    33.0625
        Mean               16.49           29.3519
        Variance           38.83          387.8340
THE WITHIN METHOD
Since each of the sample variances may be
considered an independent estimate of the parameter σ_{X}², finding the mean of the variances provides a method of combining the
separate estimates of σ_{X}² into a single value. The resulting statistic is called the MEAN
SQUARES WITHIN, often represented by MS_{W}. It is called the within method because it computes the estimate
by combining the variances within each sample. In the above example, the Mean Squares Within would be equal to
29.3519.
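The within method is just the mean of the sample variances. A minimal sketch using the five variances from the therapy table:

```python
# sample variances for Reality, Behavior, Psychoanalysis, Gestalt, Control
sample_variances = [11.9025, 8.8804, 35.7604, 57.1536, 33.0625]

# Mean Squares Within: the mean of the within-group variances
ms_within = sum(sample_variances) / len(sample_variances)
print(round(ms_within, 4))  # 29.3519
```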
THE BETWEEN METHOD
The parameter σ_{X}² may also be
estimated by comparing the means of the different samples, but the logic is
slightly less straightforward and employs both the concept of the sampling
distribution and the Central Limit Theorem.
First, the standard error of the mean squared
(σ_{X̄}²) is the variance of the population distribution of sample means. In real life, in the situation where there is
more than one sample, the variance of the sample means may be used as an
estimate of the standard error of the mean squared (σ_{X̄}²). This is analogous to the
situation where the variance of the sample (s_{X}²) is used as an
estimate of σ_{X}². The relationship is
demonstrated below:

            Sampling Distribution     Actual Data
            X̄  X̄  X̄  X̄  ...           X̄_1  X̄_2  ...  X̄_5
Mean        µ_{X̄}
Variance    σ_{X̄}²                    s_{X̄}²

In this case the Sampling Distribution
consists of an infinite number of means and the real life data consists of A
(in this case 5) means. The computed
statistic s_{X̄}² is thus an estimate of the theoretical parameter σ_{X̄}².
The relationship expressed in the Central Limit Theorem may now be used
to obtain an estimate of σ_{X}²:

σ_{X̄} = σ_{X} / √N, therefore σ_{X̄}² = σ_{X}² / N and σ_{X}² = N * σ_{X̄}²

Thus the variance of the population may be found by multiplying the standard error
of the mean squared (σ_{X̄}²) by N, the size of each sample.
Since the variance of the means, s_{X̄}²,
is an estimate of the standard error of the mean squared, σ_{X̄}², the variance of the population, σ_{X}², may be
estimated by multiplying the size of each sample, N, by the variance of the
means. This value is called the Mean
Squares Between and is often symbolized by MS_{B}. It is called the Mean
Squares Between because it uses the variance between the samples, that
is, the variance of the sample means, to compute the estimate. The computational procedure for MS_{B}
is presented below:

MS_{B} = N * s_{X̄}²

Using the above procedure on the example data yields:

MS_{B} = 20 * 38.83
MS_{B} = 776.60
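The between method can be sketched the same way: take the variance of the five group means and multiply by the group size N.

```python
from statistics import variance  # sample variance, n - 1 denominator

group_means = [20.53, 16.32, 10.39, 24.65, 10.56]
N = 20  # patients per group

# Mean Squares Between: N times the variance of the group means
ms_between = N * variance(group_means)
print(round(ms_between, 2))
```

This yields about 776.6; the tiny discrepancy from the text's 776.60 comes from the text rounding s_{X̄}² to 38.83.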
At this point it has been established that
there are two methods of estimating σ_{X}², Mean Squares
Within and Mean Squares Between. It could
also be demonstrated that these estimates are independent. Because of this independence, when both are
computed using the same data, in almost all cases different values will result. For example, in the presented data MS_{W}=29.3519
while MS_{B}=776.60. This
difference provides the theoretical background for the F-ratio and ANOVA.
The F-ratio and F-distribution
A new statistic, called the F-ratio, is computed by dividing the MS_{B}
by MS_{W}. This is illustrated
below:

F_{obs} = MS_{B} / MS_{W}
Using the example data described earlier, the
computed F-ratio becomes:

F_{obs} = MS_{B} / MS_{W}
F_{obs} = 776.60 / 29.3519
F_{obs} = 26.4582
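The F-ratio then follows directly from the two estimates, here using the values just computed:

```python
ms_between = 776.60
ms_within = 29.3519

# the F-ratio compares the two independent estimates of sigma_X squared
f_obs = ms_between / ms_within
print(round(f_obs, 2))  # 26.46
```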
The F-ratio can be thought of as a measure of
how different the means are relative to the variability within each
sample. The larger this value, the
greater the likelihood that the differences between the means are due to
something other than chance alone, namely real effects. The size of the F-ratio necessary to make a
decision about the reality of effects is the next topic of discussion.
If the difference between the means is
due only to chance, that is, there are no real effects, then the expected value
of the F-ratio would be one (1.00).
This is true because both the numerator and the denominator of the
F-ratio are estimates of the same parameter, σ_{X}². Seldom will the F-ratio be exactly equal to
1.00, however, because the numerator and the denominator are estimates rather
than exact, known values. Therefore,
when there are no effects the F-ratio will sometimes be greater than one, and
other times less than one.
To review, the basic procedure used in
hypothesis testing is that a model is created in which the experiment is
repeated an infinite number of times when there are no effects. A sampling distribution of a statistic is
used as the model of what the world would look like if there were no
effects. The result of the experiment,
a statistic, is compared with what would be expected given the model of no
effects was true. If the computed
statistic is unlikely given the model, then the model is rejected, along with
the hypothesis that there were no effects.
In an ANOVA the F-ratio is the statistic used
to test the hypothesis that the effects are real; in other words, that the
means are significantly different from one another. Before the details of the hypothesis test may be presented, the
sampling distribution of the F-ratio must be discussed.
If
the experiment were repeated an infinite number of times, each time
computing the F-ratio, and there were no effects, the resulting distribution
could be described by the F-distribution.
The F-distribution is a theoretical probability distribution
characterized by two parameters, df_{1} and df_{2}, both of
which affect the shape of the distribution.
Since the F-ratio must always be positive, the F-distribution is
nonsymmetrical, skewed in the positive direction.
Two examples of an F-distribution are
presented below; the first with df_{1}=1 and df_{2}=5, and the
second with df_{1}=10 and df_{2}=25.
The
F-distribution has a special relationship to the t-distribution described
earlier. When df_{1}=1, the
F-distribution is equal to the t-distribution squared (F=t²). Thus the t-test and the ANOVA will always
return the same decision when there are two groups. That is, the t-test is a special case of ANOVA.
Nonsignificant and Significant
F-ratios
Theoretically, when there are no real effects, the F-distribution is an
accurate model of the distribution of F-ratios. The F-distribution will have the parameters df_{1}=a-1
(where a is the number of different groups) and df_{2}=a(N-1),
where N is the number in each group. In this case
an assumption is made that sample size is equal for each group. For example, if
five groups of five subjects each were run in an experiment and there were no
effects, then the F-ratios would be distributed with df_{1}=a-1=5-1=4
and df_{2}=a(N-1)=5(5-1)=5*4=20.
A visual representation of the preceding appears as follows:
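The degrees of freedom for the five-groups-of-five example can be computed directly:

```python
a = 5  # number of groups
N = 5  # subjects per group

df1 = a - 1        # numerator (between) degrees of freedom
df2 = a * (N - 1)  # denominator (within) degrees of freedom
print(df1, df2)  # 4 20
```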
The F-ratio which cuts off
various proportions of the distribution may be computed for different values
of α. These F-ratios are called F_{crit}
values. In the above example the F_{crit}
value for α=.25 is 1.46, for α=.10 the value is 2.25, for α=.05 the value is 2.87, and for α=.01 the value is 4.43. These values are illustrated in the figure
below:
When there are real effects, that is, the
means of the groups are different due to something other than chance, then the
F-distribution no longer describes the distribution of F-ratios. In almost all cases the observed F-ratio
will be larger than would be expected when there were no effects. The rationale for this situation is
presented below.
First, an assumption is made that any effects
are an additive transformation of the score.
That is, the scores for each group can be modelled as a constant (a_{a},
the effect) plus error (e_{ae}).
The scores appear as follows:

X_{ae} = a_{a} + e_{ae}

where X is the score, a_{a} is the
treatment effect, and e_{ae} is the error. The e_{ae}, or error, is different for each subject,
while a_{a} is constant within a given group.
As described in the chapter on
transformations, an additive transformation changes the mean, but not the
standard deviation or the variance.
Because the variance of each group is not changed by the nature of the
effects, the Mean Square Within, as the mean of the variances, is not
affected. The Mean Square Between, as N
times the variance of the means, will in most cases become larger because the
variance of the means will most likely become larger.
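The claim that an additive effect shifts the mean but leaves the variance alone is easy to verify with a few illustrative scores:

```python
from statistics import mean, variance

scores = [3, 5, 4, 6, 2]            # one group's scores
treated = [x + 10 for x in scores]  # add a constant treatment effect

print(mean(treated) - mean(scores))           # 10.0, the mean shifts by the constant
print(variance(treated) == variance(scores))  # True, the variance is unchanged
```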
Imagine three individuals taking a test. An instructor first finds the variance of
the three scores. He or she then adds
five points to one random individual and subtracts five from another random
individual. In most cases the variance
of the three test scores will increase, although it is possible that the
variance could decrease if the points were added to the individual with the
lowest score and subtracted from the
individual with the highest score. If
the constant added and subtracted was 30 rather than 5, then the variance would
almost certainly be increased. Thus,
the greater the size of the constant, the greater the likelihood of a larger
increase in the variance.
With respect to the sampling distribution, the model differs depending
upon whether or not there are effects.
The difference is presented below:

            No Effects              Real Effects
Group       Mean      Variance      Mean         Variance
1           µ         σ²            µ + a_{1}    σ²
2           µ         σ²            µ + a_{2}    σ²
3           µ         σ²            µ + a_{3}    σ²
4           µ         σ²            µ + a_{4}    σ²
5           µ         σ²            µ + a_{5}    σ²
Mean        µ         σ²            µ            σ²
Variance    σ²/N                    > σ²/N
Since
the MS_{B} usually increases and MS_{W} remains the same, the
F-ratio (F=MS_{B}/MS_{W}) will most likely increase. Thus, if there are real effects, then the
F-ratio obtained from the experiment will most likely be larger than the
critical level from the F-distribution.
The greater the size of the effects, the larger the obtained F-ratio is
likely to become.
Thus, when there are no effects, the obtained
F-ratio will be distributed as an F-distribution which may be specified. If effects exist, then the obtained F-ratio
will most likely become larger. By
comparing the obtained F-ratio with that predicted by the model of no effects,
a hypothesis test may be performed to decide on the reality of effects. If the obtained F-ratio is greater than the
critical F-ratio, then the decision will be that the effects are real. If not, then no decision about the reality
of effects can be made.
Similarity of ANOVA and t-test
When the number of groups (A) equals two (2),
an ANOVA and t-test will give similar results, with t_{CRIT}²=F_{CRIT}
and t_{OBS}²=F_{OBS}.
This equality is demonstrated in the example below:
Given the following numbers for two groups:

                                          Mean    Variance
Group 1 - 12 23 14 21 19 23 26 11 16     18.33    28.50
Group 2 - 10 17 20 14 23 11 14 15 19     15.89    18.11
Computing the t-test:

s_{X̄1-X̄2} = √[(s_{1}² + s_{2}²) / 9] = √[(28.50 + 18.11) / 9] = √5.18 = 2.28

t_{OBS} = (X̄_{1} - X̄_{2}) / s_{X̄1-X̄2} = (18.33 - 15.89) / 2.28 = 1.07

t(df=16) = 2.12 for α=.05 and a two-tailed test
Computing the ANOVA:

MS_{BETWEEN} = N * s_{X̄}² = 9 * 2.9768 = 26.7912

MS_{WITHIN} = Mean of the Variances = (28.50 + 18.11) / 2 = 23.305

F_{OBS} = MS_{BETWEEN} / MS_{WITHIN} = 1.1495

F(1,16) = 4.41 for α=.05; a two-tailed test is assumed
Comparing the results:

t_{OBS}² = 1.1449          F_{OBS} = 1.1495
t(16)² = 4.49              F(1,16) = 4.41
The differences between the predicted and
observed results can be attributed to rounding error (close enough for
government work).
Because the t-test is a special case of the ANOVA and will always yield equivalent results, most researchers perform the ANOVA because the technique is far more flexible in complex experimental designs.
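The equality t_{OBS}² = F_{OBS} can be verified directly from the two-group data above. The following sketch is not part of the original text; it is a Python check using the same definitional formulas.

```python
# Verify that t_OBS squared equals F_OBS for two equal-n groups.
import math

group1 = [12, 23, 14, 21, 19, 23, 26, 11, 16]
group2 = [10, 17, 20, 14, 23, 11, 14, 15, 19]
n = len(group1)

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):  # unbiased sample variance
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# t-test for independent groups with equal n
se = math.sqrt((var(group1) + var(group2)) / n)
t_obs = (mean(group1) - mean(group2)) / se

# ANOVA: MS_between = N * variance of the group means,
#        MS_within  = mean of the group variances
ms_between = n * var([mean(group1), mean(group2)])
ms_within = (var(group1) + var(group2)) / 2
f_obs = ms_between / ms_within

print(t_obs ** 2, f_obs)  # the two values agree
```

Note that carrying full precision gives t² = F ≈ 1.154; the slight discrepancy from the hand computation above comes from rounding t to 1.07 before squaring.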
EXAMPLE OF A NONSIGNIFICANT ONE-WAY ANOVA

Given the following data for five groups, perform an ANOVA:

                                                          Mean   Variance
Group 1    7 7 5 4 2 7 5 4 1 7 5 6 7 6 3 5 2 5 1 4       4.65       4.03
Group 2    6 9 3 6 9 4 9 8 9 3 4 4 7 2 2 7 7 7 9 3       5.90       6.52
Group 3    5 5 2 5 6 2 3 3 6 8 2 1 1 2 5 7 9 6 5 7       4.50       5.63
Group 4    4 1 4 8 9 5 2 8 6 8 2 9 6 6 7 8 4 3 1 4       5.25       6.93
Group 5    3 6 1 2 3 5 8 4 1 5 4 5 6 9 4 2 4 8 9 3       4.60       6.04

Computing the ANOVA

MS_{BETWEEN} = N * s²_{X̄} = 20 * .351 = 7.015

MS_{WITHIN} = mean of the variances = 5.83

F_{OBS} = MS_{BETWEEN} / MS_{WITHIN} = 1.20

F(4,95) = 2.53 for α=.05; a nondirectional test is assumed

Since the F_{CRIT} is greater than the F_{OBS}, the means are not significantly different and no effects are said to be discovered.
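The computation above can be reproduced with a short program. This sketch is not part of the original text; it is a Python rendering of the definitional formulas used in this chapter (MS between as N times the variance of the group means, MS within as the mean of the group variances).

```python
# One-way ANOVA for the five-group example, from the definitional formulas.
groups = [
    [7, 7, 5, 4, 2, 7, 5, 4, 1, 7, 5, 6, 7, 6, 3, 5, 2, 5, 1, 4],
    [6, 9, 3, 6, 9, 4, 9, 8, 9, 3, 4, 4, 7, 2, 2, 7, 7, 7, 9, 3],
    [5, 5, 2, 5, 6, 2, 3, 3, 6, 8, 2, 1, 1, 2, 5, 7, 9, 6, 5, 7],
    [4, 1, 4, 8, 9, 5, 2, 8, 6, 8, 2, 9, 6, 6, 7, 8, 4, 3, 1, 4],
    [3, 6, 1, 2, 3, 5, 8, 4, 1, 5, 4, 5, 6, 9, 4, 2, 4, 8, 9, 3],
]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):  # unbiased sample variance
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n = len(groups[0])                           # 20 scores per group
group_means = [mean(g) for g in groups]      # 4.65, 5.90, 4.50, 5.25, 4.60
ms_between = n * var(group_means)            # about 7.015
ms_within = mean([var(g) for g in groups])   # about 5.83
f_obs = ms_between / ms_within               # about 1.20

print(round(ms_between, 3), round(ms_within, 2), round(f_obs, 2))
```

Swapping in the data for the significant example that follows gives MS between of about 24.5 and an F of about 4.2, matching the hand computation there.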
EXAMPLE OF A SIGNIFICANT ONE-WAY ANOVA

Given the following data for five groups, perform an ANOVA. Note that the numbers are similar to the previous example except that one has been subtracted from all scores in Group 3 and one has been added to all scores in Group 4.

                                                           Mean   Variance
Group 1    7 7 5 4 2 7 5 4 1 7 5 6 7 6 3 5 2 5 1 4        4.65       4.03
Group 2    6 9 3 6 9 4 9 8 9 3 4 4 7 2 2 7 7 7 9 3        5.90       6.52
Group 3    4 4 1 4 5 1 2 2 5 7 1 0 0 1 4 6 8 5 4 6        3.50       5.63
Group 4    5 2 5 9 10 6 3 9 7 9 3 10 7 7 8 9 5 4 2 4      6.25       6.93
Group 5    3 6 1 2 3 5 8 4 1 5 4 5 6 9 4 2 4 8 9 3        4.60       6.04

Computing the ANOVA

MS_{BETWEEN} = N * s²_{X̄} = 20 * 1.226 = 24.515

MS_{WITHIN} = mean of the variances = 5.83

F_{OBS} = MS_{BETWEEN} / MS_{WITHIN} = 4.20

F(4,95) = 2.53 for α=.05; a nondirectional test is assumed

In this case the F_{OBS} is greater than F_{CRIT}, thus the means are significantly different and we decide that the effects are real.

USING MANOVA

While a single-factor between groups ANOVA may be done using the MEANS command in SPSS, the MANOVA command is a general-purpose command which allows the statistician to do almost any type of multifactor univariate or multivariate ANOVA.

The Data

The data are entered into a data file containing two columns. One column contains the level of the factor to which the observation belongs and the second contains the score on the dependent variable. A third column containing the observation number, in the example a number from one to nine, is optional. As in all SPSS data files, the number of rows in the data file corresponds to the number of subjects and each variable is lined up neatly in each row. In the example data file presented below, there are two groups of nine each. The level of the independent variable is given in the first column of the data file, a space is entered, and the dependent variable is entered in columns 3 and 4.

1 23
1 31
1 25
1 29
1 30
1 28
1 31
1 31
1 33
2 32
2 28
2 36
2 34
2 41
2 35
2 32
2 28
2 31
RUN NAME      EXAMPLE FOR ANOVA BOOK - DESIGN A.
DATA LIST     FILE='DESIGNA DATA A' /1 A 1 X 3-4.
VALUE LABELS  A 1 'BLUE BOOK' 2 'COMPUTER'.
LIST.
MANOVA        X BY A(1,2)
  /PRINT CELLINFO(MEANS)
  /DESIGN.
The RUN NAME command of the example program gives a general description of the purpose of the program. The second command reads in the data file. Note that the group factor is called "A" and the dependent variable is called "X". The VALUE LABELS command then describes the different levels of the group variable. The LIST command gives a description of the data as the computer understands it.
The MANOVA command is followed by the name of the dependent variable, here X, and the keyword BY. The factor name "A" is then entered, followed by the beginning and ending levels of that factor. In this case there were only two levels, defined by a beginning value of "1" and an ending value of "2". The second line of the command is preceded by a slash "/" and contains the subcommand PRINT=CELLINFO(MEANS). This subcommand will print the means of the respective groups. The last subcommand, "/DESIGN", is optional at this point, but not including it will generate a WARNING. Nothing is altered by the WARNING, but it is not neat.
Example Output
The output produced by the example MANOVA
command is presented on the next page.
The default error term in MANOVA has been changed
from WITHIN CELLS to WITHIN+RESIDUAL.
Note that these are the same for all full factorial designs. 
* * * * * * A n a l y s i s   o f   V a r i a n c e * * * * * *

 18 cases accepted.
  0 cases rejected because of out-of-range factor values.
  0 cases rejected because of missing data.
  2 non-empty cells.
  1 design will be processed.

Cell Means and Standard Deviations
Variable .. X
 FACTOR      CODE         Mean  Std. Dev.     N
 A           1          29.000      3.202     9
 A           2          33.000      4.093     9
 For entire sample      31.000      4.116    18

* * * * * * A n a l y s i s   o f   V a r i a n c e -- design * * * * * *

Tests of Significance for X using UNIQUE sums of squares
 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS         216.00    16   13.50
 A                     72.00     1   72.00    5.33      .035
 (Model)               72.00     1   72.00    5.33      .035
 (Total)              288.00    17   16.94

 R-Squared =           .250
 Adjusted R-Squared =  .203
Note that the F-ratio was significant, with a "Sig of F" value of .035. The means for the two groups were 29 and 33, respectively.
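The values in the MANOVA summary table can be checked by hand. The following sketch is not part of the original text; it is a Python recomputation from the Design A data using the definitional formulas of this chapter.

```python
# Reproduce the summary-table numbers for the Design A data.
group1 = [23, 31, 25, 29, 30, 28, 31, 31, 33]
group2 = [32, 28, 36, 34, 41, 35, 32, 28, 31]
n = len(group1)

def mean(xs):
    return sum(xs) / len(xs)

grand = mean(group1 + group2)

# WITHIN CELLS: squared deviations of scores about their own group mean
ss_within = sum((x - mean(group1)) ** 2 for x in group1) + \
            sum((x - mean(group2)) ** 2 for x in group2)   # 216.00
df_within = 2 * (n - 1)                                    # 16

# A: squared deviations of the group means about the grand mean
ss_a = n * ((mean(group1) - grand) ** 2 +
            (mean(group2) - grand) ** 2)                   # 72.00
f_a = (ss_a / 1) / (ss_within / df_within)                 # 5.33

# R-Squared: proportion of the total SS accounted for by the model
r_squared = ss_a / (ss_a + ss_within)                      # .250

print(ss_within, ss_a, round(f_a, 2), r_squared)
```

The group means come out to 29 and 33, and the sums of squares, F, and R-Squared match the SPSS output above.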
Dot Notation

In order to simplify the notational system involving the summation sign, a notational system called dot notation has been introduced. Dot notation places a period in place of a subscript to mean summation. For example:

    X. = X_{1} + X_{2} + ... + X_{N}

The symbol X. means that the variable X has been summed over whatever counter variable was used as a subscript. In a like manner, if a bar is placed over a variable, it refers to a mean over the dotted counter variable(s). For example:

    X̄. = X. / N

(The dot notation does become a bit tricky where real periods are involved.)
The real advantage is apparent when two or more subscripts are used. For example:

    X_{a}. = X_{a1} + X_{a2} + ... + X_{aB}

and

    X._{b} = X_{1b} + X_{2b} + ... + X_{Ab}

or if a=1 then

    X_{1}. = X_{11} + X_{12} + ... + X_{1B}

Using the same notational system, means may be given. For example:

    X̄_{a}. = X_{a}. / B

and

    X̄._{b} = X._{b} / A

or if a=1 then

    X̄_{1}. = X_{1}. / B

The difference between X̄_{a}. and X̄_{1}. is that the second is a special case of the first when a=1.
For example, if A=3, B=4 and X_{11}=5, X_{12}=8, X_{13}=6, X_{14}=9, X_{21}=7, X_{22}=10, X_{23}=5, X_{24}=9, X_{31}=6, X_{32}=4, X_{33}=7, X_{34}=3, then

    X.. = 79
    X̄.. = 79/12 = 6.5833
    X_{1}. = 28
    X̄_{1}. = 28/4 = 7
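The dot-notation quantities for this small table can be computed mechanically. This sketch is not part of the original text; it is a Python rendering in which a dot corresponds to a sum (or, for a barred symbol, a mean) over the dotted subscript.

```python
# Dot notation: dots replace subscripts that have been summed over.
X = [
    [5, 8, 6, 9],    # X_11 .. X_14
    [7, 10, 5, 9],   # X_21 .. X_24
    [6, 4, 7, 3],    # X_31 .. X_34
]
A, B = len(X), len(X[0])

X_dotdot = sum(sum(row) for row in X)        # X..      = 79
Xbar_dotdot = X_dotdot / (A * B)             # X-bar..  = 6.5833
X_1dot = sum(X[0])                           # X_1.     = 28
Xbar_1dot = X_1dot / B                       # X-bar_1. = 7
X_dot1 = sum(row[0] for row in X)            # X._1     = 5 + 7 + 6 = 18

print(X_dotdot, round(Xbar_dotdot, 4), X_1dot, Xbar_1dot, X_dot1)
```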
POST-HOC Tests of Significance

If the results of the ANOVA are significant, it indicates that there are real effects between the means of the groups. The nature of the effects is not specified by the ANOVA. For example, an effect could be significant because the mean of group three was larger than the means of the rest of the groups. In another case, the means of groups one, four, and five might be significantly smaller than the means of groups two, three, and six. Often the pattern of results can be determined by a close examination of the means. In other instances, the reason for the significant differences is not apparent. To assist the statistician in interpreting effects in significant ANOVAs, post-hoc tests of significance were developed.

A post-hoc (after the fact) test of significance is employed only if the results of the overall significance test are significant. A post-hoc test is basically a multiple t-test procedure with some attempt to control for the increase in the experiment-wise error rate when doing multiple significance tests. A number of different procedures are available to perform post-hoc tests, differing in how they control the increase in error rate. The different procedures include Duncan's Multiple Range test, the Newman-Keuls procedure, and a procedure developed by Scheffé. The interested reader is referred to Winer (1971) or Hays (1981) for a thorough discussion of these methods.
My personal feeling is that post-hoc tests are not all that useful. Most often, the reason for the significant results is obvious from a close observation of the means of the groups. Better procedures are available using preplanned contrasts to test patterns of results. The use of preplanned contrasts requires that the statistician have the type of comparisons in mind before doing the analysis. This is the difference between data-driven (post-hoc) and theory-driven (preplanned) analysis. If a choice is possible, the recommendation is for theory-driven analysis.
Chapter
10
Two Factor Between
Groups ANOVA (A x B)
The Design
In this design there are two independent
factors, A and B, crossed with each other. That is, each level of A appears in
combination with each level of B.
Subjects are nested within the combined levels of A and B
such that the full design would be written as S ( A X B ).
Because the Subjects (S) term is confounded with the error term,
it is dropped from the description of the design.
A  B   X
1  1  23
1  1  32
1  1  25
1  2  29
1  2  30
1  2  34
1  3  31
1  3  36
1  3  33
2  1  32
2  1  26
2  1  26
2  2  34
2  2  41
2  2  35
2  3  24
2  3  27
2  3  31
Suppose a statistics teacher gave an essay final to his class. He randomly divides the class in half such that half the class writes the final with a bluebook and half with notebook computers. In addition, the students are partitioned into three groups: no typing ability, some typing ability, and highly skilled at typing. Answers written in bluebooks will be transcribed to word processors and scoring will be done blindly; not with a blindfold, but the instructor will not know the method or skill level of the student when scoring the final. The dependent measure will be the score on the essay part of the final exam.
The first factor (A) will be called Method and will have two levels: a_{1}=bluebook and a_{2}=computer. The second factor (B) will be designated Ability and will have three levels: b_{1}=none, b_{2}=some, and b_{3}=lots. Because each level of A appears with each level of B, A is said to be crossed with B (A X B). Since different subjects will appear in each combination of A and B, subjects are nested within A X B. Each subject will be measured a single time. Any effects discovered will necessarily be between subjects or groups, hence the designation "between groups" design.
The Data
The data file for the A X B design is similar to the data file for design A with the addition of the second descriptive variable, B, for each subject. In the case of the example data, the A factor has two levels while the B factor has three. The X variable is the score on the final exam. The example data file appears in the text box above.
Example SPSS Program using MANOVA
The SPSS commands necessary to do the
analysis for an A X B design are given in the text box below.
RUN NAME      EXAMPLES FOR ANOVA BOOK - A X B DESIGNS
DATA LIST     FILE='AXB DATA A' / A 3 B 6 X 15-19.
VARIABLE LABELS  A 'METHOD OF WRITING EXAM'
                 B 'KEYBOARD EXPERIENCE'
                 X 'SCORE ON FINAL EXAM'.
VALUE LABELS  A 1 'BLUE-BOOK' 2 'COMPUTER'
             /B 1 'NONE' 2 'SOME' 3 'LOTS'.
LIST.
MANOVA        X BY A (1,2) B (1,3)
  /PRINT CELLINFO(MEANS)
  /DESIGN.
Note that the DATA LIST command must read in a variable to code each factor. In this case the variables were named A and B to correspond with the factor names, although in most real-life situations more descriptive names will be used. The addition of the optional VALUE LABELS command will label the output from the PRINT CELLINFO(MEANS) subcommand, making the output easier to interpret.
The MANOVA command is followed by the name of the dependent variable, in this case X, and then the variable names of the factors. As in the previous MANOVA commands, the factor names are each followed by the beginning and ending levels enclosed in parentheses. In this case the A factor has two levels, beginning at level 1, and the B factor has three. The PRINT CELLINFO(MEANS) subcommand is optional, but usually included because the means are the central focus of the analysis. The DESIGN subcommand is optional, but excluding it will generate a warning when the program is run, so it is usually included for the sake of neatness.
All of the analyses on the following pages
were generated from the SPSS program presented above. The program will not be included as part of the interpretation of
the output.
Interpretation of Output
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       26.717      4.822     3     14.739    38.695
  B          SOME       30.923      2.418     3     24.916    36.930
  B          LOTS       33.277      2.157     3     27.919    38.634
 A           COMPUTER
  B          NONE       28.057      3.285     3     19.896    36.217
  B          SOME       36.627      3.413     3     28.148    45.106
  B          LOTS       27.080      3.673     3     17.955    36.205
 For entire sample      30.447      4.675    18     28.122    32.771
The interpretation of the output from the
MANOVA command will focus on two parts:
the table of means and the ANOVA summary table. The table of means is the primary focus of
the analysis while the summary table directs attention to the interesting or
statistically significant portions of the table of means.
A table of means generated using the example data and the PRINT CELLINFO(MEANS) subcommand in MANOVA was presented above.
Often the means are organized and presented
in a slightly different manner than the form of the output from the MANOVA
command. The table of means may be
rearranged and presented as follows:

            b_{1}      b_{2}      b_{3}
       ┌──────────┬──────────┬──────────┐
a_{1}  │  26.72   │  30.92   │  33.28   │   30.31
       ├──────────┼──────────┼──────────┤
a_{2}  │  28.06   │  36.62   │  27.08   │   30.58
       └──────────┴──────────┴──────────┘
          27.39      33.78      30.18      30.447
The means inside the boxes are called cell
means, the means in the margins are called marginal means, and the
number on the bottom righthand corner is called the grand mean. An analysis of these means reveals that there is very little difference between
the marginal means for the different levels of A across the levels of B
(30.31 vs. 30.58). The marginal means
of B over levels of A are different (27.39 vs. 33.78 vs. 30.18)
with the mean for b_{2} being the highest. The cell means show an increasing pattern for levels of B
at a_{1} (26.72 vs. 30.92 vs. 33.28) and a different pattern for levels
of B at a_{2} (28.06 vs. 36.62 vs. 27.08).
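The marginal and grand means follow directly from the cell means. The sketch below is not part of the original text; it is a Python computation from the cell means of the MANOVA output (full precision, before rounding for the table).

```python
# Marginal means are means of the cell means sharing a level of one factor;
# the grand mean is the mean of all cell means (cells have equal n here).
cells = {
    ('a1', 'b1'): 26.717, ('a1', 'b2'): 30.923, ('a1', 'b3'): 33.277,
    ('a2', 'b1'): 28.057, ('a2', 'b2'): 36.627, ('a2', 'b3'): 27.080,
}

def marginal(level, position):
    """Mean of the cell means sharing `level` at `position` (0 = A, 1 = B)."""
    vals = [m for key, m in cells.items() if key[position] == level]
    return sum(vals) / len(vals)

a1_margin = marginal('a1', 0)             # about 30.31
b2_margin = marginal('b2', 1)             # about 33.78
grand = sum(cells.values()) / len(cells)  # about 30.447

print(round(a1_margin, 2), round(b2_margin, 2), round(grand, 3))
```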
Graphs of Means


Graphs of means are often used to present information in a manner which is easier to comprehend than the tables of means. One factor is selected for presentation on the X-axis and its levels are marked on that axis. A separate line is drawn at the height of the mean for each level of the second factor. In the following graph, the B, or keyboard experience, factor was selected for the X-axis and the A, or method, factor was selected for the different lines. Presenting the information in the opposite fashion would be equally correct, although some graphs are more easily understood than others, depending upon the values of the means and the number of levels of each factor. The second possible graph is presented below. If there is any doubt, it is recommended that both versions of the graph be attempted and the one which best illustrates the data be selected for inclusion in the statistical report. It is hopefully obvious that the graph with B on the X-axis is easier to understand than the one with A on the X-axis.
Because the interpretation of the graph of
the interaction depends upon the results of the analysis, the ANOVA summary
table will now be presented. Following
this, the graph of the interaction will be reanalyzed.
The ANOVA Summary Table
The results of the A X B ANOVA
are presented in the ANOVA summary table by MANOVA. An example of this table is presented below:
Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS         139.36    12   11.61
 A                       .36     1     .36     .03      .863
 B                    123.08     2   61.54    5.30      .022
 A BY B               108.73     2   54.36    4.68      .031
The items of primary interest in this table
are the effects listed under the "Source" column and the values under
the "Sig of F" column. As in
the previous hypothesis test, if the value of "Sig of F" is less than
the value of α as set by the experimenter, then that effect
is significant. If α=.05, then the B main effect and the A BY B
interaction would be significant in this table.
Main Effects
Main
effects are differences in means over levels of one factor collapsed over
levels of the other factor. This is
actually much easier than it sounds.
For example, the main effect of A is simply the difference
between the means of final exam score for the two levels of Method, ignoring or
collapsing over experience. As seen in
the second method of presenting a table of means, the main effect of A
is whether the two marginal means associated with the A factor are
different. In the example case these means were 30.31 and 30.58, and the difference between them was not statistically significant.
As can be seen from the summary table, the
main effect of B is significant.
This effect refers to the differences between the three marginal means
associated with factor B. In
this case the values for these means were 27.39, 33.78, and 30.18 and the
differences between them may be attributed to a real effect.
Simple Main Effects
A simple main effect is a main effect of one
factor at a given level of a second factor. In the example data it would be possible to talk about the simple
main effect of B at a_{1}.
That effect would be the difference between the three cell means at
level a_{1} (26.72, 30.92, and 33.28).
One could also talk about the simple main effect of A at b_{3}
(33.28 and 27.08). Simple main effects are not directly tested in the A X B design; however, they are necessary to understand an interaction.
Interaction Effects
An interaction effect is a change in the
simple main effect of one variable over levels of the second. An AB or A BY B interaction is a change
in the simple main effect of B over levels of A or the change in
the simple main effect of A over levels of B. In either case the cell means cannot be
modelled simply by knowing the size of the main effects. An additional set of parameters must be used
to explain the differences between the cell means. These parameters are collectively called an interaction.

The change in the simple main effect of one
variable over levels of the other is most easily seen in the graph of the
interaction. If the lines describing
the simple main effects are not parallel, then a possibility of an interaction
exists. As can be seen from the graph
of the example data, the possibility of a significant interaction exists
because the lines are not parallel. The
presence of an interaction was confirmed by the significant interaction in the
summary table.
The following graph overlays the main effect
of B on the graph of the interaction.
Two things can be observed from this presentation. The first is that the main effect of B is
possibly significant, because the means are different heights. Second, the interaction is possibly significant
because the simple main effects of B at a_{1} and a_{2} are
different from the main effect of B.
One method of understanding how main effects
and interactions work is to observe a wide variety of data and data
analysis. With three effects, A, B, and AB, each of which may or may not be significant, there are eight possible combinations of effects. All eight are presented on the following pages.
Example Data Sets, Means, and
Summary Tables
No Significant Effects
A B X
1 1 22

1 1 24
1 1 25
1 2 26
1 2 29
1 2 23
1 3 21
1 3 25
1 3 22
2 1 21
2 1 26
2 1 25
2 2 24
2 2 20
2 2 24
2 3 23
2 3 26
2 3 20
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       23.667      1.528     3     19.872    27.461
  B          SOME       26.000      3.000     3     18.548    33.452
  B          LOTS       22.667      2.082     3     17.495    27.838
 A           COMPUTER
  B          NONE       24.000      2.646     3     17.428    30.572
  B          SOME       22.667      2.309     3     16.930    28.404
  B          LOTS       23.000      3.000     3     15.548    30.452
 For entire sample      23.667      2.401    18     22.473    24.861

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          74.00    12    6.17
 A                      3.56     1    3.56     .58      .462
 B                      7.00     2    3.50     .57      .581
 A BY B                13.44     2    6.72    1.09      .367
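As a check, the sums of squares in this first summary table can be reproduced from the raw scores. The sketch below is not part of the original text; it is a Python computation using the definitional formulas for a balanced two-factor between groups design.

```python
# Decompose the variation for the "No Significant Effects" data set.
data = {  # (a, b) -> scores
    (1, 1): [22, 24, 25], (1, 2): [26, 29, 23], (1, 3): [21, 25, 22],
    (2, 1): [21, 26, 25], (2, 2): [24, 20, 24], (2, 3): [23, 26, 20],
}
n = 3                                  # scores per cell
A = sorted({a for a, b in data})
B = sorted({b for a, b in data})

def mean(xs):
    return sum(xs) / len(xs)

cell = {k: mean(v) for k, v in data.items()}
grand = mean([x for v in data.values() for x in v])

# WITHIN CELLS: deviations of scores about their own cell mean
ss_within = sum((x - cell[k]) ** 2 for k, v in data.items() for x in v)

# Main effects: marginal means about the grand mean
a_mean = {a: mean([cell[(a, b)] for b in B]) for a in A}
b_mean = {b: mean([cell[(a, b)] for a in A]) for b in B}
ss_a = n * len(B) * sum((m - grand) ** 2 for m in a_mean.values())
ss_b = n * len(A) * sum((m - grand) ** 2 for m in b_mean.values())

# Interaction: what remains of the between-cells variation
ss_cells = n * sum((m - grand) ** 2 for m in cell.values())
ss_ab = ss_cells - ss_a - ss_b

print(round(ss_within, 2), round(ss_a, 2),
      round(ss_b, 2), round(ss_ab, 2))
```

The same computation applied to the other seven data sets reproduces their summary tables as well, since only the scores change.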
Main Effect of A
A B X

1 1 32
1 1 34
1 1 35
1 2 36
1 2 39
1 2 33
1 3 31
1 3 35
1 3 32
2 1 21
2 1 26
2 1 25
2 2 24
2 2 20
2 2 24
2 3 23
2 3 26
2 3 20
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       33.667      1.528     3     29.872    37.461
  B          SOME       36.000      3.000     3     28.548    43.452
  B          LOTS       32.667      2.082     3     27.495    37.838
 A           COMPUTER
  B          NONE       24.000      2.646     3     17.428    30.572
  B          SOME       22.667      2.309     3     16.930    28.404
  B          LOTS       23.000      3.000     3     15.548    30.452
 For entire sample      28.667      6.078    18     25.644    31.689

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          74.00    12    6.17
 A                     533.56    1  533.56   86.52      .000
 B                      7.00     2    3.50     .57      .581
 A BY B                13.44     2    6.72    1.09      .367
Main Effect of B

A B X
1 1 42
1 1 44
1 1 45
1 2 36
1 2 39
1 2 33
1 3 21
1 3 25
1 3 22
2 1 41
2 1 46
2 1 45
2 2 34
2 2 30
2 2 34
2 3 23
2 3 26
2 3 20
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       43.667      1.528     3     39.872    47.461
  B          SOME       36.000      3.000     3     28.548    43.452
  B          LOTS       22.667      2.082     3     17.495    27.838
 A           COMPUTER
  B          NONE       44.000      2.646     3     37.428    50.572
  B          SOME       32.667      2.309     3     26.930    38.404
  B          LOTS       23.000      3.000     3     15.548    30.452
 For entire sample      33.667      9.133    18     29.125    38.208

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          74.00    12    6.17
 A                      3.56     1    3.56     .58      .462
 B                    1327.00    2  663.50  107.59      .000
 A BY B                13.44     2    6.72    1.09      .367
AB Interaction

A B X
1 1 42
1 1 44
1 1 45
1 2 36
1 2 39
1 2 33
1 3 21
1 3 25
1 3 22
2 1 21
2 1 26
2 1 25
2 2 34
2 2 30
2 2 34
2 3 43
2 3 46
2 3 40
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       43.667      1.528     3     39.872    47.461
  B          SOME       36.000      3.000     3     28.548    43.452
  B          LOTS       22.667      2.082     3     17.495    27.838
 A           COMPUTER
  B          NONE       24.000      2.646     3     17.428    30.572
  B          SOME       32.667      2.309     3     26.930    38.404
  B          LOTS       43.000      3.000     3     35.548    50.452
 For entire sample      33.667      8.738    18     29.321    38.012

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          74.00    12    6.17
 A                      3.56     1    3.56     .58      .462
 B                      7.00     2    3.50     .57      .581
 A BY B              1213.44     2  606.72   98.39      .000
Main Effects of A and B

A B X
1 1 52
1 1 54
1 1 55
1 2 46
1 2 49
1 2 43
1 3 31
1 3 35
1 3 32
2 1 41
2 1 46
2 1 45
2 2 34
2 2 30
2 2 34
2 3 23
2 3 26
2 3 20
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       53.667      1.528     3     49.872    57.461
  B          SOME       46.000      3.000     3     38.548    53.452
  B          LOTS       32.667      2.082     3     27.495    37.838
 A           COMPUTER
  B          NONE       44.000      2.646     3     37.428    50.572
  B          SOME       32.667      2.309     3     26.930    38.404
  B          LOTS       23.000      3.000     3     15.548    30.452
 For entire sample      38.667     10.705    18     33.343    43.990

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          74.00    12    6.17
 A                     533.56    1  533.56   86.52      .000
 B                    1327.00    2  663.50  107.59      .000
 A BY B                13.44     2    6.72    1.09      .367
Main effect of A, AB Interaction

A B X
1 1 52
1 1 54
1 1 55
1 2 46
1 2 49
1 2 43
1 3 31
1 3 35
1 3 32
2 1 21
2 1 26
2 1 25
2 2 34
2 2 30
2 2 34
2 3 43
2 3 46
2 3 40
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       53.667      1.528     3     49.872    57.461
  B          SOME       46.000      3.000     3     38.548    53.452
  B          LOTS       32.667      2.082     3     27.495    37.838
 A           COMPUTER
  B          NONE       24.000      2.646     3     17.428    30.572
  B          SOME       32.667      2.309     3     26.930    38.404
  B          LOTS       43.000      3.000     3     35.548    50.452
 For entire sample      38.667     10.370    18     33.510    43.823

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          74.00    12    6.17
 A                     533.56    1  533.56   86.52      .000
 B                      7.00     2    3.50     .57      .581
 A BY B              1213.44     2  606.72   98.39      .000
Main Effect of B, AB Interaction
A B X

1 1 32
1 1 34
1 1 35
1 2 46
1 2 49
1 2 43
1 3 21
1 3 25
1 3 22
2 1 31
2 1 36
2 1 35
2 2 34
2 2 30
2 2 34
2 3 33
2 3 36
2 3 30
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       33.667      1.528     3     29.872    37.461
  B          SOME       46.000      3.000     3     38.548    53.452
  B          LOTS       22.667      2.082     3     17.495    27.838
 A           COMPUTER
  B          NONE       34.000      2.646     3     27.428    40.572
  B          SOME       32.667      2.309     3     26.930    38.404
  B          LOTS       33.000      3.000     3     25.548    40.452
 For entire sample      33.667      7.268    18     30.052    37.281

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          74.00    12    6.17
 A                      3.56     1    3.56     .58      .462
 B                     397.00    2  198.50   32.19      .000
 A BY B               423.44     2  211.72   34.33      .000
Main Effects of A and B, AB Interaction
A B X

1 1 22
1 1 24
1 1 25
1 2 36
1 2 39
1 2 33
1 3 41
1 3 45
1 3 42
2 1 41
2 1 46
2 1 45
2 2 44
2 2 40
2 2 44
2 3 43
2 3 46
2 3 40
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       23.667      1.528     3     19.872    27.461
  B          SOME       36.000      3.000     3     28.548    43.452
  B          LOTS       42.667      2.082     3     37.495    47.838
 A           COMPUTER
  B          NONE       44.000      2.646     3     37.428    50.572
  B          SOME       42.667      2.309     3     36.930    48.404
  B          LOTS       43.000      3.000     3     35.548    50.452
 For entire sample      38.667      7.700    18     34.837    42.496

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          74.00    12    6.17
 A                     373.56    1  373.56   60.58      .000
 B                     247.00    2  123.50   20.03      .000
 A BY B               313.44     2  156.72   25.41      .000
No Significant Effects

A B X
1 1 32
1 1 24
1 1 15
1 2 46
1 2 39
1 2 23
1 3 31
1 3 45
1 3 52
2 1 31
2 1 46
2 1 55
2 2 34
2 2 40
2 2 54
2 3 33
2 3 46
2 3 50
Cell Means and Standard Deviations
Variable .. X        SCORE ON FINAL EXAM

 FACTOR      CODE         Mean  Std. Dev.     N    95 percent Conf. Interval
 A           BLUE-BOO
  B          NONE       23.667      8.505     3      2.539    44.794
  B          SOME       36.000     11.790     3      6.712    65.288
  B          LOTS       42.667     10.693     3     16.104    69.229
 A           COMPUTER
  B          NONE       44.000     12.124     3     13.881    74.119
  B          SOME       42.667     10.263     3     17.171    68.162
  B          LOTS       43.000      8.888     3     20.920    65.080
 For entire sample      38.667     11.499    18     32.948    44.385

Tests of Significance for X using UNIQUE sums of squares

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS        1314.00    12  109.50
 A                     373.56    1  373.56    3.41      .090
 B                     247.00    2  123.50    1.13      .356
 A BY B               313.44     2  156.72    1.43      .277
Note that the means and graphs of the last two example data sets were identical. The ANOVA table, however, provided a quite different analysis of each data set. The data in this final set were constructed such that there was a large standard deviation within each cell. In this case the marginal and cell means were not different enough to warrant rejecting the hypothesis of no effects, thus no significant effects were observed.
Dot Notation Revisited

The reader may recall from the previous chapter that placing dots instead of subscripts is a shorthand notation for summation. For example:

    X. = X_{1} + X_{2} + ... + X_{N}

When two subscripts are involved the notation becomes somewhat more complicated (and powerful). For example:

    X_{a}. = X_{a1} + X_{a2} + ... + X_{aB}

and

    X._{b} = X_{1b} + X_{2b} + ... + X_{Ab}

or if a=1 then

    X_{1}. = X_{11} + X_{12} + ... + X_{1B}

When three subscripts are involved, as is necessary in an A X B design, the notation involves even more summation signs. For example:

    X... = the sum of X_{abs} over all levels of a, b, and s

and, for example:

    X_{ab}. = X_{ab1} + X_{ab2} + ... + X_{abN}

where one sums over the subscript containing the dot.

Using the dot notation with means rather than sums is a relatively simple extension of the dot notation. The mean is found by dividing the sum by the number of scores which were included in the sum. For example, the grand mean can be found as follows:

    X̄... = X... / (A*B*N)

and the cell means

    X̄_{ab}. = X_{ab}. / N

All this is most easily understood in the context of a data table.
                b_{1}                    b_{2}                    b_{3}
       X_{111}=10               X_{121}=22               X_{131}=14
a_{1}  X_{112}=11  X_{11}.=33   X_{122}=24  X_{12}.=72   X_{132}=15  X_{13}.=45   X_{1}..=150
       X_{113}=12  X̄_{11}.=11   X_{123}=26  X̄_{12}.=24   X_{133}=16  X̄_{13}.=15   X̄_{1}..=16.67

       X_{211}=20               X_{221}=21               X_{231}=18
a_{2}  X_{212}=21  X_{21}.=63   X_{222}=22  X_{22}.=66   X_{232}=19  X_{23}.=57   X_{2}..=186
       X_{213}=22  X̄_{21}.=21   X_{223}=23  X̄_{22}.=22   X_{233}=20  X̄_{23}.=19   X̄_{2}..=20.67

       X._{1}.=96               X._{2}.=138              X._{3}.=102              X...=336
       X̄._{1}.=16               X̄._{2}.=23               X̄._{3}.=17               X̄...=18.67
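The three-subscript quantities in this table can also be computed mechanically. The sketch below is not part of the original text; it is a Python computation for the 2 x 3 layout with three subjects per cell.

```python
# Three-subscript dot notation: sum (or average) over each dotted subscript.
X = {
    (1, 1): [10, 11, 12], (1, 2): [22, 24, 26], (1, 3): [14, 15, 16],
    (2, 1): [20, 21, 22], (2, 2): [21, 22, 23], (2, 3): [18, 19, 20],
}
A, B, N = 2, 3, 3

X_total = sum(sum(v) for v in X.values())              # X...     = 336
grand_mean = X_total / (A * B * N)                     # X-bar... = 18.67
X_11dot = sum(X[(1, 1)])                               # X_11.    = 33
cell_mean_11 = X_11dot / N                             # X-bar_11. = 11
X_1dotdot = sum(sum(X[(1, b)]) for b in range(1, B + 1))   # X_1.. = 150
X_dot2dot = sum(sum(X[(a, 2)]) for a in range(1, A + 1))   # X._2. = 138

print(X_total, round(grand_mean, 2), X_11dot, X_1dotdot, X_dot2dot)
```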
Chapter
11
Nested Two
Factor Between Groups Designs B(A)
The
Design
Factor B is said to be nested within factor A when a given level of B appears under a single level of A. This occurs, for example, when the first three levels of factor B (b_{1}, b_{2}, and b_{3}) appear only under level a_{1} of factor A and the next three levels of B (b_{4}, b_{5}, and b_{6}) appear only under level a_{2} of factor A. These types of designs are also designated as hierarchical designs in some textbooks.
A  B   X
1  1   1
1  1   2
1  1   3
1  2   5
1  2   6
1  2   7
1  3   9
1  3  10
1  3  11
2  1   1
2  1   2
2  1   3
2  2   1
2  2   2
2  2   3
2  3   1
2  3   2
2  3   3
Nested or hierarchical designs can appear because many aspects of society are organized hierarchically. For example, within the university, classes (sections) are nested within courses, courses are nested within instructors, and instructors are nested within departments.
In experimental research it is also possible to nest treatment conditions within other treatment conditions. For example, in studying the addictive potential of drugs, drug type could be nested within drug function; that is, Librium, Valium, and Xanax are drugs used to treat anxiety, Prozac is used to treat depression, and Halcion is sold as a sleeping pill. Here each drug appears in one and only one level of drug function.
The
Data
The data are organized similarly to the A x B design. Note that the data in the preceding table are identical to those in Table 6.1.
SPSS
commands
The SPSS commands to do this analysis are identical to those necessary to do the A x B analysis with one exception: the DESIGN subcommand must completely specify the design. In this case the design can be specified with two effects, A, which corresponds to the main effect of A, and B WITHIN A, the nested main effect of B. The following text box contains the SPSS program to run the analysis.
The
Analysis
RUN NAME      PSYCHOLOGY 460 - ASSIGNMENT 4 - DAVID W. STOCKBURGER
DATA LIST     FILE='ANOVA2 DATA A' /A B X X2 X3 X4 1-12.
LIST.
MANOVA        X BY A(1,2) B(1,3)
  /PRINT=CELLINFO(MEANS)
  /DESIGN A B WITHIN A.
The analysis of the B(A) design is similar to that of the A x B design. The table of means will be identical, but the graphs drawn from the means will be constructed differently than in the A x B design. The ANOVA table will contain only two effects, and its interpretation will be discussed.
Cell Means and
Standard Deviations 
FACTOR CODE Mean Std.Dev. N
A 1
B 1 2.000 1.000 3
B 2 6.000 1.000 3
B 3 10.000 1.000 3
A 2
B 1 2.000 1.000 3
B 2 2.000 1.000 3
B 3 2.000 1.000 3 
The
Table of Means
The Table of
Means produced in this analysis is identical to that produced in the A x B
design.
Graphs


Because the qualitative or quantitative meaning of the levels of B is different at each level of A, the graph of the means must be modified. The nested main effects of B are presented side-by-side rather than on top of one another. This is necessary because b_{1} is different depending on whether it appears under a_{1} or a_{2}.
The
ANOVA Table
The ANOVA table produced by design B(A) is presented below.

 Source of Variation      SS    DF      MS       F   Sig of F
 WITHIN CELLS          12.00    12    1.00
 A                     72.00     1   72.00   72.00      .000
 B WITHIN A            96.00     4   24.00   24.00      .000
Interpretation
of Output
The
interpretation of the ANOVA table is straightforward. The WITHIN CELLS and A main effect SS's, DF's, and MS's are
identical to the analysis done in design A x B and are interpreted similarly.
The B WITHIN A term is called a nested main effect and is the sum of the two simple B main effects. If this term is significant, then the graph of the simple main effects should be drawn (Figure 8.1). Significance means that the points within each line are not the same height (value); that is, the means for b_{1}, b_{2}, and b_{3} within levels a_{1} and a_{2} are different. In the example graph the simple main effect of B under a_{1} would be significant while the simple main effect of B under a_{2} would not. Because the B WITHIN A effect is the sum of both simple effects, the combined effect was found to be significant in this case.
Similarities to the A x B Analysis

As noted earlier, the data files for the A x B and B(A) designs are identical. Likewise, the MANOVA commands are similar, except that the DESIGN subcommand must be specified for the B(A) design because it is not a completely crossed design. The table of means for the B(A) design will be identical to that for the A x B design; the difference is that the means will be plotted differently. The ANOVA source tables will differ slightly for the two types of designs. Both are presented below so that they may be contrasted.
Source Table for the B(A) Design

Source of Variation      SS    DF     MS      F   Sig of F
WITHIN CELLS          12.00    12   1.00
A                     72.00     1  72.00  72.00       .000
B WITHIN A            96.00     4  24.00  24.00       .000

Source Table for the A x B Design

Source of Variation      SS    DF     MS      F   Sig of F
WITHIN CELLS          12.00    12   1.00
A                     72.00     1  72.00  72.00       .000
B                     48.00     2  24.00  24.00       .000
A BY B                48.00     2  24.00  24.00       .000
Note that the WITHIN CELLS and A effects are identical in both analyses. Note also that the SS and DF for B and A BY B in the A x B design together add up to the SS and DF for B WITHIN A in the B(A) design. What is happening here is that the B main effect and the A x B interaction of the A x B design are collapsed into the nested main effect of B in the B(A) design.
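This additivity can be verified numerically. The sketch below rebuilds the example scores from the cell means (each cell as mean - 1, mean, mean + 1, which reproduces the tabled cell standard deviations of 1.000 exactly) and computes each sum of squares from its definition; all variable names are illustrative.

```python
# Six cells of the example design, each with n = 3 scores chosen as
# (mean - 1, mean, mean + 1) so that every cell has sd = 1.
cell_means = {('a1', 'b1'): 2, ('a1', 'b2'): 6, ('a1', 'b3'): 10,
              ('a2', 'b1'): 2, ('a2', 'b2'): 2, ('a2', 'b3'): 2}
data = {k: [m - 1, m, m + 1] for k, m in cell_means.items()}
n_cell = 3

def mean(xs):
    return sum(xs) / len(xs)

grand = mean([x for cell in data.values() for x in cell])
a_levels, b_levels = {'a1', 'a2'}, {'b1', 'b2', 'b3'}
a_means = {a: mean([x for (ai, _), cell in data.items() if ai == a
                    for x in cell]) for a in a_levels}
b_means = {b: mean([x for (_, bi), cell in data.items() if bi == b
                    for x in cell]) for b in b_levels}

# Sums of squares computed from their definitions
ss_within = sum(sum((x - mean(cell)) ** 2 for x in cell)
                for cell in data.values())
ss_a = sum(n_cell * len(b_levels) * (a_means[a] - grand) ** 2
           for a in a_levels)
ss_b = sum(n_cell * len(a_levels) * (b_means[b] - grand) ** 2
           for b in b_levels)
ss_ab = sum(n_cell * (cell_means[(a, b)] - a_means[a]
                      - b_means[b] + grand) ** 2
            for a in a_levels for b in b_levels)
ss_b_within_a = sum(n_cell * (cell_means[(a, b)] - a_means[a]) ** 2
                    for a in a_levels for b in b_levels)

print(ss_within, ss_a, ss_b, ss_ab, ss_b_within_a)
print(ss_b + ss_ab == ss_b_within_a)  # the nested effect is B plus A BY B
```

Running the sketch reproduces the tabled values: the crossed B effect (48) plus the A BY B interaction (48) equals the nested B WITHIN A effect (96).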
Chapter 12
Contrasts, Special and Otherwise
Understanding how contrasts may be employed in the analysis of experiments gives the researcher considerable flexibility in the specification of "effects". In addition, the study of contrasts leads to even greater flexibility if multiple regression models are employed. Using contrasts, the researcher can test specific "theory-driven" comparisons between groups.
Definition

A contrast is a set of numbers. When a set of means is being contrasted, the contrast will contain the same number of numbers as there are means. For example, if six means are being contrasted, a contrast for the set of means would contain six numbers. The order of the numbers in the contrast corresponds to the order of the means. Examination of the signs of the numbers in the contrast shows which means are being contrasted.
For example, the following contrast

1  1  1 -1 -1 -1

would compare the first three means with the last three means.
A contrast of the form

2  2 -1 -1 -1 -1

would compare the first two means with the last four means. The manner in which the means are compared is determined by matching numbers. In the above example, the first two groups share the same number (2), while the last four groups share the number minus one (-1). All groups sharing the same positive number are compared with all groups sharing the same negative number. If a contrast contains zeros, then those groups with zeros are not included in the contrast. For example:

0  0  3 -1 -1 -1

would compare the third mean with the last three means. The first and second means would not be included in the calculation or interpretation of the contrast.
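A contrast is applied to a set of means by multiplying each mean by its corresponding number and summing the products. A minimal sketch, using hypothetical group means:

```python
def apply_contrast(contrast, means):
    """Value of a contrast: the sum of each mean times its number."""
    assert len(contrast) == len(means)
    return sum(c * m for c, m in zip(contrast, means))

# Hypothetical means for six groups
means = [2.0, 6.0, 10.0, 2.0, 2.0, 2.0]

# Compare the first three means with the last three
print(apply_contrast([1, 1, 1, -1, -1, -1], means))  # 18 - 6 = 12

# Zeros drop the first two means from the comparison
print(apply_contrast([0, 0, 3, -1, -1, -1], means))  # 30 - 6 = 24
```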
Sets of Contrasts

A set of contrasts is simply a number of contrasts considered simultaneously. For example, the three contrasts presented above could be combined into a set of contrasts as follows:

contrast 1    2  2 -1 -1 -1 -1
contrast 2    0  0  3 -1 -1 -1
contrast 3    1  1  1 -1 -1 -1
Orthogonal Contrasts

Two contrasts are orthogonal if, when corresponding numbers in the contrasts are multiplied together and the products are summed, the sum of the products is zero. For example, in the preceding set, contrasts 1 and 2 are orthogonal, as can be seen in the following:

contrast 1       2     2    -1    -1    -1    -1
contrast 2    x  0  x  0  x  3  x -1  x -1  x -1
products      =  0  =  0  = -3  =  1  =  1  =  1

The sum of the products, 0 + 0 + (-3) + 1 + 1 + 1, equals zero; thus contrast 1 and contrast 2 are orthogonal contrasts.
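The sum-of-products check is easy to mechanize. A minimal sketch, using the three contrasts above with their minus signs written explicitly:

```python
def dot(c1, c2):
    """Sum of products of corresponding numbers in two contrasts."""
    return sum(a * b for a, b in zip(c1, c2))

def orthogonal(c1, c2):
    """Two contrasts are orthogonal when their sum of products is zero."""
    return dot(c1, c2) == 0

contrast1 = [2, 2, -1, -1, -1, -1]
contrast2 = [0, 0, 3, -1, -1, -1]
contrast3 = [1, 1, 1, -1, -1, -1]

print(orthogonal(contrast1, contrast2))  # True:  0 + 0 - 3 + 1 + 1 + 1 = 0
print(orthogonal(contrast2, contrast3))  # False: 0 + 0 + 3 + 1 + 1 + 1 = 6
```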
Nonorthogonal Contrasts

Two contrasts are nonorthogonal if, when corresponding numbers in the contrasts are multiplied together and the products are summed, the sum of the products is not zero. For example, in the preceding set, contrasts 2 and 3 are nonorthogonal, as can be seen in the following:

contrast 3       1     1     1    -1    -1    -1
contrast 2    x  0  x  0  x  3  x -1  x -1  x -1
products      =  0  =  0  =  3  =  1  =  1  =  1

The sum of the products, 0 + 0 + 3 + 1 + 1 + 1, equals six; thus contrast 2 and contrast 3 are nonorthogonal contrasts. In a similar manner, contrasts 1 and 3 are nonorthogonal.
Sets of Orthogonal Contrasts

The guiding principle is that the number of possible orthogonal contrasts will always equal the number of means being contrasted. If six means are being contrasted, there will be no more than six contrasts that are mutually orthogonal. If six means are being contrasted and five orthogonal contrasts have already been found, then there exists a contrast which is orthogonal to the first five. That contrast is not always easy to find.
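When five mutually orthogonal contrasts on six means are already in hand, the remaining one can be found mechanically as a vector orthogonal to all five. A sketch using numpy's singular value decomposition (the use of numpy here is an assumption of this sketch, not a method from the text):

```python
import numpy as np

# Five mutually orthogonal contrasts on six means; one is still missing
known = np.array([
    [1,  1,  1,  1,  1,  1],   # contrast 0
    [2,  2, -1, -1, -1, -1],   # contrast 1
    [0,  0,  3, -1, -1, -1],   # contrast 2
    [0,  0,  0,  2, -1, -1],   # contrast 4
    [0,  0,  0,  0,  1, -1],   # contrast 5
])

# For a full-rank 5 x 6 matrix, the last right singular vector spans
# the null space, i.e. it is orthogonal to every row.
_, _, vt = np.linalg.svd(known.astype(float))
missing = vt[-1]
print(np.round(missing, 3))  # proportional to 1 -1 0 0 0 0
```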
Finding Sets of Orthogonal Contrasts

In ANOVA the first contrast, which will be referred to as contrast 0, is a set of 1's. This contrast will be seen to be equivalent to the m term in the score model. For a contrast comparing six means, the first contrast would be:

contrast 0    1  1  1  1  1  1

In order for any following contrast to be orthogonal with contrast 0, it can be seen that the sum of the numbers in that contrast must equal zero. Both contrasts 1 and 2 described earlier fit this criterion, and since they are orthogonal to one another, they will be included in the current set of orthogonal contrasts.

contrast 1    2  2 -1 -1 -1 -1
contrast 2    0  0  3 -1 -1 -1
In finding a fourth contrast which is orthogonal to the first three, look for patterns of means in the preceding contrasts which share the same number. In both contrasts 1 and 2, the first and second means have the same number: 2 in contrast 1 and 0 in contrast 2. Working with this subset, finding numbers which sum to zero, and setting all other numbers to zero, the following contrast is found:

contrast 3    1 -1  0  0  0  0
The student should verify that this contrast is indeed orthogonal to all three preceding contrasts. Using the same logic, the fifth contrast will involve the fourth, fifth, and sixth means. There are any number of possibilities, but the following will be used:

contrast 4    0  0  0  2 -1 -1
The final contrast will compare the fifth mean and the sixth mean. Any two numbers which sum to zero (e.g., 2.34 and -2.34) could be used without changing the analysis or interpretation, but 1 and -1 make the contrasts parsimonious.

contrast 5    0  0  0  0  1 -1
Putting the six contrasts together as a set results in the following:

contrast 0    1  1  1  1  1  1
contrast 1    2  2 -1 -1 -1 -1
contrast 2    0  0  3 -1 -1 -1
contrast 3    1 -1  0  0  0  0
contrast 4    0  0  0  2 -1 -1
contrast 5    0  0  0  0  1 -1