The mixed-effects analyses shown here were done with the program MLwiN (version 2.02; dated June 2005) running under Microsoft Windows 2000 Professional (SP4).

In the log below, user input commands are indicated by blue, bold type, like this.

Program output is indicated by black monospace type, like this.

Comments and annotations are given in italic sans-serif type, with indented left margin, like this.

The first two stages of the modeling process, data preparation and (crossed) model preparation, are identical to the "normal" case with regular, normally-distributed data.

Start the program MLwiN. Open a command-line input window by choosing Data Manipulation > Command interface. The input commands below were entered in this Commands window. The program will also open a separate Output window. **Unfortunately, the contents of neither window can be saved!**

We start by reading data into the worksheet, from a disk file named `h24.txt`. This can only be done through the GUI: choose File > Ascii Input. In the Columns field, enter c1-c4 to store the file contents in the first four columns of the worksheet. Use the Browse button to find the input data file.

names c1 "subj" c2 "item" c3 "cond" c4 "resp"

Rename columns to sensible variable names.

Sorting: This program requires that data are sorted in hierarchical order, i.e. by higher-level units as primary key, lower-level units as secondary key, etc. This was already done in the present data file. If necessary, data can be sorted in MLwiN ...

calc c10=100*"subj"+"item"

... by calculating an auxiliary variable from the sort key columns...

sort c10 c1-c4 c10 c10 c1-c4 c10

... and then sorting all columns using the auxiliary variable.

`Sort`, using the key in `c10`, the input columns in `c1-c4` and in `c10`. Put the sorted key (back) in `c10`, and put the sorted output columns (back) in `c1-c4` and in `c10`.
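The composite-key trick used above (one auxiliary sort key built from two ID columns) can be sketched in plain Python. The mini data set below is hypothetical, not the `h24.txt` data:

```python
# Hypothetical mini data set with four columns, analogous to c1-c4.
subj = [2, 1, 2, 1]
item = [1, 2, 2, 1]
cond = ["a", "b", "a", "b"]
resp = [0, 1, 1, 0]

# Composite key: 100*subj + item works because item numbers stay below 100.
key = [100 * s + i for s, i in zip(subj, item)]

# Sort all columns together by the composite key.
order = sorted(range(len(key)), key=lambda r: key[r])
subj, item, cond, resp = [[col[r] for r in order] for col in (subj, item, cond, resp)]

# subj is now [1, 1, 2, 2] and item is [1, 2, 1, 2]:
# rows are ordered by subject first, then by item within subject.
```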

This program does not add constant variances into the model by default; these have to be specified by the user. An auxiliary column containing the value unity (1) everywhere is needed for this purpose.

put 864 1 c5

name c5 "const"

dummy "cond" c11-c13

In order to analyze the fixed-only or 'cell means' model, we need dummy variables for each of the 3 values or levels of the `cond` factor. The resulting dummy variables are stored in columns 11 to 13.

names c11 "cond1" c12 "cond2" c13 "cond3"
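What the `DUMMY` command computes can be sketched in a few lines of Python, with hypothetical data (in MLwiN the indicator columns land in `c11-c13`):

```python
# A 3-level factor, coded 1..3 as in the cond column (hypothetical values).
cond = [1, 3, 2, 1, 2, 3]

# One 0/1 indicator column per factor level.
levels = sorted(set(cond))  # [1, 2, 3]
dummies = {f"cond{k}": [1 if c == k else 0 for c in cond] for k in levels}

# dummies["cond1"] == [1, 0, 0, 1, 0, 0], and so on for cond2 and cond3.
```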

calc c6="const"

calc c7="const"


For modeling *binomial* data, the program MLwiN requires two additional columns of ones, i.e., copies of the column `const`.

names c6 "b.cons" c7 "denom"

The new columns are given the default names `b.cons` and `denom` that MLwiN expects.

We can inspect the data with the command `names` (without arguments) or `print`. Data inspection is perhaps easier through the Names window in the GUI. Open this window by choosing Data Manipulation > Names. The Names window should resemble the one here.

resp "hit"

expl 1 "const"


The independent (`EXPL`anatory) and dependent (`RESP`onse) variables must be specified. The command to include or exclude `EXPL`anatory variables works as a toggle. The first argument `1` forces inclusion of the (constant intercept) predictor in column 5 into the model. By default, explanatory variables are included in the fixed part, but not in the random part.

iden 1 "item" 2 "subj"

The hierarchical structure must be specified. Level-1 units are the items, within subjects, at the lowest level --- levels are counted from the lowest upward, as we do in our papers. Level-2 units are the participants or subjects, at a higher level.

The model specified so far is a basic multilevel model with the random effect of `item` nested under that of `subj`.

expl 1 "b.cons"

MLwiN requires two copies of the same constant intercept: one (named `const`, which **must be specified first**, as done above) at the (higher) subjects level and in the fixed part, and the other (named `b.cons`, specified after `const`) at the (lower) items level.

fpart 0 "b.cons"

By default, explanatory variables are entered into the fixed part of the model. Hence `b.cons` must be removed from the fixed part.

setv 1 "b.cons"

The random part of the model must be specified. A constant variance at both levels is specified here, but **different copies** are used at each level. We use `b.cons` at the lower level...

setv 2 "const"

... and `const` at the higher level.

Specifying a GLMM on binomial data is easiest in the Equations window in MLwiN. Open this window from the menu (choose Model > Equations).

In the first line, click on the capital **N** on the righthand side of the model equation, then select Binomial, and select the logit link function. Confirm by pressing the button "Done".

If necessary, in the first line, click on the lowercase **n** in parentheses on the righthand side, select `b.cons`, and confirm with the button "Done". This enters the intercept `b.cons` into the random part. The Equations window should now resemble the one here.

This is still a basic multilevel model with items nested under subjects. We proceed with specifying the crossing of items and subjects.

iden 3 "const"

Crossed effects are created by adding a virtual third level, consisting of a single unit (here identified by the value `1`).

mlre "subj" "item" c20

This single level however is different for each `item` within `subj`, so we need to create unique identifiers for items within subjects. The new identifiers are stored in column 20.

setx "const" 3 c20 c201-c236 c250

rcon c250


These two commands realize the actual cross-classification of two random effects. The 36 identifiers (for the 36 items within subjects) in `c20` are used to build an additional constraint, at the now highest level `3`, that the coefficients for each item (identified in `c20`, dummies in `c201-c236`) at that highest level are identical. In other words, items-within-subjects are constrained to be identical across subjects if they have the same identifier.

start

Discard any previous estimates, and start iterative estimation of the coefficients in the updated (crossed) model. The program is set here to iterate until a certain convergence criterion is achieved (`BATCH ON`). The command `BATCH OFF` allows inspection of estimates after each iteration. The Equations window should now resemble the one here. Press the Estimates button (at the bottom of the Equations window) to see the final estimates in green.

random

Show the estimates for the random part of the current (empty) model (13).

LEV.  PARAMETER       (NCONV)   ESTIMATE   S. ERROR(U)   PREV. ESTIM   CORR.
-------------------------------------------------------------------------------
 3    C201 /C201      (  1)     0.6416     0.2234        0.6414        1
 3    C202 /C202      (  1)     0.6416     0.2234        0.6414        1
 3    C203 /C203      (  1)     0.6416     0.2234        0.6414        1
 3    C204 /C204      (  1)     0.6416     0.2234        0.6414        1
 .
 .
 .
 3    C233 /C233      (  1)     0.6416     0.2234        0.6414        1
 3    C234 /C234      (  1)     0.6416     0.2234        0.6414        1
 3    C235 /C235      (  1)     0.6416     0.2234        0.6414        1
 3    C236 /C236      (  1)     0.6416     0.2234        0.6414        1
-------------------------------------------------------------------------------
 2    const /const    (  1)     0.9429     0.3322        0.9426        1
-------------------------------------------------------------------------------
 1    b.cons/b.cons   (  4)     1          2.971e-009                  1

The between-item variance at level 3 is constrained to be identical for each item. Note that the residual variance at level 1 is constrained to be unity. Estimates of between-subject and between-item variance are reported with their standard error.

fixed

Show the estimates for the fixed part of the current (empty) model (13).

PARAMETER     ESTIMATE    S. ERROR(U)   PREV. ESTIMATE
const         -1.585      0.2556        -1.585

9916467 spaces left on worksheet

The estimated grand mean, or intercept, or constant, is -1.585 logit units, corresponding to a hit rate of .170, which agrees closely with the observed average hit rate.
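The back-transformation from logit units to a proportion can be checked outside MLwiN; a sketch in Python:

```python
import math

# Inverse logit (logistic function) of the estimated intercept.
intercept = -1.585
hit_rate = 1.0 / (1.0 + math.exp(-intercept))

# hit_rate is approximately 0.170, matching the observed average hit rate.
```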

like

Compute and show the likelihood ratio of the current model (13).

-2*log(lh) is 652.749

The next, intermediate model (14) has the fixed effect of `cond` in its fixed part. Instead of listing only the intercept in the fixed part of the regression formula (as in the empty model (13) above), the new model adds the fixed effect of `cond` and suppresses the intercept.

expl 1 c11-c13

Add (`1`) the three dummies in `c11-c13`, for the three levels of factor `cond`, as explanatory variables. The dummies are included in the fixed part by default.

fpart 0 "const"

Remove (`0`) the `const` (intercept) predictor from the fixed part. The variable remains in the model as an explanatory variable in the random part.

note : no changes in random part

The `NOTE` command adds a comment to the output window and log file.

start

random

Show the estimates for the random part of the current model.

LEV.  PARAMETER       (NCONV)   ESTIMATE   S. ERROR(U)   PREV. ESTIM   CORR.
-------------------------------------------------------------------------------
 3    C201 /C201      (  5)     0          0             0             1
 3    C202 /C202      (  5)     0          0             0             1
 3    C203 /C203      (  5)     0          0             0             1
 3    C204 /C204      (  5)     0          0             0             1
 .
 .
 .
 3    C233 /C233      (  5)     0          0             0             1
 3    C234 /C234      (  5)     0          0             0             1
 3    C235 /C235      (  5)     0          0             0             1
 3    C236 /C236      (  5)     0          0             0             1
-------------------------------------------------------------------------------
 2    const /const    (  1)     1.403      0.3369        1.402         1
-------------------------------------------------------------------------------
 1    b.cons/b.cons   (  5)     1          0                           1

9916406 spaces left on worksheet

There are several indications that the iterative computations for this model have not converged properly. First, the estimated between-items variance (level 3) is zero, which is highly implausible. Second, the addition of two predictors should decrease the random variances, but here the between-subject variance has increased relative to the predecessor model (13). Third, the likelihood ratio (see below) has decreased only marginally, and not significantly.

In general, it is better to include a predictor both in the fixed and in the random parts of the model. Only terms that are not significant are subsequently removed. For expository reasons, we have followed the reverse procedure here: predictors were added to the fixed part only. This inappropriate procedure does not fly: the variances and covariances at level 2 may differ too much among conditions, so that these differences cannot be ignored (as was attempted here).


fixed

Show the estimates for the fixed part of the current model.

PARAMETER     ESTIMATE    S. ERROR(U)   PREV. ESTIMATE
cond1         -1.174      0.2787        -1.174
cond2         -1.660      0.2904        -1.660
cond3         -2.045      0.3044        -2.045

A fourth sign of trouble is that the per-condition standard errors are larger (and not smaller) in this model, as compared to the standard error of the grand mean in the previous (empty) model.

like

Compute and show the likelihood ratio of the current model.

-2*log(lh) is 648.041

The likelihood ratio has decreased by 4.7 (df=2, p=.095). The updated model is not significantly better than the empty predecessor model (13).
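This likelihood-ratio comparison is easy to check by hand: for df = 2, the chi-square upper-tail probability reduces to exp(-x/2). A sketch in Python:

```python
import math

# Deviances (-2*log(lh)) of the empty model (13) and the model with the cond dummies.
dev_empty, dev_cond = 652.749, 648.041
chi2 = dev_empty - dev_cond        # about 4.7, on 2 degrees of freedom

# Chi-square upper-tail probability for df = 2: p = exp(-chi2 / 2).
p = math.exp(-chi2 / 2)            # about .095, not significant
```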

Instead of testing the fixed effects of a faulty model, we prefer to improve the model by adding the three dummies for the three conditions into the random part at the subjects level (level 2).
We also include the full variance-covariance matrix at the subjects level, i.e., we do not assume sphericity.

setv 2 c11-c13

Add the three dummies in `c11`, `c12` and `c13` to the random part, at the participant level (level 2). The command `SETV` (set variable) adds a variable into the variance-covariance matrix. This adds not only the three variances (on the diagonal) but also all their covariances (off the diagonal).

clrv 2 "const"

Remove the `const` (intercept) predictor from the random part at level 2 (participants). The variable may remain in the model as an explanatory variable elsewhere.

start

This full model now has the effect of `cond` in its fixed part, and in the random part at the participant level. Hence this model does not require homogeneity of variance (homoscedasticity). Because there are also covariances specified in the model, sphericity is not assumed either.

random

Show the estimates for the random part of the current full model.

LEV.  PARAMETER       (NCONV)   ESTIMATE   S. ERROR(U)   PREV. ESTIM   CORR.
-------------------------------------------------------------------------------
 3    C201 /C201      (  1)     0.6114     0.2187        0.612         1
 3    C202 /C202      (  1)     0.6114     0.2187        0.612         1
 3    C203 /C203      (  1)     0.6114     0.2187        0.612         1
 3    C204 /C204      (  1)     0.6114     0.2187        0.612         1
 .
 .
 .
 3    C233 /C233      (  1)     0.6114     0.2187        0.612         1
 3    C234 /C234      (  1)     0.6114     0.2187        0.612         1
 3    C235 /C235      (  1)     0.6114     0.2187        0.612         1
 3    C236 /C236      (  1)     0.6114     0.2187        0.612         1
-------------------------------------------------------------------------------
 2    cond1 /cond1    (  1)     0.7499     0.3566        0.7495        1
 2    cond2 /cond1    (  1)     0.6462     0.2998        0.6463        0.856
 2    cond2 /cond2    (  1)     0.7594     0.4056        0.76          1
 2    cond3 /cond1    (  2)     0.8944     0.465         0.8944        0.629
 2    cond3 /cond2    (  2)     1.32       0.5299        1.321         0.922
 2    cond3 /cond3    (  1)     2.7        1.023         2.698         1
-------------------------------------------------------------------------------
 1    b.cons/b.cons   (  5)     1          2.046e-016                  1

The non-zero covariances clearly indicate that the sphericity assumption is violated. Subjects' average hit rates are correlated between conditions. A conventional RM-ANOVA would be inappropriate.

In addition, the between-subject variance (level 2) for the third condition is considerably and significantly larger than in the second, baseline condition. Hence the homoscedasticity assumption is also violated.

fixed

Show the estimates for the fixed part of the current full model.

PARAMETER     ESTIMATE    S. ERROR(U)   PREV. ESTIMATE
cond1         -1.174      0.2598        -1.174
cond2         -1.66       0.2729        -1.66
cond3         -2.045      0.4046        -2.045

In order to evaluate the coefficients in the fixed part of the model, we have to set up appropriate contrast weights. There are 3 coefficients in the fixed part of this model. For each contrast, 3 weights are specified for the 3 coefficients, followed by the expected value of the contrast under H0 (i.e. zero). The sequence of 3 weights plus expected value is given for two pairwise comparisons A-B and C-B, because the central condition is a baseline in the present fictitious study. All weights for all contrasts in the fixed part are put in a single auxiliary variable (in column 350).

put 8 0 c350

edit 1 c350 1

edit 2 c350 -1

edit 6 c350 -1

edit 7 c350 1


Column 350 is first filled with zeros, some of which are later edited to meaningful weights.

name c350 "f.contrasts"

ftest "f.contrasts"

The `FTEST` command performs the actual testing in the fixed part using the weights in column 350.

CONTRASTS
cond1        -1.00     0.00
cond2         1.00     1.00
cond3         0.00    -1.00
result       -0.49     0.38
chi square ( 1 df)        4.36     1.57
+/-95% c.i.(sep.)         0.46     0.60
+/-95% c.i.(sim.)         0.65     0.86
chi sq for simultaneous contrasts(3 df) = 7.10

Each contrast is tested by evaluating the amount of variance associated with that contrast, using a chi-square test statistic. The chi-square statistic is also reported for the joint contrasts, i.e., for the main effect of condition. Note that the reported degrees of freedom for the joint or simultaneous contrasts are not correct; this should be the number of contrasts that make up the whole test. Here we should evaluate chi-square=7.10 with 2 (not 3) d.f.

cpro 7.10 2

0.028725

Calculate the Chi-square PROBability for the two contrasts together, with chi-square = 7.10 and df=2, p=.029. There are significant differences in response among the three treatment conditions.
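The two contrast estimates and the corrected joint test can be reproduced from the fixed estimates; a Python sketch, with the coefficients copied from the output above:

```python
import math

# Fixed estimates from the full model.
coef = {"cond1": -1.174, "cond2": -1.66, "cond3": -2.045}

# Contrast weights as set up in column 350 / shown in the FTEST output.
w1 = {"cond1": -1, "cond2": 1, "cond3": 0}
w2 = {"cond1": 0, "cond2": 1, "cond3": -1}

contrast1 = sum(w1[k] * coef[k] for k in coef)   # about -0.49
contrast2 = sum(w2[k] * coef[k] for k in coef)   # about  0.38

# Joint test with the corrected df = 2: for df = 2, p = exp(-chi2 / 2).
p_joint = math.exp(-7.10 / 2)                    # about .029, as CPRO reports
```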

like

-2*log(lh) is 597.387

Use the GUI to save the current full model. This cannot be done from the Commands window, nor from inside a macro, so we have to choose the menu options File > Save worksheet As..., saving the worksheet as `m15plus.ws`. In general it is recommended to save each model in a separate file.

The extended model above is probably the optimal model, which should be included in a formal report. All variances and covariances in the model are significantly non-zero, as indicated by their standard errors (using a one-sided Z test). The data are neither homoscedastic (homogeneous) nor spherical. The investigator might try to explain why condition 3 yields a larger between-subject variance.
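Such a one-sided Z test on a variance estimate can be sketched as follows, using the cond3/cond3 variance at level 2 (estimate 2.7, standard error 1.023) from the output above:

```python
import math

est, se = 2.7, 1.023       # cond3/cond3 variance and its standard error
z = est / se               # about 2.64

# One-sided normal tail probability via the complementary error function.
p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))   # about .004, significant
```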

For expository reasons, however, we would like to estimate two "stripped" versions of this full model: Model (15) without the covariances of the dummies at level 2, and Model (14) without the dummies in the random part.


Proceeding from the previous full model, it is easiest to discard the covariances from the between-subjects variance-covariance matrix at level 2. This can be achieved by just clicking on the off-diagonal elements in that matrix. Confirm the removal of each element from the matrix.

It is also convenient to use the present estimates for this stripped model, instead of starting the estimation afresh. Press the button "More" in the upper left corner (or enter `next` in the Commands window).


random

Show the estimates for the random part of the current model (15).

LEV.  PARAMETER       (NCONV)   ESTIMATE   S. ERROR(U)   PREV. ESTIM   CORR.
-------------------------------------------------------------------------------
 3    C201 /C201      (  1)     0.5478     0.2054        0.5478        1
 3    C202 /C202      (  1)     0.5478     0.2054        0.5478        1
 3    C203 /C203      (  1)     0.5478     0.2054        0.5478        1
 3    C204 /C204      (  1)     0.5478     0.2054        0.5478        1
 .
 .
 .
 3    C233 /C233      (  1)     0.5478     0.2054        0.5478        1
 3    C234 /C234      (  1)     0.5478     0.2054        0.5478        1
 3    C235 /C235      (  1)     0.5478     0.2054        0.5478        1
 3    C236 /C236      (  1)     0.5478     0.2054        0.5478        1
-------------------------------------------------------------------------------
 2    cond1 /cond1    (  2)     0.8305     0.3806        0.8304        1
 2    cond2 /cond2    (  2)     0.7145     0.3931        0.7146        1
 2    cond3 /cond3    (  2)     2.807      1.055         2.807         1
-------------------------------------------------------------------------------
 1    b.cons/b.cons   ( 10)     1          2.673e-016                  1

9916012 spaces left on worksheet

Between-subject residual variances (level 2) are indeed quite different in the three treatment conditions, as before, and as expected from the simulation parameters (see Note 1).

fixed

Show the estimates for the fixed part of the current model (15).

PARAMETER     ESTIMATE    S. ERROR(U)   PREV. ESTIMATE
cond1         -1.174      0.2628        -1.174
cond2         -1.66       0.2662        -1.66
cond3         -2.045      0.4079        -2.045

9916012 spaces left on worksheet

like

-2*log(lh) is 614.977

The higher value of -2*log(lh) indicates that the current model fits considerably worse than the previous full model. As a rough indication, we compute the difference in -2*log(lh) (614.977-597.387=17.59), and evaluate this with a chi-square statistic, with the difference in number of model parameters (3) as d.f., yielding p<.001.
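This rough chi-square evaluation can be reproduced without tables; for the odd case df = 3, the upper-tail probability has the closed form erfc(sqrt(x/2)) + sqrt(2x/pi)*exp(-x/2). A Python sketch:

```python
import math

chi2 = 614.977 - 597.387   # difference in -2*log(lh), about 17.59

# Chi-square upper-tail probability for df = 3 (closed form for this odd df).
p = math.erfc(math.sqrt(chi2 / 2)) + math.sqrt(2 * chi2 / math.pi) * math.exp(-chi2 / 2)

# p is about .0005, i.e. p < .001 as stated.
```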

Use the GUI to save the current model.

Proceeding from the previous model, it is easiest to discard the variances from the between-subjects variance-covariance matrix at level 2. This can be achieved in several ways. One is by just clicking on the diagonal elements in that matrix. Confirm the removal of each element from the matrix.

clrv 2 c11-c13

If the intercept `const` is still defined as an explanatory variable, we can bypass the `EXPL` command, and bring that constant predictor into the random part.

expl 1 "const"

setv 2 "const"

Because this model fits the data so badly, one should use the previous estimates as starting values for this stripped model, instead of starting the estimation afresh. Press the button "More" in the upper left corner (or enter `next` in the Commands window).

random

Show the estimates for the random part of the current model (14).

LEV.  PARAMETER       (NCONV)   ESTIMATE   S. ERROR(U)   PREV. ESTIM   CORR.
-------------------------------------------------------------------------------
 3    C201 /C201      (  1)     0.6526     0.2272        0.6517        1
 3    C202 /C202      (  1)     0.6526     0.2272        0.6517        1
 3    C203 /C203      (  1)     0.6526     0.2272        0.6517        1
 3    C204 /C204      (  1)     0.6526     0.2272        0.6517        1
 .
 .
 .
 3    C233 /C233      (  1)     0.6526     0.2272        0.6517        1
 3    C234 /C234      (  1)     0.6526     0.2272        0.6517        1
 3    C235 /C235      (  1)     0.6526     0.2272        0.6517        1
 3    C236 /C236      (  1)     0.6526     0.2272        0.6517        1
-------------------------------------------------------------------------------
 2    const /const    (  1)     0.9675     0.3402        0.9663        1
-------------------------------------------------------------------------------
 1    bcons.1/bcons.1 (  5)     1          2.166e-016                  1

9820482 spaces left on worksheet

fixed

Show the estimates for the fixed part of the current model (14).

PARAMETER     ESTIMATE    S. ERROR(U)   PREV. ESTIMATE
cond1         -1.174      0.2787        -1.174
cond2         -1.66       0.2904        -1.66
cond3         -2.045      0.3044        -2.045

ftest "f.contrasts"

The `FTEST` command performs the actual testing in the fixed part using the weights in column 350.

CONTRASTS
cond1        -1.00     0.00
cond2         1.00     1.00
cond3         0.00    -1.00
result       -0.49     0.38
chi square ( 1 df)        5.24     2.46
+/-95% c.i.(sep.)         0.42     0.48
+/-95% c.i.(sim.)         0.59     0.69
chi sq for simultaneous contrasts(3 df) = 14.96

cprob 14.96 2

0.00056426

Calculate the Chi-square PROBability for the two contrasts together, with chi-square = 14.96 and df=2, p<.001. Even in this GLMM, there are significant differences in response among the three treatment conditions.

like

-2*log(lh) is 611.677

The likelihood ratio of the current model (14) does not differ significantly from that of the previous model (15): the difference in -2*log(lh) is 3.3, with 2 d.f., yielding p=.192. Both models perform equally poorly.

Use the GUI to save the current model.

Finally, to leave the program MLwiN, choose File > Exit from the GUI.

2007.06.08 HQ