course code 200800180
2010–2011, period 1, September–November
 [2010.11.24]
All grades are now available.
Also see "post mortem" remarks.
 [2010.11.09]
Included hint by Marja Oudega, for assignment 7a.
Added useful websites of Daniel Soper and R. Lenth, under Further Reading.
Added "post mortem" remarks.
 [2010.10.31]
Assignments for session 8 are modified!
 [2010.10.19]
As requested, a model answer for assignment 5 is available.
 [2010.09.27]
Powerpoints for sessions 1, 2 and 3 have been added to the surfgroepen site, under Shared Documents > Extra.
 [2010.09.27]
New submission deadlines: assignments Wed 20:00h, reviews Fri 12:00 noon.
 [2010.09.16]
Schedule for reviews posted on the surfgroepen site,
https://www.surfgroepen.nl/sites/mer1011blok1/, with apologies for my delay in informing you about this. It is OK to upload your review one day later, i.e. by Friday 6 pm.
 [2010.08.09]
Note that this edition of the course is intended for second-year students in the two-year M.Phil. program in Linguistics. First-year M.Phil. students should take this course in period 2. Students in the M.Sc. program Logopediewetenschap should take the shorter, Dutch version of this course in period 2 (code 200800181).
Teacher
Hugo Quené
email h dot quene AT uu nl,
Trans 10, room 1.17
office hours Tue 14:00–16:00 and by appointment
Readings
Reading materials are indicated below for each session. These may be found in digital form on the UU Library pages, on the course website in WebCT, or in paper form in the UU Library.
Some recommended additional reading materials about research methodology and data analysis are: Butler (1985), Maxwell & Delaney (2004), Statsoft (2004), Johnson (2008, with helpful examples in R), Rosenthal & Rosnow (2008), and Moore et al. (2009) [details].

Additional reading materials:
these will be distributed online or through a pigeonhole (postvak) at Trans 10.
Schedule
The most recent schedule will be available at the
course schedule.
Prerequisites
This course requires basic insight into, and experience with, statistics and data analysis, including hypothesis testing, t tests, and analysis of variance. This is typically acquired in one or two introductory statistics courses, and/or from an introductory statistics textbook.
You should be comfortable with most questions in exams or in self-assessment tests about Statistics, such as the one by
Jones.
The course has weekly class meetings on Mondays.
In addition, there are computer lab sessions on Tuesdays, in which we will practice data analysis techniques for the weekly assignments.
The focus in this course is on independent study, assignments, and peer review, and less on class meetings.
The course will be taught in English.
Before each class meeting you'll have to do the following:
complete assignments on the topics covered in the last meeting;
 hand in your assignments (see below), by Wednesday 18:00h at the latest;
 review and judge the assignments of a fellow student, by Thursday 18:00h at the latest;
 read and study new materials.
During a class meeting we will discuss your work,
using your mutual reviews, and new topics will be introduced.
After each class meeting, assignments have to be uploaded to the group bulletin board (on WebCT), so that all information is available to everyone.
Put your work in one document per week; it must be in PDF format (why PDF?).
Name your document as LASTNAMEassN.pdf (use your last name and assignment number N).
This should be done by Wednesday 18:00h at the latest.
Retrieve the document of your selected peer student for this week, and write a review of her/his work in a separate document.
Name your review document as LASTNAMErevNREVIEWED.pdf (replace with your last name and assignment number N and name of reviewed student).
Place your review on the group bulletin board in the same folder as the assignments, by Thursday 18:00h at the latest.
Before the Wednesday class session, you should read the review of your assignment.
Notice that everybody's cooperation is required to make this schedule work! Failure to meet deadlines will cause problems "downstream", so make sure to finish and upload your work on time.
For most of the course, there will also be a "data lab" on Tuesdays, to practice and rehearse your skills in data analysis.
Peer review, commenting on the work of a peer or colleague, is serious business.
You can learn more about it through these web pages:

Peer Review, by Laura Guertin (Science Education Research Center, Carleton College, Northfield, MN);

Peer Review (Manoa Writing Program, Univ of Hawai'i, Honolulu, HI);

You Lost Me In The Third Paragraph, about "gracious criticism" (Writing Center, George Mason Univ, Fairfax, VA);

Responding To Other People's Writing (Writing Center, Univ North Carolina, Chapel Hill, NC).
Your final grade is determined by the weekly assignments (35%+35%) and the final assignment (30%).
Your collected work and class participation in the first part of the course will be graded halfway through the course (weight 35%), and similarly for the second part of the course (also weight 35%). This means that your weekly assignments and reviews will not be graded weekly!
It is your responsibility to bring up questions and to ask for clarifications about your work during class meetings. Remember to use the other students' assignments and peer reviews as well.
The final assignment determines 30% of your final grade. Due to the limited time in period 1, the final grades may not yet be available immediately after the end of the course.
session 1: Mon 13 Sept
Experimentation. General methodology. The experimental method. Testing hypotheses. How to peer-review.
Reading:

H. Quené (2010). How to design and analyze language acquisition studies.
To appear in: S. Unsworth & E. Blom (Eds.) Experimental Methods in Language Acquisition Research. Amsterdam: Benjamins.
[PDF].

Butler, Ch. (1985) Statistics in Linguistics. s.l.: Blackwell.
[out of print, but see the
web version]. Chapter 6.
Before:

This course requires and presumes that you already have previous knowledge of statistics, equivalent to an introductory course in statistics. You may test yourself by means of this exam (tentamen) of the Statistics (Statistiek) course.

Make sure that you have an account on the Solis UU network.
 Browse the various websites listed below.
Make sure to browse the
Research Methods Knowledge Base.
Assignments:
Write clearly, correctly, and concisely.
Make a document
in PDF format with a maximum length of about 2000 words.

Visit the University library — you could even do this physically. The location at Drift 27 is convenient and holds excellent collections.
Take a recent printed issue (2009 or 2010) of an experimental linguistics journal (in phonetics, psycholinguistics, etc.), such as Journal of Phonetics, Journal of Memory and Language, Phonetica, etc., and select an article that reports an experimental study.
(a) Which questions does the study attempt to answer?
(b) Which independent and dependent variables are involved in the study?
(c) Describe the design of the experiment.

A researcher wants to know whether the vowel duration in stressed vowels is longer than in unstressed vowels. There are two groups of participants, and the researcher is interested in their difference (e.g. L1 and L2 speakers). The target vowels occur in the first vs. the third syllable of three-syllable words. To prevent strategic behavior (what's that?), a speaker may not produce words with different stress patterns: all words produced by a single speaker need to have the same stress pattern.
Provide a possible design for this experiment. Indicate which factors are between or within subjects, dependent or independent, etc. Make a graph or table to illustrate your design.

Surf to the Online Statistics website. Read Chapter 9 about "Logic of Hypothesis Testing", all sections.
Answer the questions in the section "Interpreting Significant Results" and in the section "Interpreting NonSignificant Results". How many questions did you answer correctly?
Print out and memorize the section "Misconceptions".

This last assignment is not for peer review but for independent study. Now is the perfect time to brush up your statistical skills. Work through the exams (tentamina) of my Statistics course (see above). Afterwards, check your answers against those provided on the course webpage. Determine which parts of your statistics proficiency are still deficient. Design a plan of action to remedy your shortcomings during this teaching period.
Links:

Research Methods Knowledge Base by William M. Trochim, Cornell University, Ithaca, NY (Web Center for Social Research Methods)

Statistics Every Writer Should Know by Robert Niles, journalist at the Los Angeles Times.

Rice Virtual Lab in Statistics by David Lane, Rice University, Houston, TX. This website also contains HyperStat Online, an online introduction to statistics.

WISE Project, Web Interface for Statistics Education, at Claremont Graduate University, Claremont, CA.

StatPages.Net by John C. Pezzullo, Georgetown University, Washington DC. A treasure trove of helpful links and programs.

Web-based tools for statistical computation, by Richard Lowry, Vassar College, Poughkeepsie, NY.

GraphPad QuickCalcs, easy online statistical calculators.

Lucian Freud, Grosses Interieur W 11 (nach Watteau) (1981/'83).

mountain gorillas

Highly recommended: Java Applets for Power and Sample Size by Russell V. Lenth, University of Iowa, Iowa City, IA.
Read the whole web page first, before using the applets!
lab 1: Tue 14 Sept
Introduction. Practicalities. Working with SPSS. Working with R. Descriptive statistics. Inferential statistics: t tests and ANOVA.
In this course we will introduce and support two programs for data analysis.
SPSS can be used in the computer labs, and it can be obtained for a low fee under the UU campus license, from the surfspot web store.
R is a more recent program, more flexible than SPSS. R is quickly gaining in popularity, and becoming the standard in academic research. It can be obtained as open-source software from
www.r-project.org;
for an introduction see my
tutorial.
We will use this toy data set (created by this
R script).
session 2: Mon 20 Sept
Experimental design. Validity.
Readings:
Additional readings:
Assignments:
For this assignment you have to provide the experimental design of a prospective (future) study of your own. You could, for example, select an idea for your master's thesis, or a research project for one of your classes, or a follow-up study building on a previous experiment. Your prospective study should in principle be suitable for publication in a top peer-reviewed journal in your field; this means that not only the question being addressed, but also the design and methodology need to be very good! Your experimental design and methods should be adequate to provide answers to your question.
Give a brief introduction about the issues your study attempts to answer, and describe and motivate the experimental design and methods. Which are the dependent and independent variables? Discuss the construct validity of your manipulations (treatments) and observations. Describe and classify your design according to the schemes in the reading materials (withinsubject, splitplot, etc). Can you give some estimate of the expected effect size? And if so, what would be the power of your study? How many units (children, participants, sentences, items) do you need to achieve that power? Think about plausible alternative explanations, and other threats to the validity of your study, and how to neutralize these threats in your design.
As before, your elaborations have to result in a PDF document to be placed (or announced) on the group webpage (see above). Write clearly, correctly, and concisely (you'll probably need about 2 or 3 pages of text).
CANCELLED: lab 2: Tue 21 Sept
Exploring significance, power, effect size, sample size.
Lab sessions will continue on Tue 28 Sept.
If we are comparing two groups of means, as in a pairwise t test, then the effect size d is defined as: d = (m_{1} − m_{2})/s (Cohen, 1969, p. 18; m represents a mean).
A value of d=.2 is regarded as small, d=.5 as medium, d=.8 as large. It is left to the researcher to classify intermediate values (ibid., pp. 23–25).
The difference in body length between girls of 15 and 16 years old has a small effect size, just as male–female differences in subtests of an IQ test. "A medium effect size is conceived as one large enough to be visible to the naked eye," e.g. the difference in body length between girls of ages 14 and 18. Large effect sizes are "grossly perceptible", e.g. the difference in body length between girls of ages 13 and 18, or the difference in IQ between PhD graduates and freshman students.
If we are comparing k groups of means, as in an F test (ANOVA), then the effect size f is defined as: f = s_{m}/s, where s_{m} in turn is defined as the standard deviation of the k different group means (ibid., p.268). If k=2, then d=2f (ibid., p.278). These rules apply only if all groups are of the same size; otherwise different criteria apply.
A value of f=.10 is regarded as small, f=.25 as medium, f=.40 as large.
Again, it is left to the researcher to classify intermediate values (ibid., pp. 278–281).
Smallsized effects can also be meaningful or interesting. Large differences may correspond to small effect sizes, due to measurement error, disruptive side effects, etc. Medium effect sizes are observed in IQ differences between house painters, mechanics, carpenters, butchers. Large effect sizes are observed in IQ differences between house painters, mechanics, carpenters, (railroad) engine drivers, and lab technicians.
Adapted from: Cohen, J. (1969). Statistical Power Analysis for the Behavioral Sciences (1st ed.). New York: Academic Press.
Additional reading: Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and Effect Sizes in Behavioral Research: A Correlational Approach. Cambridge: Cambridge University Press. ISBN 0521659809.
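To make these definitions concrete, the sketch below (in Python, purely as an illustration; the group means and common standard deviation are made-up numbers, not Cohen's data) computes d and f, and verifies the rule d = 2f for k = 2 equal-sized groups.

```python
from math import sqrt

def cohens_d(m1, m2, s):
    """Cohen's d = (m1 - m2) / s, for two group means and a common sd."""
    return (m1 - m2) / s

def cohens_f(group_means, s):
    """Cohen's f = s_m / s, with s_m the sd of the k group means (divisor k)."""
    k = len(group_means)
    grand = sum(group_means) / k
    s_m = sqrt(sum((m - grand) ** 2 for m in group_means) / k)
    return s_m / s

# made-up example: two groups with means 105 and 100, common sd 10
d = cohens_d(105, 100, 10)      # 0.5: a "medium" effect
f = cohens_f([105, 100], 10)    # 0.25, and indeed d = 2f when k = 2
```

With three or more groups, only f applies; the same `cohens_f` call works with a longer list of means.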
session 3: Mon 27 Sept
linear regression, error of measurement, reliability.
Reading:
 Ferguson, G. A., & Takane, Y. (1989). Statistical Analysis in Psychology and Education (6th ed.). New York: McGraw-Hill. Chapter 24 "Errors of Measurement", pp. 466–478.
 Trochim, W.M. (2002). Measurement. In: Research Methods Knowledge Base
(Web Center for Social Research Methods).
Links:
Let us assume that we have 2 observations for each of 5 persons. These observations are about the perceived body weight, as judged by two 'raters' or judges, x1 and x2.
The data are as follows:
person x1 x2
1 60 62
2 70 68
3 70 71
4 65 65
5 65 63
Because we have only two measures (variables), there is only one pair of measures to compare in this example. Very often, however, there are more than two judges involved, and hence many more pairs.
First, let us calculate the correlation between these two variables x1 and x2. This can be done in SPSS with the Correlations command (Analyze > Correlate > Bivariate, check Pearson correlation coefficient).
This yields r=.904, and the average r (over 1 pair of judges) is the same.
If you need to compute r manually, one method is to first convert x1 and x2 to Z-values [(x − mean)/s], yielding z1 and z2. Then r = SUM(z1×z2) / (n − 1).
This value of r corresponds to a standardized Cronbach's Alpha of (2×.904)/(1+.904) = .950 (with N=2 judges); the raw Alpha that SPSS computes from the item variances is slightly lower, .946.
Cronbach's Alpha can be obtained in SPSS by choosing Analyze > Scale > Reliability Analysis. Select the "items" (or judges) x1 and x2, and select model Alpha.
The output states: Reliability Coefficients [over] 2 items, Alpha = .9459 [etc.]
If the same average correlation r=.904 had been observed over 4 judges (i.e. over 4×3/2 = 6 pairs of judges), then that would have indicated an even higher inter-rater reliability, viz. alpha = (4×.904)/(1+3×.904) = .974.
Exactly the same reasoning applies if the data are provided not by 2 raters judging the same 5 objects, but by 2 test items "judging" a property of the same 5 persons. Both approaches are common in language research. Although SPSS only mentions items and inter-item reliability, the analysis is equally applicable to raters or judges, and to inter-rater reliability.
Note that both judges (items) may be inaccurate. A priori, we do not know how good each judge is, nor which judge is better. We know, however, that their reliability of judging the same thing (true body weight, we hope) increases with their mutual correlation.
Now, let's regard the same data, but in a different context. We have one measuring instrument of the abstract concept x that we try to measure. The same 5 objects are measured twice (test–retest), yielding the data given above. In this test–retest context, there is always just one correlation, and the idea of inter-rater reliability does not apply. We find that r_{xx}=.904.
This reliability coefficient r = s^{2}_{T} / s^{2}_{x} . This provides us with an estimate about how much of the total variance is due to variance in the underlying, unknown, "true" scores. In this example, 90.4% of the total variance is estimated to be due to variance of the true scores. The complementary part, 9.6% of the total variance, is estimated to be due to measurement error. If there were no measurement error, then we would predict perfect correlation (r=1); if the measurements would contain only error (and no true score component at all), then we would predict zero correlation (r=0) between x1 and x2.
In this example, we find that
s_{e} = s_{x} × sqrt(1 − .904) = sqrt(15.484) × sqrt(.096) = 1.219
check: s^{2}_{x} = 15.484 = s^{2}_{T} + s^{2}_{e} = s^{2}_{T} + (1.219)^{2},
so s^{2}_{T} = 15.484 − 1.486 = 13.998
and indeed r = .904 = s^{2}_{T} / s^{2}_{x} = 13.998 / 15.484.
Supposedly, x1 and x2 measure the same property x. To obtain s^{2}_{x}, the total observed variance of x (as needed above), we cannot use x1 exclusively nor x2 exclusively. The total variance is obtained here from the two standard deviations:
s^{2}_{x} = s_{x1} × s_{x2}
s^{2}_{x} = 4.18330 × 3.70135 = 15.484
In general, a reliability coefficient smaller than .5 is regarded as low, between .5 and .8 as moderate, and over .8 as high.
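The whole worked example above can be reproduced numerically. The sketch below (in Python rather than SPSS or R, purely as an illustration) recomputes r from the five pairs of judgements, both versions of Cronbach's Alpha (the standardized coefficient based on r, and the raw coefficient based on item variances that SPSS's Reliability Analysis reports), and the standard error of measurement.

```python
from math import sqrt

x1 = [60, 70, 70, 65, 65]   # ratings by judge 1
x2 = [62, 68, 71, 65, 63]   # ratings by judge 2
n = len(x1)

def mean(v):
    return sum(v) / len(v)

def var(v):
    """Sample variance (denominator n - 1)."""
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

# Pearson correlation between the two judges
cov = sum((a - mean(x1)) * (b - mean(x2)) for a, b in zip(x1, x2)) / (n - 1)
r = cov / sqrt(var(x1) * var(x2))                 # approx .904

def alpha_std(k, r):
    """Standardized alpha from the average inter-judge correlation r, k judges."""
    return k * r / (1 + (k - 1) * r)

a2 = alpha_std(2, r)                              # approx .950 with 2 judges
a4 = alpha_std(4, r)                              # approx .974 with 4 judges

# raw alpha from item variances, as SPSS reports it
total_var = var([a + b for a, b in zip(x1, x2)])
a_raw = 2 * (1 - (var(x1) + var(x2)) / total_var) # approx .946

# standard error of measurement, taking s2_x = s_x1 * s_x2 as in the text
s2_x = sqrt(var(x1) * var(x2))                    # 15.484
s_e = sqrt(s2_x) * sqrt(1 - r)                    # approx 1.219
```

Note how `alpha_std(4, r)` reproduces the four-judge extrapolation, and how the raw and standardized alphas differ slightly because the two judges have unequal variances.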
session 3 (continued)
Assignments:
Your answers and solutions to the questions below have to be handed in
as described
above.
As always, write clearly, correctly, and concisely.

We have constructed a test consisting of 4 items, with an average inter-item correlation of 0.4.
a. How many inter-item correlations are there between 4 items? (Ignore the trivial correlation of an item with itself.)
b. Compute the Cronbach Alpha reliability coefficient of this test of 4 items.
Now we add a new 5th item.
c. How many new inter-item correlations are added to the correlation matrix when a 5th item is added to the test?
Unfortunately the coding of this item happens to be incorrect, that is, the scale was reversed for this new item. The inter-item correlation of this 5th item with each of the 4 older items is −0.4 (note the negative sign).
d. What is the average inter-item correlation after adding this 5th test item?
e. Compute the Cronbach Alpha coefficient of the longer test of 5 items.
f. Compare and discuss the reliability and usefulness of the shorter and of the longer test.

A student weighs an object 6 times. The object is known to weigh 10 kg. She obtains readings on the scale of 9, 12, 5, 12, 10, and 12 kg. Describe the systematic error and the random errors characterizing the scale's performance.
Adapted from: R.L. Rosnow & R. Rosenthal (2002). Beginning Behavioral Research: A conceptual primer (4th ed.). Upper Saddle River, NJ: Prentice Hall. Ch.6, Q.7, p.159.

Let us assume that in this course, in addition to writing a peer review, you would also have to grade each other's work as part of the peer review process. Grades would have to be on the Dutch scale from 1 (bad) to 10 (good).
Discuss the reliability and validity of this method to assess student performance. What are the possible threats to reliability and validity, and how could these be reduced?
lab 3: Tue 28 Sept
Reliability.
In the first hour I will introduce the R software for data analysis.
In the second hour we will attempt to determine the reliability of a set of
data (4 variables, 30 units), and perhaps work on the above assignments.
session 4: Mon 4 Oct
ANOVA: general principles, one-way, effect size. Post-hoc tests, multiple-test problem, Bonferroni adjustments.
Readings:
Additional Readings:
Assignments:
Your answers and solutions to the questions below have to be handed in
as described
above.
As always, write clearly, correctly, and concisely.

In a study of cardiovascular risk factors, joggers who run at least 15 miles per week were compared with a control group described as "generally sedentary". Both men and women participated in this study. The design is a 2×2 between-subjects ANOVA, with Group and Sex as factors. There were 200 participants for each combination of factors. One of the dependent variables is the participant's heart rate after 6 minutes on a treadmill, expressed in beats per minute.
Data from this study are available here in SPSS format, or as plain text (the latter file contains variable names in the first line).
(a) What do you think of the construct validity? Please comment.
(b) Is it allowed to conduct an analysis of variance on these data? Motivate your answer with relevant statistical considerations.
(c) Conduct a twoway ANOVA on these data.
(d) Write a summary of the results of this study, including the (partial) effect size η and η^{2}. Draw your conclusions clearly.
(e) From each cell (combination of factors), draw a random sample of n=20 individuals, out of the 200 in that cell. Explain how you have performed the random sampling. Repeat the twoway ANOVA on this smaller data set.
(f) Discuss the similarities and differences in results between (c) and (e).
This exercise is adapted from:
Moore, D.S., & McCabe, G.P. (2003). Introduction to the Practice of Statistics (4th ed.). New York: Freeman. Example 13.8, pp. 813–816.
lab 4: Tue 5 Oct

Perform one-way ANOVA of the datasets warpbreaks (number of breaks, wool type, tension condition) and Pitt_Shoaf1.txt (participant ID, condition, reaction time) that are provided in the Surfnet group (under Shared Documents > Extra).

Explore one-way ANOVA by means of this Java
applet.
For example, what happens if you add outliers or change variances?

Introduction to two-way ANOVA, interaction, fixed and random effects, error terms.

Work on the above assignment involving two-way ANOVA about runners and sitters.
Adjusting t or adjusting df?
If two variables have unequal variances, then the t test statistic may become inflated. The computed t value is larger than it should be. Consequently H0 may be rejected while in fact it should not be rejected. This is known as a Type I error. To prevent this error, we should decrease the t test statistic by some amount. However, in practice it is easier to decrease not the t value itself, but its associated degrees of freedom.
In this way we pretend that the t value is based on fewer observations than it was. Thus we are more conservative while testing our hypotheses.
The figure below shows the critical values of t (on the vertical axis) for a range of df (on horizontal axis).
As you can see, decreasing the value of the t statistic with unchanged df (down arrow) yields a similar effect as decreasing the df with unchanged t (left arrow). Both adjustments would result here in an insignificant outcome, and H0 would not be rejected.
Because it's easier to compute the adjustment in df (length of left arrow) than the adjustment in t (length of down arrow), we commonly adjust the degrees of freedom, and not the t value, if we need to be more conservative.
We will encounter the same reasoning with F values used in ANOVA; those adjustments are known as the Huynh–Feldt and Greenhouse–Geisser corrections to the degrees of freedom.
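A common recipe for this df adjustment in the two-sample case is the Welch–Satterthwaite approximation, which replaces the pooled degrees of freedom by a smaller value computed from the two sample variances. A minimal sketch (the sample sizes and variances below are made-up numbers for illustration):

```python
def welch_df(v1, n1, v2, n2):
    """Welch-Satterthwaite degrees of freedom for a two-sample t test."""
    a, b = v1 / n1, v2 / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

# two groups of 10 observations each, with very unequal variances
n1 = n2 = 10
v1, v2 = 1.0, 16.0

df_pooled = n1 + n2 - 2                 # 18, the classical df
df_welch = welch_df(v1, n1, v2, n2)     # approx 10.1: fewer df, larger critical t
```

The adjusted df (about 10.1 here) is markedly smaller than the pooled 18, so the critical t value grows and the test becomes more conservative, exactly as the left arrow in the figure suggests.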
session 5: Mon 11 Oct
ANOVA with subjects as random factor. Repeated Measures ANOVA.
Readings:
Additional Readings:

Johnson (2008, Ch.4), chapter on Psycholinguistics also introduces ANOVA methods (book details below).

Additional ANOVA Topics,
by Burt Gerstman (sections on RM ANOVA and following)
Links:
Compare these notes from similar courses in experimental research methods, at other universities:
Assignments:
Your answers and solutions to the questions below have to be handed in
as described
above.
As always, write clearly, correctly, and concisely.

Conduct a Repeated Measures ANOVA of the data from a split-plot design, as provided in file
md593wide.txt. These are imaginary response times to
the same task under three treatment conditions. Each participant is tested under all treatment conditions. Participants are from two groups, of young and old persons. Hence group is a between-subjects factor, and treatment is a within-subjects factor.
Of course, you should start out with some exploratory data analysis, and have a look at the interaction pattern, and verify whether the data meet the assumptions for Repeated Measures ANOVA. You should also evaluate and discuss the effect sizes of the main effects and interactions.
If you want to do this in R, then you should use the same data in "long" format, in md593long.txt. (Conversion between long and wide data formats can be done with the reshape function in R.)
These data are from Maxwell & Delaney (2004, p.593, Tables 12.7 and 12.15).

See model answer for assignment 5.
lab 5: Tue 12 Oct
RMANOVA in SPSS and in R. Converting between wide and long data layouts. Interpreting results.
session 6: Mon 18 Oct
Multiple regression, multivariate analyses. Collinearity. Factor Analysis.
Readings:

Moore, McCabe & Craig (2009). Chapter 11 "Multiple Regression".
Availability TBA.

optional:
Peck & Devore (YEAR) Statistics: The Exploration and Analysis of Data. Chapter 14 "Multiple Regression Analysis".

optional:
chapter on Multiple Regression,
from the excellent online statistics textbook at StatSoft, Inc.
Assignments:
Your answers and solutions to the questions below have to be handed in
as described
above.
As always, write clearly, correctly, and concisely.

Answer the following questions:
Moore, McCabe & Craig (2009), Chapter 11: Exercises 1, 2, 3, 4, 30.
Data for the last question are available here in plain text format (the first line of this file contains variable names, SC stands for selfconcept).
Forward or Backward?
For questions 16 and 33 the FORWARD method is most appropriate.
This means that you start with an empty model (only intercept b_{0})
to which predictors are added step by step. After each addition of a predictor,
you check whether the model performs significantly better than before
(e.g. by checking whether R^{2} increases).
The questions are about the increment in R^{2} by adding a predictor.
The relevant information is easier to find in the SPSS output if you specify
the FORWARD method.
As a bonus, you could check what happens if you exclude case #51 from the
data set, e.g. by marking it as a missing value. This is quite easy if you
keep the regression command in a Syntax window for repeated use.
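The forward procedure described above can be sketched in a few lines of code. The illustration below (Python with NumPy, not SPSS; the three predictors and their coefficients are made-up) adds, at each step, whichever predictor yields the largest increment in R², and stops when that increment becomes negligible.

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up data: three candidate predictors; X0 and X1 matter, X2 is pure noise
n = 100
X = rng.normal(size=(n, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(size=n)

def r_squared(cols):
    """R^2 of an OLS fit of y on the given predictor columns (plus intercept)."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid.var() / y.var()

selected, remaining = [], [0, 1, 2]
while remaining:
    # candidate that yields the largest R^2 when added to the current model
    best = max(remaining, key=lambda j: r_squared(selected + [j]))
    gain = r_squared(selected + [best]) - r_squared(selected)
    if gain < 0.01:        # stop: the increment in R^2 is negligible
        break
    selected.append(best)
    remaining.remove(best)
```

SPSS's FORWARD method uses a significance test on the increment rather than a fixed R² threshold, but the ordering logic is the same: the output tables list predictors in the order they were entered, which is why the increments are easy to read off.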
HSS, SAT, GPA??
The chapter by Moore, McCabe & Craig draws heavily on American concepts. In the USA, your achievements are all that count, in life as well as in study. The US grading system ranges from A+ (excellent) to F (fail).
For admission to a university, two things are taken into account:
(a) your average grades in the final years of high school (HSM, HSS, HSE), and
(b) your score on a national admissions exam, the Scholastic Aptitude Test (SAT), comparable to the Dutch CITO test.
Top-class universities, like Harvard, Yale, Stanford, etc., use both parameters in selection. You have to be the best in your class (but your classmates are strongly competing for this honor), plus you need a minimal score on your SAT.
During your academic study, all your grades and results contribute to your Grade Point Average (GPA), a weighted average grade. This GPA is generally used as an indication of academic achievement and success. The authors attempt to predict the GPA from the previously obtained indicators (a) and (b).
regression
Why is it "regression"? This has to do with heredity, the field of biology where regression was first developed by Francis Galton (cousin of Charles Darwin) in the late 19th century.
Take a sample of fathers, and note their body length (X). Wait for one full generation, and measure the body length of each father's oldest adult son (Y). Make a scattergram of X and Y. The best-fitting line through the observations has a slope of less than 1 (typically about .65). This is because the sons' length Y tends to "regress to the mean" — outlier fathers tend to produce average sons, and average fathers also tend to produce average sons. Galton called this phenomenon "regression towards mediocrity". Thus the best-fitting line is a "regression" line because it shows the degree of regression to the mean, from one generation to the next. (Note that any slope larger than 0 suggests a hereditary component in the sons' body length, Y.)
Questions: Which variable has the larger variance, X or Y? Does the variation in body length increase or decrease (regress) over generations? Why?
partial correlation
The partial correlation between X_{1} and X_{2}, with X_{3} removed from both, is given by:
r_{12.3} = ( r_{12} − r_{13} r_{23} ) / sqrt[ (1 − r^{2}_{13})(1 − r^{2}_{23}) ]
 Ferguson, G. A., & Takane, Y. (1989). Statistical Analysis in Psychology and Education (6th ed.). New York: McGraw-Hill. p. 495.
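A small numeric illustration of this formula (the correlations below are made-up values, chosen only to show the behavior of the partial correlation):

```python
from math import sqrt

def partial_r(r12, r13, r23):
    """Partial correlation r_12.3: r between X1 and X2 with X3 partialled out."""
    return (r12 - r13 * r23) / sqrt((1 - r13 ** 2) * (1 - r23 ** 2))

# made-up correlations: X1 and X2 each correlate .50 with X3
r123 = partial_r(r12=0.60, r13=0.50, r23=0.50)    # (0.60 - 0.25) / 0.75

# if r12 is exactly what X3 would induce, the partial correlation vanishes
r_zero = partial_r(r12=0.25, r13=0.50, r23=0.50)  # 0.0
```

The second call shows the key property: when the observed r_{12} equals the product r_{13}·r_{23}, the entire correlation between X1 and X2 is "explained" by X3, and nothing is left over.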
lab 6: Tue 19 Oct
Multiple regression.
session 7: Mon 25 Oct
Putting things together: mixed-effects modeling.
Readings:

H. Quené & H. van den Bergh (2008). Examples of mixed-effects modeling with crossed random effects and with binomial data. Journal of Memory and Language, 59, 413–425. [doi:10.1016/j.jml.2008.02.002, see
www.hugoquene.nl/mixedeffects].
Assignments:
Your answers and solutions to the questions below have to be handed in
as described
above.
As always, write clearly, correctly, and concisely.

Import the data analyzed by Quené & Van den Bergh (2008) into your statistical package. Perform the following analyses:

Repeated Measures ANOVA, with subjects as random factor.
(If you do this in SPSS you may need the data in wide format, as in file
x24bysubj.txt. You need to import these data so that 36 consecutive rows constitute one case. In the SPSS data set, each row then corresponds to one subject. The 36 responses within a subject are ordered by condition. There are 24 subjects.)
Hint: When you load the txt file, make sure that you tick the box 'delimited' instead of the default 'fixed width' in step 2. In step 3 you can then specify the number of variables per case. (Thanks to Marja Oudega)

Mixed-Effects (Regression) Analysis, with subjects as random factor.
(For SPSS and R, you need the data in long format, as in file
x24r2.txt, where each row corresponds to one response, with the subjects, items and conditions coded as separate columns).

Mixed-Effects (Regression) Analysis, with subjects and items as two crossed random factors.
(For SPSS and R, you need the data in long format again).
Discuss your findings, and discuss the similarities and differences among the three analyses.
lab 7: Tue 26 Oct
Mixed-effects modeling in SPSS and in R.
session 8: Mon 1 Nov
logistic regression, GLM.
Readings:

Moore, McCabe & Craig (2009), Chapter 14 "Logistic Regression", only available online.

optional: Generalized Linear Models, from StatSoft, Inc — be warned that this is not an easy text! Concentrate on the first part, until "Types of Analyses". The sections on matrix algebra may be skipped. Make notes about your questions and problems with this text.

optional: Johnson (2008), Chapter 5 "Sociolinguistics".
Links:

Logistic Regression, from UCLA, Los Angeles, CA.

GLM in SPSS, from UCLA, Los Angeles, CA.

Generalized Linear Models (GLZ), by Edward F. Connor, San Francisco State University, CA.

Generalized Linear Models and other things, by Ben Bolker, University of Florida, Gainesville, FL.

GLM in SPSS, from Universiteit Gent.

Logistic Regression, by Michael T. Brannick, University of South Florida, Tampa, FL.
Assignments:
Your answers and solutions to the questions below have to be handed in
as described
above.
As always, write clearly, correctly, and concisely.
 Answer the following questions:
Moore, McCabe & Craig (2009), Chapter 14, Exercises 26, 28, 30, 43 (exercise numbers revised for 6th ed.).
In order to speed up your work on exercise 14.43, I've put the data on the web, in a plain text data file. The first line contains the names of the variables. Data (N=2900) start on line 2, and are coded as follows:
hospital: 0=hosp.A, 1=hosp.B;
outcome: 0=died, 1=survived;
condition: 0=poor, 1=good.
Variables are separated by commas.
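For orientation, reading such a comma-separated file and decoding the 0/1 values can be sketched as follows. The two sample rows and the in-memory string are stand-ins for the actual data file (which has N=2900 rows).

```python
import csv
import io

# Sketch: decode the comma-separated hospital data described above.
# The sample rows are invented; the codes follow the coding scheme given.
sample = "hospital,outcome,condition\n0,1,1\n1,0,0\n"

HOSPITAL = {"0": "hosp.A", "1": "hosp.B"}
OUTCOME = {"0": "died", "1": "survived"}
CONDITION = {"0": "poor", "1": "good"}

rows = [(HOSPITAL[r["hospital"]], OUTCOME[r["outcome"]],
         CONDITION[r["condition"]])
        for r in csv.DictReader(io.StringIO(sample))]
```

For the real file, replace the `io.StringIO(sample)` stand-in with an open file handle.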
In your logistic regression, the variables hospital and condition must be treated as categorical variables. For easier interpretation of the results, I prefer to use the zero codes as references or baselines (in SPSS choose Reference: First).
SPSS does not provide you with 95% confidence intervals; you need to calculate these by hand. The Wald statistic in the SPSS output is the same as the test statistic for β as defined on p.46 in the reading material.
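The by-hand calculation can be sketched like this. The coefficient b and its standard error are placeholder values, not the exercise results; 1.96 is the usual z-quantile for a 95% interval.

```python
import math

def wald_ci_odds_ratio(b, se, z=1.96):
    """95% Wald CI for the odds ratio exp(b), from coefficient b and SE(b)."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

def wald_statistic(b, se):
    """Wald chi-square statistic: (b / SE(b)) squared."""
    return (b / se) ** 2

# Placeholder values for illustration:
or_est, lower, upper = wald_ci_odds_ratio(0.5, 0.2)
```

Note that the interval is computed on the scale of b and only then exponentiated, so it is not symmetric around the odds ratio itself.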
lab 8: Tue 2 Nov
GLM in SPSS and in R.
We're going to analyze the fictional data in file
polder.txt.
final assignment
For your final assignment you can choose one of the two assignments described below.
Deadline is Monday 8 Nov 2010, 23:59 h.
option one
This final assignment is to submit a revised or improved version of one previous assignment of this course. You're free to choose which one you want to revise.
As always, the revised paper should be (as much as possible) a running text, not a collection of incomplete sentences and statistical output.
In the revised version you have to accommodate the comments of your reviewer (if you agree with them, of course). Also use the reading materials and hyperlinks provided.
You may discuss the reviewer's comments in the text of your revised version. But perhaps you find it easier to write a coherent (revised) text on your own, plus a second document with revision notes, in which you discuss the reviewer's comments explicitly, stating which comments you have taken into account, which comments you have ignored, and why.
option two
There are considerable similarities between analysis of variance (ANOVA) and multiple regression (MR), especially in designs without repeated measurements. You can read more about these similarities in the sources given below.
Your assignment is to analyze a given dataset with both methods, and to discuss the differences and similarities between the two methods. The ANOVA must use a single independent variable named opleiding (type of study: 1=alfa, 2=beta, 3=gamma). The MR must use the so-called dummy variables named isalfa, isbeta, isgamma (0=false, 1=true, for each dummy variable), or a subset of these dummies. (Note that the given dataset already contains the categorical factor as well as the associated dummy variables.) Each row or unit represents a single participant in a fictional survey about students' work load.
The dependent variable studietijd represents the time (in hours per week) a student spends on study-related activities.
In your analyses, do not forget to inspect all relevant relationships between the factor(s) and the DV, to test whether assumptions are met, and to inspect residuals of all models.
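The relation between opleiding and the dummy variables can be made explicit in a few lines. The variable names come from the assignment; the mapping itself is standard dummy coding.

```python
def dummies(opleiding):
    """Map opleiding (1=alfa, 2=beta, 3=gamma) to (isalfa, isbeta, isgamma)."""
    return (int(opleiding == 1), int(opleiding == 2), int(opleiding == 3))

# Entering all three dummies plus an intercept would be redundant:
# isalfa + isbeta + isgamma == 1 for every row, so they are collinear
# with the intercept. Hence a subset is used in the regression
# (e.g. isbeta and isgamma, with alfa as the baseline).
```

This is why the assignment allows "a subset of these dummies": one category always serves as the reference level.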
Sources:
Please remember to evaluate this course. Go to
www.let.uu.nl/oce,
log in with your SolisID, and fill in the evaluation about this course.
After 1 Nov 2010, all courses at the Faculty of Humanities, including this one, will be evaluated according to a new protocol, and without involvement of the coordinator and/or faculty. More information should reach you soon.
The likelihood L of a statistical model refers to the probability of obtaining the data with the model, i.e., the probability that a regression model with estimated regression coefficients would yield the data set under study. Thus likelihood refers to the probability of the data given the model, and not to the probability of the model given the data. For computational purposes we often work with the logarithm of the likelihood L, log(L), rather than with the raw likelihood.
The notion of likelihood is similar to that of significance, which also refers to the likelihood of a test statistic (e.g. F or t, computed from the data) given a model (e.g. H0), and not to the probability of H0 given the data.
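As a concrete, made-up illustration of the definition above: suppose a model states P(success) = p, and we observe 7 successes in 10 independent trials. The likelihood and log-likelihood of that model are then:

```python
import math

def likelihood(p, k, n):
    """Binomial likelihood: P(k successes in n trials | model with P(success)=p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

L_07 = likelihood(0.7, 7, 10)   # probability of the data under p = 0.7
logL_07 = math.log(L_07)        # log-likelihood, used for computation

# The observed data are more probable under p = 0.7 than under p = 0.5;
# this is the sense in which p = 0.7 is the better-fitting model:
better = likelihood(0.7, 7, 10) > likelihood(0.5, 7, 10)
```

Note that the likelihood is the probability of the data given each candidate model, not the probability of either model.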
Assignment 14.26: Gender bias in syntactic textbooks. The point is not that there are more male than female references; that would be a bit like one hospital taking in more patients than the other. Instead, the point is that the odds of a female reference being juvenile are 4:1 (P=.80), whereas the odds of a male reference being juvenile are 0.65:1 (P=.39). Hence the odds of a reference being juvenile are 6.15 times as large for female as for male references: the odds ratio, i.e. the ratio of the two odds, is 4.0/0.65 = 6.15. This does not imply that the proportion is 6.15 times as large for females as for males; it states that the odds are 6.15 times as large. Assignment 14.28 shows that this gender difference (in the odds of a reference being juvenile) is indeed significant. So the gender bias is not that there are more female than male references, but that the chance of a reference being juvenile is larger for female than for male references.
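The odds arithmetic above, spelled out in code (the numbers are taken from the assignment discussion):

```python
def odds(p):
    """Convert a probability to odds against 1: p / (1 - p)."""
    return p / (1 - p)

odds_female = odds(0.80)               # 4.0, i.e. 4:1
odds_male = 0.65                       # 0.65:1, as given for P = .39
odds_ratio = odds_female / odds_male   # about 6.15
```

The same function shows why odds and proportions must not be confused: doubling a probability does not double the odds, since the denominator 1 - p shrinks as p grows.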

Butler, Ch. (1985). Statistics in Linguistics. s.l.: Blackwell.
[out of print, but see the
web version].

Carver, R.H. & Nash, J.G. (2005). Doing Data Analysis with SPSS version 12.0. Belmont, CA: Brooks/Cole. ISBN 053446551x.

Gelman, A. & Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge: Cambridge University Press. ISBN 9780521686891.

Johnson, K. (2008). Quantitative Methods in Linguistics. Malden, MA: Blackwell. ISBN 9781405144254. [Recommended!]

Maxwell, S.E. & Delaney, H.D. (2004). Designing Experiments and Analyzing Data: A model comparison perspective (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates. ISBN 0805837183. [very good, but not an easy book].

Kirkpatrick, L.A. & Feeney, B.C. (2005). A Simple Guide to SPSS for Windows/ for Version 12.0. Belmont, CA: Thomson Wadsworth. ISBN 0534610064.

Lenth, R.V. (2009). Java Applets for Power and Sample Size. [Read the whole page first before you use the applets.]

Levin, I.P. (1998). Relating Statistics and Experimental Design: An introduction. Thousand Oaks, CA: Sage. Sage University Papers Series on Quantitative Applications in the Social Sciences; 07-125. ISBN 0761914722.

Moore, D.S., McCabe, G.P., & Craig, B.A. (2009). Introduction to the Practice of Statistics (6th ed.). New York: Freeman.

Rosenthal, R., & Rosnow, R.L. (2008). Essentials of Behavioral Research: Methods and Data Analysis. Boston: McGraw Hill. ISBN 0073531960.

Rosenthal, R., R. L. Rosnow, & Rubin, D.B. (2000). Contrasts and Effect Sizes in Behavioral Research: A correlational approach. Cambridge: Cambridge University Press. ISBN 0521659809.

Soper, D. (2010). Daniel Soper Homepage, with statistical calculators and other goodies.

StatSoft, Inc. (2004). Electronic Statistics Textbook. Tulsa, OK: StatSoft.
URL: http://www.statsoft.com/textbook/stathome.html [clear and concise chapters about most statistical topics].

Also check the hyperlinks listed under session 1.

Also check the webpage of my statistiek course [in Dutch].
© 2003-2010 HQ. Last modified 2010.11.24.