Methods of Experimental Research

course code 200800180

2010-11, period 1, September-November



News

  1. [2010.11.24] All grades are now available. Also see "post mortem" remarks.
  2. [2010.11.09] Included hint by Marja Oudega, for assignment 7a. Added useful websites of Daniel Soper and R. Lenth, under Further Reading. Added "post mortem" remarks.
  3. [2010.10.31] Assignments for session 8 are modified!
  4. [2010.10.19] As requested, a model answer for assignment 5 is available.
  5. [2010.09.27] Powerpoints for sessions 1, 2 and 3 have been added to the surfgroepen site, under Shared Documents > Extra.
  6. [2010.09.27] New submission deadlines: assignments Wed 20:00h, reviews Fri 12:00 noon.
  7. [2010.09.16] Schedule for reviews posted on the surfgroepen site, https://www.surfgroepen.nl/sites/mer1011blok1/, with apologies for my delay in informing you about this. It is OK to upload your review one day later, i.e. by Friday 6 pm.
  8. [2010.08.09] Note that this edition of the course is intended for second-year students in the two-year M.Phil. program in Linguistics. First-year M.Phil. students should take this course in period 2. Students in the M.Sc. program Logopediewetenschap should take the shorter, Dutch version of this course in period 2 (code 200800181).

Practicalities

Teacher

Hugo Quené
e-mail h dot quene AT uu nl,
Trans 10, room 1.17
office hours Tue 14:00-16:00 and by appointment

Readings

Reading materials are indicated below for each session. These may be found in digital form on the UU Library pages, on the course website in WebCT, or in paper form in the UU Library.

Some recommended additional reading materials about research methodology and data analysis are: Butler (1985), Maxwell & Delaney (2004), Statsoft (2004), Johnson (2008, with helpful examples in R), Rosenthal & Rosnow (2008), and Moore et al. (2009) [details].

Schedule

The most recent schedule is available on the course schedule page.

Prerequisites

This course requires basic insight into, and experience with, statistics and data analysis, including hypothesis testing, t tests, and analysis of variance. This is typically acquired in one or two introductory statistics courses, and/or from an introductory statistics textbook.

You should be comfortable with most questions in exams or in self-assessment tests about Statistics, such as the one by Jones.

Organization

The course has weekly class meetings on Mondays. In addition, there are computer lab sessions on Tuesdays, in which we will practice data analysis techniques for the weekly assignments.
The focus in this course is on independent study, assignments, and peer review, and less on class meetings.
The course will be taught in English.

Before each class meeting you'll have to do the following:

  1. complete the assignments on the topics covered in the previous meeting;
  2. hand in your assignments (see below) by Wednesday 18:00h at the latest;
  3. review and judge the assignments of a fellow student, by Thursday 18:00h at the latest;
  4. read and study new materials.

During a class meeting we will discuss your work, using your mutual reviews, and new topics will be introduced.

After each class meeting, assignments have to be uploaded to the group bulletin board (on WebCT), so that all information is available to everyone.
Put your work in one document per week; this document has to be in PDF format (why PDF?). Name your document LASTNAMEassN.pdf (use your last name and the assignment number N). This should be done by Wednesday 18:00h at the latest.
Retrieve the document of your selected peer student for this week, and write a review of her/his work in a separate document. Name your review document LASTNAMErevNREVIEWED.pdf (replace with your last name, the assignment number N, and the name of the reviewed student). Place your review on the group bulletin board in the same folder as the assignments, by Thursday 18:00h at the latest.
Before the next class session, you should read the review of your assignment. Notice that everybody's cooperation is required to make this schedule work! Failure to meet deadlines will cause problems "downstream", so make sure to finish and upload your work on time.

For the most part of the course, there will also be a "data lab" on Tuesdays, to practice and rehearse your skills in data analysis.

Peer Review

Peer review, commenting on the work of a peer or colleague, is a serious business. You can learn more about it through these web pages:

Grading

Your final grade is determined by the weekly assignments (35%+35%) and the final assignment (30%). Your collected work and class participation during the first part of the course will be graded halfway through the course (weight 35%), and similarly for the second part of the course (also weight 35%). This means that your weekly assignments and reviews will not be graded every week! It is your responsibility to bring up questions and to ask for clarification about your work during class meetings. Remember to use the other students' assignments and peer reviews as well.
The final assignment determines 30% of your final grade. Due to the limited time in period 1, the final grades may not yet be available immediately after the end of the course.


Schedule

session 1: Mon 13 Sept

Experimentation. General methodology. The experimental method. Testing hypotheses. How to peer-review.

Reading:
Before:
Assignments:
Write clearly, correctly, and concisely. Make a document in PDF format with a maximum length of about 2000 words.
  1. Visit the University library — you could even do this physically. The location at Drift 27 is convenient and holds excellent collections.
    Take a recent printed issue (2009 or 2010) of an experimental linguistics journal (in phonetics, psycholinguistics, etc.), such as Journal of Phonetics, Journal of Memory and Language, Phonetica, etc, and select an article that reports an experimental study.
    (a) Which questions does the study attempt to answer?
    (b) Which independent and dependent variables are involved in the study?
    (c) Describe the design of the experiment.
  2. A researcher wants to know whether the vowel duration in stressed vowels is longer than in unstressed vowels. There are two groups of participants, and the researcher is interested in their difference (e.g. L1 and L2 speakers). The target vowels occur in the first vs. the third syllable of three-syllable words. To prevent strategic behavior (what's that?), a speaker may not produce words with different stress patterns: all words produced by a single speaker need to have the same stress pattern.
    Provide a possible design for this experiment. Indicate which factors are between or within subjects, dependent or independent, etc. Make a graph or table to illustrate your design.
  3. Surf to the Online Statistics website. Read Chapter 9 about "Logic of Hypothesis Testing", all sections.
    Answer the questions in the section "Interpreting Significant Results" and in the section "Interpreting Non-Significant Results". How many questions did you answer correctly?
    Print out and memorize the section "Misconceptions".
  4. This last assignment is not for peer review but for independent study. Now is the perfect time to brush up your statistical skills. Complete the exams (tentamina) of my Statistics course (see above). Afterwards, check your answers against those provided on the course webpage. Determine which parts of your statistics proficiency are still deficient. Design a plan of action to remedy your shortcomings during this teaching period.
Links:

lab 1: Tue 14 Sept

Introduction. Practicalities. Working with SPSS. Working with R. Descriptive statistics. Inferential statistics: t tests and ANOVA.

In this course we will introduce and support two programs for data analysis.
SPSS can be used in the computer labs, and it can be obtained for a low fee under the UU campus license, from the surfspot web store.
R is a more recent program, more flexible than SPSS. R is quickly gaining in popularity, and becoming the standard in academic research. It can be obtained as open-source software from www.r-project.org; for an introduction see my tutorial.

We will use this toy data set (created by this R script).
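
For those who want to try this in R, here is a minimal sketch of the lab 1 steps. The file name toydata.txt and the column names group and score are placeholders; substitute the actual names of the toy data set.

  # read the toy data set (file and column names are placeholders)
  toy <- read.table("toydata.txt", header = TRUE)
  summary(toy)                              # descriptive statistics
  t.test(score ~ group, data = toy)         # t test comparing two groups (Welch by default)
  summary(aov(score ~ group, data = toy))   # one-way ANOVA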

session 2: Mon 20 Sept

Experimental design. Validity.

Readings:

Additional readings:

Assignments:

For this assignment you have to provide the experimental design of a prospective (future) study of your own. You could, for example, select an idea for your master's thesis, a research project for one of your classes, or a follow-up study building on a previous experiment. Your prospective study should in principle be suitable for publication in a top peer-reviewed journal in your field; this means that not only the question being addressed, but also the design and methodology need to be very good! Your experimental design and methods should be adequate to provide answers to your question.

Give a brief introduction about the issues your study attempts to answer, and describe and motivate the experimental design and methods. Which are the dependent and independent variables? Discuss the construct validity of your manipulations (treatments) and observations. Describe and classify your design according to the schemes in the reading materials (within-subject, split-plot, etc). Can you give some estimate of the expected effect size? And if so, what would be the power of your study? How many units (children, participants, sentences, items) do you need to achieve that power? Think about plausible alternative explanations, and other threats to the validity of your study, and how to neutralize these threats in your design.

As before, your elaborations have to result in a PDF document to be placed (or announced) on the group webpage (see above). Write clearly, correctly, and concisely (you'll probably need about 2 or 3 pages of text).

CANCELLED: lab 2: Tue 21 Sept

Exploring significance, power, effect size, sample size.

Lab sessions will continue on Tue 28 Sept.

effect size

If we are comparing two groups of means, as in a pairwise t test, then the effect size d is defined as: d = (m1-m2)/s (Cohen, 1969, p.18; m represents mean).

A value of d=.2 is regarded as small, d=.5 as medium, d=.8 as large. It is left to the researcher to classify intermediate values (ibid., p.23-25).
The difference in body length between girls of 15 and 16 years old has a small effect size, just as male-female differences in sub-tests of an IQ test. "A medium effect size is conceived as one large enough to be visible to the naked eye," e.g. the difference in body length between girls of ages 14 and 18. Large effect sizes are "grossly perceptible", e.g. the difference in body length between girls of ages 13 and 18, or the difference in IQ between PhD graduates and freshman students.

If we are comparing k groups of means, as in an F test (ANOVA), then the effect size f is defined as: f = sm/s, where sm in turn is defined as the standard deviation of the k different group means (ibid., p.268). If k=2, then d=2f (ibid., p.278). These rules apply only if all groups are of the same size; otherwise different criteria apply.

A value of f=.10 is regarded as small, f=.25 as medium, f=.40 as large. Again, it is left to the researcher to classify intermediate values (ibid., p.278-281).
Small-sized effects can also be meaningful or interesting. Large differences may correspond to small effect sizes, due to measurement error, disruptive side effects, etc. Medium effect sizes are observed in IQ differences between house painters, mechanics, carpenters, butchers. Large effect sizes are observed in IQ differences between house painters, mechanics, carpenters, (railroad) engine drivers, and lab technicians.

Adapted from: Cohen, J. (1969). Statistical Power Analysis for the Behavioral Sciences (1st ed.). New York: Academic Press.

Additional reading: Rosenthal, R., R. L. Rosnow, & Rubin, D.B. (2000). Contrasts and Effect Sizes in Behavioral Research: A correlational approach. Cambridge: Cambridge University Press. ISBN 0-521-65980-9.
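
For those working in R, the effect sizes d and f defined above translate directly into power and sample-size calculations. Below is a minimal sketch with made-up numbers; power.t.test is part of base R, and pwr.anova.test requires the add-on package pwr (if installed).

  # Cohen's d for two means (made-up numbers, for illustration only)
  m1 <- 520; m2 <- 500; s <- 40
  d <- (m1 - m2) / s                              # d = 0.5, a "medium" effect
  # sample size per group needed to detect this difference with 80% power:
  power.t.test(delta = m1 - m2, sd = s, power = 0.80)
  # for k groups and Cohen's f, the pwr package offers an analogous function:
  # library(pwr); pwr.anova.test(k = 3, f = 0.25, power = 0.80)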

session 3: Mon 27 Sept

linear regression, error of measurement, reliability.

Reading:
Links:

Reliability

Let us assume that we have 2 observations for each of 5 persons. These observations are about the perceived body weight, as judged by two 'raters' or judges, x1 and x2. The data are as follows:

person  x1  x2
 1      60  62
 2      70  68
 3      70  71
 4      65  65
 5      65  63

Because we have only two measures (variables), there is only one pair of measures to compare in this example. Very often, however, there are more than two judges involved, and hence many more pairs.

First, let us calculate the correlation between these two variables x1 and x2. This can be done in SPSS with the Correlations command (Analyze > Correlate > Bivariate, check Pearson correlation coefficient). This yields r=.904, and the average r (over 1 pair of judges) is the same.

If you need to compute r manually, one method is to first convert x1 and x2 to Z-values [(x-mean)/s], yielding z1 and z2. Then r = SUM(z1×z2) / (n-1).

With N=2 judges, this value of r corresponds to a Cronbach's Alpha of (2×.904)/(1+.904) ≈ .95 (the standardized Alpha, based on the average inter-item correlation). Cronbach's Alpha can be obtained in SPSS by choosing Analyze > Scale > Reliability Analysis. Select the "items" (or judges) x1 and x2, and select model Alpha. The output states: Reliability Coefficients [over] 2 items, Alpha = .9459 [etc.] (SPSS reports the raw Alpha, computed from the item variances, which here is slightly lower than the standardized value.)
If the same average correlation r=.904 had been observed over 4 judges (i.e. over 4×3 pairs of judges), then that would have indicated an even higher inter-rater reliability, viz. alpha = (4×.904)/(1+3×.904) = .974.
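
For comparison, the same correlation and reliability coefficients can be reproduced in R with a few lines, using the five weight judgements given above:

  x1 <- c(60, 70, 70, 65, 65)
  x2 <- c(62, 68, 71, 65, 63)
  r <- cor(x1, x2)              # Pearson r = .904
  2 * r / (1 + r)               # standardized Alpha (Spearman-Brown), about .95
  k <- 2                        # raw Cronbach's Alpha from the item variances,
  (k / (k - 1)) * (1 - (var(x1) + var(x2)) / var(x1 + x2))   # as reported by SPSS: .946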

Exactly the same reasoning applies if the data are not provided by 2 raters judging the same 5 objects, but by 2 test items "judging" a property of the same 5 persons. Both approaches are common in language research. Although SPSS only mentions items and inter-item reliability, the analysis is equally applicable to raters or judges, and hence to inter-rater reliability.

Note that both judges (items) may be inaccurate. A priori, we do not know how good each judge is, nor which judge is better. We know, however, that their reliability of judging the same thing (true body weight, we hope) increases with their mutual correlation.

Now let us consider the same data, but in a different context. We have one measuring instrument for the abstract concept x that we try to measure. The same 5 objects are measured twice (test-retest), yielding the data given above. In this test-retest context there is always just one correlation, and the idea of inter-rater reliability does not apply. We find that rxx=.904.

This reliability coefficient rxx = s²T / s²x. It provides us with an estimate of how much of the total variance is due to variance in the underlying, unknown, "true" scores. In this example, 90.4% of the total variance is estimated to be due to variance of the true scores. The complementary part, 9.6% of the total variance, is estimated to be due to measurement error. If there were no measurement error, then we would predict perfect correlation (r=1); if the measurements contained only error (and no true-score component at all), then we would predict zero correlation (r=0) between x1 and x2.
In this example, we find that
se = sx × sqrt(1-.904) = sqrt(15.484) × sqrt(.096) = 1.219
check: s²x = 15.484 = s²T + s²e = s²T + (1.219)²,
so s²T = 15.484 - 1.486 = 13.997
and indeed r = .904 = s²T / s²x = 13.997 / 15.484.

Supposedly, x1 and x2 measure the same property x. To obtain s²x, the total observed variance of x (as needed above), we cannot use x1 exclusively nor x2 exclusively. The total variance is obtained here from the two standard deviations:
s²x = sx1 × sx2
s²x = 4.18330 × 3.70135 = 15.484
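
The same decomposition can be verified with a few lines of R, using the numbers from this example:

  rxx <- 0.904
  s2x <- 4.18330 * 3.70135          # total observed variance, 15.484
  se  <- sqrt(s2x) * sqrt(1 - rxx)  # standard error of measurement, about 1.219
  s2T <- s2x - se^2                 # estimated true-score variance, about 13.997
  s2T / s2x                         # recovers rxx = .904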

In general, a reliability coefficient smaller than .5 is regarded as low, between .5 and .8 as moderate, and over .8 as high.

session 3 (continued)

Assignments:
Your answers and solutions to the questions below have to be handed in as described above. As always, write clearly, correctly, and concisely.
  1. We have constructed a test consisting of 4 items, with an average inter-item correlation of 0.4.
    a. How many inter-item correlations are there, between 4 items? (Ignore the trivial correlation of an item with itself.)
    b. Compute the Cronbach Alpha reliability coefficient of this test of 4 items.
    Now we add a new 5th item.
    c. How many new inter-item correlations are added to the correlation matrix when a 5th item is added to the test?
    Unfortunately the coding of this item happens to be incorrect, that is, the scale was reversed for this new item. The inter-item correlation of this 5th item with each of the 4 older items is -0.4 (note the negative sign).
    d. What is the average inter-item correlation after adding this 5th test item?
    e. Compute the Cronbach Alpha coefficient of the longer test of 5 items.
    f. Compare and discuss the reliability and usefulness of the shorter and of the longer test.
  2. A student weighs an object 6 times. The object is known to weigh 10 kg. She obtains readings on the scale of 9, 12, 5, 12, 10, and 12 kg. Describe the systematic error and the random errors characterizing the scale's performance.
    Adapted from: R.L. Rosnow & R. Rosenthal (2002). Beginning Behavioral Research: A conceptual primer (4th ed.). Upper Saddle River, NJ: Prentice Hall. Ch.6, Q.7, p.159.
  3. Let us assume that in this course, in addition to writing a peer review, you would also have to grade each other's work as part of the peer review process. Grades would have to be on the Dutch scale from 1 (bad) to 10 (good). Discuss the reliability and validity of this method to assess student performance. What are the possible threats to reliability and validity, and how could these be reduced?

lab 3: Tue 28 Sept

Reliability.

In the first hour I will introduce the R software for data analysis. In the second hour we will attempt to determine the reliability of a set of data (4 variables, 30 units), and perhaps work on the above assignments.

session 4: Mon 4 Oct

ANOVA: general principles, one-way, effect size. Post-hoc tests, multiple-test problem, Bonferroni adjustments.

Readings:

Additional Readings:

Assignments:
Your answers and solutions to the questions below have to be handed in as described above. As always, write clearly, correctly, and concisely.
  1. In a study of cardiovascular risk factors, joggers who run at least 15 miles per week were compared with a control group described as "generally sedentary". Both men and women participated in this study. The design is a 2×2 between-subjects ANOVA, with Group and Sex as factors. There were 200 participants for each combination of factors. One of the dependent variables is the heart rate of a participant after 6 minutes on a treadmill, expressed in beats per minute.
    Data from this study are available here in SPSS format, or as plain text (the latter file contains variable names in the first line).
    (a) What do you think of the construct validity? Please comment.
    (b) Is it permissible to conduct an analysis of variance on these data? Motivate your answer with relevant statistical considerations.
    (c) Conduct a two-way ANOVA on these data.
    (d) Write a summary of the results of this study, including the (partial) effect sizes η and η². Draw your conclusions clearly.
    (e) From each cell (combination of factors), draw a random sample of n=20 individuals out of the 200 in that cell. Explain how you performed the random sampling. Repeat the two-way ANOVA on this smaller data set. (A minimal R sketch of per-cell sampling is given after this exercise.)
    (f) Discuss the similarities and differences in results between the analyses in (c) and (e).
    This exercise is adapted from: Moore, D.S., & McCabe, G.P. (2003). Introduction to the Practice of Statistics (4th ed.). New York: Freeman. Example 13.8, pp.813-816.
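
Here is a minimal R sketch of steps (c) and (e). The file name heartrate.txt and the column names beats, group and sex are placeholders for the actual names in the data file.

  d <- read.table("heartrate.txt", header = TRUE)   # placeholder file and column names
  d$group <- factor(d$group); d$sex <- factor(d$sex)
  summary(aov(beats ~ group * sex, data = d))       # (c) two-way ANOVA on the full data
  # (e) draw a random sample of 20 participants from each of the four cells:
  set.seed(20101004)                                # makes the sampling reproducible
  cells <- split(seq_len(nrow(d)), interaction(d$group, d$sex))
  small <- d[unlist(lapply(cells, sample, size = 20)), ]
  summary(aov(beats ~ group * sex, data = small))   # repeat the ANOVA with n=20 per cell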

lab 4: Tue 5 Oct

Adjusting t or adjusting df?

If two variables have unequal variances, then the t test statistic may become inflated. The computed t value is larger than it should be. Consequently H0 may be rejected while in fact it should not be rejected. This is known as a Type I error. To prevent this error, we should decrease the t test statistic by some amount. However, in practice it is easier to decrease not the t value itself, but its associated degrees of freedom. In this way we pretend that the t value is based on fewer observations than it was. Thus we are more conservative while testing our hypotheses.

The figure below shows the critical values of t (on the vertical axis) for a range of df (on the horizontal axis). [Figure: critical t values]
As you can see, decreasing the value of the t statistic with unchanged df (down arrow) yields a similar effect as decreasing the df with unchanged t (left arrow). Both adjustments would result here in an insignificant outcome, and H0 would not be rejected. Because it is easier to compute the adjustment in df (length of left arrow) than the adjustment in t (length of down arrow), we commonly adjust the degrees of freedom, and not the t value, if we need to be more conservative.
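
In R you can see this df adjustment directly: t.test applies the Welch correction by default, and var.equal = TRUE gives the classical pooled test. The data below are made up for illustration.

  set.seed(1)
  x <- rnorm(20, mean = 0, sd = 1)    # small variance
  y <- rnorm(30, mean = 0, sd = 3)    # much larger variance
  t.test(x, y, var.equal = TRUE)      # pooled t test: df = 20 + 30 - 2 = 48
  t.test(x, y)                        # Welch t test: df adjusted downwards (non-integer)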

We will encounter the same reasoning with F values used in ANOVA; those adjustments are known as the Huynh-Feldt and Greenhouse-Geisser corrections to the degrees of freedom.

session 5: Mon 11 Oct

ANOVA with subjects as random factor. Repeated Measures ANOVA.

Readings:

Additional Readings:

Links:
Compare these notes from similar courses in experimental research methods at other universities:
Assignments:
Your answers and solutions to the questions below have to be handed in as described above. As always, write clearly, correctly, and concisely.
  1. Conduct a Repeated Measures ANOVA of the data from a split-plot design, as provided in file md593wide.txt. These are imaginary response times to the same task under three treatment conditions. Each participant is tested under all treatment conditions. Participants come from two groups, of young and old persons. Hence group is a between-subjects factor, and treatment is a within-subjects factor.
    Of course, you should start out with some exploratory data analysis, and have a look at the interaction pattern, and verify whether the data meet the assumptions for Repeated Measures ANOVA. You should also evaluate and discuss the effect sizes of the main effects and interactions.
    If you want to do this in R, then you should use the same data in "long" format, in md593long.txt. (Conversion between long and wide data formats can be done with the reshape function in R.)
    These data are from Maxwell & Delaney (2004, p.593, Tables 12.7 and 12.15).
  2. See model answer for assignment 5.

lab 5: Tue 12 Oct

RM-ANOVA in SPSS and in R. Converting between wide and long data layouts. Interpreting results.
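
A minimal R sketch for this split-plot analysis is given below. The column names in md593long.txt (subject, group, treatment, rt) are assumptions; check them with names(long) and adjust where needed.

  long <- read.table("md593long.txt", header = TRUE)   # data in long format
  long$subject   <- factor(long$subject)
  long$group     <- factor(long$group)                 # between-subjects factor
  long$treatment <- factor(long$treatment)             # within-subjects factor
  summary(aov(rt ~ group * treatment + Error(subject/treatment), data = long))
  # the wide file can be converted to this long layout with reshape(..., direction = "long")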

session 6: Mon 18 Oct

Multiple regression, multivariate analyses. Collinearity. Factor Analysis.

Readings: correlation cartoon, xkcd.com/552
Assignments:
Your answers and solutions to the questions below have to be handed in as described above. As always, write clearly, correctly, and concisely.
  1. Answer the following questions: Moore, McCabe & Craig (2009), Chapter 11: Exercises 1, 2, 3, 4, 30.
    Data for the last question are available here in plain text format (the first line of this file contains variable names, SC stands for self-concept).

Forward or Backward?

For questions 16 and 33 the FORWARD method is most appropriate. This means that you start with an empty model (only intercept b0) to which predictors are added step by step. After each addition of a predictor, you check whether the model performs significantly better than before (e.g. by checking whether R² increases).
The questions are about the increment in R² by adding a predictor. The relevant information is easier to find in the SPSS output if you specify the FORWARD method.
As a bonus, you could check what happens if you exclude case #51 from the data set, e.g. by marking it as a missing value. This is quite easy if you keep the regression command in a Syntax window for repeated use.
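
A sketch of forward selection in R, tracking the increment in R², is given below. The file name and the column names (GPA, HSM, HSS, HSE) are assumptions based on the chapter; adjust them to the actual data file.

  d  <- read.table("gpa.txt", header = TRUE)       # placeholder file name
  m0 <- lm(GPA ~ 1, data = d)                      # empty model: intercept only
  m1 <- update(m0, . ~ . + HSM)
  m2 <- update(m1, . ~ . + HSS)
  summary(m1)$r.squared                            # R² after the first predictor
  summary(m2)$r.squared - summary(m1)$r.squared    # increment in R² from adding HSS
  add1(m0, scope = ~ HSM + HSS + HSE, test = "F")  # candidate first steps, with F tests
  # bonus: refit after excluding case #51, e.g.  d2 <- d[-51, ]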

HSS, SAT, GPA??

The chapter by Moore, McCabe & Craig draws heavily on American concepts. In the USA, your achievements are all that counts, in life as well as in study. The US grading system ranges from A+ (excellent) to F (fail).
For admission to a university, two things are taken into account: (a) your average grades in the final years of high school (HSM, HSS, HSE), and (b) your score on a national admissions exam, the Scholastic Aptitude Test (SAT), comparable to the Dutch CITO test. Top-class universities, like Harvard, Yale, Stanford, etc., use both criteria in selection. You have to be the best in your class (but your classmates are strongly competing for this honor), plus you need a minimal score on your SAT.
During your academic study, all your grades and results contribute to your Grade Point Average (GPA), a weighted average grade. This GPA is generally used as an indication of academic achievement and success. The authors attempt to predict the GPA from the previously obtained indicators (a) and (b).

regression

Why is it "regression"? This has to do with heredity, the field of biology where regression was first developed by Francis Galton (cousin of Charles Darwin) in the late 19th century.
Take a sample of fathers, and note their body length (X). Wait for one full generation, and measure the body length of each father's oldest adult son (Y). Make a scattergram of X and Y. The best-fitting line through the observations has a slope of less than 1 (typically about .65). This is because the sons' length Y tends to "regress to the mean" — outlier fathers tend to produce average sons, and average fathers also tend to produce average sons. Galton called this phenomenon "regression towards mediocrity". Thus the best-fitting line is a "regression" line because it shows the degree of regression to the mean, from one generation to the next. (Note that any slope larger than 0 suggests a hereditary component in the sons' body length, Y.)
Questions: Which variable has the larger variance, X or Y? Does the variation in body length increase or decrease (regress) over generations? Why?

partial correlation

The partial correlation between X1 and X2, with X3 removed from both, is given by:
r12.3 = ( r12 - r13·r23 ) / sqrt[ (1 - r²13)(1 - r²23) ]
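
In R this formula can be written as a small helper function (the numbers in the example call are made up):

  partial.r <- function(r12, r13, r23) {
    (r12 - r13 * r23) / sqrt((1 - r13^2) * (1 - r23^2))
  }
  partial.r(0.50, 0.30, 0.40)   # partial correlation r12.3 for three made-up correlations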

lab 6: Tue 19 Oct

Multiple regression.

session 7: Mon 25 Oct

Putting things together: mixed-effects modeling.

Readings:
Assignments:
Your answers and solutions to the questions below have to be handed in as described above. As always, write clearly, correctly, and concisely.
  1. Import the data analyzed by Quené & Van den Bergh (2008) into your statistical package. Perform the following analyses:
    1. Repeated Measures ANOVA, with subjects as random factor. (If you do this in SPSS you may need the data in wide format, as in file x24bysubj.txt. You need to import these data so that 36 consecutive rows constitute one case. In the SPSS data set, each row then corresponds to one subject. The 36 responses within a subject are ordered by condition. There are 24 subjects.)
      Hint: When you load the txt-file, make sure that you tick the box 'delimited' instead of the default 'fixed width' in step 2. In step 3 you then can specify the number of variables per case. --- Thanks to Marja Oudega
    2. Mixed-Effects (Regression) Analysis, with subjects as random factor. (For SPSS and R, you need the data in long format, as in file x24r2.txt, where each row corresponds to one response, with the subjects, items and conditions coded as separate columns).
    3. Mixed-Effects (Regression) Analysis, with subjects and items as two crossed random factors. (For SPSS and R, you need the data in long format again).
    Discuss your findings, and discuss the similarities and differences among the three analyses.

lab 7: Tue 26 Oct

Mixed-effects modeling in SPSS and in R.
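
A minimal R sketch of the three analyses, using the lme4 add-on package (assumed to be installed), is given below. The column names subject, item, condition and rt in x24r2.txt are assumptions, so check and adjust.

  library(lme4)
  long <- read.table("x24r2.txt", header = TRUE)
  long$condition <- factor(long$condition)     # treat condition as a categorical factor
  # (2) mixed-effects model with subjects as the only random factor
  m.subj  <- lmer(rt ~ condition + (1 | subject), data = long)
  # (3) mixed-effects model with subjects and items as crossed random factors
  m.cross <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = long)
  summary(m.cross)
  # (1) the classical RM-ANOVA can be run on the wide file in SPSS, or in R as
  #     summary(aov(rt ~ condition + Error(factor(subject)/condition), data = long))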

session 8: Mon 1 Nov

logistic regression, GLM.

Readings:
Links:
Assignments:
Your answers and solutions to the questions below have to be handed in as described above. As always, write clearly, correctly, and concisely.
  1. Answer the following questions: Moore, McCabe & Craig (2009), Chapter 14, Exercises 26, 28, 30, 43 (exercise numbers revised for 6th ed.). In order to speed up your work on exercise 14.43, I've put the data on the web, in a plain text data file. The first line contains the names of the variables. Data (N=2900) start on line 2, and are coded as follows:
    hospital:  0=hosp.A, 1=hosp.B;
    outcome:   0=died, 1=survived;
    condition: 0=poor, 1=good.
    
    Variables are separated by commas.
    In your logistic regression, the variables hospital and condition must be treated as categorical variables. For easier interpretation of the results, I prefer to use the zero codes as references or baselines (in SPSS choose Reference: First).
    SPSS does not provide you with 95% confidence intervals; you need to calculate these by hand. The Wald statistic in the SPSS output is the same as the test statistic for β as defined on p.46 in the reading material. (An R sketch of this by-hand calculation is given below.)
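
Here is an R sketch of that calculation, assuming a placeholder file name hospitals.txt for the data file described above (variable names and codes as listed).

  d <- read.csv("hospitals.txt")                     # comma-separated, names on line 1
  fit <- glm(outcome ~ factor(hospital) + factor(condition),
             family = binomial, data = d)
  summary(fit)                                       # z value = Estimate / Std. Error
  est   <- coef(summary(fit))
  lower <- est[, "Estimate"] - 1.96 * est[, "Std. Error"]
  upper <- est[, "Estimate"] + 1.96 * est[, "Std. Error"]
  exp(cbind(OR = coef(fit), lower, upper))           # odds ratios with 95% confidence intervals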

lab 8: Tue 2 Nov

GLM in SPSS and in R.

We're going to analyze the fictional data in file polder.txt.

final assignment

For your final assignment you can choose either of the two assignments described below. The deadline is Monday 8 Nov 2010, 23:59 h.

option one

This final assignment is to submit a revised or improved version of one previous assignment of this course. You're free to choose which one you want to revise.
As always, the revised paper should be (as much as possible) a running text, not a collection of incomplete sentences and statistical output.
In the revised version you have to accommodate the comments of your reviewer — if you agree of course. Also use the reading materials and hyperlinks provided.
You may discuss the reviewer's comments in the text of your revised version. But perhaps you find it easier to write a coherent (revised) text on your own, plus a second document with revision notes, in which you discuss the reviewer's comments explicitly, stating which comments you have taken into account, which comments you have ignored, and why.

option two

There are considerable similarities between analysis of variance (ANOVA) and multiple regression (MR), especially in designs without repeated measurements. You can read more about these similarities in the sources given below. Your assignment is to analyze a given dataset with both methods, and to discuss the differences and similarities between the two methods. The ANOVA must use a single independent variable named opleiding (type of study: 1=alfa, 2=beta, 3=gamma). The MR must use the so-called dummy factors named isalfa, isbeta, isgamma (0=false, 1=true, for each dummy factor), or a subset of these dummies. (Note that the given dataset already contains the categorical factor as well as the associated dummy factors.) Each row or unit represents a single participant of a fictional survey about students' work load. The dependent variable studietijd represents the time (in hours/week) a student spends on study-related activities. In your analyses, do not forget to inspect all relevant relationships between the factor(s) and the DV, to test whether assumptions are met, and to inspect residuals of all models.
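
A minimal R sketch of the two analyses (the file name werkdruk.txt is a placeholder; the variable names are as described above):

  d <- read.table("werkdruk.txt", header = TRUE)            # placeholder file name
  summary(aov(studietijd ~ factor(opleiding), data = d))    # one-way ANOVA
  summary(lm(studietijd ~ isbeta + isgamma, data = d))      # MR with dummies; isalfa is the baseline
  # the two models are expected to yield the same overall F test and R²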

Sources:

Please remember to evaluate this course. Go to www.let.uu.nl/oce, log in with your SolisID, and fill in the evaluation about this course.
After 1 Nov 2010, all courses at the Faculty of Humanities, including this one, will be evaluated according to a new protocol, and without involvement of the coordinator and/or faculty. More information should reach you soon.

Post Mortem remarks

The likelihood L of a statistical model refers to the probability of obtaining the data with the model, i.e., the probability that a regression model with estimated regression coefficients would yield the data set under study. Thus likelihood refers to the probability of the data given the model, and not to the probability of the model given the data. For computational purposes we often work with the logarithm of the likelihood L, log(L), rather than with the raw likelihood.
The notion of likelihood is similar to that of significance, which also refers to the probability of a test statistic (e.g. F or t, computed from the data) given a model (e.g. H0), and not to the probability of H0 given the data.

Assignment 14.26: Gender bias in syntactic textbooks. The point is not that there are more male than female references. That is a bit like one hospital taking in a higher number of patients than the other hospital. Instead, the point is that the odds of a female reference being juvenile are 4:1 (P=.80), whereas the odds of a male reference being juvenile are 0.65:1 (P=.39). Hence the odds of a reference being juvenile are 6.15 times as large for females as for males: the odds ratio, i.e. the ratio of odds, is 4.0/0.65 = 6.15. That does not imply that the proportion is 6.15 times as large for females as for males, but it does state that the odds are 6.15 times as large. Assignment 14.28 shows that this gender difference (in the odds of a reference being juvenile) is indeed significant. So the gender bias is not that there are more female than male references, but that the chance of a reference being juvenile is larger for female than for male references.


Further reading and browsing


© 2003-2010 HQ 2010.11.24
