Related SAS Tutorials
Related SPSS Tutorials
Here is a summary of the tests we will learn for the scenario where k = 2. Methods in BOLD will be our main focus.
We have completed our discussion on dependent samples (2nd column) and now we move on to independent samples (1st column).
Independent Samples (More Emphasis)
- Standard Tests
- Non-Parametric Test

Dependent Samples (Less Emphasis)
- Standard Test
- Non-Parametric Tests

We have discussed the dependent sample case, where observations are matched/paired/linked between the two samples. Recall that in that scenario the paired observations can come from the same individual measured twice or from two individuals who are matched between samples. To analyze data from dependent samples, we simply took the differences and analyzed them using one-sample techniques.
Now we will discuss the independent sample case. In this case, all individuals are independent of all other individuals in their sample as well as all individuals in the other sample. This is most often accomplished by either:
Recall that here we are interested in the effect of a two-valued (k = 2) categorical variable (X) on a quantitative response (Y). Random samples from the two subpopulations (defined by the two categories of X) are obtained and we need to evaluate whether or not the data provide enough evidence for us to believe that the two subpopulation means are different.
In other words, our goal is to test whether the means μ_{1} and μ_{2} (which are the means of the variable of interest in the two subpopulations) are equal or not, and in order to do that we have two samples, one from each subpopulation, which were chosen independently of each other.
The test that we will learn here is commonly known as the two-sample t-test. As the name suggests, this is a t-test, which as we know means that the p-values for this test are calculated under some t-distribution.
Here are figures that illustrate some of the examples we will cover. Notice how the original variables X (categorical variable with two levels) and Y (quantitative variable) are represented. Think about the fact that we are in case C → Q!
As in our discussion of dependent samples, we will often simplify our terminology and simply use the terms “population 1” and “population 2” instead of referring to these as subpopulations. Either terminology is fine.
Question: Does it matter which population we label as population 1 and which as population 2?
Answer: No, it does not matter as long as you are consistent, meaning that you do not switch labels in the middle.
Recall that our goal is to compare the means μ_{1} and μ_{2} based on the two independent samples.
The hypotheses represent our goal to compare μ_{1} and μ_{2}.
The null hypothesis is always:
Ho: μ_{1} – μ_{2} = 0 (which is the same as μ_{1} = μ_{2})
(There IS NO association between the categorical explanatory variable and the quantitative response variable)
We will focus on the two-sided alternative hypothesis of the form:
Ha: μ_{1} – μ_{2} ≠ 0 (which is the same as μ_{1} ≠ μ_{2}) (two-sided)
(There IS AN association between the categorical explanatory variable and the quantitative response variable)
Note that the null hypothesis claims that there is no difference between the means. Conceptually, Ho claims that there is no relationship between the two relevant variables (X and Y).
Our parameter of interest in this case (the parameter about which we are making an inference) is the difference between the means (μ_{1} – μ_{2}) and the null value is 0. The alternative hypothesis claims that there is a difference between the means.
The two-sample t-test can be safely used as long as the following conditions are met:
The two samples are indeed independent.
We are in one of the following two scenarios:
(i) Both populations are normal, or more specifically, the distribution of the response Y in both populations is normal, and both samples are random (or at least can be considered as such). In practice, checking normality in the populations is done by looking at each of the samples using a histogram and checking whether there are any signs that the populations are not normal. Such signs could be extreme skewness and/or extreme outliers.
(ii) The populations are known or discovered not to be normal, but the sample size of each of the random samples is large enough (we can use the rule of thumb that a sample size greater than 30 is considered large enough).
Assuming that we can safely use the two-sample t-test, we need to summarize the data, and in particular, calculate our data summary—the test statistic.
Test Statistic for the Two-Sample T-test:
There are two choices for our test statistic, and we must choose the appropriate one to summarize our data. We will see how to choose between the two test statistics in the next section. The two options are as follows:
We use the following notation to describe our samples:
Here are the two cases for our test statistic.
(A) Equal Variances: If it is safe to assume that the two populations have equal standard deviations, we can pool our estimates of this common population standard deviation and use the following test statistic.
where
(B) Unequal Variances: If it is NOT safe to assume that the two populations have equal standard deviations, we have unequal standard deviations and must use the following test statistic.
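Both versions of the test statistic can be written out directly from the sample summaries. Here is a minimal Python sketch of the two formulas (the function name and sample data below are mine for illustration; in this course the actual computation is left to SAS/SPSS):

```python
from math import sqrt
from statistics import mean, stdev

def two_sample_t(sample1, sample2, equal_var=True):
    """Two-sample t statistic for Ho: mu1 - mu2 = 0."""
    n1, n2 = len(sample1), len(sample2)
    x1, x2 = mean(sample1), mean(sample2)
    s1, s2 = stdev(sample1), stdev(sample2)  # sample standard deviations
    if equal_var:
        # (A) Equal variances: pool the two estimates of the common
        # population standard deviation.
        sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        return (x1 - x2) / (sp * sqrt(1 / n1 + 1 / n2))
    # (B) Unequal variances: each sample keeps its own variance.
    return (x1 - x2) / sqrt(s1**2 / n1 + s2**2 / n2)
```

With equal sample spreads and sizes, as in the toy data `[1,2,3,4,5]` vs. `[2,3,4,5,6]`, the two versions coincide; they differ when the sample variances or sizes differ.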
Comments:
Each of these tests relies on a particular t-distribution under which the p-values are calculated. In the case where equal variances are assumed, the degrees of freedom are simply:
whereas in the case of unequal variances, the formula for the degrees of freedom is more complex. We will rely on the software to obtain the degrees of freedom in both cases and to provide us with the correct p-value (usually this will be a two-sided p-value).
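For reference, the "more complex" unequal-variance degrees of freedom come from the Welch–Satterthwaite approximation, sketched below (function name is mine; software computes this for you):

```python
def welch_df(s1, s2, n1, n2):
    """Welch-Satterthwaite approximate degrees of freedom for the
    unequal-variance two-sample t-test (s1, s2 are sample std devs)."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
```

When the two sample variances and sizes are equal, this reduces to n1 + n2 – 2, the equal-variance degrees of freedom; otherwise it is smaller, and usually not a whole number.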
As usual, we draw our conclusion based on the p-value. Be sure to write your conclusions in context by specifying your current variables and/or precisely describing the difference in population means in terms of the current variables.
If the p-value is small, there is a statistically significant difference between what was observed in the sample and what was claimed in Ho, so we reject Ho.
Conclusion: There is enough evidence that the categorical explanatory variable is related to (or associated with) the quantitative response variable. More specifically, there is enough evidence that the difference in population means is not equal to zero.
If the p-value is not small, we do not have enough statistical evidence to reject Ho.
Conclusion: There is NOT enough evidence that the categorical explanatory variable is related to (or associated with) the quantitative response variable. More specifically, there is NOT enough evidence that the difference in population means is not equal to zero.
In particular, if a cutoff probability, α (significance level), is specified, we reject Ho if the p-value is less than α. Otherwise, we do not reject Ho.
As in previous methods, we can follow up with a confidence interval for the difference between population means, μ_{1} – μ_{2}, and interpret this interval in the context of the problem.
Interpretation: We are 95% confident that the population mean for (one group) is between __________________ compared to the population mean for (the other group).
Confidence intervals can also be used to determine whether or not to reject the null hypothesis of the test based upon whether or not the null value of zero falls outside the interval or inside.
If the null value, 0, falls outside the confidence interval, Ho is rejected. (Zero is NOT a plausible value based upon the confidence interval)
If the null value, 0, falls inside the confidence interval, Ho is not rejected. (Zero IS a plausible value based upon the confidence interval)
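The two bullets above amount to a simple decision rule, sketched here as a tiny helper (the function name is mine, not from any package):

```python
def reject_h0_from_ci(lower, upper, null_value=0.0):
    """Reject Ho exactly when the null value falls outside the
    confidence interval for the difference in means."""
    return not (lower <= null_value <= upper)
```

For example, an interval of (0.3, 1.2) excludes zero, so Ho is rejected; an interval of (-0.4, 0.9) contains zero, so it is not.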
NOTE: Be careful to choose the correct confidence interval about the difference between population means using the same assumption (variances equal or variances unequal) and not the individual confidence intervals for the means in the groups themselves.
Since we have two possible tests we can conduct, based upon whether or not we can assume the population standard deviations (or variances) are equal, we need a method to determine which test to use.
Although you can make a reasonable guess using information from the data (i.e., look at the distributions and estimates of the standard deviations and see if you feel they are reasonably equal), we have a test which can help us here, called the test for Equality of Variances. This output is automatically displayed in many software packages when a two-sample t-test is requested, although the particular test used may vary. The hypotheses of this test are:
Ho: σ_{1} = σ_{2} (the standard deviations in the two populations are the same)
Ha: σ_{1} ≠ σ_{2} (the standard deviations in the two populations are not the same)
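One informal check, before reading a formal test's p-value off the software output, is the ratio of the sample variances. A common rule of thumb treats a ratio below about 2 as consistent with equal population variances, though the exact threshold varies by textbook and the formal test should take precedence. A sketch (function name mine):

```python
def variance_ratio(s1, s2):
    """Ratio of the larger sample variance to the smaller one.

    Rule of thumb (varies by textbook): a ratio below roughly 2
    suggests the equal-variance assumption is reasonable.
    """
    v1, v2 = s1**2, s2**2
    return max(v1, v2) / min(v1, v2)
```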
Now let’s look at a complete example of conducting a two-sample t-test, including the embedded test for equality of variances.
This question was asked of a random sample of 239 college students, who were to answer on a scale of 1 to 25. An answer of 1 means personality has maximum importance and looks no importance at all, whereas an answer of 25 means looks have maximum importance and personality no importance at all. The purpose of this survey was to examine whether males and females differ with respect to the importance of looks vs. personality.
Note that the data have the following format:
Score (Y)  Gender (X) 
15  Male 
13  Female 
10  Female 
12  Male 
14  Female 
14  Male 
6  Male 
17  Male 
etc. 
The format of the data reminds us that we are essentially examining the relationship between the two-valued categorical variable, gender, and the quantitative response, score. The two values of the categorical explanatory variable (k = 2) define the two populations that we are comparing — males and females. The comparison is with respect to the response variable score. Here is a figure that summarizes the example:
Comments:
Step 1: State the hypotheses
Recall that the purpose of this survey was to examine whether the opinions of females and males differ with respect to the importance of looks vs. personality. The hypotheses in this case are therefore:
Ho: μ_{1} – μ_{2} = 0 (which is the same as μ_{1} = μ_{2})
Ha: μ_{1} – μ_{2} ≠ 0 (which is the same as μ_{1} ≠ μ_{2})
where μ_{1} represents the mean “looks vs personality score” for females and μ_{2} represents the mean “looks vs personality score” for males.
It is important to understand that conceptually, the two hypotheses claim:
Ho: Score (of looks vs. personality) is not related to gender
Ha: Score (of looks vs. personality) is related to gender
Step 2: Obtain data, check conditions, and summarize data
The output might also be broken up if you export or copy the items in certain ways. The results are the same, but it can be more difficult to read.
Step 3: Find the p-value of the test by using the test statistic as follows
Step 4: Conclusion
As usual, a small p-value provides evidence against Ho. In our case, our p-value is practically 0 (smaller than any significance level we would choose). The data therefore provide very strong evidence against Ho, so we reject it.
As a follow-up to this conclusion, we can construct a confidence interval for the difference between population means. In this case, we will construct a confidence interval for μ_{1} – μ_{2}, the population mean “looks vs. personality score” for females minus the population mean “looks vs. personality score” for males.
Practical Significance:
We should definitely ask ourselves if this difference is practically significant.
SPSS Output for this example (Non-Parametric Output for Examples 1 and 2)
SAS Output and SAS Code (Includes Non-Parametric Test)
Here is another example.
A study was conducted which enrolled and followed heart attack patients in a certain metropolitan area. In this example we are interested in determining if there is a relationship between Body Mass Index (BMI) and gender. Individuals presenting to the hospital with a heart attack were randomly selected to participate in the study.
Step 1: State the hypotheses
Ho: μ_{1} – μ_{2} = 0 (which is the same as μ_{1} = μ_{2})
Ha: μ_{1} – μ_{2} ≠ 0 (which is the same as μ_{1} ≠ μ_{2})
where μ_{1} represents the mean BMI for males and μ_{2} represents the mean BMI for females.
It is important to understand that conceptually, the two hypotheses claim:
Ho: BMI is not related to gender in heart attack patients
Ha: BMI is related to gender in heart attack patients
Step 2: Obtain data, check conditions, and summarize data
Step 3: Find the p-value of the test by using the test statistic as follows
Step 4: Conclusion
As usual, a small p-value provides evidence against Ho. In our case, our p-value is 0.001 (smaller than any standard significance level). The data therefore provide very strong evidence against Ho, so we reject it.
As a follow-up to this conclusion, we can construct a confidence interval for the difference between population means. In this case, we will construct a confidence interval for μ_{1} – μ_{2}, the population mean BMI for males minus the population mean BMI for females.
Practical Significance:
SPSS Output for this example (Non-Parametric Output for Examples 1 and 2)
SAS Output and SAS Code (Includes Non-Parametric Test)
Note: In the SAS output the variable gender is not formatted, in this case Males = 0 and Females = 1.
Comments:
You might ask yourself: “Where do we use the test statistic?”
It is true that for all practical purposes all we have to do is check that the conditions which allow us to use the two-sample t-test are met, lift the p-value from the output, and draw our conclusions accordingly.
However, we feel that it is important to mention the test statistic for two reasons:
Now try some more activities for yourself.
We will look at one nonparametric test in the two independent samples setting. More details will be discussed later (Details for Non-Parametric Alternatives).
Related SAS Tutorials
Related SPSS Tutorials
We are in Case C→Q of inference about relationships, where the explanatory variable is categorical and the response variable is quantitative.
As we mentioned in the summary of the introduction to Case C→Q, the first case that we will deal with is that involving matched pairs. In this case:
Notice from this point forward we will use the terms population 1 and population 2 instead of subpopulation 1 and subpopulation 2. Either terminology is correct.
One of the most common cases where dependent samples occur is when both samples have the same subjects and they are “paired by subject.” In other words, each subject is measured twice on the response variable, typically before and then after some kind of treatment/intervention in order to assess its effectiveness.
Suppose you want to assess the effectiveness of an SAT prep class.
It would make sense to use the matched pairs design and record each sampled student’s SAT score before and after the SAT prep classes are attended:
Recall that the two populations represent the two values of the explanatory variable. In this situation, those two values come from a single set of subjects.
This, however, is not the only case where the paired design is used. Other cases are when the pairs are “natural pairs,” such as siblings, twins, or couples.
Notes about graphical summaries for paired data in Case C→Q:
The idea behind the paired t-test is to reduce this two-sample situation, where we are comparing two means, to a single-sample situation, where we are doing inference on a single mean, and then use the simple t-test that we introduced in the previous module.
In this setting, we can easily reduce the raw data to a set of differences and conduct a one-sample t-test.
In other words, by reducing the two samples to one sample of differences, we are essentially reducing the problem from a problem where we’re comparing two means (i.e., doing inference on μ_{1}−μ_{2}) to a problem in which we are studying one mean.
In general, in every matched pairs problem, our data consist of 2 samples which are organized in n pairs:
We reduce the two samples to only one by calculating the difference between the two observations for each pair.
For example, think of Sample 1 as “before” and Sample 2 as “after”. We can find the difference between the before and after results for each participant, which gives us only one sample, namely “before – after”. We label this difference as “d” in the illustration below.
The paired t-test is based on this one sample of n differences,
and it uses those differences as data for a one-sample t-test on a single mean — the mean of the differences.
This is the general idea behind the paired t-test; it is nothing more than a regular one-sample t-test for the mean of the differences!
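To make the reduction concrete, here is a minimal Python sketch: compute the pairwise differences, then form the one-sample t statistic on them (the function name and the toy data in the test are mine; in this course, software performs this step):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic: a one-sample t-test on the differences
    d = before - after, testing Ho: mu_d = 0."""
    d = [b - a for b, a in zip(before, after)]
    n = len(d)
    # t = (mean of differences) / (standard error of the mean difference)
    return mean(d) / (stdev(d) / sqrt(n))
```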
We will now go through the 4-step process of the paired t-test.
Recall that in the t-test for a single mean our null hypothesis was Ho: μ = μ_{0} and the alternative was one of Ha: μ < μ_{0} or μ > μ_{0} or μ ≠ μ_{0}. Since the paired t-test is a special case of the one-sample t-test, the hypotheses are the same except that:
Instead of simply μ we use the notation μ_{d} to denote that the parameter of interest is the mean of the differences.
In this course our null value μ_{0} is always 0. In other words, going back to our original paired samples, our null hypothesis claims that there is no difference between the two means. (Technically, it does not have to be zero if you are interested in a more specific difference – for example, you might be interested in showing that there is a reduction in blood pressure of more than 10 points – but we will not specifically look at such situations.)
Therefore, in the paired t-test, the null hypothesis is always:
Ho: μ_{d} = 0
(There IS NO association between the categorical explanatory variable and the quantitative response variable)
We will focus on the two-sided alternative hypothesis of the form:
Ha: μ_{d} ≠ 0
(There IS AN association between the categorical explanatory variable and the quantitative response variable)
Some students find it helpful to know that it turns out that μ_{d} = μ_{1} – μ_{2} (in other words, the difference between the means is the same as the mean of the differences). You may find it easier to first think about the hypotheses in terms of μ_{1} – μ_{2} and then represent them in terms of μ_{d}.
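This identity is easy to verify numerically with any made-up paired data (the numbers below are hypothetical, purely for illustration):

```python
from statistics import mean

sample1 = [15, 13, 12, 10]   # hypothetical "before" scores
sample2 = [14, 11, 12, 9]    # hypothetical "after" scores
diffs = [a - b for a, b in zip(sample1, sample2)]

# The mean of the differences equals the difference of the means.
print(mean(diffs))                    # 1.0
print(mean(sample1) - mean(sample2))  # 1.0
```

This holds for any paired data, because averaging is a linear operation.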
The paired t-test, as a special case of a one-sample t-test, can be safely used as long as:
The sample of differences is random (or at least can be considered random in context).
The distribution of the differences in the population is approximately normal if the sample is small. If the sample size is large, it is safe to use the paired t-test regardless of whether the differences vary normally or not. This condition is satisfied in the three situations marked by a green check mark in the table below.
Note: normality is checked by looking at the histogram of differences, and as long as no clear violation of normality (such as extreme skewness and/or outliers) is apparent, the normality assumption is reasonable.
Assuming that we can safely use the paired t-test, the data are summarized by a test statistic:
where
This test statistic measures (in standard errors) how far our data are (represented by the sample mean of the differences) from the null hypothesis (represented by the null value, 0).
Notice this test statistic has the same general form as those discussed earlier:
As a special case of the one-sample t-test, the null distribution of the paired t-test statistic is a t-distribution (with n – 1 degrees of freedom), which is the distribution under which the p-values are calculated. We will use software to find the p-value for us.
As usual, we draw our conclusion based on the p-value. Be sure to write your conclusions in context by specifying your current variables and/or precisely describing the population mean difference in terms of the current variables.
In particular, if a cutoff probability, α (significance level), is specified, we reject Ho if the p-value is less than α. Otherwise, we fail to reject Ho.
If the p-value is small, there is a statistically significant difference between what was observed in the sample and what was claimed in Ho, so we reject Ho.
Conclusion: There is enough evidence that the categorical explanatory variable is associated with the quantitative response variable. More specifically, there is enough evidence that the population mean difference is not equal to zero.
Remember: a small p-value tells us that there is very little chance of getting data like those observed (or even more extreme) if the null hypothesis were true. Therefore, a small p-value indicates that we should reject the null hypothesis.
If the p-value is not small, we do not have enough statistical evidence to reject Ho.
Conclusion: There is NOT enough evidence that the categorical explanatory variable is associated with the quantitative response variable. More specifically, there is NOT enough evidence that the population mean difference is not equal to zero.
Notice how much better the first sentence sounds! It can get difficult to correctly phrase these conclusions in terms of the mean difference without confusing double negatives.
As in previous methods, we can follow up with a confidence interval for the mean difference, μ_{d}, and interpret this interval in the context of the problem.
Interpretation: We are 95% confident that the population mean difference (described in context) is between (lower bound) and (upper bound).
Confidence intervals can also be used to determine whether or not to reject the null hypothesis of the test based upon whether or not the null value of zero falls outside the interval or inside.
If the null value, 0, falls outside the confidence interval, Ho is rejected. (Zero is NOT a plausible value based upon the confidence interval)
If the null value, 0, falls inside the confidence interval, Ho is not rejected. (Zero IS a plausible value based upon the confidence interval)
NOTE: Be careful to choose the correct confidence interval about the population mean difference and not the individual confidence intervals for the means in the groups themselves.
Now let’s look at an example.
Note: In some of the videos presented in the course materials, we do conduct the one-sided test for this data instead of the two-sided test we conduct below. In Unit 4B we are going to restrict our attention to two-sided tests supplemented by confidence intervals as needed to provide more information about the effect of interest.
Drunk driving is one of the main causes of car accidents. Interviews with drunk drivers who were involved in accidents and survived revealed that one of the main problems is that drivers do not realize that they are impaired, thinking “I only had 1-2 drinks … I am OK to drive.”
A sample of 20 drivers was chosen, and their reaction times in an obstacle course were measured before and after drinking two beers. The purpose of this study was to check whether drivers are impaired after drinking two beers. Here is a figure summarizing this study:
Since the measurements are paired, we can easily reduce the raw data to a set of differences and conduct a one-sample t-test.
Here are some of the results for this data:
Step 1: State the hypotheses
We define μ_{d} = the population mean difference in reaction times (Before – After).
As we mentioned, the null hypothesis is:
The null hypothesis claims that the differences in reaction times are centered at (or around) 0, indicating that drinking two beers has no real impact on reaction times. In other words, drivers are not impaired after drinking two beers.
Although we really want to know whether their reaction times are longer after the two beers, we will still focus on conducting two-sided hypothesis tests. We will be able to address whether the reaction times are longer after two beers when we look at the confidence interval.
Therefore, we will use the twosided alternative:
Step 2: Obtain data, check conditions, and summarize data
Let’s first check whether we can safely proceed with the paired t-test, by checking the two conditions.
We can see from the histogram above that there is no evidence of violation of the normality assumption (on the contrary, the histogram looks quite normal).
Also note that the vast majority of the differences are negative (i.e., the total reaction times for most of the drivers are larger after the two beers), suggesting that the data provide evidence against the null hypothesis.
The question (which the p-value will answer) is whether these data provide strong enough evidence against the null hypothesis. We can safely proceed to calculate the test statistic (which in practice we leave to the software to calculate for us).
Test Statistic: We will use software to calculate the test statistic, which is t = -2.58.
Step 3: Find the p-value of the test by using the test statistic as follows
As a special case of the one-sample t-test, the null distribution of the paired t-test statistic is a t-distribution (with n – 1 degrees of freedom), which is the distribution under which the p-values are calculated.
We will let the software find the p-value for us; in this case, it gives us a p-value of 0.0183 (SAS) or 0.018 (SPSS).
The small p-value tells us that there is very little chance of getting data like those observed (or even more extreme) if the null hypothesis were true. More specifically, there is less than a 2% chance (0.018 = 1.8%) of obtaining a test statistic of -2.58 (or lower) or 2.58 (or higher), assuming that 2 beers have no impact on reaction times.
Step 4: Conclusion
In our example, the p-value is 0.018, indicating that the data provide enough evidence to reject Ho.
Follow-up Confidence Interval:
As a follow-up to this conclusion, we quantify the effect that two beers have on the driver, using the 95% confidence interval for μ_{d}.
Using statistical software, we find that the 95% confidence interval for μ_{d}, the mean of the differences (before – after), is roughly (-0.9, -0.1).
Note: Since the differences were calculated as before – after, longer reaction times after the beers translate into negative differences.
Since the confidence interval does not contain the null value of zero, we can use it to decide to reject the null hypothesis. Zero is not a plausible value of the population mean difference based upon the confidence interval. Notice that using this method is not always practical, as we often still need to provide the p-value in clinical research. (Note: this is NOT the interpretation of the confidence interval but a method of using the confidence interval to conduct a hypothesis test.)
Practical Significance:
We should definitely ask ourselves if this is practically significant, and I would argue that it is.
In the output, we are generally provided the two-sided p-value. We must be very careful when converting this to a one-sided p-value (if it is not provided by the software).
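The usual conversion rule can be sketched as follows: halve the two-sided p-value when the test statistic falls on the side that the one-sided alternative predicts, and use one minus that half otherwise (the helper name is mine):

```python
def one_sided_p(p_two_sided, t_stat, ha_direction):
    """Convert a two-sided p-value to a one-sided p-value.

    ha_direction: '>' for Ha: mu_d > 0, '<' for Ha: mu_d < 0.
    """
    agrees = (t_stat > 0) if ha_direction == '>' else (t_stat < 0)
    # Halve when the statistic supports Ha; otherwise most of the
    # probability lies on the other side.
    return p_two_sided / 2 if agrees else 1 - p_two_sided / 2
```

For the beers example (t = -2.58, two-sided p = 0.018), testing Ha: μ_{d} < 0 would give 0.009, while testing Ha: μ_{d} > 0 would give 0.991, not 0.009, which is exactly the mistake the caution above warns against.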
The “driving after having 2 beers” example is a case in which observations are paired by subject. In other words, both samples have the same subjects, so that each subject is measured twice. Typically, as in our example, one of the measurements occurs before a treatment/intervention (2 beers in our case), and the other measurement after the treatment/intervention.
Our next example is another typical type of study where the matched pairs design is used—it is a study involving twins.
Researchers have long been interested in the extent to which intelligence, as measured by IQ score, is affected by “nurture” as opposed to “nature”: that is, are people’s IQ scores mainly a result of their upbringing and environment, or are they mainly an inherited trait?
A study was designed to measure the effect of home environment on intelligence, or more specifically, the study was designed to address the question: “Are there statistically significant differences in IQ scores between people who were raised by their birth parents, and those who were raised by someone else?”
In order to be able to answer this question, the researchers needed to get two groups of subjects (one from the population of people who were raised by their birth parents, and one from the population of people who were raised by someone else) who are as similar as possible in all other respects. In particular, since genetic differences may also affect intelligence, the researchers wanted to control for this confounding factor.
We know from our discussion on study design (in the Producing Data unit of the course) that one way to (at least theoretically) control for all confounding factors is randomization—randomizing subjects to the different treatment groups. In this case, however, this is not possible. This is an observational study; you cannot randomize children to either be raised by their birth parents or to be raised by someone else. How else can we eliminate the genetics factor? We can conduct a “twin study.”
Because identical twins are genetically the same, a good design for obtaining information to answer this question would be to compare IQ scores for identical twins, one of whom is raised by birth parents and the other by someone else. Such a design (matched pairs) is an excellent way of making a comparison between individuals who only differ with respect to the explanatory variable of interest (upbringing) but are as alike as they can possibly be in all other important aspects (inborn intelligence). Identical twins raised apart were studied by Susan Farber, who published her studies in the book “Identical Twins Reared Apart” (1981, Basic Books).
In this problem, we are going to use the data that appear in Farber’s book in table E6, of the IQ scores of 32 pairs of identical twins who were reared apart.
Here is a figure that will help you understand this study:
Here are the important things to note in the figure:
Each of the 32 rows represents one pair of twins. Keeping the notation that we used above, twin 1 is the twin that was raised by his/her birth parents, and twin 2 is the twin that was raised by someone else. Let’s carry out the analysis.
Step 1: State the hypotheses
Recall that in matched pairs, we reduce the data from two samples to one sample of differences:
The hypotheses are stated in terms of the mean of the difference where, μ_{d} = population mean difference in IQ scores (Birth Parents – Someone Else):
Step 2: Obtain data, check conditions, and summarize data
Is it safe to use the paired ttest in this case?
The data don’t reveal anything that we should be worried about (like very extreme skewness or outliers), so we can safely proceed. Looking at the histogram, we note that most of the differences are negative, indicating that in most of the 32 pairs of twins, twin 2 (raised by someone else) has a higher IQ.
From this point we rely on statistical software, and find that:
Our test statistic is t = -1.85.
Our data (represented by the sample mean of the differences) are 1.85 standard errors below the null hypothesis (represented by the null value, 0).
Step 3: Find the p-value of the test by using the test statistic as follows
The p-value is 0.074, indicating that there is a 7.4% chance of obtaining data like those observed (or even more extreme) assuming that H_{o} is true (i.e., assuming that there are no differences in IQ scores between people who were raised by their natural parents and those who weren’t).
Step 4: Conclusion
Using the conventional significance level (cutoff probability) of 0.05, our p-value is not small enough, and we therefore cannot reject H_{o}.
Confidence Interval:
The 95% confidence interval for the population mean difference is (-6.11322, 0.30072).
Interpretation:
This confidence interval does contain zero and thus results in the same conclusion to the hypothesis test. Zero IS a plausible value of the population mean difference and thus we cannot reject the null hypothesis.
Practical Significance:
It is very important to pay attention to whether the two-sample t-test or the paired t-test is appropriate. In other words, being aware of the study design is extremely important. Consider the “driving after having 2 beers” example: if we had not “caught” that this is a matched pairs design, and had analyzed the data as if the two samples were independent using the two-sample t-test, we would have obtained a p-value of 0.114.
Note that using this (wrong) method to analyze the data, and a significance level of 0.05, we would conclude that the data do not provide enough evidence for us to conclude that reaction times differed after drinking two beers. This is an example of how using the wrong statistical method can lead you to wrong conclusions, which in this context can have very serious implications.
Comments:
Now try a complete example for yourself.
Here are two other datasets with paired samples.
The statistical tests we have previously discussed (and many we will discuss) require assumptions about the distribution in the population or about the requirements to use a certain approximation as the sampling distribution. These methods are called parametric.
When these assumptions are not valid, alternative methods often exist to test similar hypotheses. Tests which require only minimal distributional assumptions, if any, are called nonparametric or distribution-free tests.
At the end of this section we will provide some details (see Details for Non-Parametric Alternatives); for now we simply want to mention that there are two common nonparametric alternatives to the paired t-test: the sign test and the Wilcoxon signed-rank test.
The fact that both of these tests have the word “sign” in them is not a coincidence – it is because we are interested in whether the differences have a positive sign or a negative sign – and the fact that this word appears in both tests can help you remember that they correspond to paired methods, where we are often interested in whether there was an increase (positive sign) or a decrease (negative sign).
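As a preview of how the simpler of the two, the sign test, uses only the signs of the differences: under Ho the number of positive differences is Binomial(n, 0.5), which yields an exact p-value. A sketch (function name is mine; your software's implementation may handle ties differently):

```python
from math import comb

def sign_test_p(diffs):
    """Exact two-sided sign-test p-value for Ho: median difference = 0.

    Zero differences are dropped; under Ho the count of positive signs
    among the remaining n differences is Binomial(n, 0.5).
    """
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(1 for d in nonzero if d > 0)
    # Two-sided p-value: twice the smaller tail probability, capped at 1.
    tail = min(k, n - k)
    p = sum(comb(n, i) for i in range(tail + 1)) / 2**n
    return min(1.0, 2 * p)
```

For instance, five differences that are all positive give a two-sided p-value of 2 × (1/32) = 0.0625, while a perfectly balanced set of signs gives 1.0.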