- Slides 1-6
  - Introduction
  - Effect of Sample Size on Hypothesis Testing

- Slides 7-11
  - Statistical Significance vs. Practical Importance

- Slides 12-17
  - Using Confidence Intervals to Conduct Hypothesis Tests

- Slides 18-21
  - What Confidence Intervals ADD to our analyses
  - Summary

This document is linked from More about Hypothesis Testing.

View Lecture Slides with Transcript – Population Proportions

This document is linked from Population Proportions.

To test the first figure, let p be the proportion of email users who feel that spam has increased in their personal email. The first set of hypotheses that the student wants to test is then:

Ho: p = 0.37

Ha: p ≠ 0.37

Based upon the data collected by this student, a 95% confidence interval for p was found to be:

(0.25, 0.32).

For testing the second figure in the report, let p be the proportion of email users who feel that spam has increased in their work email. The second set of hypotheses that the student wants to test is then:

Ho: p = 0.29

Ha: p ≠ 0.29

Based upon the data collected by this student, a 95% confidence interval for p was found to be:

(0.273, 0.304).

According to a study completed in 2006 by Pew Internet, 42% of all Americans had a broadband Internet connection at home. This same statistics student wanted to see if this percentage is different for students at his university.

Ho: p = 0.42

Ha: p ≠ 0.42

Based upon the data the student collected, a 95% confidence interval for p was found to be:

(0.439, 0.457).

According to the same Pew Internet study, 8% of those with broadband connections are using fixed wireless. Let p be the proportion of broadband users who use fixed wireless, and consider the hypotheses:

Ho: p = 0.08

Ha: p ≠ 0.08

This document is linked from More about Hypothesis Testing.

Ho: p = 0.087

Ha: p ≠ 0.087

Based on the collected data, a 95% confidence interval for p was found to be (0.08, 0.14).

The UCLA Internet Report (February 2003) estimated that roughly 60.5% of U.S. adults use the Internet at work for personal use. A follow-up study was conducted in order to explore whether that figure has changed since. Let p be the proportion of U.S. adults who use the Internet at work for personal use.

Ho: p = 0.605

Ha: p ≠ 0.605

Based on the collected data, the p-value of the test was found to be 0.001.

This document is linked from More about Hypothesis Testing.

**Background:**

Recall from a previous activity the results of a study on the safety of airplane drinking water that was conducted by the U.S. Environmental Protection Agency (EPA). The study found that out of a random sample of 316 airplanes tested, 40 had coliform bacteria in the drinking water drawn from restrooms and kitchens. As a benchmark comparison, in 2003 the EPA found that about 3.5% of the U.S. population has coliform bacteria-infected drinking water. The question of interest is whether, based on the results of this study, we can conclude that drinking water on airplanes is more contaminated than drinking water in general. Let p be the proportion of contaminated drinking water on airplanes.

In a previous activity we tested Ho: p = 0.035 vs. Ha: p > 0.035 and found that the data provided extremely strong evidence to reject Ho. We concluded that the proportion of contaminated drinking water in airplanes is larger than the proportion of contaminated drinking water in general (which is 0.035).

Now that we’ve reached that conclusion, all we know about p is that we have very strong evidence that it is higher than 0.035; we have no sense of its magnitude. It therefore makes sense to follow up the test by estimating p with a 95% confidence interval.
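A minimal sketch of that follow-up interval, using the normal-approximation formula p-hat ± 1.96·sqrt(p-hat(1 − p-hat)/n) and the counts reported above (the helper name `prop_ci` is ours):

```python
import math

def prop_ci(x, n, z=1.96):
    """Normal-approximation 95% CI for a proportion:
    p-hat +/- z * sqrt(p-hat * (1 - p-hat) / n)."""
    p_hat = x / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 40 of the 316 sampled airplanes had contaminated water.
lo, hi = prop_ci(40, 316)
print(f"({lo:.3f}, {hi:.3f})")  # → (0.090, 0.163)
```

The interval lies well above 0.035, consistent with the test's conclusion, and also tells us roughly how contaminated airplane water appears to be.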

This document is linked from More about Hypothesis Testing.

The issues regarding hypothesis testing that we will discuss are:

- The effect of sample size on hypothesis testing.
- Statistical significance vs. practical importance.
- Hypothesis testing and confidence intervals—how are they related?

Let’s begin.

We have already seen the effect that the sample size has on inference, when we discussed point and interval estimation for the population mean (μ, mu) and population proportion (p). Intuitively …

Larger sample sizes give us more information to pin down the true nature of the population. We can therefore expect the **sample** mean and **sample** proportion obtained from a larger sample to be closer to the population mean and proportion, respectively. As a result, for the same level of confidence, we can report a smaller margin of error and get a narrower confidence interval. What we’ve seen, then, is that a larger sample size gives a boost to how much we trust our sample results.
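The shrinking margin of error follows directly from the 1/sqrt(n) in the formula; a small illustration (the worst-case p-hat = 0.5 and the helper name are our choices):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Margin of error of a 95% CI for a proportion: z * sqrt(p-hat(1 - p-hat)/n)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Quadrupling the sample size halves the margin of error (1/sqrt(n) behavior).
for n in (100, 400):
    print(f"n={n}: margin of error = {margin_of_error(0.5, n):.3f}")
```

With n = 100 the margin of error is about 0.098; with n = 400 it drops to about 0.049.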

In hypothesis testing, larger sample sizes have a similar effect. We have also discussed that the power of our test increases when the sample size increases, all else remaining the same. This means we have a better chance of detecting the difference between the true value and the null value with larger samples.

The following two examples will illustrate that a larger sample size provides more convincing evidence (the test has greater power), and how the evidence manifests itself in hypothesis testing. Let’s go back to our example 2 (marijuana use at a certain liberal arts college).

Is the proportion of marijuana users in the college higher than the national figure?

We do **not** have enough evidence to conclude that the proportion of students at the college who use marijuana is higher than the national figure.

**Now, let’s increase the sample size.**

There are rumors that students in a certain liberal arts college are more inclined to use drugs than U.S. college students in general. Suppose that **in a simple random sample of 400 students from the college, 76 admitted to marijuana use**. Do the data provide enough evidence to conclude that the proportion of marijuana users among the students in the college (p) is **higher** than the national proportion, which is 0.157? (Reported by the Harvard School of Public Health).

Our results here are statistically **significant**. In other words, in example 2* the data provide enough evidence to reject Ho.

**Conclusion:** There is enough evidence that the proportion of marijuana users at the college is higher than among all U.S. students.

What do we learn from this?

We see that sample results that are based on a larger sample carry more weight (have greater power).

In example 2, we saw that a sample proportion of 0.19 based on a sample of size 100 was not enough evidence that the proportion of marijuana users in the college is higher than 0.157. Recall, from our general overview of hypothesis testing, that this conclusion (not having enough evidence to reject the null hypothesis) **doesn’t** mean the null hypothesis is necessarily true (so, we never “accept” the null); it only means that the particular study didn’t yield sufficient evidence to reject the null. It **might** be that the sample size was simply too small to detect a statistically significant difference.

However, in example 2*, we saw that when the sample proportion of 0.19 is obtained from a sample of size 400, it carries much more weight, and in particular, provides enough evidence that the proportion of marijuana users in the college is higher than 0.157 (the national figure). In **this** case, the sample size of 400 **was** large enough to detect a statistically significant difference.
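To see the effect numerically, here is a sketch of the one-sided z-test computed for the same sample proportion (0.19) at both sample sizes, using the null-based standard error sqrt(p0(1 − p0)/n):

```python
import math

p_hat, p0 = 0.19, 0.157
for n in (100, 400):
    se = math.sqrt(p0 * (1 - p0) / n)            # null-based standard error
    z = (p_hat - p0) / se                        # test statistic
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided: P(Z >= z)
    print(f"n={n}: z={z:.2f}, one-sided p-value={p_value:.3f}")
```

With n = 100 the p-value is about 0.18 (not significant at the 0.05 level); with n = 400 the same sample proportion gives a p-value of about 0.035 (significant).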

The following activity will allow you to practice the ideas and terminology used in hypothesis testing when a result is not statistically significant.

Now, we will address the issue of statistical significance versus practical importance (which also involves issues of sample size).

The following activity will let you explore the effect of the sample size on the statistical significance of the results yourself, and, more importantly, will discuss issue **2: Statistical significance vs. practical importance.**

This suggests that when interpreting the results of a test, you should always think not only about the statistical significance of the results but also about their practical importance.

The last topic we want to discuss is the relationship between hypothesis testing and confidence intervals. Even though the flavor of these two forms of inference is different (confidence intervals estimate a parameter, and hypothesis testing assesses the evidence in the data against one claim and in favor of another), there is a strong link between them.

We will explain this link (using the z-test and confidence interval for the population proportion), and then explain how confidence intervals can be used after a test has been carried out.

Recall that a confidence interval gives us a set of plausible values for the unknown population parameter. We may therefore examine a confidence interval to informally decide if a proposed value of population proportion seems plausible.

For example, if a 95% confidence interval for p, the proportion of all U.S. adults already familiar with Viagra in May 1998, was (0.61, 0.67), then it seems clear that we should be able to reject a claim that only 50% of all U.S. adults were familiar with the drug, since based on the confidence interval, 0.50 is not one of the plausible values for p.

In fact, the information provided by a confidence interval can be formally related to the information provided by a hypothesis test. (**Comment:** The relationship is more straightforward for two-sided alternatives, and so we will not present results for the one-sided cases.)

Suppose we want to carry out the **two-sided test:**

- Ho: p = p_{0}
- Ha: p ≠ p_{0}

using a significance level of 0.05.

An alternative way to perform this test is to find a 95% **confidence interval** for p and check:

- If p_{0} falls **outside** the confidence interval, **reject** Ho.
- If p_{0} falls **inside** the confidence interval, **do not reject** Ho.

In other words,

- If p_{0} is not one of the plausible values for p, we reject Ho.
- If p_{0} is a plausible value for p, we cannot reject Ho.

(**Comment:** Similarly, the results of a test using a significance level of 0.01 can be related to the 99% confidence interval.)

Let’s look at an example:

Recall example 3, where we wanted to know whether the proportion of U.S. adults who support the death penalty for convicted murderers has changed since 2003, when it was 0.64.

We are testing:

- **Ho:** p = 0.64 (No change from 2003).
- **Ha:** p ≠ 0.64 (Some change since 2003).

and as the figure reminds us, we took a sample of 1,000 U.S. adults, and the data told us that 675 supported the death penalty for convicted murderers (p-hat = 0.675).

A 95% confidence interval for p, the proportion of **all** U.S. adults who support the death penalty, is (0.646, 0.704).

Since the 95% confidence interval for p does not include 0.64 as a plausible value for p, we can reject Ho and conclude (as we did before) that there is enough evidence that the proportion of U.S. adults who support the death penalty for convicted murderers has changed since 2003.
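This interval can be reproduced from the counts given above (675 of 1,000; normal approximation):

```python
import math

p_hat = 675 / 1000          # sample proportion of death-penalty supporters
se = math.sqrt(p_hat * (1 - p_hat) / 1000)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"({lo:.3f}, {hi:.3f})")  # → (0.646, 0.704); 0.64 is outside, so reject Ho
```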

You and your roommate are arguing about whose turn it is to clean the apartment. Your roommate suggests that you settle this by tossing a coin and takes one out of a locked box he has on the shelf. Suspecting that the coin might not be fair, you decide to test it first. You toss the coin 80 times, thinking to yourself that if, indeed, the coin is fair, you should get around 40 heads. Instead you get 48 heads. You are puzzled. You are not sure whether getting 48 heads out of 80 is enough evidence to conclude that the coin is unbalanced, or whether this is a result that could have happened just by chance when the coin is fair.

Statistics can help you answer this question.

Let p be the true proportion (probability) of heads. We want to test whether the coin is fair or not.

We are testing:

- **Ho:** p = 0.5 (the coin is fair).
- **Ha:** p ≠ 0.5 (the coin is not fair).

The data we have are that out of n = 80 tosses, we got 48 heads, or that the sample proportion of heads is p-hat = 48/80 = 0.6.

A 95% confidence interval for p, the true proportion of heads for this coin, is (0.493, 0.707).

Since in this case 0.5 is one of the plausible values for p, we cannot reject Ho. In other words, the data do not provide enough evidence to conclude that the coin is not fair.

**Comment**

The context of the last example is a good opportunity to bring up an important point that was discussed earlier.

Even though we use 0.05 as a cutoff to guide our decision about whether the results are statistically significant, we should not treat it as inviolable and we should always add our own judgment. Let’s look at the last example again.

It turns out that the p-value of this test is 0.0734. In other words, it is maybe not extremely unlikely, but it is quite unlikely (probability of 0.0734) that when you toss a fair coin 80 times you’ll get a sample proportion of heads of 48/80 = 0.6 (or even more extreme). It is true that using the 0.05 significance level (cutoff), 0.0734 is not considered small enough to conclude that the coin is not fair. However, if you really don’t want to clean the apartment, the p-value might be small enough for you to ask your roommate to use a different coin, or to provide one yourself!
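A sketch of this p-value calculation (normal approximation; it agrees with the quoted value up to rounding):

```python
import math

# Two-sided z-test for Ho: p = 0.5 with 48 heads in 80 tosses
n, x, p0 = 80, 48, 0.5
p_hat = x / n                                     # 0.6
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)   # ≈ 1.79
p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided: 2 * P(Z >= |z|)
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```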

**Here is our final point on this subject:**

When the data provide enough evidence to reject Ho, we can conclude (depending on the alternative hypothesis) that the population proportion is either less than, greater than, or not equal to the null value p_{0}. However, we do not get a more informative statement about its actual value. It might be of interest, then, to follow the test with a 95% confidence interval that will give us more insight into the actual value of p.

In our example 3,

we concluded that the proportion of U.S. adults who support the death penalty for convicted murderers has changed since 2003, when it was 0.64. It is probably of interest not only to know that the proportion has changed, but also to estimate what it has changed to. We’ve calculated the 95% confidence interval for p on the previous page and found that it is (0.646, 0.704).

We can combine our conclusions from the test and the confidence interval and say:

The data provide evidence that the proportion of U.S. adults who support the death penalty for convicted murderers has changed since 2003, and we are 95% confident that it is now between 0.646 and 0.704 (i.e., between 64.6% and 70.4%).

Let’s look at our example 1 to see how a confidence interval following a test might be insightful in a different way.

Here is a summary of example 1:

We conclude that as a result of the repair, the proportion of defective products has been reduced to below 0.20 (which was the proportion prior to the repair). It is probably of great interest to the company not only to know that the proportion of defective products has been reduced, but also to estimate what it has been reduced to, in order to get a better sense of how effective the repair was. A 95% confidence interval for p in this case is (0.124, 0.196).

We can therefore say that the data provide evidence that the proportion of defective products has been reduced, and we are 95% confident that it has been reduced to somewhere between 12.4% and 19.6%. This is very useful information, since it tells us that even though the results were significant (i.e., the repair reduced the number of defective products), the repair might not have been effective enough, if it managed to reduce the number of defective products only to the range provided by the confidence interval. This, of course, ties back in to the idea of statistical significance vs. practical importance that we discussed earlier. Even though the results are statistically significant (Ho was rejected), practically speaking, the repair might still be considered ineffective.

Even though this portion of the current section is about the z-test for population proportion, it is loaded with very important ideas that apply to hypothesis testing in general. We’ve already summarized the details that are specific to the z-test for proportions, so the purpose of this summary is to highlight the general ideas.

The process of hypothesis testing has **four steps**:

**I. Stating the null and alternative hypotheses (Ho and Ha).**

**II. Obtaining a random sample (or at least one that can be considered random) and collecting data. Using the data:**

**Check that the conditions** under which the test can be reliably used are met.

**Summarize the data using a test statistic.**

- The test statistic is a measure of the evidence in the data against Ho. The larger the test statistic is in magnitude, the more evidence the data present against Ho.

**III. Finding the p-value of the test.** The p-value is the probability of getting data like those observed (or even more extreme) assuming that the null hypothesis is true, and is calculated using the null distribution of the test statistic. The p-value is a measure of the evidence against Ho. The smaller the p-value, the more evidence the data present against Ho.

**IV. Making conclusions.**

Conclusions about the statistical **significance of the results:**

If the p-value is small, the data present enough evidence to reject Ho (and accept Ha).

If the p-value is not small, the data do not provide enough evidence to reject Ho.

To help guide our decision, we use the significance level as a cutoff for what is considered a small p-value. The significance cutoff is usually set at 0.05.

Conclusions should then be provided **in the context** of the problem.
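The four steps above can be collected into a single sketch for the z-test for a proportion (the function name and argument convention are ours):

```python
import math

def z_test_proportion(x, n, p0, alternative="two-sided"):
    """Steps II-III of the z-test for Ho: p = p0:
    compute the test statistic, then its p-value from the null distribution."""
    p_hat = x / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    upper_tail = 0.5 * math.erfc(z / math.sqrt(2))   # P(Z >= z)
    if alternative == "two-sided":
        p_value = math.erfc(abs(z) / math.sqrt(2))   # 2 * P(Z >= |z|)
    elif alternative == "greater":
        p_value = upper_tail
    else:  # "less"
        p_value = 1 - upper_tail                     # P(Z <= z)
    return z, p_value

# Step IV: compare the p-value to the 0.05 cutoff (death-penalty example).
z, p = z_test_proportion(675, 1000, 0.64)
print("reject Ho" if p < 0.05 else "do not reject Ho")  # → reject Ho
```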

**Additional Important Ideas about Hypothesis Testing**

- Results that are based on a larger sample carry more weight, and therefore
**as the sample size increases, results become more statistically significant.**

- Even a very small and practically unimportant effect becomes statistically significant with a large enough sample size. The
**distinction between statistical significance and practical importance**should therefore always be considered.

- **Confidence intervals can be used in order to carry out two-sided tests** (95% confidence for the 0.05 significance level). If the null value is not included in the confidence interval (i.e., is not one of the plausible values for the parameter), we have enough evidence to reject Ho. Otherwise, we cannot reject Ho.

- If the results are statistically significant, it might be of interest to
**follow up the tests with a confidence interval**in order to get insight into the actual value of the parameter of interest.

- It is important to be aware that there are two types of errors in hypothesis testing (
**Type I and Type II**) and that the**power**of a statistical test is an important measure of how likely we are to be able to detect a difference of interest to us in a particular problem.

As we wrap up this topic, we wanted to again discuss the interpretation of a confidence interval.

What do we mean by “confidence”?

Suppose we find a 95% confidence interval for an unknown parameter, what does the 95% mean exactly?

- If we repeat the process for all possible samples of this size for the population, 95% of the intervals we construct will contain the parameter.

This is NOT the same as saying “*the probability that μ (mu) is contained in (the interval constructed from my sample) is 95%.*” Why?!

**Answer:**

- Once we have a particular confidence interval, the true value is either in the interval constructed from our sample (probability = 1) or it is not (probability = 0). We simply do not know which it is. If we were to say “the probability that μ (mu) is contained in (the interval constructed from my sample) is 95%,” we know we would be incorrect since it is either 0 (No) or 1 (Yes) for any given sample. The probability comes from the “long run” view of the process.

- The probability we used to construct the confidence interval was based upon the fact that the sample statistic (x-bar, p-hat) will vary in a manner we understand (because we know the sampling distribution).

- The probability is associated with the randomness of our statistic so that for a particular interval we only speak of being “95% confident” which translates into an understanding about the process.

**In other words, in statistics, “95% confident” means our confidence in the process and implies that in the long run, we will be correct by using this process 95% of the time but that 5% of the time we will be incorrect. For one particular use of this process we cannot know if we are one of the 95% which are correct or one of the 5% which are incorrect. That is the statistical definition of confidence.**

- We can say that in the long run, 95% of these intervals will contain the true parameter and 5% will not.
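A short simulation makes this long-run interpretation concrete: repeatedly draw samples from a population with a known proportion, build a 95% interval from each, and count how often the interval covers the truth (the population value p = 0.25 and the sample sizes here are illustrative choices):

```python
import math
import random

random.seed(1)  # reproducible simulation

def covers(p_true, n):
    """Draw one sample of size n, build a 95% CI, report whether it covers p_true."""
    x = sum(random.random() < p_true for _ in range(n))
    p_hat = x / n
    me = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - me <= p_true <= p_hat + me

# In the long run, roughly 95% of the intervals should cover the true p.
coverage = sum(covers(0.25, 1000) for _ in range(2000)) / 2000
print(coverage)
```

Any single interval either covers the truth or it does not; the 95% describes how often the procedure succeeds over many repetitions.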

**Example: **Suppose a 95% confidence interval for the proportion of U.S. adults who are not active at all is (0.23, 0.27).

**Correct Interpretation #1:** We are 95% confident that the true proportion of U.S. adults who are not active at all is between 23% and 27%.

**Correct Interpretation #2:** We are 95% confident that the true proportion of U.S. adults who are not active at all is covered by the interval (23%, 27%).

**A More Thorough Interpretation:** Based upon our sample, the true proportion of U.S. adults who are not active at all is estimated to be 25%. With 95% confidence, this value could be as small as 23% or as large as 27%.

**A Common Interpretation in Journal Articles:** Based upon our sample, the true proportion of U.S. adults who are not active at all is estimated to be 25% (95% CI 23%-27%).

Now let’s look at an INCORRECT interpretation which we have seen before.

**INCORRECT Interpretation:***There is a 95% chance that the true proportion of U.S. adults who are not active at all is between 23% and 27%*. We know this is incorrect because at this point, the true proportion and the numbers in our interval are fixed. The probability is either 1 or 0 depending on whether the interval is one of the 95% that cover the true proportion, or one of the 5% that do not.

For confidence intervals regarding a population mean, we have an additional caution to discuss about interpretations.

**Example: **Suppose a 95% confidence interval for the average minutes per day of exercise for U.S. adults is (12, 18).

**Correct Interpretation:**We are 95% confident that the true mean minutes per day of exercise for U.S. adults is between 12 and 18 minutes.

**INCORRECT Interpretation:** We are 95% confident that an individual U.S. adult exercises between 12 and 18 minutes per day. We must remember that our intervals are about the parameter, in this case the population mean. They do not apply to an individual, as we expect individuals to have much more variation.

**INCORRECT Interpretation:** We are 95% confident that U.S. adults exercise between 12 and 18 minutes per day. This interpretation implies that this is true for all U.S. adults. It is incorrect for the same reason as the previous incorrect interpretation!

As we continue to study inferential statistics, we will see that confidence intervals are used in many situations. The goal is always to provide confidence in our interval estimate of a quantity of interest. Population means and proportions are common parameters; however, any quantity that can be estimated from data has a population counterpart that we may wish to estimate.

**Background:** The U.S. federal ban on assault weapons expired in September 2004, which meant that after 10 years (since the ban was instituted in 1994) there were certain types of guns that could be manufactured legally again. A poll asked a random sample of 1,200 eligible voters (among other questions) whether they were satisfied with the fact that the law had expired. The data file linked below contains the results of this poll (data were generated based on an NBC News/Wall Street Journal poll). We would like to estimate p, the proportion of U.S. eligible voters who were satisfied with the expiration of the law, with a 95% confidence interval.

The dataset is available in Excel format (guns.xls) and in CSV format (guns.csv).


This document is linked from Population Proportions.


A poll asked a random sample of 1,000 U.S. adults, “Do you think that the use of marijuana should be legalized?” 560 of those asked answered yes.
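Under the normal approximation, this sample yields the following 95% confidence interval (a sketch of the calculation, not part of the original exercise):

```python
import math

x, n = 560, 1000             # "yes" answers to the legalization question
p_hat = x / n
me = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"({p_hat - me:.3f}, {p_hat + me:.3f})")  # → (0.529, 0.591)
```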

This document is linked from Population Proportions.
