# Steps in Hypothesis Testing

- Hypothesis Testing Step 1: State the Hypotheses
- Hypothesis Testing Step 2: Collect Data, Check Conditions, and Summarize Data
- Hypothesis Testing Step 3: Assess the Evidence
- Hypothesis Testing Step 4: Making Conclusions
- Let’s summarize

**CO-6:** Apply basic concepts of probability, random variation, and commonly used statistical probability distributions.

**LO 6.26:** Outline the logic and process of hypothesis testing.

**LO 6.27:** Explain what the p-value is and how it is used to draw conclusions.

**Video:** Steps in Hypothesis Testing (16:02)

Now that we understand the general idea of how statistical hypothesis testing works, let’s go back to each of the steps and delve slightly deeper, getting more details and learning some terminology.

**Hypothesis Testing Step 1: State the Hypotheses**

In all three examples, our aim is to decide between two opposing points of view, Claim 1 and Claim 2. In hypothesis testing, **Claim 1** is called the **null hypothesis** (denoted “**Ho**“), and **Claim 2** plays the role of the **alternative hypothesis** (denoted “**Ha**“). As we saw in the three examples, the null hypothesis suggests nothing special is going on; in other words, there is no change from the status quo, no difference from the traditional state of affairs, no relationship. In contrast, the alternative hypothesis disagrees with this, stating that something is going on, or there is a change from the status quo, or there is a difference from the traditional state of affairs. The alternative hypothesis, Ha, usually represents what we want to check or what we suspect is really going on.

Let’s go back to our three examples and apply the new notation:

**In example 1:**

**Ho:** The proportion of smokers at GU is 0.20.

**Ha:** The proportion of smokers at GU is less than 0.20.

**In example 2:**

**Ho:** The mean concentration in the shipment is the required 245 ppm.

**Ha:** The mean concentration in the shipment is not the required 245 ppm.

**In example 3:**

**Ho:** Performance on the SAT is not related to gender (males and females score the same).

**Ha:** Performance on the SAT is related to gender – males score higher.

**Learn by Doing:** State the Hypotheses

**Did I Get This?:** State the Hypotheses

**Hypothesis Testing Step 2: Collect Data, Check Conditions and Summarize Data**

This step is pretty obvious. This is what inference is all about. You look at sampled data in order to draw conclusions about the entire population. In the case of hypothesis testing, based on the data, you draw conclusions about whether or not there is enough evidence to reject Ho.

There is, however, one detail that we would like to add here. In this step we collect data and **summarize** it. Go back and look at the second step in our three examples. Note that in order to summarize the data we used simple sample statistics such as the sample proportion (*p*-hat), sample mean (x-bar) and the sample standard deviation (s).

In practice, you go a step further and use these sample statistics to summarize the data with what’s called a **test statistic**. We are not going to go into any details right now, but we will discuss test statistics when we go through the specific tests.
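As a minimal sketch of the "summarize the data" part of this step, the code below computes the three sample statistics named above. The data values are made up purely for illustration (they are not the data from the examples), and the test statistic itself is left for the later lessons on specific tests.

```python
# Hypothetical sample data, used only for illustration.
smokers = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]    # 1 = smoker (Example 1 style)
conc = [247.1, 243.8, 246.5, 244.2, 245.9]  # ppm readings (Example 2 style)

# Sample proportion (p-hat)
p_hat = sum(smokers) / len(smokers)

# Sample mean (x-bar)
x_bar = sum(conc) / len(conc)

# Sample standard deviation (s), with n - 1 in the denominator
n = len(conc)
s = (sum((x - x_bar) ** 2 for x in conc) / (n - 1)) ** 0.5

print(p_hat, x_bar, round(s, 2))
```

These are exactly the summaries (p-hat, x-bar, s) used in the second step of the three examples; a test statistic is then built from them.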

This step will also involve checking any conditions or assumptions required to use the test.

**Hypothesis Testing Step 3: Assess the Evidence**

As we saw, this is the step where we calculate how likely it is to get data like that observed (or more extreme) when Ho is true. In a sense, this is the heart of the process, since we draw our conclusions based on this probability.

- If this probability is very small (see example 2), then it would be very surprising to get data like that observed (or more extreme) if Ho were true. The fact that we **did** observe such data is therefore evidence against Ho, and we should reject it.
- On the other hand, if this probability is not very small (see example 3), then observing data like that observed (or more extreme) is not very surprising if Ho were true. The fact that we observed such data does not provide evidence against Ho.

This crucial probability, therefore, has a special name. It is called the **p-value** of the test.

In our three examples, the p-values were given to you (and you were reassured that you didn’t need to worry about how these were derived yet):

- Example 1: p-value = 0.106
- Example 2: p-value = 0.0007
- Example 3: p-value = 0.29

Obviously, the smaller the p-value, the more surprising it is to get data like ours (or more extreme) when Ho is true, and therefore, the stronger the evidence the data provide against Ho.

Looking at the three p-values of our three examples, we see that the data that we observed in example 2 provide the strongest evidence against the null hypothesis, followed by example 1, while the data in example 3 provide the least evidence against Ho.

**Comment:**

- Right now we will not go into specific details about p-value calculations, but just mention that since the p-value is the probability of getting **data** like those observed (or more extreme) when Ho is true, it makes sense that the calculation of the p-value is based on the data summary, which, as we mentioned, is the test statistic. Indeed, this is the case. In practice, we will mostly use software to provide the p-value for us.

**Hypothesis Testing Step 4: Making Conclusions**

Since our statistical conclusion is based on how small the p-value is, or in other words, how surprising our data are when Ho is true, it would be nice to have some kind of guideline or cutoff that will help determine how small the p-value must be, or how “rare” (unlikely) our data must be when Ho is true, for us to conclude that we have enough evidence to reject Ho.

This cutoff exists, and because it is so important, it has a special name. It is called the **significance level of the test** and is usually denoted by the Greek letter α (alpha). The most commonly used significance level is α (alpha) = 0.05 (or 5%). This means that:

- if the p-value < α (alpha) (usually 0.05), then the data we obtained are considered to be “rare (or surprising) enough” under the assumption that Ho is true, and we say that the data provide statistically significant evidence against Ho, so we reject Ho and thus accept Ha.
- if the p-value ≥ α (alpha) (usually 0.05), then our data are not considered to be “surprising enough” under the assumption that Ho is true, and we say that our data do not provide enough evidence to reject Ho (or, equivalently, that the data do not provide enough evidence to accept Ha).
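This decision rule can be sketched as a tiny helper function, applied here to the p-values given earlier for the three examples:

```python
def decide(p_value, alpha=0.05):
    """Apply the decision rule: reject Ho only when the p-value is below alpha."""
    return "reject Ho" if p_value < alpha else "fail to reject Ho"

# The p-values given for the three examples
for name, p in [("Example 1", 0.106), ("Example 2", 0.0007), ("Example 3", 0.29)]:
    print(name, "->", decide(p))
```

Note that a p-value exactly equal to alpha also fails to reject Ho, matching the convention discussed below.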

Now that we have a cutoff to use, here are the appropriate conclusions for each of our examples based upon the p-values we were given.

**In Example 1:**

- Using our cutoff of 0.05, we fail to reject Ho.
- **Conclusion:** There **IS NOT** enough evidence that the proportion of smokers at GU is less than 0.20.
- **Still we should consider:** Does the evidence seen in the data provide any practical evidence towards our alternative hypothesis?

**In Example 2:**

- Using our cutoff of 0.05, we reject Ho.
- **Conclusion:** There **IS** enough evidence that the mean concentration in the shipment is not the required 245 ppm.
- **Still we should consider:** Does the evidence seen in the data provide any practical evidence towards our alternative hypothesis?

**In Example 3:**

- Using our cutoff of 0.05, we fail to reject Ho.
- **Conclusion:** There **IS NOT** enough evidence that males score higher on average than females on the SAT.
- **Still we should consider:** Does the evidence seen in the data provide any practical evidence towards our alternative hypothesis?

Notice that all of the above conclusions are written in terms of the alternative hypothesis and are given in the context of the situation. In no situation have we claimed the null hypothesis is true. Be very careful of this and other issues discussed in the following comments.

**Comments:**

- Although the significance level provides a good guideline for drawing our conclusions, it should not be treated as an incontrovertible truth. There is a lot of room for personal interpretation. What if your p-value is 0.052? You might want to stick to the rules and say “0.052 > 0.05, and therefore I don’t have enough evidence to reject Ho,” but you might instead decide that 0.052 is small enough for you to believe that Ho should be rejected. It should be noted that scientific journals do consider 0.05 to be the cutoff point: any p-value below the cutoff indicates enough evidence against Ho, and any p-value above it, **or even equal to it**, indicates there is not enough evidence against Ho. That said, a p-value between 0.05 and 0.10 is often reported as marginally statistically significant.

- It is important to draw your conclusions **in context**. It is **never enough** to say: **“p-value = …, and therefore I have enough evidence to reject Ho at the 0.05 significance level.”** You **should always word your conclusion in terms of the data.** Although we will use the terminology of “rejecting Ho” or “failing to reject Ho,” this is mostly because we are instructing you in these concepts; in practice, this language is rarely used. We also suggest writing your conclusion in terms of the alternative hypothesis: is there or is there not enough evidence that the alternative hypothesis is true?

- Let’s go back to the issue of the nature of the two types of conclusions that I can make. *Either* **I reject Ho (when the p-value is smaller than the significance level)** *or* **I cannot reject Ho (when the p-value is larger than the significance level).**

As we mentioned earlier, note that the second conclusion does not imply that I accept Ho, but just that I don’t have enough evidence to reject it. Saying (by mistake) “I don’t have enough evidence to reject Ho so I accept it” indicates that the data provide evidence that Ho is true, which is **not necessarily the case**. Consider the following slightly artificial yet effective example:

## EXAMPLE:

An employer claims to subscribe to an “equal opportunity” policy, not hiring men any more often than women for managerial positions. Is this credible? You’re not sure, so you want to test the following **two hypotheses:**

**Ho:** The proportion of male managers hired is 0.5

**Ha:** The proportion of male managers hired is more than 0.5

**Data:** You choose at random three of the new managers who were hired in the last 5 years and find that all 3 are men.

**Assessing Evidence:** If the proportion of male managers hired is really 0.5 (Ho is true), then, by the multiplication rule for independent events, the probability that a random selection of three managers will yield three males is 0.5 * 0.5 * 0.5 = 0.125. This is the p-value.

**Conclusion:** Using 0.05 as the significance level, you conclude that since the p-value = 0.125 > 0.05, the fact that the three randomly selected managers were all males is not enough evidence to reject the employer’s claim of subscribing to an equal opportunity policy (Ho).

However, **the data (all three selected are males) definitely do NOT provide evidence to accept the employer’s claim (Ho).**
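The arithmetic in this example is small enough to check directly. The sketch below just reproduces the multiplication-rule calculation and the comparison to the cutoff:

```python
# Under Ho, each randomly selected manager is male with probability 0.5.
p_male = 0.5

# Multiplication rule for independent events:
# P(all 3 selected managers are male | Ho is true)
p_value = p_male ** 3
print(p_value)  # 0.125

# Compare to the significance level
alpha = 0.05
print("reject Ho" if p_value < alpha else "fail to reject Ho")
```

Since 0.125 > 0.05, we fail to reject Ho, exactly as in the conclusion above; failing to reject Ho is not the same as accepting it.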

**Learn By Doing:** Using p-values

**Did I Get This?:** Using p-values

**Comment about wording:** Another common wording in scientific journals is:

- “The results are statistically significant” – when the p-value < α (alpha).
- “The results are not statistically significant” – when the p-value ≥ α (alpha).

Often you will see significance levels reported with additional description to indicate the degree of statistical significance. A general guideline (although not required in our course) is:

- If 0.01 ≤ p-value < 0.05, then the results are (statistically) *significant*.
- If 0.001 ≤ p-value < 0.01, then the results are *highly statistically significant*.
- If p-value < 0.001, then the results are *very highly statistically significant*.
- If p-value > 0.05, then the results are *not statistically significant* (NS).
- If 0.05 ≤ p-value < 0.10, then the results are *marginally statistically significant*.
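The guideline above can be expressed as a simple lookup function. This is just one way to encode the bands listed (the wording and cutoffs come from the guideline; the function name is our own):

```python
def significance_label(p):
    """Map a p-value to the descriptive wording in the guideline above."""
    if p < 0.001:
        return "very highly statistically significant"
    if p < 0.01:
        return "highly statistically significant"
    if p < 0.05:
        return "statistically significant"
    if p < 0.10:
        return "marginally statistically significant"
    return "not statistically significant (NS)"

print(significance_label(0.0007))  # Example 2
print(significance_label(0.106))   # Example 1
print(significance_label(0.29))    # Example 3
```

Applied to our three examples, only Example 2 (p-value = 0.0007) earns a "significant" label.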

## Let’s summarize

We learned quite a lot about hypothesis testing. We learned the logic behind it, what the key elements are, and what types of conclusions we can and cannot draw in hypothesis testing. Here is a quick recap:

**Video:** Hypothesis Testing Overview (2:20)

Here are a few more activities if you need some additional practice.

**Did I Get This?:** Hypothesis Testing Overview

**Comments:**

- Notice that **the p-value is an example of a conditional probability**. We calculate the probability of obtaining results like those of our data (or more extreme) GIVEN the null hypothesis is true. We could write P(Obtaining results like ours or more extreme | Ho is True).

- Another common phrase used to define the p-value is: “**The probability of obtaining a statistic as or more extreme than your result given the null hypothesis is TRUE**.”
  - We could write P(Obtaining a test statistic as or more extreme than ours | Ho is True).
  - In this case we are asking, “Assuming the null hypothesis is true, how rare is it to observe something as or more extreme than what I have found in my data?”
  - If, after assuming the null hypothesis is true, what we have found in our data is extremely rare (small p-value), this provides evidence to reject our assumption that Ho is true in favor of Ha.

- The **p-value can also be thought of as the probability, assuming the null hypothesis is true, that the result we have seen is solely due to random error (or random chance).** We have already seen that statistics from samples collected from a population vary. There is random error or random chance involved when we sample from populations.

In this setting, if the p-value is very small, this implies, assuming the null hypothesis is true, that it is extremely unlikely that the results we have obtained would have happened due to random error alone, and thus our assumption (Ho) is rejected in favor of the alternative hypothesis (Ha).
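This "probability under Ho" reading of the p-value can be illustrated by simulation. Assuming Ho is true in the manager-hiring example (each hire is male with probability 0.5), the fraction of simulated samples as extreme as the observed data should settle near the exact p-value of 0.125. This is purely illustrative; the seed and trial count are arbitrary choices:

```python
import random

random.seed(1)

# Simulate the manager-hiring example under Ho: each hire is male with
# probability 0.5, and we "hire" 3 managers per trial.
trials = 100_000
all_male = sum(
    all(random.random() < 0.5 for _ in range(3)) for _ in range(trials)
)

# Fraction of trials at least as extreme as the observed data; this
# approximates the p-value P(3 of 3 male | Ho is true) = 0.125.
estimate = all_male / trials
print(estimate)
```

If samples this extreme occurred only rarely under Ho, that rarity would be the evidence against Ho; here they occur about 12.5% of the time, which is not rare.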

**It is EXTREMELY important that you find a definition of the p-value which makes sense to you. New students often need to contemplate this idea repeatedly through a variety of examples and explanations before becoming comfortable with this idea. It is one of the two most important concepts in statistics (the other being confidence intervals).**

**Remember:**

- We infer that the alternative hypothesis is true ONLY by rejecting the null hypothesis.
- A statistically significant result is one that has a very low probability of occurring if the null hypothesis is true.
- Results which are **statistically** significant may or may not have **practical** significance and vice versa.