Learn by Doing – Power of Hypothesis Tests

Published: March 8th, 2013

Category: Activity 1: Learn By Doing, Activity 3: Simulations and Tools

In this activity, we will use the following applet for population means to help you understand the concept of power.

Interactive Applet: Power

We will start with two examples of using the applet and then ask a few questions. The applet has changed slightly, so it does not look exactly the same in the link above as in our images below, but the process is the same.

We are interested in studying whether the mean IQ score among children with high blood lead levels is lower than the population average (which is 100). We will assume that the standard deviation of the population is 16.

Our hypotheses are:

Ho: μ = 100 (mu = 100)

Ha: μ < 100 (mu < 100)

We want to be able to detect a difference of 5 points. In other words, if the true mean IQ among children with high blood lead levels is 5 (or more) points lower than 100, we want to have a good chance to detect that difference and reject the null hypothesis.

The difference of 5 points represents the effect size of interest in this problem. It represents the difference between the true mean and the null value that we would like to be able to detect.

We would like a power of around 80% and need to decide on a sample size for our study.

Using the interactive applet we can easily calculate and visualize the power of this test:

If n = 2 (this is the smallest possible sample size available, and much too small)


The applet shows a power of 0.111. This is actually a fairly good chance considering we used a sample size of only 2, but it is not nearly enough for our target. Notice that the probability of a Type II error is 1 – 0.111 = 0.889.

If n = 25 (this is still a relatively small sample)


By increasing the sample size to 25, we have increased the power of our test to 46%. In other words, we have a 46% chance of rejecting the null hypothesis when we take a sample of size 25 and the true population mean is 95 (5 points lower than 100). To hit our target we will still need a larger sample.
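The power values shown by the applet can be reproduced with a short normal-distribution calculation. A minimal sketch in Python using scipy (which the original activity does not use; the applet may round slightly differently):

```python
from math import sqrt

from scipy.stats import norm

def power_left_tailed(n, mu0=100, mu_alt=95, sigma=16, alpha=0.05):
    """Power of a left-tailed z-test for a population mean."""
    se = sigma / sqrt(n)                   # standard deviation of x-bar
    crit = mu0 - norm.ppf(1 - alpha) * se  # reject Ho when x-bar falls below this
    return norm.cdf((crit - mu_alt) / se)  # P(reject Ho | mu = mu_alt)

print(round(power_left_tailed(2), 3))   # roughly 0.11, as in the n = 2 example
print(round(power_left_tailed(25), 3))  # roughly 0.47, matching the 46% above
```

The calculation mirrors what the applet displays: the critical value is set so the area under the null distribution to its left is α, and the power is the area under the alternative distribution to the left of that same critical value.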

This is not clearly illustrated; however, if you look at the x-axes you will see that the variability displayed in the distributions decreases as the sample size increases. This is the result seen in Module 9: as the sample size increases, the spread of the sampling distribution decreases.

It is this decrease in the variability of x-bar that causes the increase in power in this example. We are not “moving” the centers of the distributions; they simply become less variable, so they overlap less, as indicated in the image below.


See if you can find the sample size needed to reach a power of about 80%. Using the applet, enter the values given above for the null and alternative hypotheses, the standard deviation, and the alternative mean. You should not need to change the significance level, but it should be set to 5%.
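After you have tried this in the applet, one way to check the sample size you found is a short numerical search. A sketch under the same assumptions (one-sided z-test, σ = 16, a 5-point effect size, α = 0.05), using scipy:

```python
from math import sqrt

from scipy.stats import norm

def power(n, mu0=100, mu_alt=95, sigma=16, alpha=0.05):
    """Power of a left-tailed z-test for a population mean."""
    se = sigma / sqrt(n)
    crit = mu0 - norm.ppf(1 - alpha) * se
    return norm.cdf((crit - mu_alt) / se)

# Search for the smallest sample size whose power reaches the 80% target.
n = 2
while power(n) < 0.80:
    n += 1
print(n, round(power(n), 3))
```

A linear search is fine here because power increases monotonically with n; in practice you could also solve the power equation directly for n.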

The example above illustrates the first factor affecting power discussed earlier – increasing the sample size results in an increase in the power of the hypothesis test when all else remains the same. This is a direct result of the fact that the variation of the statistic (in this case, x-bar) decreases as the sample size increases.
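The shrinking variation of x-bar can also be checked directly: its standard deviation is σ/√n, which falls as n grows. A quick sketch in Python:

```python
from math import sqrt

sigma = 16  # population standard deviation of IQ scores

# Standard deviation of x-bar for increasing sample sizes.
for n in (2, 25, 100):
    print(n, round(sigma / sqrt(n), 2))
```

This is why the two distributions in the applet overlap less as n increases even though their centers stay fixed at 100 and 95.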

Now that you have learned to use this tool, we want to use it to illustrate two other factors affecting power. In the following activity we will illustrate:

  • If the true difference (often called the “effect size”) increases, the power of the hypothesis test increases.
  • If α (alpha) decreases, the power, 1 – β (1 – beta), also decreases.

Click here to access the questions associated with these exercises.

This document is linked from Errors and Power.