This document is linked from What is Data?

Before we jump into Exploratory Data Analysis, and really appreciate its importance in the process of statistical analysis, let’s take a step back for a minute and ask: what are data?

**Data** are pieces of information about **individuals** organized into **variables**.

- By an **individual**, we mean a particular person or object.
- By a **variable**, we mean a particular characteristic of the individual.

A **dataset** is a set of data identified with a particular experiment, scenario, or circumstance.

Datasets are typically displayed in tables, in which rows represent individuals and columns represent variables.

The following dataset shows medical records for a sample of patients.

In this example,

- the **individuals** are patients,
- and the **variables** are Gender, Age, Weight, Height, Smoking, and Race.

Each **row**, then, gives us all of the information about a particular **individual** (in this case, patient), and each **column** gives us information about a particular **characteristic** of all of the patients.
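The row/column structure can be sketched in code. The following is a minimal stand-in with entirely hypothetical values (the actual patient records are not shown here), just to make the row-versus-column distinction concrete:

```python
# A tiny stand-in for the patients dataset (values are hypothetical).
# Each dict in the list is one row (individual);
# each key is one column (variable).
patients = [
    {"Gender": "F", "Age": 34, "Weight": 145, "Height": 64, "Smoking": "No",  "Race": "White"},
    {"Gender": "M", "Age": 51, "Weight": 190, "Height": 70, "Smoking": "Yes", "Race": "Black"},
    {"Gender": "F", "Age": 29, "Weight": 132, "Height": 62, "Smoking": "No",  "Race": "Asian"},
]

# A row: all the variables for one particular individual.
print(patients[0])

# A column: one characteristic across all individuals.
ages = [p["Age"] for p in patients]
print(ages)   # [34, 51, 29]
```

Reading down a column (`ages`) summarizes one characteristic; reading across a row (`patients[0]`) describes one individual.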

The rows in a dataset (representing **individuals**) might also be called **observations**, **cases**, or a description that is specific to the individuals and the scenario.

For example, if we were interested in studying flu vaccinations in school children across the U.S., we could collect data where each observation was a

- student
- school
- school district
- city
- county
- state

Each of these would result in a different way to investigate questions about flu vaccinations in school children.

In our course, we will present methods that can be used when the **observations** being analyzed are **independent of each other**. If the observations (rows in our dataset) are not independent, a more complex analysis is needed. Clear violations of independent observations occur when

- we have more than one row for a given individual, such as when we gather the same measurements at many different times for the individuals in our study, or
- individuals are paired or matched in some way.

As we begin this course, you should start with an awareness of the types of data we will be working with and learn to recognize situations which are more complex than those covered in this course.

The columns in a dataset (representing **variables**) are often grouped and labeled by their role in our analysis.

For example, in many studies involving people, we often collect **demographic** variables such as gender, age, race, ethnicity, socioeconomic status, marital status, and many more.

The **role** a variable plays in our analysis must also be considered.

- In studies where we wish to predict one variable using one or more of the remaining variables, the variable we wish to predict is commonly called the **response** variable, the **outcome** variable, or the **dependent variable**.

- Any variable we are using to predict or explain differences in the outcome is commonly called an **explanatory variable**, an **independent variable**, a **predictor** variable, or a **covariate**.

**Note:** The word “**independent**” is used in statistics in numerous ways. Be careful to understand in what way the words “independent” or “independence” (as well as dependent or dependence) are used when you see them used in the materials.

- Here we have discussed **independent observations** (also called cases, individuals, or subjects).
- We have also used the term **independent variable** as another term for our explanatory variables.
- Later we will learn the formal probability definitions of **independent events** and **dependent events**.
- And when comparing groups we will define **independent samples** and **dependent samples**.

For this first activity with data you will need Excel (or OpenOffice) to view the .xls file. You can view the .csv file in Excel or any text editor.

Very soon you will need SAS or SPSS (depending upon which course you are taking) and you will learn to import data from .xls and/or .csv files.

It is not necessary to have Excel to import .xls files; however, if you wish to view the original file, you will need Excel or another program that can open these files.

In this course, when you are given data in .xls/.csv format, it is extremely important to check the raw dataset against your imported data as different versions of programs and different computer settings can cause issues with the data import process.

Clinical depression is the most common mental illness in the United States, affecting 19 million adults each year (Source: NIMH, 1999). Nearly 50% of individuals who experience a major episode will have a recurrence within 2-3 years. Researchers are interested in comparing therapeutic solutions that could delay or reduce the incidence of recurrence.

In a study conducted by the National Institutes of Health, 109 clinically depressed patients were separated into three groups, and each group was given one of two active drugs (imipramine or lithium) or no drug at all. For each patient, the dataset contains the treatment used, the outcome of the treatment, and several other interesting characteristics.

Here is a summary of the variables in our dataset:

**Hospt:** The patient’s hospital, represented by a code for each of the 5 hospitals (1, 2, 3, 5, or 6)

**Treat:** The treatment received by the patient (Lithium, Imipramine, or Placebo)

**Outcome:** Whether or not a recurrence occurred during the patient’s treatment (Recurrence or No Recurrence)

**Time:** Either the time (days) till recurrence, or, if no recurrence, the length (days) of the patient’s participation in the study

**AcuteT:** The time (days) that the patient was depressed prior to the study

**Age:** The age of the patient, in years, when the patient entered the study

**Gender:** The patient’s gender (1 = Female, 2 = Male)

**Note:** In this dataset some of the categorical variables use numeric codes and others use a text description. Often, if numeric codes are to be used, then ALL of the categorical variables will be coded. When we work in software, we will learn how to have the program translate these codes for us in our analyses.
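As a minimal sketch of what that translation does, the following hypothetical snippet decodes the Gender codes from the variable list above (the mapping 1 = Female, 2 = Male comes from the dataset description; the code itself is only illustrative of how software applies value labels):

```python
# Translating the numeric Gender codes into labels (1 = Female, 2 = Male),
# the way statistical software applies value labels to coded variables.
gender_labels = {1: "Female", 2: "Male"}

raw_genders = [1, 2, 2, 1]   # hypothetical codes as stored in the dataset
decoded = [gender_labels[g] for g in raw_genders]
print(decoded)   # ['Female', 'Male', 'Male', 'Female']
```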

To open the data, right-click on the file name, depression.xls, and choose “Save Link As” (or “Save Target As”) to download the file to your computer. Then find the downloaded file and double-click it to open it in Excel (or Open Office, etc.).

This dataset is also available as a comma separated file (CSV), depression.csv which can be opened in any text editor, although the data are not as visually organized in this type of file.

In future assignments you will need to download datasets in this manner in order to import them, i.e. you will need to have the file saved to your computer.

In Excel, the dataset is in tabular form. Each row contains the values of the variables associated with a single individual, and the different variables are separated into columns. It is helpful if the columns are labeled with the variable names, as we have in this case.

Which variables are categorical and which are quantitative?

This document is linked from Types of Variables.

**Where** *do* data files come from? With any luck, they will come from someone else, but eventually we all end up helping others deal with putting data files together.

View the Reading “Creating Data Files” (≈1200 words)

From the online version of Little Handbook of Statistical Practice, this reading contains a discussion about where data come from and some problems to avoid when creating datasets.

This document is linked from Summary (Unit 1).

View Lecture Slides with Transcript – Role of Biostatistics in the Steps of a Research Project

This document is linked from Role of Biostatistics.

View Lecture Slides with Transcript – Statistics Examples

This document is linked from Role of Biostatistics.

Throughout the course, we will add to our understanding of the definitions, concepts, and processes which are introduced here. You are not expected to gain a full understanding of this process until much later in the course!

To really understand how this process works, we need to put it in a context. We will do that by introducing one of the central ideas of this course, the **Big Picture of Statistics**.

We will introduce the Big Picture by building it gradually and explaining each component.

At the end of the introductory explanation, once you have the full Big Picture in front of you, we will show it again using a concrete example.

The process of statistics starts when we identify what group we want to study or learn something about. We call this group the **population**.

Note that the word “population” here (and in the entire course) is not just used to refer to people; it is used in the broader statistical sense, where a population can refer not only to people, but also to animals, things, etc. For example, we might be interested in:

- the opinions of the population of U.S. adults about the death penalty; or
- how the population of mice react to a certain chemical; or
- the average price of the population of all one-bedroom apartments in a certain city.

The **population**, then, is the entire group that is the target of our interest.

In most cases, the population is so large that as much as we might want to, there is absolutely no way that we can study all of it (imagine trying to get the opinions of all U.S. adults about the death penalty…).

A more practical approach would be to examine and collect data only from a sub-group of the population, which we call a **sample**. We call this first component, which involves choosing a sample and collecting data from it, **Producing Data**.

A **sample** is a subset of the population from which we collect data.

It should be noted that since, for practical reasons, we need to compromise and examine only a sub-group of the population rather than the whole population, we should make an effort to choose a sample in such a way that it will represent the population well.

For example, if we choose a sample from the population of U.S. adults, and ask their opinions about a particular federal health care program, we do not want our sample to consist of only Republicans or only Democrats.
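One standard way to avoid favoring any sub-group is **simple random sampling**, where every member of the population has the same chance of being chosen. Here is a minimal sketch using a made-up population (the names and sizes are hypothetical, purely for illustration):

```python
import random

# A made-up population standing in for U.S. adults.
population = [f"adult_{i}" for i in range(10_000)]

random.seed(42)   # fixed seed so this sketch is reproducible
sample = random.sample(population, k=1000)   # drawn without replacement

print(len(sample))        # 1000
print(len(set(sample)))   # 1000 -- no individual is chosen twice
```

Because every individual is equally likely to be drawn, no sub-group (e.g., one political party) is systematically over-represented by the sampling procedure itself.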

Once the data have been collected, what we have is a long list of answers to questions, or numbers, and in order to explore and make sense of the data, we need to summarize that list in a meaningful way.

This second component, which consists of summarizing the collected data, is called **Exploratory Data Analysis** or **Descriptive Statistics**.

Now we’ve obtained the sample results and summarized them, but we are not done. Remember that our goal is to study the population, so what we want is to be able to draw conclusions about the population based on the sample results.

Before we can do so, we need to look at how the sample we’re using may differ from the population as a whole, so that we can factor that into our analysis. To examine this difference, we use **Probability**, which is the third component in the big picture.

**Probability**, the third component in the Big Picture of Statistics, is in essence the “machinery” that allows us to draw conclusions about the population based on the data collected in the sample.

Finally, we can use what we’ve discovered about our sample to draw conclusions about our population.

We call this final component in the process **Inference**.

This is the **Big Picture of Statistics**.

At the end of April 2005, a poll was conducted (by ABC News and the Washington Post), for the purpose of learning the opinions of U.S. adults about the death penalty.

**1. Producing Data:** A (representative) sample of 1,082 U.S. adults was chosen, and each adult was asked whether he or she favored or opposed the death penalty.

**2. Exploratory Data Analysis (EDA):** The collected data were summarized, and it was found that 65% of the sampled adults favor the death penalty for persons convicted of murder.

**3 and 4. Probability and Inference:** Based on the sample result (of 65% favoring the death penalty) and our knowledge of probability, it was concluded (with 95% confidence) that the percentage of those who favor the death penalty in the population is within 3% of what was obtained in the sample (i.e., between 62% and 68%). The following figure summarizes the example:
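The “within 3%” figure can be checked with the usual large-sample margin-of-error formula for a proportion, ME = z × √(p̂(1 − p̂)/n), with z ≈ 1.96 for 95% confidence (a sketch of the standard calculation, not a description of the pollster’s exact method):

```python
import math

n = 1082        # sample size from the poll
p_hat = 0.65    # sample proportion favoring the death penalty
z = 1.96        # multiplier for 95% confidence

# Margin of error for a sample proportion under the normal approximation.
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(round(margin, 3))   # 0.028, i.e. about 3 percentage points
```

This matches the reported interval of 62% to 68%.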

The structure of this entire course is based on the big picture.

The course will have four units, one for each of the components in the big picture.

As the figure below shows, even though it is second in the process of statistics, we will start this course with exploratory data analysis (EDA), continue to discuss producing data, then go on to probability, so that at the end we will be able to discuss inference.

The main reason we begin with EDA is that we need to understand enough about what we want to do with our data before we can discuss the issues related to how to collect it!

This also allows us to introduce many important concepts early in the course so that you will have ample time to master them before we return to inference at the end of the course.

The following figure summarizes the structure of the course.

As you will see, the Big Picture is the basis upon which the entire course is built, both conceptually and structurally.

We will refer to it often, and having it in mind will help you as you go through the course.
