Glossary

"AND" Event

An outcome is in the event A AND B if the outcome is in both A and B at the same time.

"OR" Event

An outcome is in the event A OR B if the outcome is in A or is in B or is in both A and B.

variability in samples

how much an estimate varies between samples

Analysis of variance

also referred to as ANOVA, is a method of testing whether or not the means of three or more populations are equal. The method is applicable if:

1. All populations of interest are normally distributed.
2. The populations have equal standard deviations.
3. Samples (not necessarily of the same size) are randomly and independently selected from each population.

The test statistic for analysis of variance is the F-ratio.

average

a number that describes the central tendency of the data; there are a number of specialized averages, including the arithmetic mean, weighted mean, median, mode, and geometric mean.

Bernoulli Trial

an experiment with the following characteristics:
1. There are only two possible outcomes called “success” and “failure” for each trial.
2. The probability p of a success is the same for any trial (so the probability q = 1 − p of a failure is the same for any trial).

binomial distribution

A discrete random variable (RV) that arises from Bernoulli trials; there are a fixed number, n, of independent trials, each with two outcomes called success and failure with probabilities p and q, respectively. The binomial random variable X is the number of successes in n trials, denoted [latex]X \sim B(n,p)[/latex]. The mean is [latex]\mu = np[/latex] and the standard deviation is [latex]\sigma = \sqrt{npq}[/latex]. The probability of exactly [latex]x[/latex] successes in n trials is [latex]P\left(X=x\right)=\binom{n}{x}{p}^{x}{q}^{n-x}[/latex].
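
To make the formulas concrete, here is a brief Python sketch (illustrative only; the values n = 20 and p = 0.25 are hypothetical, not from the text):

```python
from math import comb, sqrt

# Hypothetical example values: 20 independent trials, success probability 0.25.
n, p = 20, 0.25
q = 1 - p

mean = n * p                # mu = np
std_dev = sqrt(n * p * q)   # sigma = sqrt(npq)

def binomial_pmf(x, n, p):
    """P(X = x) = C(n, x) * p**x * (1 - p)**(n - x) for X ~ B(n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(mean, std_dev)            # 5.0  1.936...
print(binomial_pmf(5, n, p))    # probability of exactly 5 successes
```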

binomial experiment

A statistical experiment that satisfies three conditions:
1. There are a fixed number of trials, n.
2. There are only two possible outcomes, called "success" and "failure," for each trial. The letter p denotes the probability of a success on one trial, and q denotes the probability of a failure on one trial.
3. The n trials are independent and are repeated using identical conditions.

binomial formula

the formula for the probability of exactly x successes in a fixed number, n, of independent Bernoulli trials, each with probability of success p: [latex]P\left(X=x\right)=\binom{n}{x}{p}^{x}{q}^{n-x}[/latex], where q = 1 − p

Blinding

not telling participants which treatment they are receiving

box plot

a graph that gives a quick picture of the middle 50% of the data

Central Limit Theorem

Given a random variable (RV) with known mean [latex]\mu[/latex] and known standard deviation [latex]\sigma[/latex], we sample with size n and are interested in two new RVs: the sample mean, [latex]\overline{X}[/latex], and the sample sum, [latex]\Sigma X[/latex].
If the size n of the sample is sufficiently large, then [latex]\overline{X}\sim N\left(\mu ,\frac{\sigma }{\sqrt{n}}\right)[/latex] and [latex]\Sigma X\sim N\left(n\mu ,\sqrt{n}\sigma \right)[/latex]. If the size n of the sample is sufficiently large, then the distribution of the sample means and the distribution of the sample sums will approximate a normal distribution regardless of the shape of the population. The mean of the sample means will equal the population mean, and the mean of the sample sums will equal n times the population mean. The standard deviation of the distribution of the sample means, [latex]\frac{\sigma }{\sqrt{n}}[/latex], is called the standard error of the mean.
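
A short simulation can illustrate the theorem. The Python sketch below (illustrative only; the skewed population, sample size, and repetition count are hypothetical) draws repeated samples from an exponential population and checks that the sample means behave as the theorem predicts:

```python
import random
import statistics

random.seed(1)

# Hypothetical setup: a right-skewed exponential population with mean 1/0.5 = 2.
n = 40             # size of each sample
repetitions = 5000

sample_means = [
    statistics.mean(random.expovariate(0.5) for _ in range(n))
    for _ in range(repetitions)
]

# The Central Limit Theorem says the sample means are approximately
# normal, centered near 2, with standard error 2 / sqrt(40) ≈ 0.316.
print(statistics.mean(sample_means))   # close to 2
print(statistics.stdev(sample_means))  # close to 0.316
```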

cluster sample

a method for selecting a random sample and dividing the population into groups (clusters); use simple random sampling to select a set of clusters. Every individual in the chosen clusters is included in the sample.

complement of event

The complement of event A consists of all outcomes that are NOT in A.

conditional probability

the likelihood that an event will occur given that another event has already occurred

confidence intervals

an interval estimate for an unknown population parameter. This depends on: (1) the desired confidence level, (2) information that is known about the distribution (for example, known standard deviation), and (3) the sample and its size.

confidence level

the percent expression for the probability that the confidence interval contains the true population parameter; for example, if the CL = 90%, then in 90 out of 100 samples the interval estimate will enclose the true population parameter.

contingency tables

the method of displaying a frequency distribution as a table with rows and columns to show how two variables may be dependent (contingent) upon each other; the table provides an easy way to calculate conditional probabilities.

continuity correction

the adjustment of adding or subtracting 0.5 to a discrete value when a continuous distribution is used to approximate a discrete distribution (for example, when the normal distribution approximates the binomial); the correction improves the accuracy of the approximation

continuous

a random variable (RV) whose outcomes are measured; the height of trees in the forest is a continuous RV.

control group

a group in a randomized experiment that receives an inactive treatment but is otherwise managed exactly as the other groups

Convenience sampling

a nonrandom method of selecting a sample; this method selects individuals that are easily accessible and may result in biased data.

correlation coefficient

a measure developed by Karl Pearson (early 1900s) that gives the strength of association between the independent variable and the dependent variable; the formula is:
[latex]r=\frac{n\sum xy-\left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n\sum {x}^{2}-{\left(\sum x\right)}^{2}\right]\left[n\sum {y}^{2}-{\left(\sum y\right)}^{2}\right]}}[/latex]

where n is the number of data points. The coefficient cannot be greater than 1 or less than –1. The closer the coefficient is to ±1, the stronger the evidence of a significant linear relationship between x and y.
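
As an illustration, the formula can be evaluated directly; the Python sketch below uses a small, hypothetical data set (not from the text):

```python
from math import sqrt

# Hypothetical paired data (x, y) used only to illustrate the formula.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(a * b for a, b in zip(x, y))
sum_x2 = sum(a * a for a in x)
sum_y2 = sum(b * b for b in y)

# r = [n*sum(xy) - sum(x)*sum(y)] / sqrt([n*sum(x^2) - (sum x)^2][n*sum(y^2) - (sum y)^2])
r = (n * sum_xy - sum_x * sum_y) / sqrt(
    (n * sum_x2 - sum_x**2) * (n * sum_y2 - sum_y**2)
)
print(r)   # a value between -1 and 1
```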

Cumulative relative frequency

The term applies to an ordered set of observations from smallest to largest. The cumulative relative frequency is the sum of the relative frequencies for all values that are less than or equal to the given value.

data

a set of observations (a set of possible outcomes); most data can be put into two groups: qualitative (an attribute whose value is indicated by a label) or quantitative (an attribute whose value is indicated by a number). Quantitative data can be separated into two subgroups: discrete and continuous. Data is discrete if it is the result of counting (such as the number of students of a given ethnic group in a class or the number of books on a shelf). Data is continuous if it is the result of measuring (such as distance traveled or weight of luggage)

Datum

a piece of information

decay parameter

The decay parameter describes the rate at which probabilities decay to zero for increasing values of x. It is the value m in the probability density function [latex]f(x)=m{e}^{-mx}[/latex] of an exponential random variable. It is also equal to m = [latex]\frac{1}{\mu }[/latex], where μ is the mean of the random variable.

degrees of freedom

the number of objects in a sample that are free to vary

dependent

If two events are NOT independent, then we say that they are dependent.

discrete

a random variable (RV) whose outcomes are counted

double-blind experiment

the act of blinding both the subjects of an experiment and the researchers who work with the subjects

Empirical Rule

For data with a bell-shaped (normal) distribution, roughly 68% of values are within 1 standard deviation of the mean, roughly 95% of values are within 2 standard deviations of the mean, and roughly 99.7% of values are within 3 standard deviations of the mean
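
These percentages can be checked against the standard normal distribution; the Python sketch below (illustrative only) uses the standard library's NormalDist:

```python
from statistics import NormalDist

Z = NormalDist()   # standard normal: mean 0, standard deviation 1

# Proportion of values within k standard deviations of the mean.
for k in (1, 2, 3):
    proportion = Z.cdf(k) - Z.cdf(-k)
    print(k, round(proportion, 4))   # ~0.6827, ~0.9545, ~0.9973
```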

equally likely

Each outcome of an experiment has the same probability.

Error Bound (EBM)

the margin of error; depends on the confidence level, sample size, and known or estimated population standard deviation.

Error Bound (EBP)

the margin of error; depends on the confidence level, sample size, and the estimated (from the sample) proportion of successes.

event

a subset of the set of all outcomes of an experiment; the set of all outcomes of an experiment is called a sample space and is usually denoted by S. An event is an arbitrary subset in S. It can contain one outcome, two outcomes, no outcomes (empty subset), the entire sample space, and the like. Standard notations for events are capital letters such as A, B, C, and so on.

expected value

expected arithmetic average when an experiment is repeated many times; also called the mean. Notation: μ. For a discrete random variable (RV) with probability distribution function P(x), the definition can also be written in the form μ = [latex]\sum[/latex]xP(x).
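
A quick illustration of the formula μ = ΣxP(x), using a hypothetical probability distribution (not from the text):

```python
# Hypothetical discrete distribution: values of x and their probabilities P(x).
values = [0, 1, 2, 3]
probabilities = [0.1, 0.4, 0.3, 0.2]

# mu = sum of x * P(x)
mu = sum(x * p for x, p in zip(values, probabilities))
print(mu)   # 1.6
```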

experiment

a planned activity carried out under controlled conditions

experimental units

any individual or object to be measured

explanatory variable

the independent variable in an experiment; the value controlled by researchers

Exponential Distribution

a continuous random variable (RV) that appears when we are interested in the intervals of time between some random events, for example, the length of time between emergency arrivals at a hospital; the notation is X ~ Exp(m). The mean is [latex]\mu =\frac{1}{m}[/latex] and the standard deviation is [latex]\sigma =\frac{1}{m}[/latex]. The probability density function is [latex]f(x)=m{e}^{-mx}[/latex], x ≥ 0, and the cumulative distribution function is [latex]P(X\le x)=1-{e}^{-mx}[/latex].
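
The pdf and cdf can be evaluated directly; the Python sketch below is illustrative only, with a hypothetical decay parameter m = 0.25:

```python
from math import exp

# Hypothetical decay parameter: m = 0.25, so the mean is 1/m = 4.
m = 0.25

def exponential_pdf(x, m):
    """f(x) = m * e**(-m * x) for x >= 0."""
    return m * exp(-m * x)

def exponential_cdf(x, m):
    """P(X <= x) = 1 - e**(-m * x)."""
    return 1 - exp(-m * x)

print(1 / m)                      # mean (and standard deviation) = 4
print(exponential_cdf(4, 0.25))   # P(X <= 4) ≈ 0.632
```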

first quartile

the value that is the median of the lower half of the ordered data set

frequency

the number of times a value of the data occurs

frequency polygon

looks like a line graph but uses intervals to display ranges of large amounts of data

frequency table

a data representation in which grouped data is displayed along with the corresponding frequencies

geometric distribution

a discrete random variable (RV) that arises from Bernoulli trials; the trials are repeated until the first success. The geometric variable X is defined as the number of trials until the first success. Notation: X ~ G(p). The mean is μ = [latex]\frac{1}{p}[/latex] and the standard deviation is σ = [latex]\sqrt{\frac{1}{p}\left(\frac{1}{p}-1\right)}[/latex]. The probability that the first success occurs on the xth trial (that is, after x − 1 failures) is given by the formula [latex]P(X=x)=p{\left(1-p\right)}^{x-1}[/latex].
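
For illustration, the Python sketch below evaluates the mean, standard deviation, and probability formula with a hypothetical success probability p = 0.2:

```python
from math import sqrt

# Hypothetical probability of success on a single trial.
p = 0.2

mean = 1 / p                            # expected number of trials: 5
std_dev = sqrt((1 / p) * (1 / p - 1))   # sqrt(20) ≈ 4.47

def geometric_pmf(x, p):
    """P(X = x): first success occurs on trial x (x - 1 failures, then a success)."""
    return p * (1 - p) ** (x - 1)

print(mean, std_dev)
print(geometric_pmf(3, p))   # 0.2 * 0.8**2 = 0.128
```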

geometric experiment

a statistical experiment with the following properties:
1. There are one or more Bernoulli trials with all failures except the last one, which is a success.
2. In theory, the number of trials could go on forever. There must be at least one trial.
3. The probability, p, of a success and the probability, q, of a failure do not change from trial to trial.

histogram

a graphical representation in x-y form of the distribution of data in a data set; x represents the data and y represents the frequency, or relative frequency. The graph consists of contiguous rectangles.

hypergeometric
hypergeometric experiment

a statistical experiment with the following properties:
1. You take samples from two groups.
2. You are concerned with a group of interest, called the first group.
3. You sample without replacement from the combined groups.
4. Each pick is not independent, since sampling is without replacement.
5. You are not dealing with Bernoulli Trials.

hypergeometric probability distribution

a discrete random variable (RV) that is characterized by:
1. A fixed number of trials.
2. The probability of success is not the same from trial to trial.
We sample from two groups of items when we are interested in only one group. X is defined as the number of successes out of the total number of items chosen. Notation: X ~ H(r, b, n), where r = the number of items in the group of interest, b = the number of items in the group not of interest, and n = the number of items chosen.
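
The entry above does not state the probability formula, but a common form of it is P(X = x) = C(r, x)·C(b, n − x)/C(r + b, n). The Python sketch below (illustrative only, with hypothetical group sizes) evaluates it by counting combinations:

```python
from math import comb

def hypergeometric_pmf(x, r, b, n):
    """P(X = x) for X ~ H(r, b, n): choose x from the group of interest (size r)
    and n - x from the other group (size b), out of all ways to choose n items."""
    return comb(r, x) * comb(b, n - x) / comb(r + b, n)

# Hypothetical example: 6 items of interest, 14 others, choose 5 without replacement.
print(hypergeometric_pmf(2, r=6, b=14, n=5))   # ≈ 0.352
```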

hypothesis

a statement about the value of a population parameter; in the case of two hypotheses, the statement assumed to be true is called the null hypothesis (notation H0) and the contradictory statement is called the alternative hypothesis (notation Ha).

independent

The occurrence of one event has no effect on the probability of the occurrence of another event. Events A and B are independent if one of the following is true:
• P(A|B) = P(A)
• P(B|A) = P(B)
• P(A AND B) = P(A)P(B)

inferential statistics

also called statistical inference or inductive statistics; this facet of statistics deals with estimating a population parameter based on a sample statistic. For example, if four out of the 100 calculators sampled are defective we might infer that four percent of the production is defective.

informed consent

Any human subject in a research study must be cognizant of any risks or costs associated with the study. The subject has the right to know the nature of the treatments included in the study, their potential risks, and their potential benefits. Consent must be given freely by an informed, fit participant.

Institutional Review Boards (IRB)

a committee tasked with oversight of research programs that involve human subjects

interquartile range

or IQR, is the range of the middle 50 percent of the data values; the IQR is found by subtracting the first quartile from the third quartile.
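
For illustration, the quartiles and IQR of a small, hypothetical data set can be computed with the Python standard library (its interpolation method may differ slightly from the median-of-halves approach for some small data sets):

```python
import statistics

# Hypothetical ordered data set.
data = [1, 2, 4, 4, 5, 6, 7, 8, 9, 11, 12]

# statistics.quantiles returns the three cut points Q1, Q2 (median), Q3.
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
print(q1, q3, iqr)   # 4  9  5
```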

intersection

The shared outcomes of two events

intervals

also called a class interval; an interval represents a range of data and is used when displaying large data sets

Law of Large Numbers

As the number of trials in a probability experiment increases, the difference between the theoretical probability of an event and the relative frequency probability approaches zero.

lurking variables

a variable that has an effect on a study even though it is neither an explanatory variable nor a response variable

mean

a number that measures the central tendency; a common name for mean is "average." The term "mean" is a shortened form of "arithmetic mean." By definition, the mean for a sample (denoted by [latex]\overline{x}[/latex]) is [latex]\overline{x}\text{ = }\frac{\text{Sum of all values in the sample}}{\text{Number of values in the sample}}[/latex], and the mean for a population (denoted by μ) is [latex]\mu \text{ = }\frac{\text{Sum of all values in the population}}{\text{Number of values in the population}}[/latex].

mean of the distribution

the long-term average of many trials of a statistical experiment

median

a number that separates ordered data into halves; half the values are the same number or smaller than the median and half the values are the same number or larger than the median. The median may or may not be part of the data.

memoryless property

For an exponential random variable X, the memoryless property is the statement that knowledge of what has occurred in the past has no effect on future probabilities. This means that the probability that X exceeds x + k, given that it has exceeded x, is the same as the probability that X would exceed k if we had no knowledge about it. In symbols we say that P(X > x + k|X > x) = P(X > k).
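
A quick numerical check of the property for a hypothetical exponential RV (decay parameter m = 0.5, x = 3, k = 2), using the fact that P(X > t) = e^(−mt):

```python
from math import exp

# Hypothetical exponential RV with decay parameter m; P(X > t) = e**(-m * t).
m, x, k = 0.5, 3, 2

lhs = exp(-m * (x + k)) / exp(-m * x)   # P(X > x + k | X > x)
rhs = exp(-m * k)                       # P(X > k)
print(lhs, rhs)                         # both ≈ 0.3679
```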

midpoint

the mean of an interval in a frequency table

mode

the value that appears most frequently in a set of data

mutually exclusive events

Two events are mutually exclusive if the probability that they both happen at the same time is zero. If events A and B are mutually exclusive, then P(A AND B) = 0.

nonsampling errors

an issue that affects the reliability of sampling data other than natural variation; it includes a variety of human errors including poor study design, biased sampling methods, inaccurate information provided by study participants, data entry errors, and poor analysis.

normal distribution

a continuous random variable (RV) with pdf [latex]f\left(x\right)=\frac{1}{\sigma \sqrt{2\pi }}{e}^{-{\left(x-\mu \right)}^{2}/2{\sigma }^{2}}[/latex], where μ is the mean of the distribution and σ is the standard deviation; notation: X ~ N(μ, σ). If μ = 0 and σ = 1, the RV is called the standard normal distribution.
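
For illustration, the pdf can be evaluated directly; the Python sketch below uses hypothetical parameters μ = 5 and σ = 2:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    """f(x) = 1/(sigma*sqrt(2*pi)) * e**(-(x - mu)**2 / (2 * sigma**2))."""
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# Hypothetical parameters: X ~ N(5, 2), evaluated at x = 5 (the peak of the curve).
print(normal_pdf(5, mu=5, sigma=2))   # ≈ 0.1995
```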

Numerical variables

variables that take on values that are indicated by numbers

one-way ANOVA

a method of testing whether or not the means of three or more populations are equal; the method is applicable if:

1. All populations of interest are normally distributed.
2. The populations have equal standard deviations.
3. Samples (not necessarily of the same size) are randomly and independently selected from each population.

The test statistic for analysis of variance is the F-ratio.

outcome

a particular result of an experiment

outliers

an observation that does not fit the rest of the data

p-value

the probability, computed assuming the null hypothesis is true, of obtaining a sample result at least as extreme as the one observed. The smaller the p-value, the stronger the evidence against the null hypothesis.

parameter

a numerical characteristic of a population

percentiles

a number that divides ordered data into hundredths; percentiles may or may not be part of the data. The median of the data is the second quartile and the 50th percentile. The first and third quartiles are the 25th and the 75th percentiles, respectively.

placebo treatment

an inactive treatment that has no real effect on the response variable

point estimate

a single number computed from a sample and used to estimate a population parameter.

Poisson distribution
Poisson probability distribution

A discrete random variable that counts the number of times a certain event will occur in a specific interval; characteristics of the variable:
• The probability that the event occurs in a given interval is the same for all intervals.
• The events occur with a known mean and independently of the time since the last event.
The distribution is defined by the mean μ of the event in the interval. Notation: X ~ P(μ). The standard deviation is [latex]\sigma \text{ = }\sqrt{\mu }[/latex]. The probability of exactly x occurrences in the interval is P(X = x) = [latex]\left({e}^{-\mu }\right)\frac{{\mu }^{x}}{x!}[/latex]. The Poisson distribution is often used to approximate the binomial distribution when n is “large” and p is “small” (a general rule is that n should be greater than or equal to 20 and p should be less than or equal to 0.05); in that case the Poisson mean is μ = np.
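
For illustration, the Python sketch below evaluates the Poisson probability formula with a hypothetical mean of μ = 3 occurrences per interval:

```python
from math import exp, factorial, sqrt

# Hypothetical mean number of occurrences in the interval.
mu = 3

def poisson_pmf(x, mu):
    """P(X = x) = e**(-mu) * mu**x / x!."""
    return exp(-mu) * mu**x / factorial(x)

print(sqrt(mu))            # standard deviation
print(poisson_pmf(2, mu))  # probability of exactly 2 occurrences ≈ 0.224
```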

Probability

a number between zero and one, inclusive, that gives the likelihood that a specific event will occur; the foundation of statistics is given by the following three axioms (stated by A. N. Kolmogorov in the 1930s). Let S denote the sample space, and let A and B be two events in S. Then:
• 0 ≤ P(A) ≤ 1
• If A and B are any two mutually exclusive events, then P(A OR B) = P(A) + P(B).
• P(S) = 1

probability density function

A function that describes the distribution of a continuous random variable; the probability that the variable falls within an interval is the area under the curve of the function over that interval

probability experiment

A random experiment where the result is not predetermined

proportion

a part, share, or number considered in comparative relation to a whole

Qualitative

an attribute whose value is indicated by a label

quantile

Points that divide the ordered values of a distribution into intervals containing equal proportions of the data; quartiles and percentiles are examples

Quantitative

an attribute whose value is indicated by a number

quartiles

the numbers that separate the data into quarters; quartiles may or may not be part of the data. The second quartile is the median of the data.

random assignment

the act of organizing experimental units into treatment groups using random methods

random sampling

a method of selecting a sample that gives every member of the population an equal chance of being selected.

Random Variable

A variable whose value is a numerical outcome of a probability experiment; it can take on the values of the possible outcomes and is represented using a capital letter, such as X.

relative frequency

the ratio of the number of times a value of the data occurs in the set of all outcomes to the total number of outcomes

representative sample

A representative sample is a subset of a population that seeks to accurately reflect the characteristics of the larger group

response variable

the dependent variable in an experiment; the value that is measured for change at the end of an experiment

sample space

the set of all possible outcomes of an experiment

sampling bias

not all members of the population are equally likely to be selected

sampling errors

the natural variation that results from selecting a sample to represent a larger population; this variation decreases as the sample size increases, so selecting larger samples reduces sampling error.

simple random sampling

a straightforward method for selecting a random sample; give each member of the population a number. Use a random number generator to select a set of labels. These randomly selected labels identify the members of your sample.
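
For illustration, the Python sketch below draws a simple random sample from a hypothetical population of 500 labeled members:

```python
import random

# Hypothetical population labeled 1 through 500; select a sample of 30.
population = list(range(1, 501))
sample = random.sample(population, k=30)   # each member equally likely, no repeats
print(sorted(sample))
```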

skewed

used to describe data that is not symmetrical; when the right side of a graph looks “chopped off” compared to the left side, we say it is “skewed to the left.” When the left side of the graph looks “chopped off” compared to the right side, we say the data is “skewed to the right.” Alternatively: when the lower values of the data are more spread out, we say the data are skewed to the left. When the greater values are more spread out, the data are skewed to the right.

standard deviation

a number that is equal to the square root of the variance and measures how far data values are from their mean; notation: s for sample standard deviation and σ for population standard deviation

standard deviation of a probability distribution

a number that measures how far the outcomes of a statistical experiment are from the mean of the distribution

standard error of the mean

the standard deviation of the distribution of the sample means, or [latex]\frac{\sigma }{\sqrt{n}}[/latex].

standard normal distribution

a continuous random variable (RV) with a mean of 0 and a standard deviation of 1, which z-scores follow; a variable that follows the standard normal distribution is usually denoted Z ~ N(0, 1).

statistics

the practice or science of collecting and analyzing numerical data in large quantities, especially for the purpose of inferring proportions in a whole from those in a representative sample.

stratified sample

a method for selecting a random sample used to ensure that subgroups of the population are represented adequately; divide the population into groups (strata). Use simple random sampling to identify a proportionate number of individuals from each stratum.

Student’s t-distribution

investigated and reported by William S. Gosset in 1908 and published under the pseudonym Student; the major characteristics of the random variable (RV) are: (1) it is continuous and assumes any real value; (2) the pdf is symmetrical about its mean of zero, but it is more spread out and flatter at the apex than the normal distribution; (3) it approaches the standard normal distribution as n gets larger. There is a "family" of t-distributions: each member of the family is completely defined by the number of degrees of freedom, which is one less than the number of data items.

systematic sample

a method for selecting a random sample; list the members of the population. Use simple random sampling to select a starting point in the population. Let k = (number of individuals in the population)/(number of individuals needed in the sample). Choose every kth individual in the list starting with the one that was randomly selected. If necessary, return to the beginning of the population list to complete your sample.
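
For illustration, the Python sketch below follows these steps for a hypothetical population of 500 members and a sample of 25:

```python
import random

population = list(range(1, 501))   # hypothetical population labels 1 through 500
sample_size = 25

k = len(population) // sample_size          # step size: 500 / 25 = 20
start = random.randrange(len(population))   # random starting position in the list

# Take every kth member, returning to the beginning of the list if needed.
sample = [population[(start + i * k) % len(population)] for i in range(sample_size)]
print(sorted(sample))
```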

treatments

different values or components of the explanatory variable applied in an experiment

tree diagram

a useful visual representation of a sample space and events in the form of a “tree” with branches marked by possible outcomes together with associated probabilities (frequencies, relative frequencies)

Type I Error

The decision is to reject the null hypothesis when, in fact, the null hypothesis is true.

Type II Error

The decision is not to reject the null hypothesis when, in fact, the null hypothesis is false.

Uniform Distribution

a continuous random variable (RV) that has equally likely outcomes over the domain, a < x < b; it is often referred to as the rectangular distribution because the graph of the pdf has the form of a rectangle. Notation: X ~ U(a,b). The mean is μ = [latex]\frac{a+b}{2}[/latex] and the standard deviation is [latex]\sigma =\sqrt{\frac{{\left(b-a\right)}^{2}}{12}}[/latex]. The probability density function is f(x) = [latex]\frac{1}{b-a}[/latex] for a < x < b or a ≤ x ≤ b. The cumulative distribution is P(X ≤ x) = [latex]\frac{x-a}{b-a}[/latex].
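
For illustration, the Python sketch below evaluates the mean, standard deviation, and cumulative distribution with hypothetical endpoints a = 2 and b = 10:

```python
from math import sqrt

# Hypothetical endpoints: X ~ U(2, 10).
a, b = 2, 10

mean = (a + b) / 2                  # 6
std_dev = sqrt((b - a) ** 2 / 12)   # ≈ 2.31

def uniform_cdf(x, a, b):
    """P(X <= x) = (x - a) / (b - a) for a <= x <= b."""
    return (x - a) / (b - a)

print(mean, std_dev)
print(uniform_cdf(4, a, b))   # P(X <= 4) = 0.25
```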

union

The set of all outcomes in two events; outcomes can be in either or both events

variance

mean of the squared deviations from the mean; the square of the standard deviation. For a set of data, a deviation can be represented as x – [latex]\overline{x}[/latex], where x is a value of the data and [latex]\overline{x}[/latex] is the sample mean. The sample variance is equal to the sum of the squares of the deviations divided by the sample size minus one.
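
For illustration, the Python sketch below computes the sample variance of a small, hypothetical data set both from the definition and with the standard library:

```python
import statistics

# Hypothetical sample data.
data = [9, 7, 10, 6, 8]
x_bar = sum(data) / len(data)   # sample mean: 8

# Sum of squared deviations divided by (sample size - 1).
sample_variance = sum((x - x_bar) ** 2 for x in data) / (len(data) - 1)

print(sample_variance)             # 2.5
print(statistics.variance(data))   # same result from the standard library
```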

variation

mean of the squared deviations from the mean, or the square of the standard deviation; for a set of data, a deviation can be represented as x – [latex]\overline{x}[/latex] where x is a value of the data and [latex]\overline{x}[/latex] is the sample mean. The sample variance is equal to the sum of the squares of the deviations divided by the difference of the sample size and one.

Venn diagram

the visual representation of a sample space and events in the form of circles or ovals showing their intersections

with replacement

Once a member of the population is selected for inclusion in a sample, that member is returned to the population for the selection of the next individual.

without replacement

A member of the population may be chosen for inclusion in a sample only once. If chosen, the member is not returned to the population before the next selection.

z-scores

the linear transformation of the form z = [latex]\frac{x-\mu }{\sigma }[/latex]; if this transformation is applied to any normal distribution X ~ N(μ, σ), the result is the standard normal distribution Z ~ N(0, 1). If this transformation is applied to any specific value x of the RV with mean μ and standard deviation σ, the result is called the z-score of x. The z-score allows us to compare data that are normally distributed but scaled differently.
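
For illustration, the Python sketch below computes a z-score with hypothetical values μ = 63, σ = 5, and x = 68:

```python
# Hypothetical normal distribution: X ~ N(63, 5), with an observed value x = 68.
mu, sigma = 63, 5
x = 68

z = (x - mu) / sigma   # z = (x - mu) / sigma
print(z)               # 1.0: x is one standard deviation above the mean
```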
