- Overview
- Probability
- Random Variables
- Diagnostic Test
- Expected Values/Mean
- Variance
- Binomial Distribution
- Normal Distribution
- Poisson Distribution
- Asymptotics
- Law of Large Numbers (LLN)
- Example - LLN for Normal and Bernoulli Distribution
- Central Limit Theorem
- Example - CLT with Bernoulli Trials (Coin Flips)
- Confidence Intervals - Normal Distribution/Z Intervals
- Confidence Interval - Bernoulli Distribution/Wald Interval
- Confidence Interval - Binomial Distribution/Agresti-Coull Interval
- Confidence Interval - Poisson Interval
- Confidence Intervals - T Distribution (Small Samples)
- Confidence Interval - Paired T Tests
- Independent Group **t** Intervals - Same Variance
- Independent Group **t** Intervals - Different Variance

- Hypothesis Testing
- Power
- Multiple Testing
- Resample Inference

**Statistical Inference** = generating conclusions about a population from a noisy sample

- goal = extend beyond the data at hand to the population
- Statistical Inference = only formal system of inference we have
- many different modes, but **two** broad flavors of inference (inferential paradigms): **Frequentist** vs **Bayesian**

**Frequentist** = uses the long-run proportion of times an event occurs in independent, identically distributed repetitions

- frequentist inference is what this class focuses on
- believes that if an experiment is repeated many, many times, the resulting percentage of successes defines that population parameter
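The frequentist idea can be sketched with a quick simulation (illustrative, not from the notes): flip a fair coin many times and watch the observed proportion of heads settle near the population parameter 0.5.

```python
import random

random.seed(42)

# Frequentist view: the long-run proportion of heads in many independent,
# identically distributed flips of a fair coin approaches the "true"
# population parameter, here 0.5.
flips = [random.random() < 0.5 for _ in range(100_000)]
proportion = sum(flips) / len(flips)
print(round(proportion, 3))  # close to 0.5
```

With more flips the proportion fluctuates less and less around 0.5, which is exactly the long-run-frequency notion of probability.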

**Bayesian** = probability estimate for a hypothesis is updated as additional evidence is acquired
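A minimal sketch of a Bayesian update (a toy example I am adding for illustration): two hypotheses about a coin, fair vs biased toward heads, with equal prior probability; observing one head shifts belief toward the biased hypothesis.

```python
from fractions import Fraction

# Two hypotheses about a coin: fair (P(heads) = 1/2) vs biased
# (P(heads) = 3/4), with equal prior probabilities.
prior_fair, prior_biased = Fraction(1, 2), Fraction(1, 2)

# Evidence: one head is observed. Multiply each prior by the likelihood
# of the evidence under that hypothesis, then renormalize.
like_fair, like_biased = Fraction(1, 2), Fraction(3, 4)
unnorm_fair = prior_fair * like_fair        # 1/4
unnorm_biased = prior_biased * like_biased  # 3/8
total = unnorm_fair + unnorm_biased         # 5/8

posterior_fair = unnorm_fair / total        # 2/5
posterior_biased = unnorm_biased / total    # 3/5
print(posterior_fair, posterior_biased)     # 2/5 3/5
```

Each new observation repeats this step, using the current posteriors as the next priors.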

**statistic** = a number computed from a sample of data

- statistics are used to infer information about a population

**random variable** = an outcome from an experiment

- deterministic functions (e.g. means, variances) applied to random variables produce new random variables, which have their own distributions
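This point can be demonstrated with a small simulation (my own example, not from the notes): a single die roll is a random variable, and the mean of several rolls is itself a new random variable with its own, narrower distribution.

```python
import random
import statistics

random.seed(1)

# Each die roll is a random variable. The mean of 10 rolls is a *new*
# random variable; simulating it many times reveals its own distribution.
sample_means = [
    statistics.mean(random.randint(1, 6) for _ in range(10))
    for _ in range(5_000)
]

# The distribution of the sample mean centers on the die's mean (3.5)
# but is much less spread out than a single roll (sd of one roll ~1.71).
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```

The same logic applies to variances or any other function of the data: apply a deterministic function to random inputs and the output is random too, with a distribution we can study.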

**Probability** = the study of quantifying the likelihood of particular events occurring

- given a random experiment, *probability* = a population quantity that summarizes the randomness
- not in the data at hand, but a conceptual quantity that exists in the population that we want to estimate

- formalized by the Russian mathematician Kolmogorov; also known as “Probability Calculus”
- probability = a function that takes any set of outcomes and assigns it a number between 0 and 1
- \(0 \le P(E) \le 1\), where \(E\) = event

- probability that nothing occurs = 0 (impossible; you have to roll the dice to create an outcome); probability that something occurs = 1 (certain)
- probability of outcome or event \(E\), \(P(E)\) = ratio of ways that \(E\) could occur to number of all possible outcomes or events
- probability of something = 1 - probability of the opposite occurring
- probability of the **union** of any two sets of outcomes that have nothing in common (mutually exclusive) = sum of their respective probabilities
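These rules can be checked concretely on one roll of a fair six-sided die (an illustrative example I am adding): the ratio definition of \(P(E)\), the complement rule, and additivity for mutually exclusive events.

```python
from fractions import Fraction

# One roll of a fair six-sided die:
# P(E) = (# ways E can occur) / (# of all possible outcomes).
outcomes = range(1, 7)

def prob(event):
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

p_even = prob(lambda o: o % 2 == 0)      # 3/6 = 1/2
p_not_even = 1 - p_even                  # complement rule: 1 - P(even)

p_one = prob(lambda o: o == 1)           # 1/6
p_two = prob(lambda o: o == 2)           # 1/6
# {1} and {2} are mutually exclusive, so P(1 or 2) = P(1) + P(2).
p_one_or_two = prob(lambda o: o in (1, 2))

print(p_even, p_not_even, p_one_or_two)  # 1/2 1/2 1/3
```

Note that additivity only holds because the events share no outcomes; for overlapping events the shared outcomes would be double-counted.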