Explores data representation, distributions, and statistical variability using sampling and inference techniques. Integrates probability models, compound events, bivariate patterns, and linear models to guide data-driven decision making.
A comprehensive practice session for the TSIA2 Math exam, focusing on quantitative reasoning, algebraic reasoning, geometry, and statistics.
A comprehensive 20-question practice test covering all four TSIA2 Math domains: Quantitative Reasoning, Algebraic Reasoning, Geometry, and Probability and Statistics.
A comprehensive introduction to the normal distribution, focusing on the empirical rule and calculating probabilities using z-score tables. Students will learn to visualize data distributions and find areas under the curve.
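A minimal sketch of the kind of calculation this lesson targets, assuming Python with SciPy is available. The 68-95-99.7 benchmarks and the z-score conversion z = (x - mu) / sigma are standard; the example values (mu = 100, sigma = 15, x = 130) are hypothetical.

```python
# Sketch: empirical rule vs. exact areas under the standard normal curve.
from scipy.stats import norm

for k in (1, 2, 3):
    area = norm.cdf(k) - norm.cdf(-k)   # P(-k < Z < k)
    print(f"P(|Z| < {k}) = {area:.4f}") # ~0.6827, 0.9545, 0.9973

# A z-score converts a raw value to standard units: z = (x - mu) / sigma.
mu, sigma, x = 100, 15, 130
z = (x - mu) / sigma
print(f"z = {z:.2f}, P(X < {x}) = {norm.cdf(z):.4f}")
```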
A lesson focused on calculating the mean, variance, and standard deviation of binomial probability distributions in real-world contexts.
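A minimal sketch of the binomial summary formulas the lesson applies (mean = np, variance = np(1 - p)); the trial count and success rate are hypothetical.

```python
# Sketch: mean, variance, and standard deviation of a binomial distribution.
# For X ~ Binomial(n, p): mean = n*p, variance = n*p*(1-p), sd = sqrt(variance).
import math

n, p = 50, 0.3          # hypothetical example: 50 trials, 30% success rate
mean = n * p
variance = n * p * (1 - p)
sd = math.sqrt(variance)
print(f"mean = {mean}, variance = {variance}, sd = {sd:.3f}")
```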
A lesson on constructing and analyzing probability distributions for discrete random variables based on collected samples or simulations, aligned with TEKS S.5(C).
A Tier 2 intervention lesson focusing on using probability and expected value to analyze real-world decisions in medical testing, product quality, and sports strategy. Students use decision trees and justification templates to move from calculations to reasoned arguments.
A targeted small group intervention lesson focused on using probability to make and analyze real-world decisions in sports and medicine. Students will use expected value and tree diagrams to justify strategies like pulling a hockey goalie or interpreting medical tests.
A targeted small group intervention lesson focused on using simulations to determine statistical significance in randomized experiments. Students will analyze experimental data and use guided reasoning templates to connect simulation results to practical conclusions.
A comprehensive 20-question practice test and answer key designed to prepare students for the TSIA2 Mathematics assessment, focusing on algebraic, geometric, and statistical reasoning.
Students will learn to distinguish between permutations (order matters) and combinations (order doesn't matter) through a video-based discussion and a card-sorting activity.
Students will construct Pascal's Triangle and explore the mathematical logic of the symmetry of combinations ($\binom{n}{r} = \binom{n}{n-r}$) through visual patterns and algebraic reasoning.
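A short sketch of both ideas in this entry, assuming Python 3.8+ for math.comb: building rows from the additive recurrence and checking the symmetry identity.

```python
# Sketch: build Pascal's Triangle row by row and check C(n, r) == C(n, n-r).
import math

def pascal_row(n):
    """Row n of Pascal's Triangle via the multiplicative recurrence."""
    row = [1]
    for r in range(n):
        row.append(row[-1] * (n - r) // (r + 1))
    return row

for n in range(6):
    print(pascal_row(n))

# Symmetry check for one row.
n = 7
assert all(math.comb(n, r) == math.comb(n, n - r) for r in range(n + 1))
```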
A lesson on calculating permutations with indistinguishable objects, specifically focusing on repeating letters in words. Students will analyze the 'DAD' scramble, watch a tutorial on 'ALABAMA' and 'MISSISSIPPI', and calculate the permutations of their own names.
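A sketch of the repeated-letters formula n!/(k1! k2! ...) applied to the words named in the lesson; the helper distinct_arrangements is illustrative, not from the source.

```python
# Sketch: permutations of a word with repeated (indistinguishable) letters.
from math import factorial
from collections import Counter

def distinct_arrangements(word):
    counts = Counter(word.upper())
    result = factorial(len(word))
    for k in counts.values():
        result //= factorial(k)   # divide out each repeated letter's orderings
    return result

print(distinct_arrangements("DAD"))          # 3!/2! = 3
print(distinct_arrangements("ALABAMA"))      # 7!/4! = 210
print(distinct_arrangements("MISSISSIPPI"))  # 11!/(4!4!2!) = 34650
```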
A deep dive into the logical foundations of combinatorics for advanced students, focusing on the recursive nature of factorials and the structural derivation of the permutation formula. Students transition from intuitive counting to formal proofs using pattern analysis and algebraic consistency.
Introduction to Martingales and the Optional Stopping Theorem, applying these concepts to fair games and boundary crossing probabilities.
Defines conditional expectation as a random variable measurable with respect to a sub-sigma-algebra, utilizing the Radon-Nikodym theorem.
Analysis of Monotone Convergence, Fatou's Lemma, and Dominated Convergence Theorems to determine when limits and expectations commute.
Focuses on the derivation and application of Markov, Chebyshev, Jensen, Hölder, and Minkowski inequalities to bound expected values.
Students define expectation using the Lebesgue integral, moving from simple functions to non-negative random variables and addressing the limitations of Riemann integration.
Focus on articulating conclusions in non-technical language and discussing Type I and Type II errors. Students differentiate between statistical and practical significance.
Students apply their skills to A/B testing data from digital marketing scenarios. They interpret computer output and determine if design changes resulted in statistically significant differences.
Students will apply arithmetic operations like multiplication, addition, and division to calculate Grade Point Averages (GPA), exploring the concept of weighted averages through a real-world academic lens.
A high-school level lesson for AP Calculus and Statistics students focusing on using Desmos for complex integrals and statistical calculations, emphasizing the balance between manual understanding and technological efficiency.
Students will learn to calculate and interpret standard deviation as a measure of consistency. This lesson uses a sports-themed scenario to compare data sets with identical means but different spreads.
Students step into the role of an investment advisor to evaluate the risk and consistency of three stocks using measures of dispersion (Range, IQR, and Standard Deviation). They will use their calculations to make a data-driven recommendation for a risk-averse client.
An AP Statistics lesson exploring how outliers impact measures of dispersion (Range, IQR, and Standard Deviation), featuring a video-based case study and a spreadsheet simulation to determine which statistics are 'resistant'.
A comprehensive introduction to calculating sample standard deviation by hand, focusing on the concept of variability and consistency using quiz score comparisons.
This lesson explores the conceptual and mathematical differences between population and sample standard deviation, focusing on the derivation and application of Bessel's correction (n-1) to ensure unbiased estimation. Students will analyze video demonstrations, perform comparative calculations, and conduct a sampling simulation to witness bias in action.
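A small simulation in the spirit of the sampling activity described, in plain Python: the n-divisor estimate runs low, while the (n-1)-divisor estimate centers near the true variance. The population (normal, sigma = 5) and trial counts are arbitrary choices.

```python
# Sketch: why dividing by n underestimates variance (Bessel's correction).
import random

random.seed(1)
population_var = 25.0          # population: normal with sigma = 5
n, trials = 5, 20000

biased, unbiased = 0.0, 0.0
for _ in range(trials):
    sample = [random.gauss(0, 5) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased += ss / n            # n-divisor estimate
    unbiased += ss / (n - 1)    # Bessel-corrected estimate

print(f"true variance:          {population_var}")
print(f"mean of /n estimates:   {biased / trials:.2f}")    # noticeably too low
print(f"mean of /(n-1) estimates: {unbiased / trials:.2f}")  # close to 25
```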
Students master the manual calculation of sample standard deviation before using spreadsheet software to automate the process, comparing the efficiency and accuracy of human vs. algorithmic computation.
Students explore the Root Mean Square (RMS) as a method for measuring effective magnitude, specifically applying it to deviations to understand its role as a precursor to standard deviation. The lesson uses a physics-inspired hook and compares RMS to other measures of center and spread.
A high-level statistics lesson exploring the relationship between different types of means (Arithmetic, Geometric, Harmonic, and Root Mean Square). Students will verify the mean inequality (HM <= GM <= AM <= RMS) through hands-on calculation and data analysis.
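A quick verification of the chain HM <= GM <= AM <= RMS on an arbitrary positive dataset, assuming Python 3.8+ for math.prod.

```python
# Sketch: compute the four means and confirm the inequality chain.
import math

data = [2.0, 4.0, 8.0, 16.0]   # any positive values work
n = len(data)

am = sum(data) / n
gm = math.prod(data) ** (1 / n)
hm = n / sum(1 / x for x in data)
rms = math.sqrt(sum(x * x for x in data) / n)

print(f"HM={hm:.3f} <= GM={gm:.3f} <= AM={am:.3f} <= RMS={rms:.3f}")
assert hm <= gm <= am <= rms
```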
A deep dive into the nuances of quartile calculations, comparing the interpolation method shown in educational videos with the standard algorithms used by graphing calculators. Students will analyze why statistical software might produce varying results for small datasets and explore the concept of linear interpolation.
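One way to see the discrepancies this lesson discusses, assuming NumPy >= 1.22 (where np.percentile takes a `method` argument; older versions use `interpolation`): the same six values yield different quartiles under different conventions.

```python
# Sketch: quartiles of one small dataset under several interpolation rules.
import numpy as np

data = np.array([1, 3, 5, 7, 9, 11])
for method in ("linear", "lower", "higher", "midpoint", "nearest"):
    q1, q3 = np.percentile(data, [25, 75], method=method)
    print(f"{method:>8}: Q1={q1}, Q3={q3}")
```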
A culminating case study where students analyze clinical trial data to determine treatment efficacy and write a formal statistical report.
An analysis of the validity conditions and robustness of t-procedures, specifically for small sample sizes and non-normal data.
Students perform hypothesis tests to determine if two population means differ, focusing on t-statistics and p-values.
Focuses on constructing and interpreting confidence intervals for the difference of two population means, including checking necessary conditions.
Introduction to the normal distribution, calculating z-scores, and using the probability table for inference.
Learning to choose the right chart for the right type of data: histograms, box plots, and scatter plots.
Calculating and interpreting the mean, median, and mode, as well as the variance and standard deviation, to describe the shape of a distribution.
Understanding the nature of data: qualitative vs. quantitative variables, scales of measurement (nominal, ordinal, interval, ratio), and an introduction to sampling.
This lesson formalizes the 2-sample z-test, introducing the pooled proportion. Students calculate z-statistics and p-values to make decisions about the null hypothesis.
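A sketch of the pooled two-proportion z-test described here, assuming SciPy for the normal CDF; the success and trial counts are hypothetical.

```python
# Sketch: 2-sample z-test for proportions with a pooled estimate under H0: p1 = p2.
import math
from scipy.stats import norm

x1, n1 = 56, 200    # group 1: 56 successes out of 200
x2, n2 = 38, 180    # group 2: 38 successes out of 180

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                     # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))               # two-sided

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
```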
Students learn the formula for the standard error of the difference between two proportions and construct 2-sample z-intervals. Emphasis is placed on checking the Random, Independent, and Large Counts conditions.
Students use randomization techniques to simulate the sampling distribution of the difference between two proportions. This builds intuition for p-values and sampling variability before introducing formal formulas.
Introduces relative risk and odds ratios as alternative ways to compare rates, with a focus on applications in public health and epidemiology.
A culminating discussion on the replication crisis in science. Students synthesize their knowledge of power, effect size, and p-hacking to evaluate the credibility of published research and the importance of open science practices.
A project-based lesson where students design a theoretical A/B test, determining sample sizes and minimum detectable effects for digital products.
Applying power analysis to study design, students learn to determine required sample sizes before data collection. This lesson focuses on the practical use of power analysis software to justify research parameters in grant and project proposals.
Covers the construction and interpretation of confidence intervals for the difference between proportions, emphasizing the distinction between pooled and unpooled standard errors.
Focuses on the mechanics and application of the two-proportion z-test, including the use of pooled proportions in null hypothesis testing.
Students explore statistical power as the probability of detecting a real effect. Through visual simulations, they examine the dynamic relationships between sample size, effect size, alpha level, and the risk of Type II errors.
Students explore the sampling distribution of the difference between two sample proportions, establishing the foundation for inference through simulation and standard error calculation.
A technical session focused on calculating and interpreting Cohen's d for two-sample comparisons. Students learn to use benchmarks to evaluate effect magnitude and practice reporting these metrics alongside traditional inferential statistics.
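A minimal Cohen's d computation with the pooled standard deviation, using only the standard library; the two groups are made-up data, and the 0.2/0.5/0.8 benchmarks are Cohen's conventional labels.

```python
# Sketch: Cohen's d for two independent samples (pooled standard deviation).
import math
import statistics

def cohens_d(a, b):
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)   # (n-1) variances
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

group_a = [12, 14, 15, 13, 16, 14]
group_b = [10, 11, 13, 12, 11, 12]
print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")
# Benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large.
```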
Students investigate the limitations of p-values, particularly in large samples, and are introduced to Cohen's d as a measure of practical significance. Through case studies, they analyze when a statistically significant result might be trivial in a real-world context.
Students will investigate how changing bin sizes in a histogram can drastically alter the interpretation of a single data set of 50 temperatures. They will practice the 'inclusive lower limit' rule, watch a video demonstration, and engage in a 'Bin Size Battle' to determine which interval size provides the most honest representation of the data.
Synthesize combinatorics and probability rules to solve complex, non-intuitive problems like the Birthday Problem.
Address the critical theoretical distinction between mutually exclusive (disjoint) and independent events through proofs and counter-examples.
Master the general multiplication rule and algebraic manipulation of conditional probability formulas.
Explore the formal mathematical definition of independence and practice verifying independence using real datasets and contingency tables.
Review and deepen understanding of set notation (Union, Intersection, Complement) and use Venn diagrams to derive the General Addition Rule.
Students apply their knowledge of compound probability to debunk common misconceptions or scams in a final presentation project.
An exploration of how trends in sub-groups can disappear or reverse when data is aggregated, highlighting the impact of confounding variables.
Students use conditional probability and tree diagrams to solve the famous Monty Hall problem and understand how new information restricts sample space.
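A Monte Carlo check of the Monty Hall result, in plain Python: switching converges to about 2/3, staying to about 1/3.

```python
# Sketch: simulate the Monty Hall game with and without switching.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # ~0.333
print(f"switch: {play(switch=True):.3f}")    # ~0.667
```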
An analysis of statistical independence and the logical fallacy that past outcomes influence future independent events.
Students investigate the probability of shared birthdays in a group, learning about complement probability and the surprising frequency of 'coincidences'.
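The complement-rule calculation behind the birthday problem, as a short sketch (ignoring leap years): P(shared) = 1 - P(all n birthdays distinct).

```python
# Sketch: probability of at least one shared birthday among n people.
def p_shared_birthday(n):
    p_distinct = 1.0
    for k in range(n):
        p_distinct *= (365 - k) / 365
    return 1 - p_distinct

for n in (10, 23, 40, 60):
    print(f"n = {n:2d}: P(shared) = {p_shared_birthday(n):.3f}")
# n = 23 already crosses 50%.
```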
Students analyze real-world case studies, such as defect rates in manufacturing or lottery odds, where dependent probability is key. They calculate risks based on sequential failures or selections.
Using computational simulations to verify theoretical probability models and observe the Law of Large Numbers.
Combining probability with payoffs to calculate Expected Value and analyze house edges in gambling.
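The expected-value calculation in miniature, using the standard American-roulette single-number bet (38 slots, 35:1 payout) as the worked example.

```python
# Sketch: expected value of a $1 single-number bet in American roulette.
p_win = 1 / 38
ev = p_win * 35 + (1 - p_win) * (-1)
print(f"EV per $1 bet = ${ev:.4f}")   # about -$0.0526, i.e. a 5.26% house edge
```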
Formalizing the logic of drawing without replacement using the Hypergeometric distribution.
Applying combinatorics and the multiplication rule to calculate the probability of complex poker hands.
Establishing the standard 52-card deck as a sample space and differentiating between independent and dependent compound events.
A capstone workshop where students apply their knowledge to critique actual research abstracts, acting as peer reviewers to evaluate the validity of comparative claims.
An investigation into the replication crisis and unethical practices like p-hacking, emphasizing the ethical responsibility of statisticians in reporting findings.
Students learn to calculate and interpret Cohen's d, distinguishing between statistical significance (is there an effect?) and practical importance (how big is the effect?).
A simulation-based lesson where students manipulate sample size, alpha levels, and effect size to see how they influence a test's ability to detect a true difference between populations.
Students explore the trade-offs between Type I and Type II errors through a courtroom analogy and medical scenarios, understanding the real-world consequences of statistical decision-making.
Students explore the sampling distribution of the difference between two sample means and learn the variance summation rules for independent samples.
A station-based lab where students analyze computer output from various statistical packages. They identify procedures, interpret confidence intervals and p-values, and synthesize their learning.
Interpretation is the focus as students determine statistical significance based on whether an interval includes zero. They practice translating complex data into clear, non-technical reports.
Students master the 2-proportion z-interval, focusing on the specific standard error formulas required. The lesson clarifies the distinction between unpooled and pooled standard errors.
A technical workshop focused on the construction of 2-sample t-intervals for means. Students calculate margins of error manually to understand how sample size and standard deviation impact interval width.
Students use simulations to explore how mean differences and variance affect the overlap between two population distributions. They build an intuitive understanding of statistical distinctness before diving into calculations.
Part 2 of the simulation focusing on analysis, interpretation, and presenting findings to a simulated 'board of directors'.
Part 1 of a simulation where students act as consultants to clean, organize, and select the correct test for a messy real-world dataset.
A conceptual exploration of statistical power, investigating how sample size and effect size influence the ability to detect differences.
Explores the real-world impact and definitions of Type I and Type II errors within the context of comparative studies.
Students master the logic of selecting between 2-sample t, paired t, and 2-prop z tests based on data structure and study design.
A culminating simulation where students act as a review board, synthesizing p-values, errors, and effect sizes to make evidence-based funding recommendations.
An introduction to quantifying the magnitude of differences using Cohen's d and other effect size measures to provide context to statistical findings.
Students distinguish between statistical results and real-world impact by analyzing how massive sample sizes can produce significant results for negligible differences.
An exploration of Type I and Type II errors, their consequences in fields like medicine and technology, and the use of truth tables to visualize statistical outcomes.
Students explore the conceptual meaning of P-values as conditional probabilities and debate the selection of significance levels (alpha) in different contexts.
Students synthesize their learning by creating a consumer guide that provides mathematically-backed advice on when to purchase insurance versus when to self-insure.
Students compare different decision-making strategies, such as Maximin and Expected Value, to evaluate complex financial scenarios like car and travel insurance.
A simulation-based lesson where students act as an insurance company to understand risk pooling, premiums, and the impact of deductibles on solvency.
Students use expected value to analyze the financial viability of extended warranties on consumer electronics, determining when protection plans are mathematically sound.
Students categorize risks by frequency and severity, analyzing data to understand the difference between high-probability/low-impact events and low-probability/high-impact catastrophes.
Exploring Stochastic Gradient Descent (SGD) and its role in navigating high-dimensional, non-convex landscapes in machine learning.
Solving optimization problems under constraints using the method of Lagrange multipliers, focusing on the alignment of gradient vectors.
Implementation of iterative numerical methods, focusing on the geometry of convergence, learning rates, and momentum in gradient descent.
An examination of second-order derivatives via the Hessian matrix to understand surface curvature and classify critical points using eigenvalues.
Students analyze the gradient vector as a directional quantity, establishing its geometric relationship with level sets and proving it indicates the steepest ascent.
Application of stochastic modeling to queueing systems, using Little's Law and steady-state analysis to optimize performance in complex environments.
Transitioning to continuous-time Markov chains using generator matrices and solving Kolmogorov's differential equations for birth-death processes.
Investigation of the Poisson process, its relationship to the exponential distribution, and the implications of the memoryless property in continuous-time modeling.
Analysis of the long-term behavior of Markov chains, focusing on state classification (recurrence, transience) and the computation of stationary distributions.
Foundational concepts of Markov processes, the Markov property, and the mathematical framework of transition matrices and Chapman-Kolmogorov equations.
Computational synthesis of stochastic theory using Monte Carlo simulations, the Gillespie algorithm, and statistical inference of transition parameters.
Defining continuous-time Markov chains through infinitesimal generator matrices and deriving the Kolmogorov Forward and Backward equations.
Transitioning to continuous time with Poisson processes, exploring axiomatic definitions, the memoryless property, and inter-arrival time distributions.
Analyzing the long-term behavior of Markov chains through state classification, ergodicity, and stationary distribution calculations using linear algebra.
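A linear-algebra sketch of the stationary-distribution computation, assuming NumPy: the stationary vector is the left eigenvector of the transition matrix P for eigenvalue 1, normalized to sum to 1. The 2-state matrix here is illustrative.

```python
# Sketch: stationary distribution of a 2-state Markov chain via eigenvectors.
import numpy as np

P = np.array([[0.9, 0.1],     # rows sum to 1: P[i, j] = P(next = j | now = i)
              [0.5, 0.5]])

eigvals, eigvecs = np.linalg.eig(P.T)          # left eigenvectors of P
idx = np.argmin(np.abs(eigvals - 1))           # eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                             # normalize to a probability vector

print("stationary distribution:", pi)          # [5/6, 1/6] for this P
print("check pi @ P == pi:", np.allclose(pi @ P, pi))
```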
Introduction to discrete-time Markov chains, transition probability matrices, and the proof and application of Chapman-Kolmogorov equations.
A targeted Tier 2 intervention lesson focusing on constructing two-way frequency tables, calculating conditional probabilities, and testing for independence using a color-coded blueprint approach.
Students use their constructed models to extrapolate and answer questions about future events, solving trigonometric inequalities graphically.
Students use graphing calculators or regression software to fit trigonometric equations to data sets. They compare their hand-calculated models to the regression models.
Focusing on the x-axis, students determine the period of real-world cycles. They calculate the horizontal scaling factor and determine appropriate horizontal shifts.
Students learn the algebraic techniques to extract the midline and amplitude from a data table. They practice these calculations on various environmental data sets.
Students plot given data sets and identify the periodic nature of the data. They sketch a 'best fit' curve by hand to estimate the maximums, minimums, and cycle length.
Students analyze a contaminated dataset (e.g., historical climate data with sensor errors) using both OLS and robust methods. They must justify their choice of model and explain the nature of the identified outliers.
A comprehensive lesson focused on performing and interpreting linear regression analysis using real-world UN population and surface area data. Students will learn to identify variables, evaluate the impact of outliers, and draw conclusions based on correlation coefficients.
A targeted Tier 2 intervention lesson focused on distinguishing between correlation and causation through investigative scenarios, lurking variable identification, and sorting activities.
A Tier 2 small group intervention lesson focused on distinguishing correlation from causation through the lens of media headline analysis. Students use a 'detective' framework to identify lurking variables and evaluate claims.
A culminating presentation where students share a power function model derived from real-world data, justifying their choice of rational exponent.
Students learn the fundamentals of power regression and use data sets to fit functions of the form y = ax^b, where b is a rational number.
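A sketch of the standard linearization approach to fitting y = ax^b, assuming NumPy; the data roughly follow y = 2x^1.5 and are invented for illustration.

```python
# Sketch: power regression via log-log transform: ln y = ln a + b * ln x.
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([2.1, 5.5, 16.3, 44.8, 129.0])   # roughly y = 2 * x^1.5

b, log_a = np.polyfit(np.log(x), np.log(y), 1)  # slope = b, intercept = ln a
a = np.exp(log_a)
print(f"fitted model: y = {a:.2f} * x^{b:.2f}")
```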
This lesson focuses on the precision of financial modeling when using rational exponents to calculate interest for partial compound periods.
Applying rational exponents to orbital mechanics, students use Kepler's Third Law to calculate planetary distances and periods.
Students investigate the non-linear relationship between animal mass and metabolic rate, evaluating Kleiber's Law (B ∝ M^(3/4)) to understand biological scaling.
This lesson covers iterative robust methods used frequently in computer vision and engineering. Students implement RANSAC (Random Sample Consensus) to fit models to data with high contamination rates (up to 50% outliers).
Students synthesize their diagnostic skills by implementing cross-validation frameworks to assess model generalizability and prevent overfitting.
Students are introduced to robust regression techniques that modify the loss function. The lesson contrasts squared error loss (L2) with absolute error (L1) and Huber loss to reduce the impact of outliers.
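A side-by-side view of the three loss functions this lesson contrasts, in plain Python; delta = 1.0 is an arbitrary Huber threshold.

```python
# Sketch: L2, L1, and Huber loss evaluated on a single residual r.
# Huber is quadratic for small |r| and linear beyond delta, which is
# what dampens the influence of outliers.
def l2(r):
    return r ** 2

def l1(r):
    return abs(r)

def huber(r, delta=1.0):
    return 0.5 * r ** 2 if abs(r) <= delta else delta * (abs(r) - 0.5 * delta)

for r in (0.5, 1.0, 3.0, 10.0):
    print(f"r={r:5.1f}  L2={l2(r):7.2f}  L1={l1(r):5.2f}  Huber={huber(r):6.2f}")
```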
Students master data transformations to linearize relationships and stabilize variance, focusing on the interpretative shifts required for log-based models.
Students investigate the normality assumption of errors using Q-Q plots and statistical tests, understanding how departures from normality impact inference in small vs. large samples.
This lesson details specific aggregate measures of influence. Students calculate Cook's Distance and DFFITS to quantitatively identify points that justify exclusion or special handling in regression analysis.
Students analyze residual plots and perform the Breusch-Pagan test to diagnose and understand the implications of heteroscedasticity in regression models.
A Tier 2 intervention lesson focused on distinguishing statistical significance (p-values) from practical significance (effect size) when evaluating data reports. Students learn to critically analyze headlines and data claims through scaffolded examples and structured comparison activities.
This lesson guides students through the application of the weighted mean formula to determine solution concentrations. It combines visual learning from video demonstrations with a paper-based lab simulation to reinforce the relationship between volume and concentration 'weight'.
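The weighted-mean calculation at the heart of this lesson, sketched with hypothetical volumes and concentrations: mixture concentration = sum(v_i * c_i) / sum(v_i).

```python
# Sketch: concentration of a mixture as a volume-weighted mean.
volumes = [100, 250, 150]            # mL of each solution (hypothetical)
concentrations = [0.10, 0.25, 0.40]  # concentration of each solution

total_solute = sum(v * c for v, c in zip(volumes, concentrations))
mixture_conc = total_solute / sum(volumes)
print(f"mixture concentration = {mixture_conc:.3f}")   # (10 + 62.5 + 60) / 500 = 0.265
```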
Design and implement a Monte Carlo simulation study to test estimator robustness under finite sample constraints and non-normal population distributions.
Critically assess non-probability sampling and the 'Big Data Paradox' by comparing Mean Squared Error (MSE) across different sampling regimes.
Analyze the gap between target populations and sampling frames, focusing on the mathematical impacts of undercoverage and overcoverage on estimator variance.
Deconstruct selection bias through mathematical proofs and the historical lens of the 1936 Literary Digest poll, exploring how non-random exclusion alters conditional probabilities.
Examine the mathematical properties of estimators under simple random sampling (SRS), focusing on unbiasedness, consistency, and the Central Limit Theorem's role in sampling distributions.