Covers data representation, distributions, and statistical variability using sampling and inference techniques. Integrates probability models, compound events, bivariate patterns, and linear models to guide data-driven decision making.
A middle school inquiry project where students collect, analyze, and interpret local data to solve 'mysteries' in their community. Students learn to select appropriate graph types and draw evidence-based conclusions.
A Tier 2 intervention lesson focusing on describing data distributions using center, spread, and shape through hands-on data collection and guided dot plot analysis.
A Tier 2 intervention lesson focused on calculating and interpreting measures of center and variability within real-world contexts for 6th-grade students.
A targeted Tier 2 intervention lesson focusing on the construction and interpretation of dot plots, histograms, and box plots using a concrete-to-representational approach. Students build graphs from raw data and analyze distributions through guided questioning.
This Tier 2 intervention lesson helps students distinguish between measures of center (mean, median) and measures of variation (range). Students use hands-on balance point activities and physical sorting to understand how single numbers can describe entire data sets.
A Tier 2 intervention lesson focused on conceptual understanding of mean (as fair share), median (as middle value), and range (as spread) using hands-on manipulatives and guided sorting.
A Tier 2 intervention lesson for 6th grade students to master describing data distributions using center, spread, and shape (symmetric, skewed, uniform). Students will build data displays and analyze patterns using investigative vocabulary.
A Tier 2 intervention lesson focusing on drawing comparative inferences about two populations using measures of center and variability through a hands-on reaction time experiment.
A scaffolded lesson where students perform coin flips, dice rolls, and spinner experiments to approximate probability and observe long-run relative frequency.
A scaffolded intervention lesson where students compare two populations using mean, median, MAD, and IQR. Includes guided calculator practice and a collaborative data investigation.
A Tier 2 intervention lesson focusing on comparing two numerical data distributions. Students will learn to informally assess visual overlap and express the difference between centers as a multiple of variability (MAD).
A targeted intervention lesson focusing on comparing two data distributions using back-to-back dot plots, visual overlap, and measures of center and variability. Students will gain hands-on experience calculating differences in means relative to the Mean Absolute Deviation (MAD).
A Tier 2 intervention lesson for 7th grade students focusing on comparing data distributions. Students learn to calculate Mean Absolute Deviation (MAD) and express the difference between centers as a multiple of that variability.
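The comparison these lessons target can be sketched in a few lines: express the difference between two group means as a multiple of the Mean Absolute Deviation. This is a minimal illustration; the data sets are made up, not from the lesson materials.

```python
# Express the difference between two group means as a multiple of MAD.
# Data values below are illustrative.

def mean(data):
    return sum(data) / len(data)

def mad(data):
    """Mean Absolute Deviation: average distance of each value from the mean."""
    m = mean(data)
    return sum(abs(x - m) for x in data) / len(data)

group_a = [10, 12, 14, 16, 18]   # mean 14, MAD 2.4
group_b = [20, 22, 24, 26, 28]   # mean 24, MAD 2.4

difference = abs(mean(group_a) - mean(group_b))   # 10
multiple_of_mad = difference / mad(group_a)       # 10 / 2.4, about 4.2 MADs apart
```

A gap of several MADs, as here, is the informal signal the lessons use for a meaningful difference between the two distributions.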
A targeted intervention lesson for Grade 7 students on using mean and Mean Absolute Deviation (MAD) from random samples to make comparative inferences about two populations, specifically word lengths in different texts.
General administrative and planning resources for the Math Intensive Lab sequence, including the unit overview and sequence-wide instructional supports.
Connects 2D representations to 3D volumes by exploring nets of prisms and pyramids to calculate surface area conceptually.
Deepens understanding of area by decomposing complex shapes into rectangles and triangles using grid-based visual tools.
Interprets measures of center (mean, median, mode) through data visualization and 'What's the Same? What's Different?' statistical talks.
Provides hands-on challenges to master unit conversions within the same system, focusing on the relationship between larger and smaller units.
Focuses on translating real-world phrases into algebraic expressions and evaluating them using 'Contemplate then Calculate' strategies.
In this lesson, students explore compound events by representing sample spaces through multiple lenses: tree diagrams, outcome tables, and organized lists. They move from simple single events to complex multi-stage experiments, learning to calculate total outcomes and specific probabilities.
A comprehensive lesson on calculating possible outcomes using tree diagrams and factorials, framed by an exploration and pathfinding theme.
A Tier 2 intervention lesson for 7th grade students focusing on predicting relative frequency from probability and verifying with data collection. Students engage in hands-on simulations using tally charts and collaborative challenges.
A Tier 2 intervention lesson focused on designing and using simulations to estimate probabilities of compound events. Students use random number generators and physical tools to model real-world scenarios, analyze frequency, and evaluate their designs using a formal rubric.
A scaffolded lesson on finding probabilities of compound events using organized lists, tables, tree diagrams, and simulations.
A Tier 2 intervention lesson for 7th grade students focusing on creating sample spaces, assigning probabilities to outcomes, and evaluating the validity of probability models using real-world data and theoretical predictions.
A targeted Tier 2 intervention lesson focusing on representing sample spaces for compound events using organized lists and tree diagrams. Students will build 'blueprints' of all possible outcomes for various multi-step scenarios.
A 7th-grade math lesson where students apply simple and complementary probability concepts to design their own closet inventory and challenge peers with probability questions.
Students learn to construct and interpret probability tree diagrams using percentages and counts, grounded in a real-world census-themed activity. The lesson utilizes a detailed math tutorial video for direct instruction followed by a hands-on data interpretation task.
A comprehensive lesson on probability tree diagrams and compound events for 7th-grade students, featuring a video-based exploration of independent events and a hands-on creative activity.
Students will learn to distinguish between independent and dependent events by exploring how probabilities change when items are not replaced, using tree diagrams to visualize outcomes.
A middle school math lesson focused on identifying the linguistic cues that distinguish independent from dependent compound events in word problems. Students analyze scenarios to see how the sample space changes (or doesn't) based on specific keywords.
A visual exploration of compound probability, focusing on how 'without replacement' scenarios change the sample space. Students use a marble jar motif to track denominator shifts during a guided video viewing and collaborative practice.
A hands-on 7th-grade math lesson where students use colored counters to discover the difference between independent and dependent compound events, supported by a video walkthrough and data collection.
A Pre-Algebra lesson focusing on the addition (OR) and multiplication (AND) rules of probability, featuring a video-based instruction, a probability maze activity, and a reflective journal.
A middle school math lesson that transitions students from visual tree diagrams to the more efficient Fundamental Counting Principle for calculating total outcomes.
A math lesson where students apply the Fundamental Counting Principle to design and analyze menus, featuring a video case study on combining categories and tiered calculation challenges.
A lesson exploring the Fundamental Counting Principle through the lens of digital security and password strength, featuring a character customization warm-up and a security expert activity.
Students apply the Fundamental Counting Principle (FCP) to design a restaurant menu with over one million possible meal combinations, transitioning from simple decision-making to complex mathematical outcomes.
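The menu calculation rests on simple multiplication of category sizes. A sketch, with hypothetical category counts (the principle, not the menu, is the point):

```python
# Fundamental Counting Principle: total outcomes = product of choices per category.
# Category sizes here are hypothetical.
from math import prod

menu_choices = {
    "appetizer": 8,
    "entree": 12,
    "side": 10,
    "drink": 9,
    "dessert": 12,
}

total_meals = prod(menu_choices.values())   # 8 * 12 * 10 * 9 * 12 = 103,680

# One more 10-option category (say, toppings) pushes the menu past a million.
total_with_toppings = total_meals * 10      # 1,036,800 possible meals
```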
Students explore the Fundamental Counting Principle through a fashion-themed lens, using tree diagrams and paper dolls to visualize outcomes before formalizing the multiplication rule.
Students explore the relationship between theoretical and experimental probability through hands-on trials, data collection, and comparison of outcomes. This lesson covers CA Common Core standards for 7th-grade statistics and probability.
A high-energy lesson focused on mastering the NYS Grade 7 Math reference sheet through rapid recall and repeated practice. Students transition from studying the official tool to performing a 'brain dump' to build long-term memory for high-stakes testing.
A targeted intervention lesson focused on calculating expected value through game design. Students learn to weigh outcomes by multiplying payoff values by their probabilities to make data-driven decisions.
A Tier 2 intervention lesson focused on using random samples to make population inferences and understanding why different samples produce different results. Students analyze simulated data to gauge variability and develop reasoning skills for predicting population characteristics.
A Tier 2 intervention lesson for 7th grade students focusing on the probability scale (0-1) and classifying the likelihood of events using standard vocabulary. Includes hands-on sorting and progress monitoring.
A small-group intervention lesson where students investigate uniform and non-uniform probability models using physical dice and data collection. Includes guided discussion on theoretical vs. experimental results.
A targeted Tier 2 intervention lesson focusing on experimental probability and the law of large numbers through hands-on data collection and analysis. Students compare small-sample results with long-run relative frequencies to bridge the gap between experimental and theoretical outcomes.
A targeted small group intervention lesson where students connect numerical probability values to descriptive likelihood categories through hands-on sorting and collaborative discussion.
A Tier 2 intervention lesson focusing on creating probability models and comparing them to actual experimental results. Students explore uniform and non-uniform models through hands-on dice and spinner activities.
A Tier 2 intervention lesson focusing on the 0-1 probability scale, vocabulary of likelihood, and placing events on a number line. Students use hands-on experiments with dice and spinners to bridge conceptual gaps.
Students explore simple probability and sample space through a station-based lab, using dice, marbles, and spinners to calculate P(event) while addressing common misconceptions.
A Tier 2 intervention lesson focusing on the core logic of statistical inference. Students move from physical sampling to conceptual understanding of how samples represent populations.
A Tier 2 intervention lesson designed for small groups to master the concepts of representative sampling and valid generalizations through scaffolded real-world scenarios and hands-on simulation.
A Tier 2 intervention lesson for 7th grade students focusing on population sampling. Students use hands-on simulations with colored chips to understand how representative samples allow for valid inferences about a larger population.
A Tier 2 small group intervention lesson focusing on the foundations of random sampling, representative samples, and making valid inferences about a population. Students act as 'Data Detectives' to investigate how different sampling methods affect the reliability of their conclusions.
A Tier 2 intervention lesson focused on using random sampling to make inferences about a population. Students engage in hands-on data collection and compare multiple samples to understand variation and prediction accuracy.
A targeted intervention lesson where students generate multiple samples using dice simulations to observe variation in estimates and make data-backed predictions about a population.
The capstone of the unit where students work in squads to solve a real-world community or school issue using the data skills they've developed.
Focuses on transforming raw data into visual evidence. Students learn to choose the right chart for their 'case' and how to spot misleading visualizations.
Students learn the art of gathering high-quality data through surveys and observations, focusing on creating unbiased questions and selecting representative samples.
Students are introduced to the 'Data Detective' mindset, learning to distinguish between qualitative and quantitative data and identifying potential biases in datasets.
A hands-on exploration of theoretical versus experimental probability using a dice rolling lab and video analysis. Students compare mathematical expectations with real-world data to discover how sample size affects results.
Students explore how digital simulations run thousands of trials to create forecasts with confidence levels, like weather or sports models.
An introduction to updating probabilities based on new evidence, teaching students to pivot their predictions when conditions change.
Students use samples from 'mystery bags' to estimate the total population of items, applying proportional reasoning to real-world data.
Students aggregate class data to observe how relative frequency converges on theoretical probability as the number of trials increases, illustrating the Law of Large Numbers.
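The convergence students observe can be simulated directly: relative frequency of a fair-coin outcome drifts toward the theoretical 0.5 as trials accumulate. A minimal sketch (the seed is only for reproducibility):

```python
# Law of Large Numbers: relative frequency approaches theoretical probability
# as the number of trials grows.
import random

random.seed(7)  # reproducible illustration

def relative_frequency(n_trials):
    """Fraction of 'heads' in n_trials simulated fair-coin flips."""
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

small = relative_frequency(20)      # often noticeably far from 0.5
large = relative_frequency(20000)   # typically very close to 0.5
```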
Students compare theoretical and experimental probability by rolling dice and analyzing why real-world results differ from mathematical predictions.
Students act as consultants to solve a real-world problem using a complex dataset. They synthesize cleaning, visualization, and modeling to provide actionable recommendations.
Students identify how missing or biased data collection leads to unfair models. They analyze ethics in data science through real-world case studies of algorithmic bias.
Uses visual sequences and function machines to help students identify, extend, and describe numerical patterns and rules.
A lesson where students compare two battery brands using double box plots to determine which is 'better' based on median performance, consistency (IQR), and overall range. Students will learn to calculate five-number summaries and construct stacked box plots on a single number line.
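The five-number summary behind each box plot can be computed with the 'median of halves' method common in middle school texts. A sketch with made-up battery lifetimes:

```python
# Five-number summary (min, Q1, median, Q3, max) via the median-of-halves method.
# Battery lifetimes below are illustrative.

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def five_number_summary(values):
    s = sorted(values)
    n = len(s)
    lower = s[: n // 2]          # lower half (median excluded when n is odd)
    upper = s[(n + 1) // 2 :]    # upper half
    return (s[0], median(lower), median(s), median(upper), s[-1])

brand_a = [22, 25, 26, 28, 30, 31, 35]    # hours of battery life
summary = five_number_summary(brand_a)    # (22, 25, 28, 31, 35)
iqr = summary[3] - summary[1]             # Q3 - Q1 = 6
```

Note that some curricula include the median in both halves when computing quartiles; results can differ slightly between conventions.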
A lesson on using Box-and-Whisker plots to compare two sets of basketball player data, focusing on consistency, range, and the Interquartile Range (IQR).
In the final lesson, students compare two distinct populations using mean and MAD to determine if differences between groups are significant relative to their variability.
Students interpret MAD values in real-world contexts like manufacturing and weather to determine what high or low variability means for consistency and reliability.
Students master the three-step algorithm for calculating Mean Absolute Deviation (MAD) using structured templates to organize their statistical work.
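The three-step MAD algorithm named above, sketched step by step with illustrative data:

```python
# Three-step Mean Absolute Deviation procedure (data values are illustrative).

data = [3, 5, 7, 9, 11]

step1_mean = sum(data) / len(data)                 # 1. find the mean: 7
step2_devs = [abs(x - step1_mean) for x in data]   # 2. absolute deviations: [4, 2, 0, 2, 4]
step3_mad = sum(step2_devs) / len(step2_devs)      # 3. average the deviations: 2.4
```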
Using visual representations and physical movement, students explore the concept of 'deviation' by measuring the distance of data points from the arithmetic mean.
Students analyze datasets with identical means but different spreads to understand why measures of center alone are insufficient for describing data. They review range and discuss its limitations.
Students are presented with various sampling scenarios and must decide if the sample size and method are sufficient to draw a valid conclusion. They practice using language that reflects statistical uncertainty (likely, probable, estimated).
Students revisit Mean Absolute Deviation (MAD) to quantify the variability in their samples. They learn that a sample with high variability makes precise prediction harder, requiring larger sample sizes.
Students use random sampling to compare two distinct populations (e.g., height of 7th graders vs. 1st graders). They look at the overlap of the sample distributions to determine if the difference in means is significant or just due to random chance.
Students repeat the sampling process with larger sample sizes (n=20, n=50). They compare the dot plots of these means to the previous lesson's plots, visually recognizing that the data becomes less spread out and more clustered around the true mean.
Students draw multiple small samples (n=5) from a known population and plot the means on a dot plot. They observe high variability and discuss why small samples can produce 'extreme' results.
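The sampling experiment in these lessons can be simulated: draw many samples of a given size from a known population and measure how spread out the sample means are. A sketch with an illustrative population (the seed is only for reproducibility):

```python
# Spread of sample means shrinks as sample size grows.
import random
import statistics

random.seed(42)  # reproducible illustration
population = list(range(1, 101))  # known population: 1..100, true mean 50.5

def spread_of_sample_means(n, repeats=500):
    """Standard deviation of `repeats` sample means, each from a sample of size n."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(repeats)]
    return statistics.stdev(means)

small_spread = spread_of_sample_means(5)    # wide: n=5 means vary a lot
large_spread = spread_of_sample_means(50)   # narrow: n=50 means cluster near 50.5
```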
A Socratic seminar critiquing real-world media claims based on statistical principles of sample size and variability.
Using Mean Absolute Deviation (MAD) as a metric for data reliability and making population predictions based on sample consistency.
Students engage in a simulation where they 'test' a population for a rare trait using a 99% accurate test. They discover that rare traits result in many false alarms, introducing the base rate fallacy.
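The base-rate effect reduces to straightforward arithmetic. A sketch with illustrative rates (a trait carried by 1 in 1,000 people, and a single accuracy figure used for both kinds of error, for simplicity):

```python
# Base rate fallacy: with a rare trait, most positive results are false alarms.
# Rates below are illustrative.

population = 100_000
base_rate = 0.001          # 1 in 1,000 actually has the trait
accuracy = 0.99            # test is correct 99% of the time, either way

has_trait = population * base_rate                            # 100 people
true_positives = has_trait * accuracy                         # 99
false_positives = (population - has_trait) * (1 - accuracy)   # 999

precision = true_positives / (true_positives + false_positives)
# Only about 9% of positive results are real, despite the 99% accurate test.
```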
Students use two-way tables and Venn diagrams to find probabilities of an event given a specific condition, practicing how to 'filter' the sample space.
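'Filtering' the sample space with a two-way table amounts to restricting the counts to one row before dividing. A sketch with hypothetical survey counts:

```python
# Conditional probability from a two-way table (counts are hypothetical).

# Rows: plays a sport or not; columns: likes math or not.
table = {
    ("sport", "math"): 30,
    ("sport", "no_math"): 20,
    ("no_sport", "math"): 25,
    ("no_sport", "no_math"): 25,
}

total = sum(table.values())   # 100 students
p_math = (table[("sport", "math")] + table[("no_sport", "math")]) / total  # 0.55

# Condition on playing a sport: restrict the sample space to that row.
sport_total = table[("sport", "math")] + table[("sport", "no_math")]       # 50
p_math_given_sport = table[("sport", "math")] / sport_total                # 30/50 = 0.6
```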
In a culminating assessment, students are given a complex scenario with three different decision paths. They must construct a tree, calculate probabilities for success for each path, and write a recommendation.
Students apply their skills to a real-world scenario, such as choosing a route to school based on traffic lights and train crossings. They map the probabilities of delays to choose the most efficient path.
Students move from counting branches to calculating probabilities along the branches using multiplication. They determine the likelihood of specific paths (e.g., outcome A then outcome B).
Through a marble-drawing experiment (with and without replacement), students discover how one event affects the probability of the next. They update their tree diagrams to reflect changing probabilities.
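Multiplying along the branches makes the with/without-replacement contrast concrete. A sketch using illustrative marble counts and exact fractions:

```python
# Compound probability along tree branches: with vs. without replacement.
# Marble counts are illustrative.
from fractions import Fraction

red, blue = 3, 2
total = red + blue

# With replacement: the second draw's branch probabilities are unchanged.
p_two_red_with = Fraction(red, total) * Fraction(red, total)             # 3/5 * 3/5 = 9/25

# Without replacement: one red marble is gone, so the branch updates.
p_two_red_without = Fraction(red, total) * Fraction(red - 1, total - 1)  # 3/5 * 2/4 = 3/10
```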
Students evaluate investment opportunities to build a portfolio that maximizes return within a specific risk budget.
Applying expected value to business scenarios, specifically choosing between indoor and outdoor event venues based on weather forecasts.
A simulation where students decide whether to purchase insurance for a device based on risk probability and replacement costs.
Introduction to expected value as a long-run average, calculating simple scenarios to determine if a decision is a 'good deal.'
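The 'good deal' calculation weights each outcome's value by its probability. A sketch for a hypothetical carnival game (pay $2 to play, win $5 with probability 1/4):

```python
# Expected value as a probability-weighted average (game terms are hypothetical).
from fractions import Fraction

outcomes = [
    (Fraction(1, 4), 5 - 2),   # win: $5 payoff minus the $2 cost to play
    (Fraction(3, 4), -2),      # lose: just the $2 cost
]

expected_value = sum(p * value for p, value in outcomes)
# EV = (1/4)(3) + (3/4)(-2) = -3/4: the player loses 75 cents per play
# on average, so this game is not a 'good deal'.
```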
Students learn to assign numerical values to outcomes, differentiating between the probability of an event and the value of its result.
Introduces integers and absolute value using 'Contextual Stories' and vertical number lines to build a visual understanding of numbers below zero.
Explores the conceptual meaning of ratios and proportional relationships using 'Double Number Lines' and visual models to bridge i-Ready 6th grade gaps.
A culminating project-based lesson where students design and execute their own simulation for a complex real-world mystery.
Focuses on sequential weather events and the difference between independent and dependent events in compound probability models.
Investigates sequential probability through the lens of sports, determining if winning streaks are statistically significant or expected random occurrences.
Explores the intersection of biology and math by using Punnett squares and area models to calculate the compound probability of independent genetic traits.
Students define simulations and use random number generators (dice) to model real-world events, specifically focusing on the probability of guessing correctly on multiple-choice tests.
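The guessing scenario can be simulated with a random number generator standing in for the dice. A sketch estimating the chance of passing by pure guessing on a short test (test length and passing bar are illustrative; the seed is only for reproducibility):

```python
# Simulation: chance of getting at least 3 of 5 four-option questions
# right by guessing alone. Parameters are illustrative.
import random

random.seed(1)  # reproducible illustration
TRIALS = 20_000

def guess_test(questions=5, options=4):
    """One simulated test: count of correct random guesses."""
    return sum(random.randrange(options) == 0 for _ in range(questions))

hits = sum(guess_test() >= 3 for _ in range(TRIALS))
estimate = hits / TRIALS
# Theoretical value: C(5,3)(1/4)^3(3/4)^2 + C(5,4)(1/4)^4(3/4) + (1/4)^5
#                  = 106/1024, about 0.104.
```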
Students perform a final 'audit' of their game, presenting their findings on theoretical vs. experimental probability to the class.
Through play-testing and data collection, students generate experimental probability to compare against their theoretical models.
Students use the fundamental counting principle and probability rules to calculate the exact theoretical win rates for their game designs.
Students blueprint their own original games, focusing on creating compound mechanics and mapping out the resulting sample spaces.
Students investigate the concept of the 'house edge' by playing and analyzing simple carnival games involving multi-stage events.
Students use trend lines to extrapolate future values and discuss the reliability of long-term predictions. They learn about the utility and limits of predictive modeling.
Students learn to fit simple trend lines to scatter plots to model data direction. They practice interpolation to estimate missing values within a data range.