Sampling Distribution Calculator
Find the mean, standard error, z-score, and probability for any sampling distribution. Supports sample means, proportions, and differences between two groups.
Sampling Distribution

| Mean | Std Error | 95% Confidence Interval |
|---|---|---|
| 100 | 2.5000 | 95.1000 to 104.9000 |
Key Formulas
Standard Error Formulas
| Statistic | Mean of Distribution | Standard Error | CLT Condition |
|---|---|---|---|
| Sample Mean (x-bar) | mu | sigma / sqrt(n) | n ≥ 30 or normal population |
| Sample Proportion (p-hat) | p | sqrt(p(1-p)/n) | np ≥ 10 and n(1-p) ≥ 10 |
| Difference of Means | mu1 - mu2 | sqrt(s1^2/n1 + s2^2/n2) | Both n1, n2 ≥ 30 |
| Difference of Proportions | p1 - p2 | sqrt(p1(1-p1)/n1 + p2(1-p2)/n2) | np ≥ 10 and n(1-p) ≥ 10 for both groups |
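The four standard error formulas in the table translate directly to code. A minimal sketch in Python (the helper names are ours, not part of the calculator):

```python
import math

def se_mean(sigma, n):
    # SE of the sample mean: sigma / sqrt(n)
    return sigma / math.sqrt(n)

def se_proportion(p, n):
    # SE of the sample proportion: sqrt(p(1-p)/n)
    return math.sqrt(p * (1 - p) / n)

def se_diff_means(s1, n1, s2, n2):
    # SE of the difference of two independent sample means
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

def se_diff_proportions(p1, n1, p2, n2):
    # SE of the difference of two independent sample proportions
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Illustrative values: sigma = 15, n = 36
print(se_mean(15, 36))  # 2.5
```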
How Sampling Distributions Work
Imagine you survey 50 people and calculate their average height. Now imagine doing that survey again with a different group of 50. You would get a slightly different average each time. If you did this thousands of times and plotted all those averages, you would have a sampling distribution of the mean.
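You can watch this happen in a simulation. The sketch below assumes a hypothetical population of heights with mean 170 cm and standard deviation 10 cm, repeats the "survey of 50 people" thousands of times, and summarizes the resulting sample means:

```python
import random
import statistics

random.seed(42)

def draw_sample_mean(n):
    # One "survey": sample n heights from a hypothetical population
    # with mean 170 cm and standard deviation 10 cm
    sample = [random.gauss(170, 10) for _ in range(n)]
    return statistics.mean(sample)

# Repeat the survey 5000 times to build the sampling distribution
sample_means = [draw_sample_mean(50) for _ in range(5000)]

print(round(statistics.mean(sample_means), 1))   # close to 170
print(round(statistics.stdev(sample_means), 2))  # close to 10/sqrt(50) ≈ 1.41
```

The spread of those 5000 averages is the standard error, and it matches the sigma / sqrt(n) formula discussed below.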
Why This Matters
In practice, you only take one sample. But the sampling distribution tells you how much your single result might differ from the truth. The standard error quantifies that uncertainty. A small standard error means your sample statistic is likely close to the real population value. A large one means there is more room for random sampling error.
The Central Limit Theorem
The CLT is the reason sampling distributions are so useful. It says that if your sample is large enough, the sampling distribution of the mean will be approximately normal (bell-shaped), no matter what the original population looks like. For means, "large enough" is usually n of 30 or more. For proportions, the rule is np at least 10 and n(1-p) at least 10.
Once you know the distribution is roughly normal, you can use z-scores and the standard normal table to find probabilities. That is how confidence intervals and hypothesis tests work under the hood.
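The rules of thumb above are easy to encode. A small sketch (function names are ours):

```python
def clt_ok_for_mean(n, population_normal=False):
    # CLT rule of thumb for a sample mean: n >= 30, or a normal population
    return n >= 30 or population_normal

def clt_ok_for_proportion(n, p):
    # Success/failure condition for a sample proportion
    return n * p >= 10 and n * (1 - p) >= 10

print(clt_ok_for_mean(50))               # True
print(clt_ok_for_proportion(100, 0.05))  # False: np = 5 < 10
```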
Standard Error vs. Standard Deviation
These two get confused constantly. Standard deviation (sigma) measures how spread out the individual data points are. Standard error (SE) measures how spread out the sample means (or proportions) are across repeated samples. The relationship is simple: SE = sigma / sqrt(n). As your sample gets bigger, the standard error shrinks because averages become more stable.
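The distinction is easiest to see side by side. With a hypothetical dataset of eight measurements:

```python
import statistics
import math

# Hypothetical measurements from a single sample
data = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.3]

sd = statistics.stdev(data)     # spread of the individual data points
se = sd / math.sqrt(len(data))  # spread of the sample mean across repeated samples
print(round(sd, 3), round(se, 3))
```

The standard error is always smaller than the standard deviation for n > 1, and it keeps shrinking as n grows.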
Finding Probabilities
To find the probability that a sample mean falls below some value x, you convert to a z-score: z = (x - mu) / SE. Then look up that z-score in the normal distribution. A z-score of 1.96 corresponds to the 97.5th percentile, which is why 1.96 appears in 95% confidence intervals (you cut off 2.5% from each tail).
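This calculation can be sketched with the standard library's error function, which gives the normal CDF without a lookup table (the mu = 100, SE = 2.5 values below are illustrative):

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# P(sample mean < 103) when mu = 100 and SE = 2.5
z = (103 - 100) / 2.5
prob = normal_cdf(z)
print(round(z, 2), round(prob, 4))  # 1.2  0.8849
```

As a sanity check, `normal_cdf(1.96)` comes out very close to 0.975, matching the 97.5th percentile mentioned above.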
Comparing Two Groups
When comparing two independent samples, the sampling distribution of the difference has a mean equal to the difference of the population parameters and a standard error that combines both groups. For means: SE = sqrt(s1^2/n1 + s2^2/n2). You add the variances because independent random variables combine that way, even when you are subtracting the means.
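A quick sketch with made-up numbers (group 1: s1 = 8, n1 = 40; group 2: s2 = 10, n2 = 50; sample means 72 and 68):

```python
import math

diff = 72 - 68                          # difference of the two sample means
se = math.sqrt(8**2 / 40 + 10**2 / 50)  # variances add under independence
z = diff / se
print(round(se, 4), round(z, 2))  # 1.8974  2.11
```

Note the plus sign inside the square root even though the means are subtracted: subtracting independent random variables adds their variances.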
Sample Size Planning
Doubling your sample size does not halve the standard error. It divides it by sqrt(2), a reduction of roughly 29%. To actually halve the standard error, you need four times the original sample size. This diminishing return is why researchers balance precision against the cost of collecting more data.
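The diminishing return follows directly from SE = sigma / sqrt(n). A short sketch (the helper name is ours):

```python
import math

def se_reduction(factor):
    # Fractional drop in SE when n is multiplied by `factor`:
    # new SE = old SE / sqrt(factor)
    return 1 - 1 / math.sqrt(factor)

print(round(se_reduction(2), 3))  # 0.293: doubling n cuts SE by about 29%
print(round(se_reduction(4), 3))  # 0.5: quadrupling n halves SE
```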