F-Statistic (ANOVA)
F = MSB / MSW
The F-statistic in one-way ANOVA (Analysis of Variance) is the ratio of the mean square between groups (MSB) to the mean square within groups (MSW). It tests whether the means of three or more groups are all equal. A large F indicates that at least one group mean differs significantly from the others.
Variables
F: the ratio of between-group variance to within-group variance
MSB: the variance between group means, calculated as SSB / (k - 1)
MSW: the variance within groups, calculated as SSW / (N - k)
k: the number of groups being compared
N: the total number of observations across all groups
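The formula and variables above can be sketched in pure Python. This is a minimal illustration, not a library implementation; the group scores below are made-up example data:

```python
# Sketch: one-way ANOVA F-statistic computed directly from raw scores.
# Groups below are illustrative data, not from the article's example.

def anova_f(groups):
    """Return (F, df_between, df_within) for a list of score lists."""
    k = len(groups)                      # number of groups
    N = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / N

    # SSB: squared deviations of group means from the grand mean,
    # weighted by group size
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # SSW: squared deviations of each score from its own group mean
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    msb = ssb / (k - 1)   # mean square between
    msw = ssw / (N - k)   # mean square within
    return msb / msw, k - 1, N - k

F, df1, df2 = anova_f([[4, 5, 6], [6, 7, 8], [8, 9, 10]])
print(F, df1, df2)
```

For these three groups, SSB = 24 and SSW = 6, giving F = 12 with df = (2, 6).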
Example Calculation
Scenario
Three teaching methods are compared using test scores. Group means are 75, 82, and 88 with 10 students per group. SSB = 860, SSW = 2430. Test whether the teaching methods differ.
Given Data
k = 3 groups, n = 10 students per group, N = 30; SSB = 860, SSW = 2430
Calculation
MSB = SSB/(k-1) = 860/2 = 430; MSW = SSW/(N-k) = 2430/27 = 90; F = MSB/MSW = 430/90
Result
F = 4.78 with df = (2, 27)
Interpretation
The F-statistic of 4.78 with (2, 27) degrees of freedom yields a p-value of approximately 0.017. At ฮฑ = 0.05, we reject the null hypothesis and conclude that at least one teaching method produces a significantly different mean score. A post-hoc test (e.g., Tukey HSD) can identify which pairs differ.
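The arithmetic in the worked example is short enough to check directly (using the SSB, SSW, k, and N values given above; the p-value step requires an F-distribution table or software and is not reproduced here):

```python
# Reproducing the article's worked example.
k, N = 3, 30          # 3 teaching methods, 10 students each
SSB, SSW = 860, 2430

MSB = SSB / (k - 1)   # 860 / 2   = 430.0
MSW = SSW / (N - k)   # 2430 / 27 = 90.0
F = MSB / MSW         # 430 / 90  ~ 4.78

print(round(F, 2), (k - 1, N - k))
```

This confirms F ≈ 4.78 with df = (2, 27), matching the Result above.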
When to Use This Formula
- Comparing means across three or more independent groups
- Testing the overall significance of a regression model
- Evaluating experimental designs with multiple treatment levels
- Any situation where multiple t-tests would inflate the Type I error rate
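The last point is easy to quantify. Under the idealized assumption of m independent tests each run at level alpha, the family-wise error rate is 1 - (1 - alpha)^m:

```python
# Why multiple t-tests inflate Type I error: with m independent tests
# at level alpha, the chance of at least one false positive grows as
# 1 - (1 - alpha)**m (an idealized model assuming independence).

def familywise_error(alpha, m):
    """Probability of at least one false positive across m tests."""
    return 1 - (1 - alpha) ** m

alpha = 0.05
# Three groups already require 3 pairwise t-tests
print(round(familywise_error(alpha, 3), 3))   # ~0.143, nearly 3x alpha
print(round(familywise_error(alpha, 10), 3))  # ~0.401
```

A single ANOVA F-test keeps the overall error rate at alpha, which is the point of using it instead of repeated t-tests.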
Common Mistakes
- Using multiple two-sample t-tests instead of ANOVA, which inflates the overall Type I error rate
- Forgetting to check assumptions: independence, normality within groups, and equal variances
- Interpreting a significant F-test as meaning all groups differ (it only means at least one pair differs)
- Confusing the degrees of freedom for the numerator (k - 1) and denominator (N - k)
FAQs
Common questions about this formula
How do I know which groups differ after a significant F-test?
A significant F-test tells you that at least one group mean is different, but not which ones. Use post-hoc comparison procedures such as Tukey's HSD, Bonferroni correction, or Scheffé's method to determine which specific pairs of groups differ significantly.
What are the assumptions of one-way ANOVA?
The three main assumptions are: (1) independence of observations within and between groups, (2) the data in each group are approximately normally distributed, and (3) the population variances are equal across groups (homogeneity of variance). Moderate violations of normality are tolerable with large samples, and unequal variances can be addressed with Welch's ANOVA.
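The simplest of the post-hoc procedures mentioned above, the Bonferroni correction, just divides alpha by the number of pairwise comparisons. A minimal sketch, using hypothetical pairwise p-values (Tukey's HSD and Scheffé's method require distribution tables and are not shown):

```python
# Bonferroni correction for post-hoc pairwise comparisons:
# run each pairwise test at alpha / m instead of alpha.
from math import comb

k = 3                  # groups, as in the worked example
m = comb(k, 2)         # 3 pairwise comparisons among 3 groups
alpha = 0.05
alpha_adj = alpha / m  # 0.05 / 3 ~ 0.0167

# Hypothetical p-values from pairwise t-tests (illustrative only)
pairwise_p = {("A", "B"): 0.030, ("A", "C"): 0.004, ("B", "C"): 0.210}
significant = [pair for pair, p in pairwise_p.items() if p < alpha_adj]
print(significant)
```

Note that the ("A", "B") comparison, significant at the unadjusted alpha = 0.05, no longer passes the corrected threshold; this is exactly the protection against inflated Type I error that the correction provides.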