This post focuses on how to carry out a between-subjects ANOVA using **Python**. As mentioned in an earlier post (Repeated measures ANOVA with Python), ANOVAs are commonly used in psychology.

We start with a brief introduction to the theory behind ANOVA. If you are more interested in the four methods for carrying out a one-way ANOVA with Python, click here. ANOVA compares the ratio of systematic variance to unsystematic variance in an experimental study. In the ANOVA, variance is partitioned into total variance, variance due to groups, and variance due to individual differences.

The ratio obtained from this comparison is known as the *F*-ratio. A one-way ANOVA can be seen as a regression model with a single categorical predictor, which usually has two or more categories. A one-way ANOVA has a single factor with *J* levels; each level corresponds to one of the groups in the independent-measures design. The general form of the model, which is a regression model for a categorical factor with *J* levels, is:

y<sub>ij</sub> = μ<sub>j</sub> + ε<sub>ij</sub>

where y<sub>ij</sub> is the score of participant *i* in group *j*, μ<sub>j</sub> is the mean of group *j*, and ε<sub>ij</sub> is the error term.

There is a more elegant way to parametrize the model, in which the group means are represented as deviations from the grand mean:

y<sub>ij</sub> = μ + α<sub>j</sub> + ε<sub>ij</sub>

where μ is the grand mean and α<sub>j</sub> = μ<sub>j</sub> − μ. I will not go into further detail on this equation here.

As for all parametric tests, the data need to be normally distributed (each group's data should be roughly normally distributed) for the *F*-statistic to be reliable. Each experimental condition should have roughly the same variance (i.e., homogeneity of variance), the observations (e.g., across groups) should be independent, and the dependent variable should be measured on at least an interval scale.
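These assumptions can be checked before running the ANOVA. As a minimal sketch, SciPy's `shapiro` and `levene` tests cover normality and homogeneity of variance (the PlantGrowth dried weights are inlined here so the example is self-contained):

```python
from scipy import stats

# PlantGrowth dried weights, inlined so the sketch runs without the CSV
ctrl = [4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14]
trt1 = [4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69]
trt2 = [6.31, 5.12, 5.54, 5.50, 5.37, 5.29, 4.92, 6.15, 5.80, 5.26]

# Shapiro-Wilk test of normality, per group
shapiro_p = {}
for name, grp in [('ctrl', ctrl), ('trt1', trt1), ('trt2', trt2)]:
    W, shapiro_p[name] = stats.shapiro(grp)
    print(name, 'Shapiro-Wilk p =', round(shapiro_p[name], 3))

# Levene's test for homogeneity of variance across the three groups
W_levene, p_levene = stats.levene(ctrl, trt1, trt2)
print('Levene p =', round(p_levene, 3))
```

For these data, none of the tests are significant, so both assumptions look reasonable.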

## ANOVA using Python

In the four examples in this tutorial we are going to use the dataset "PlantGrowth", which originally was available in R but can be downloaded using this link: PlantGrowth. In the first three examples we are going to use a Pandas DataFrame.

```python
import pandas as pd

datafile = "PlantGrowth.csv"
data = pd.read_csv(datafile)

# Create a boxplot
data.boxplot('weight', by='group', figsize=(12, 8))

ctrl = data['weight'][data.group == 'ctrl']

grps = pd.unique(data.group.values)
d_data = {grp: data['weight'][data.group == grp]
          for grp in grps}

k = len(pd.unique(data.group))  # number of conditions
N = len(data.values)  # conditions times participants
n = data.groupby('group').size()[0]  # participants in each condition
```

Judging by the boxplot, there are differences in the dried weight between the two treatments. However, it is not easy to visually determine whether the treatments differ from the control group.
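To complement the boxplot with numbers, per-group descriptives make the pattern concrete. A minimal sketch (the PlantGrowth values are inlined here so it runs without the CSV; with the DataFrame loaded above, the same `groupby` call works directly):

```python
import pandas as pd

# PlantGrowth data inlined so the sketch is self-contained
data = pd.DataFrame({
    'weight': [4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14,
               4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69,
               6.31, 5.12, 5.54, 5.50, 5.37, 5.29, 4.92, 6.15, 5.80, 5.26],
    'group': ['ctrl'] * 10 + ['trt1'] * 10 + ['trt2'] * 10,
})

# Count, mean, and standard deviation of dried weight per group
desc = data.groupby('group')['weight'].agg(['count', 'mean', 'std'])
print(desc)
```

The group means (ctrl ≈ 5.03, trt1 ≈ 4.66, trt2 ≈ 5.53) show the same pattern as the boxplot.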

### Using SciPy

We start by using SciPy and its f_oneway method from stats.

```python
from scipy import stats

F, p = stats.f_oneway(d_data['ctrl'], d_data['trt1'], d_data['trt2'])
```

One problem with using SciPy is that, to follow APA guidelines, we should also report an effect size (e.g., eta squared) as well as the degrees of freedom (DF). The DFs needed for the example data are easily obtained:

```python
DFbetween = k - 1
DFwithin = N - k
DFtotal = N - 1
```

However, if we want to calculate eta squared we need to do some more computations. Thus, the next section deals with how to calculate a one-way ANOVA using a Pandas DataFrame and plain Python code.

### Calculating using Python (i.e., pure Python ANOVA)

A one-way ANOVA is quite easy to calculate so below I am going to show how to do it. First, we need to calculate the sum of squares between (SSbetween), sum of squares within (SSwithin), and sum of squares total (SSTotal).

#### Sum of Squares Between

We start by calculating the Sum of Squares Between, which is the variability due to differences between the groups. It is sometimes known as the Sum of Squares of the Model.

```python
SSbetween = (sum(data.groupby('group').sum()['weight']**2)/n) \
    - (data['weight'].sum()**2)/N
```

#### Sum of Squares Within

Sum of Squares Within is the variability in the data due to differences within the groups (i.e., individual differences). It can be calculated as follows:

```python
sum_y_squared = sum([value**2 for value in data['weight'].values])
SSwithin = sum_y_squared - sum(data.groupby('group').sum()['weight']**2)/n
```

#### Sum of Squares Total

Sum of Squares Total is the total variability in the data. We will need it to calculate eta squared later.

```python
SStotal = sum_y_squared - (data['weight'].sum()**2)/N
```

#### Mean Square Between

Mean Square Between is the Sum of Squares Between divided by the degrees of freedom between.

```python
MSbetween = SSbetween/DFbetween
```

#### Mean Square Within

Mean Square Within is also an easy calculation:

```python
MSwithin = SSwithin/DFwithin
```

#### Calculating the F-value

```python
F = MSbetween/MSwithin
```

To reject the null hypothesis we check whether the obtained F-value exceeds the critical value. We could look it up in an F-table based on DFwithin and DFbetween; however, there is a method in SciPy for obtaining a p-value directly.

```python
p = stats.f.sf(F, DFbetween, DFwithin)
```

Finally, we are also going to calculate the effect size. We start with the commonly used eta squared (*η²*):

```python
eta_sqrd = SSbetween/SStotal
```

However, eta squared is somewhat biased because it is based purely on the sums of squares from the sample; no adjustment is made for the fact that we are aiming to estimate the effect size in the population. Thus, we can use the less biased effect size measure omega squared:

```python
om_sqrd = (SSbetween - (DFbetween * MSwithin))/(SStotal + MSwithin)
```

The results from both SciPy and the method above can be reported in APA style: *F*(2, 27) = 4.846, *p* = .016, *η²* = .264. If you want to report omega squared: *ω²* = .204.
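The steps above can be put together into one self-contained script that reproduces the APA numbers end to end (the PlantGrowth values are inlined here so the sketch runs without the CSV):

```python
from scipy import stats

# PlantGrowth dried weights, inlined so the sketch is self-contained
groups = {
    'ctrl': [4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14],
    'trt1': [4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69],
    'trt2': [6.31, 5.12, 5.54, 5.50, 5.37, 5.29, 4.92, 6.15, 5.80, 5.26],
}
k = len(groups)                  # number of conditions
n = 10                           # participants per condition
N = k * n                        # total sample size
all_vals = [v for grp in groups.values() for v in grp]

DFbetween, DFwithin = k - 1, N - k

# Sums of squares, as derived step by step above
grand_sum = sum(all_vals)
sum_y_squared = sum(v**2 for v in all_vals)
SSbetween = sum(sum(g)**2 for g in groups.values()) / n - grand_sum**2 / N
SSwithin = sum_y_squared - sum(sum(g)**2 for g in groups.values()) / n
SStotal = sum_y_squared - grand_sum**2 / N

# Mean squares, F, p, and effect sizes
MSbetween, MSwithin = SSbetween / DFbetween, SSwithin / DFwithin
F = MSbetween / MSwithin
p = stats.f.sf(F, DFbetween, DFwithin)
eta_sqrd = SSbetween / SStotal
om_sqrd = (SSbetween - DFbetween * MSwithin) / (SStotal + MSwithin)

print(f"F({DFbetween}, {DFwithin}) = {F:.3f}, p = {p:.3f}, "
      f"eta2 = {eta_sqrd:.3f}, omega2 = {om_sqrd:.3f}")
```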

### Using Statsmodels

The third method, using Statsmodels, is also easy, and if you are familiar with R syntax it may feel particularly natural: Statsmodels has a formula API in which the model is formulated very intuitively. First, we import the API and the formula API. Second, we fit an ordinary least squares regression to our data. The fitted model object is then passed to the anova_lm method to obtain an ANOVA table.

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols

mod = ols('weight ~ group', data=data).fit()
aov_table = sm.stats.anova_lm(mod, typ=2)
print(aov_table)
```

#### Output table:

| | sum_sq | df | F | PR(>F) |
|---|---|---|---|---|
| group | 3.76634 | 2 | 4.846088 | 0.01591 |
| Residual | 10.49209 | 27 | | |

As can be seen in the ANOVA table, Statsmodels does not provide an effect size. To calculate eta squared we can use the sums of squares from the table:

```python
esq_sm = aov_table['sum_sq'][0] / (aov_table['sum_sq'][0] + aov_table['sum_sq'][1])
```

### Using pyvttbl anova1way

We can also use the anova1way method from the Python package pyvttbl. This package has its own DataFrame class, which we have to use instead of a Pandas DataFrame to be able to carry out the one-way ANOVA.

```python
from pyvttbl import DataFrame

df = DataFrame()
df.read_tbl(datafile)
aov_pyvttbl = df.anova1way('weight', 'group')
print(aov_pyvttbl)
```

#### Output anova1way

```
Anova: Single Factor on weight

SUMMARY
Groups   Count      Sum   Average   Variance
============================================
ctrl        10   50.320     5.032      0.340
trt1        10   46.610     4.661      0.630
trt2        10   55.260     5.526      0.196

O'BRIEN TEST FOR HOMOGENEITY OF VARIANCE
Source of Variation      SS   df     MS      F   P-value   eta^2   Obs. power
===============================================================================
Treatments            0.977    2  0.489  1.593     0.222   0.106        0.306
Error                 8.281   27  0.307
===============================================================================
Total                 9.259   29

ANOVA
Source of Variation      SS   df     MS      F   P-value   eta^2   Obs. power
================================================================================
Treatments            3.766    2  1.883  4.846     0.016   0.264        0.661
Error                10.492   27  0.389
================================================================================
Total                14.258   29

POSTHOC MULTIPLE COMPARISONS

Tukey HSD: Table of q-statistics
       ctrl       trt1       trt2
=================================
ctrl   0      1.882 ns   2.506 ns
trt1          0          4.388 *
trt2                     0
=================================
  + p < .10 (q-critical[3, 27] = 3.0301664694)
  * p < .05 (q-critical[3, 27] = 3.50576984879)
 ** p < .01 (q-critical[3, 27] = 4.49413305084)
```

As can be seen in the output from anova1way, we get a lot more information. Of particular interest here is that we also get the results of a post-hoc test (i.e., Tukey's HSD). Whereas the ANOVA only tells us that there was a significant effect of treatment, the post-hoc analysis reveals where this effect lies (i.e., between which groups).
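Note that pyvttbl dates from the Python 2 era, so if it will not install, Statsmodels offers the same Tukey HSD via pairwise_tukeyhsd. A sketch, with the PlantGrowth values inlined so it is self-contained:

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# PlantGrowth data inlined so the sketch runs without the CSV
data = pd.DataFrame({
    'weight': [4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14,
               4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69,
               6.31, 5.12, 5.54, 5.50, 5.37, 5.29, 4.92, 6.15, 5.80, 5.26],
    'group': ['ctrl'] * 10 + ['trt1'] * 10 + ['trt2'] * 10,
})

# Tukey HSD on all pairwise group comparisons
tukey = pairwise_tukeyhsd(data['weight'], data['group'], alpha=0.05)
print(tukey.summary())
```

As in the pyvttbl output, only the trt1 vs. trt2 comparison reaches significance.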

That is it! In this tutorial you learned four methods for carrying out one-way ANOVAs using Python. There are, of course, other ways to test the differences between the groups (e.g., the post-hoc analysis). One could carry out multiple comparisons (e.g., t-tests between each pair of groups; just remember to correct for familywise error!) or planned contrasts. In conclusion, doing ANOVAs in Python is pretty simple.
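As a final sketch of the multiple-comparisons route mentioned above, here are pairwise t-tests with a Bonferroni correction (multiply each p-value by the number of tests), again with the PlantGrowth values inlined:

```python
from scipy import stats

# PlantGrowth dried weights, inlined so the sketch is self-contained
ctrl = [4.17, 5.58, 5.18, 6.11, 4.50, 4.61, 5.17, 4.53, 5.33, 5.14]
trt1 = [4.81, 4.17, 4.41, 3.59, 5.87, 3.83, 6.03, 4.89, 4.32, 4.69]
trt2 = [6.31, 5.12, 5.54, 5.50, 5.37, 5.29, 4.92, 6.15, 5.80, 5.26]

pairs = [('ctrl vs trt1', ctrl, trt1),
         ('ctrl vs trt2', ctrl, trt2),
         ('trt1 vs trt2', trt1, trt2)]
corrected = {}
for name, a, b in pairs:
    t, p = stats.ttest_ind(a, b)
    # Bonferroni: multiply each p-value by the number of tests (capped at 1)
    corrected[name] = min(p * len(pairs), 1.0)
    print(f"{name}: t = {t:.3f}, Bonferroni-corrected p = {corrected[name]:.3f}")
```

Consistent with the Tukey HSD results, only trt1 vs. trt2 survives the correction.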
