Press "Enter" to skip to content

Erik Marsja

Featured

How to Use Binder and Python for Reproducible Research

In this post we will learn how to create a Binder so that our data analysis, for instance, can be fully reproduced by other researchers. That is, we will learn how to use Binder for reproducible research.

In previous posts, we have learned how to carry out data analysis (e.g., ANOVA) and visualization (e.g., raincloud plots) using Python. The code we have used has been uploaded in the form of Jupyter Notebooks.

Although this is great, we also need to make sure that we share our computational environment so our code can be re-run and produce the same output. That is, to have a fully reproducible example, we need a way to capture the different versions of the Python packages we’re using.
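As a quick illustration (not taken from the post), one way to capture those versions is to print them from Python and pin them in the requirements.txt file that Binder reads from the repository root; the packages listed here are just examples:

    # Sketch: record the versions of the packages the analysis depends on
    # (the package list below is illustrative, not from the post).
    import numpy
    import pandas
    import seaborn

    for pkg in (numpy, pandas, seaborn):
        print(f"{pkg.__name__}=={pkg.__version__}")

    # Paste the output into requirements.txt in the repository root;
    # Binder installs from that file when it builds the environment.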

The Easiest Data Cleaning Method using Python & Pandas

In this post we are going to learn how to simplify our data preprocessing work using the Python package Pyjanitor. More specifically, we are going to learn how to:

  • Add a column to a Pandas dataframe
  • Remove missing values
  • Remove an empty column
  • Clean up column names

That is, we are going to learn how to clean Pandas dataframes using Pyjanitor. For each Python data manipulation example, we will also see how to carry out the same task using only Pandas functionality.
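As a rough sketch of what that looks like with Pyjanitor's method chaining (the column names and values below are made up for illustration):

    import pandas as pd
    import janitor  # importing pyjanitor registers the cleaning methods on DataFrame

    df = pd.DataFrame({
        "Subject ID": [1, 2, 3],
        "Response Time": [432.0, None, 512.0],
        "Empty Column": [None, None, None],
    })

    cleaned = (
        df.clean_names()                   # e.g., "Subject ID" -> "subject_id"
          .remove_empty()                  # drop the all-empty column
          .dropna()                        # drop rows with missing values (plain Pandas)
          .add_column("group", "control")  # add a constant column
    )
    print(cleaned)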

Repeated Measures ANOVA in R and Python using afex & pingouin

In this post we will learn how to carry out repeated measures Analysis of Variance (ANOVA) in R and Python. To be specific, we will use the R package afex and the Python package pingouin to carry out one-way and two-way ANOVA for within-subjects designs. The structure of the following data analysis tutorial is as follows: a brief introduction to (repeated measures) ANOVA, then carrying out within-subjects ANOVA in R using afex and in Python using pingouin. At the end, there will be a comparison of the results and of the pros and cons of using R or Python for data analysis (i.e., ANOVA).
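For a flavor of the Python side, here is a minimal one-way within-subjects sketch with pingouin; the dataframe and column names are hypothetical, not from the tutorial:

    import pandas as pd
    import pingouin as pg

    # Long-format data: one row per subject and condition (made-up values).
    df = pd.DataFrame({
        "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "condition": ["a", "b", "c"] * 3,
        "rt":        [420, 455, 490, 399, 430, 470, 441, 460, 502],
    })

    # One-way repeated measures ANOVA on reaction time.
    aov = pg.rm_anova(data=df, dv="rt", within="condition", subject="subject")
    print(aov)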

How to Make a Scatter Plot in Python using Seaborn

Data visualization is a big part of the process of data analysis. In this post, we will learn how to make a scatter plot using Python and the package Seaborn. In detail, we will learn how to use the Seaborn methods scatterplot, regplot, lmplot, and pairplot to create scatter plots in Python.

More specifically, we will learn how to make scatter plots, change the size of the dots, change the markers and colors, and change the number of ticks. Furthermore, we will learn how to plot a regression line, add text, and plot a distribution on a scatter plot, among other things. Finally, we will also learn how to save Seaborn plots in high resolution. That is, we learn how to make print-ready plots.
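As a small taste (using Seaborn's bundled "tips" dataset rather than the data from the post):

    import seaborn as sns
    import matplotlib.pyplot as plt

    df = sns.load_dataset("tips")

    # Basic scatter plot with a grouping variable mapped to color.
    ax = sns.scatterplot(x="total_bill", y="tip", hue="day", data=df)
    ax.set_title("Tip by total bill")

    # Save in high resolution for print.
    plt.savefig("scatterplot.png", dpi=300)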

How to Read and Write JSON Files using Python and Pandas

In this post we will learn how to read and write JSON files using Python. In the first part, we are going to use the Python package json to create and write a JSON file. In the next part, we are going to use the Pandas read_json method to load JSON files into a Pandas dataframe. Here, we will learn how to read a JSON file locally and from a URL, as well as how to read a nested JSON file using Pandas.

Finally, as a bonus, we will also learn how to manipulate data in Pandas dataframes, rename columns, and plot the data using Seaborn.
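A minimal sketch of both approaches described above (the file name and contents are made up):

    import json
    import pandas as pd

    records = [{"name": "A", "score": 1}, {"name": "B", "score": 2}]

    # Write a JSON file with the standard library.
    with open("data.json", "w") as f:
        json.dump(records, f)

    # Load it into a Pandas dataframe.
    df = pd.read_json("data.json")
    print(df)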

9 Data Visualization Techniques You Should Learn in Python

With the ever-increasing volume of data, it is impossible to tell stories without visualizations. Data visualization is the art of turning numbers into useful knowledge. In this post, we will learn how to create data visualizations and present data in Python using the Seaborn package.

In this post we are going to learn how to create the following 9 plots:

  1. Scatter Plot
  2. Histogram
  3. Bar Plot
  4. Time Series Plot
  5. Box Plot
  6. Heat Map
  7. Correlogram
  8. Violin Plot
  9. Raincloud Plot

Python Data Visualization Tutorial: Seaborn

As previously mentioned, in this Python data visualization tutorial we are mainly going to use Seaborn, but also Pandas and NumPy. However, to create the raincloud plot we are going to have to use the Python package ptitprince.

Python Raincloud Plot using the ptitprince package
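As a quick illustration of the ptitprince usage mentioned above, here is a minimal sketch using Seaborn's bundled "iris" dataset (not the data from the post):

    import matplotlib.pyplot as plt
    import seaborn as sns
    import ptitprince as pt

    df = sns.load_dataset("iris")

    fig, ax = plt.subplots(figsize=(8, 5))
    pt.RainCloud(x="species", y="sepal_length", data=df, ax=ax, orient="h")
    plt.show()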

Probabilistic Programming in Python

Learn about probabilistic programming in this guest post by Osvaldo Martin, a researcher at The National Scientific and Technical Research Council of Argentina (CONICET) and author of Bayesian Analysis with Python: Introduction to statistical modeling and probabilistic programming using PyMC3 and ArviZ, 2nd Edition.

This post is based on an excerpt from the second chapter of the book that I have slightly adapted so it’s easier to read without having read the first chapter.

OpenSesame Tutorial: Using Image Stimuli

In this OpenSesame tutorial we will learn how to use images as stimuli and how to load the trials, including filenames, correct responses, and conditions, from a pre-generated CSV file. To follow this tutorial, you don't need to know Python programming. However, we are going to generate the CSV file using a short Python script (this can, of course, also be done manually). See also this OpenSesame tutorial.
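A short sketch of the kind of script that could generate such a trial list (the image names, conditions, and responses are made up; the tutorial's own script may differ):

    import csv

    images = ["cat.png", "dog.png", "bird.png"]
    conditions = ["congruent", "incongruent", "congruent"]
    correct_responses = ["z", "m", "z"]

    # Write one row per trial to a CSV file that OpenSesame can read.
    with open("trials.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "condition", "correct_response"])
        writer.writerows(zip(images, conditions, correct_responses))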

Python Pandas Groupby Tutorial

In this Pandas groupby tutorial we are going to learn how to organize Pandas dataframes by groups. More specifically, we are going to learn how to group by one and multiple columns. Furthermore, we are going to learn how to calculate some basic summary statistics (e.g., mean, median), convert a Pandas groupby object to a dataframe, calculate the percentage of observations in each group, and many more useful things.

First of all, we are going to import pandas as pd and read a CSV file into a dataframe using the read_csv method. In the example below, we use index_col=0 because the first column in the dataset is the index column.
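For example, assuming a CSV file and column names along these lines (both hypothetical):

    import pandas as pd

    # The first column in the file is used as the index.
    df = pd.read_csv("data.csv", index_col=0)

    # Group by one column and calculate a summary statistic.
    print(df.groupby("condition")["rt"].mean())

    # Group by multiple columns and convert the result back to a dataframe.
    summary = df.groupby(["condition", "group"])["rt"].median().reset_index()
    print(summary)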

Exploratory Data Analysis in Python Using Pandas, SciPy, and Seaborn

In this post we are going to learn how to explore data using Python, Pandas, and Seaborn. The data we are going to explore comes from a Wikipedia article; in fact, we are going to learn how to parse data from a URL using Python Pandas. Furthermore, we are going to explore the scraped data by grouping it and by visualizing it in Python. More specifically, we will learn how to count missing values, group data to calculate the mean, and then visualize relationships between two variables, among other things.

In previous posts we have used Pandas to import data from Excel and CSV files. In this post, however, we are going to use Pandas read_html, because it has support for reading data from HTML at URLs (https or http). To read HTML, Pandas uses one of the Python libraries lxml, html5lib, or BeautifulSoup4. This means that you have to make sure that at least one of these libraries is installed. In the specific Pandas read_html example here, we use BeautifulSoup4 to parse the HTML tables from the Wikipedia article.
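In rough outline, the parsing step looks something like this (the URL is a placeholder for the Wikipedia article used in the post):

    import pandas as pd

    # Placeholder URL; substitute the Wikipedia article you want to parse.
    # Requires at least one of lxml, html5lib, or BeautifulSoup4 to be installed.
    url = "https://en.wikipedia.org/wiki/..."

    tables = pd.read_html(url)  # returns a list of dataframes, one per HTML table
    df = tables[0]              # pick the table of interest

    print(df.isna().sum())      # count missing values per column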