Press "Enter" to skip to content

Tag: Pandas

Pandas drop_duplicates(): How to Drop Duplicated Rows

In this post, we will learn how to use Pandas drop_duplicates() to remove duplicate rows, either across all columns or based on a subset of columns, from a Pandas dataframe. That is, we will delete duplicate data and only keep the unique values.

This Pandas tutorial will cover the following: what’s needed to follow the tutorial, importing Pandas, and how to create a dataframe from a dictionary. After this, we will get into how to use Pandas drop_duplicates() to drop duplicate rows, both across all columns and based on a subset of columns.
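As a rough preview, a minimal sketch of drop_duplicates() might look like the following; the example dataframe and its column names are made up for illustration:

    import pandas as pd

    # Made-up example data with one exact duplicate row
    df = pd.DataFrame({
        "name": ["Anna", "Anna", "Ben", "Ben", "Cara"],
        "score": [10, 10, 7, 9, 8],
    })

    # Drop rows that are duplicated across all columns
    unique_rows = df.drop_duplicates()

    # Drop rows that are duplicated on a subset of columns, keeping the first occurrence
    unique_names = df.drop_duplicates(subset=["name"], keep="first")

    print(unique_rows)
    print(unique_names)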

How to use Pandas get_dummies to Create Dummy Variables in Python

In this post, we will learn how to use Pandas get_dummies() method to create dummy variables in Python. Dummy variables (or binary/indicator variables) are often used in statistical analyses as well as in more simple descriptive statistics. Towards the end of the post, there’s a link to a Jupyter Notebook containing all Pandas get_dummies() examples.
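As a quick sketch (the example data and the column name "color" are invented for illustration), pd.get_dummies() turns a categorical column into indicator columns:

    import pandas as pd

    # Made-up example data; "color" is a hypothetical column name
    df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

    # One indicator (dummy) column per category
    dummies = pd.get_dummies(df, columns=["color"], prefix="color")
    print(dummies)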

Dummy Coding for Regression Analysis

One statistical analysis in which we may need to create dummy variables is regression analysis. In fact, regression analysis requires numerical variables, and this means that when we, whether doing research or just analyzing data, wish to include a categorical variable in a regression model, supplementary steps are required to make the results interpretable.
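For the regression use case, a small sketch like the following (with invented "group" and "outcome" columns) shows how dropping one reference category keeps the dummy columns from being perfectly collinear:

    import pandas as pd

    # Hypothetical regression-style data
    df = pd.DataFrame({
        "group": ["control", "treatment", "treatment", "control"],
        "outcome": [2.1, 3.4, 3.0, 1.9],
    })

    # drop_first=True removes one reference category so the dummies
    # can be entered into a regression model without perfect collinearity
    coded = pd.get_dummies(df, columns=["group"], drop_first=True)
    print(coded)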

Tutorial: How to Read Stata Files in Python with Pandas

In this post, we are going to learn how to read Stata (.dta) files in Python.
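In short, and assuming a placeholder file name, reading a .dta file boils down to Pandas read_stata:

    import pandas as pd

    # "example_data.dta" is a placeholder; point this at your own Stata file
    df = pd.read_stata("example_data.dta")
    print(df.head())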

As previously described (in the read .sav files in Python post), Python is a general-purpose language that can also be used for data analysis and data visualization. One example of data visualization can be found in this post.

One potential downside, however, is that Python is not really user-friendly for data storage. This has, of course, led to our data often being stored using Excel, SPSS, SAS, or similar software. See, for instance, the posts about reading .sav and SAS files in Python:

How to Read SAS Files in Python with Pandas

In this post, we are going to learn how to read SAS (.sas7bdat) files in Python.

As previously described (in the read .sav files in Python post), Python is a general-purpose language that can also be used for data analysis and data visualization.

The example .sas7bdat file that we are going to load into a Pandas dataframe using Python.
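Loading such a file can be sketched as follows; the file name is a placeholder for your own .sas7bdat file:

    import pandas as pd

    # "example_data.sas7bdat" is a placeholder file name
    df = pd.read_sas("example_data.sas7bdat", format="sas7bdat")
    print(df.head())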

One potential downside, however, is that Python is not really user-friendly for data storage. This has, of course, led to our data often being stored using Excel, SPSS, SAS, or similar software. See, for instance, the posts about reading .sav, .dta, and .xlsx files in Python:

How to use iloc and loc for Indexing and Slicing Pandas Dataframes

In this post, we are going to work with Pandas iloc and loc. More specifically, we are going to learn slicing and indexing with iloc and loc through examples.

Once we have a dataset loaded as a Pandas dataframe, we often want to start accessing specific parts of the data based on some criteria. For instance, if our dataset contains the result of an experiment comparing different experimental groups, we may want to calculate descriptive statistics for each experimental group separately.
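As a small sketch of the difference (the experiment-style dataframe and its column names are invented for illustration), iloc selects by position while loc selects by label or boolean condition:

    import pandas as pd

    # Hypothetical experiment data
    df = pd.DataFrame({
        "group": ["A", "A", "B", "B"],
        "rt": [512, 498, 430, 441],
    })

    # iloc: position-based selection (first two rows, first column)
    by_position = df.iloc[0:2, 0]

    # loc: label/condition-based selection (reaction times for group "B")
    by_label = df.loc[df["group"] == "B", "rt"]

    # Descriptive statistics for each experimental group
    print(df.groupby("group")["rt"].describe())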

How to Read & Write SPSS Files in Python using Pandas

In this post, we are going to learn 1) how to read SPSS (.sav) files in Python, and 2) how to write to SPSS (.sav) files using Python. 
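A minimal sketch of both directions might look like the following; the file names are placeholders, and it assumes the pyreadstat package is installed (Pandas read_spss relies on it, and pyreadstat's write_sav handles the writing):

    import pandas as pd
    import pyreadstat

    # "survey_data.sav" is a placeholder; use your own .sav file
    df = pd.read_spss("survey_data.sav")

    # Write the dataframe back to a new .sav file
    pyreadstat.write_sav(df, "survey_data_copy.sav")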

Python is a great general-purpose language that is also well suited for carrying out statistical analysis and data visualization. However, Python is not really user-friendly when it comes to data storage. Thus, our data will often be archived using Excel, SPSS, or similar software.

For example, learn how to import data from other file types, such as Excel, SPSS, and Stata in the following two posts:

SPSS interface
Overview of the .sav file in SPSS

If we ever need to learn how to read a file in Python in other formats, such as a text file, it is doable. To read a file in Python without any libraries, we just use the open() method.
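For example, a plain text file (here a made-up "example.txt") can be read like this:

    # Reading a text file without any external libraries
    with open("example.txt", "r") as infile:
        contents = infile.read()

    print(contents)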

Python MANOVA Made Easy using Statsmodels

In previous posts, we learned how to use Python to detect group differences on a single dependent variable. However, there may be situations in which we are interested in several dependent variables. In these situations, the simple ANOVA model is inadequate.

One way to examine multiple dependent variables using Python would, of course, be to carry out multiple ANOVAs, that is, one ANOVA for each of these dependent variables. However, the more tests we conduct on the same data, the more we inflate the family-wise error rate (the greater the chance of making a Type I error).

This is where MANOVA comes in handy. MANOVA, or Multivariate Analysis of Variance, is an extension of Analysis of Variance (ANOVA). However, when using MANOVA we have two, or more, dependent variables.

MANOVA and ANOVA are similar when it comes to some of the assumptions. That is, the data have to have:

  • normally distributed dependent variables
  • equal covariance matrices

In this post, we will learn how to carry out MANOVA using Python (i.e., we will use Pandas and Statsmodels). Here, we are going to use the Iris dataset, which can be downloaded here.
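A minimal sketch with Statsmodels could look like the following; it assumes a local iris.csv with the usual column names (sepal_length, sepal_width, petal_length, petal_width, species), so adjust the file name and columns to match your own download:

    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    # Assumed local copy of the Iris data
    df = pd.read_csv("iris.csv")

    # Four dependent variables, one grouping (independent) variable
    maov = MANOVA.from_formula(
        "sepal_length + sepal_width + petal_length + petal_width ~ species",
        data=df,
    )
    print(maov.mv_test())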

The Easiest Data Cleaning Method using Python & Pandas

In this post, we are going to learn how to simplify our data preprocessing work using the Python package Pyjanitor. More specifically, we are going to learn how to:

  • Add a column to a Pandas dataframe
  • Remove missing values
  • Remove an empty column
  • Clean up column names

Clean Data in Python

That is, we are going to learn how to clean Pandas dataframes using Pyjanitor, as sketched below. For all the Python data manipulation examples, we are also going to see how to carry them out using only Pandas functionality.
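As a rough sketch of the chained Pyjanitor style (the messy example dataframe and its column names are invented here, and the method names assume a reasonably recent Pyjanitor version), the four steps above can be chained like this:

    import pandas as pd
    import janitor  # registers the Pyjanitor methods on dataframes

    # Made-up messy data
    df = pd.DataFrame({
        "First Name": ["Anna", "Ben", None],
        "Empty Col": [None, None, None],
        "Score": [10, None, 8],
    })

    cleaned = (
        df.clean_names()                   # "First Name" -> "first_name", etc.
          .remove_empty()                  # drop completely empty rows/columns
          .dropna()                        # remove remaining missing values (plain Pandas)
          .add_column("source", "survey")  # add a constant column (Pyjanitor)
    )
    print(cleaned)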

How to Read and Write JSON Files using Python and Pandas

In this post, we will learn how to read and write JSON files using Python. In the first part, we are going to use the Python package json to create, read, and write a JSON file. After that, we are going to use the Pandas read_json method to load JSON files into a Pandas dataframe. Here, we will learn how to read a JSON file locally and from a URL, as well as how to read a nested JSON file, using Pandas.

Finally, as a bonus, we will also learn how to manipulate data in Pandas dataframes, rename columns, and plot the data using Seaborn.
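To give a rough idea up front (the file name and data are made up), the json module and Pandas read_json can be combined like this:

    import json
    import pandas as pd

    # Create and write a JSON file with the json module
    records = [{"name": "Anna", "score": 10}, {"name": "Ben", "score": 7}]
    with open("data.json", "w") as outfile:
        json.dump(records, outfile)

    # Read it back with the json module...
    with open("data.json", "r") as infile:
        loaded = json.load(infile)

    # ...and load the same file into a Pandas dataframe
    df = pd.read_json("data.json")
    print(df)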

Exploratory Data Analysis in Python Using Pandas, SciPy, and Seaborn

In this post, we are going to learn how to explore data using Python, Pandas, and Seaborn. The data we are going to explore come from a Wikipedia article. More specifically, we are going to learn how to parse data from a URL using Pandas, and then explore the scraped data by grouping it and by visualizing it in Python. Among other things, we will learn how to count missing values, group data to calculate the mean, and then visualize relationships between two variables.

In previous posts, we have used Pandas to import data from Excel and CSV files. In this post, however, we are going to use Pandas read_html, because it has support for reading data from HTML at URLs (https or http). To read HTML, Pandas uses one of the Python libraries lxml, html5lib, or BeautifulSoup4. This means that you have to make sure that at least one of these libraries is installed. In the specific Pandas read_html example here, we use BeautifulSoup4 to parse the HTML tables from the Wikipedia article.
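As a rough sketch (the URL is just an example Wikipedia page, and the exact tables on it may change), reading HTML tables with BeautifulSoup4 as the parser looks like this:

    import pandas as pd

    # Example Wikipedia page; any page containing an HTML table will do
    url = "https://en.wikipedia.org/wiki/List_of_countries_by_population_(United_Nations)"

    # read_html returns a list of dataframes, one per HTML table found;
    # flavor="bs4" parses with BeautifulSoup4 (requires beautifulsoup4 and html5lib)
    tables = pd.read_html(url, flavor="bs4")

    df = tables[0]
    print(df.head())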