To load the data into pandas, we must first import the packages that we'll be using. We can then read in the CSV file with the read_csv method. If the call executes without raising an error, the file loaded successfully.

Pandas takes the data and creates a DataFrame data structure with it. But now what do we do with it? The DataFrame allows us to do quite a bit of analysis on the data.

We can look at the number of rows and columns to get a quick idea of how big our data is. There are 43 rows and six columns in our data set. It's not that big of a data set, but even small data sets can yield some good insights.

To get a quick idea of what the data looks like, we can call the head method on the data frame. By default, this returns the top five rows, but it can take a parameter specifying how many rows to return. This looks a lot like an Excel spreadsheet, doesn't it?

Under the hood, the data frame is a two-dimensional data structure, and each column can have a different type. To see that, we can look at the dtypes attribute on the data frame, which shows the type of each column.

df.dtypes

Even though the first four columns show up as objects, we can see from the data that they contain text. Because pandas uses NumPy behind the scenes, it represents strings as objects.

Next, we can look at some descriptive statistics of the data frame with the describe method. This shows descriptive statistics on the data set. Notice that it only shows statistics for the numerical columns. From here you can see the following statistics:

Row count, which aligns with what the shape attribute showed us.
The standard deviation, or how spread out the data is.
The minimum and maximum value of each column.
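The steps above can be sketched in one short script. To keep the example self-contained it builds a small in-memory CSV; the column names and values here are placeholders, not the article's actual data set, where the file would instead be loaded from disk with a path passed to read_csv.

```python
# A minimal sketch of the workflow, assuming pandas is installed.
# The sample CSV below is a stand-in for the article's real data file.
import io
import pandas as pd

csv_text = """name,score
alice,90
bob,85
carol,88
"""

df = pd.read_csv(io.StringIO(csv_text))  # same call accepts a file path

print(df.shape)      # (rows, columns) tuple -> (3, 2) for this sample
print(df.head())     # first five rows by default; df.head(2) for two
print(df.dtypes)     # per-column types; strings show up as "object"
print(df.describe()) # count, mean, std, min, quartiles, max of numeric columns
```

If read_csv raises no exception, the file parsed cleanly, and the shape, head, dtypes, and describe calls give a quick first look at the data before any deeper analysis.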