I would like to display all the information of my data frame, which contains more than 100 columns, with .info() from pandas, but it won't:
data_train.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 85529 entries, 0 to 85528
Columns: 110 entries, ID to TARGET
dtypes: float64(40), int64(19), object(51)
memory usage: 71.8+ MB
I would like it to display like this:
data_train.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10886 entries, 0 to 10885
Data columns (total 12 columns):
datetime 10886 non-null object
season 10886 non-null int64
holiday 10886 non-null int64
workingday 10886 non-null int64
weather 10886 non-null int64
temp 10886 non-null float64
atemp 10886 non-null float64
humidity 10886 non-null int64
windspeed 10886 non-null float64
casual 10886 non-null int64
registered 10886 non-null int64
count 10886 non-null int64
dtypes: float64(3), int64(8), object(1)
memory usage: 1020.6+ KB
But the problem seems to be the high number of columns in my data frame. I would like it to show all of the columns, including the non-null counts.
You can pass the optional arguments verbose=True and show_counts=True (null_counts=True, deprecated since pandas 1.2.0) to the .info() method to output information for all of the columns:
pandas >=1.2.0: data_train.info(verbose=True, show_counts=True)
pandas <1.2.0: data_train.info(verbose=True, null_counts=True)
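For example, a minimal sketch with made-up data: the 150-column frame exceeds pandas' default display.max_info_columns threshold of 100 (which is why your 110-column frame falls back to the summary view), and the verbose/show_counts arguments override that.
import numpy as np
import pandas as pd

# 150 columns of random data; above the default display.max_info_columns of 100,
# plain .info() switches to the summary view
wide = pd.DataFrame(np.random.rand(5, 150),
                    columns=[f'col{i}' for i in range(150)])

wide.info()                                # summary only
wide.info(verbose=True, show_counts=True)  # every column with its non-null count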
While removing the zero and null values from a pandas dataframe, the datatype of the field gets changed.
df.info()
Output:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10866 entries, 0 to 10865
Data columns (total 2 columns):
budget 10866 non-null int64
revenue 10866 non-null int64
dtypes: int64(2)
memory usage: 509.4+ KB
After running the below code to remove the zero and null values, the datatype got changed.
import numpy as np

temp_list_to_check_zero_values = ['budget', 'revenue']
df[temp_list_to_check_zero_values] = df[temp_list_to_check_zero_values].replace(0, np.nan)
df.info()
Output:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3854 entries, 0 to 10848
Data columns (total 2 columns):
budget 3854 non-null float64
revenue 3854 non-null float64
dtypes: float64(2)
memory usage: 210.8+ KB
To preserve the datatype, we used applymap:
df[temp_list_to_check_zero_values] = df[temp_list_to_check_zero_values].applymap(np.int64)
But we got this error:
ModuleNotFoundError: No module named 'pandas.core.apply'
Do we need to install any specific library to use applymap()?
Upgrading your pandas should solve the issue.
Try pip install pandas --upgrade
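Once pandas is upgraded, the workflow from the question runs as expected. A minimal sketch (the dropna() step is inferred from the output above, where the row count falls from 10866 to 3854; applymap(np.int64) raises ValueError on NaN, so it has to come after the drop):
import numpy as np

cols = ['budget', 'revenue']
df[cols] = df[cols].replace(0, np.nan)   # zeros -> NaN; columns become float64
df = df.dropna(subset=cols)              # drop the null rows (10866 -> 3854 above)
df[cols] = df[cols].applymap(np.int64)   # cast back to int64
df.info()
df[cols].astype(np.int64) is an equivalent, more direct cast.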
I have a dataframe (see link for image) and I've listed the info on the data frame. I use the pivot_table function to sum the total number of births for each year. The issue is that when I try to plot the dataframe, the y-axis values range from 0 to 2.0 instead of spanning the minimum and maximum values of the M and F columns.
To verify that it's not my environment, I created a simple dataframe with just a few values, plotted a line graph for it, and it works as expected. Does anyone know why this is happening? Attempting to set the values using ylim or yticks is not working. Ultimately, I may have to try other graphing utilities like matplotlib, but I'm curious why it's not working for such a simple dataframe and dataset.
Visit my github page for a working example <git#github.com:stevencorrea-chicago/stackoverflow_question.git>
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1690784 entries, 0 to 1690783
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 name 1690784 non-null object
1 sex 1690784 non-null object
2 births 1690784 non-null int64
3 year 1690784 non-null Int64
dtypes: Int64(1), int64(1), object(2)
memory usage: 53.2+ MB
new_df = df.pivot_table(values='births', index='year', columns='sex', aggfunc=sum)
new_df.info()
<class 'pandas.core.frame.DataFrame'>
Index: 131 entries, 1880 to 2010
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 F 131 non-null int64
1 M 131 non-null int64
dtypes: int64(2)
memory usage: 3.1+ KB
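For reference, a self-contained miniature of the same pipeline on made-up data (names and numbers invented). With a plain int64 year column the y-axis autoscales to the data's own range as expected, so the nullable Int64 dtype of year in the original frame is one difference worth ruling out:
import pandas as pd

# Made-up miniature of the births data
df = pd.DataFrame({
    'name':   ['Mary', 'John', 'Mary', 'John'],
    'sex':    ['F', 'M', 'F', 'M'],
    'births': [7065, 9655, 2604, 9546],
    'year':   [1880, 1880, 1881, 1881],
})

new_df = df.pivot_table(values='births', index='year', columns='sex', aggfunc='sum')
new_df.plot()  # the y-axis normally autoscales to the data's min/max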
How do I create a subset of the data that contains a random sample of 200 observations (the data frame was created from a csv file)?
Data columns (total 10 columns):
longitude 20640 non-null float64
latitude 20640 non-null float64
housing_median_age 20640 non-null float64
total_rooms 20640 non-null float64
total_bedrooms 20433 non-null float64
population 20640 non-null float64
households 20640 non-null float64
median_income 20640 non-null float64
median_house_value 20640 non-null float64
ocean_proximity 20640 non-null object
How do I determine the correlations between the housing value (median_house_value) and the other variables, and display them in descending order?
df.corr() gives me all the correlations. How do I make it show only the median house value?
For the sample:
df = df.sample(200)
For the correlation, just do:
df.corr()['median_house_value'].sort_values(ascending=False)
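Put together, a sketch (the filename is assumed; random_state is optional but makes the sample reproducible, and on pandas >= 2.0 you need numeric_only=True so the object column ocean_proximity is excluded from the correlation matrix):
import pandas as pd

df = pd.read_csv('housing.csv')            # assumed filename
sample = df.sample(200, random_state=42)   # reproducible 200-row sample

# Correlations of every numeric column with median_house_value, descending
print(sample.corr(numeric_only=True)['median_house_value']
            .sort_values(ascending=False))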
I am using two different data sets (linked below) that record geographic location. I am trying to drop all observations where State is a territory or other non-state (excluding DC). The 'State' variable/column is a non-null object in both dataframes. I include .info() on them to show the data types and number of observations.
Earlier in my code I use .isin() for this purpose on a different dataframe, and it works:
zipcodes = pd.read_csv('zipcode.csv')
zipcodes.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 42522 entries, 0 to 42521
Data columns (total 12 columns):
Zipcode 42522 non-null int64
ZipCodeType 42522 non-null object
City 42522 non-null object
State 42522 non-null object
LocationType 42522 non-null object
Lat 41874 non-null float64
Long 41874 non-null float64
Location 42521 non-null object
Decommisioned 42522 non-null bool
TaxReturnsFiled 28879 non-null float64
EstimatedPopulation 28879 non-null float64
TotalWages 28844 non-null float64
dtypes: bool(1), float64(5), int64(1), object(5)
memory usage: 2.8+ MB
drops =['PR','AP','AA','VI','GU','FM','MP','MH','PW','AS','AE']
zipcodes = zipcodes[~zipcodes['State'].isin(drops)]
zipcodes.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 41656 entries, 12 to 42521
Data columns (total 12 columns):
Zipcode 41656 non-null int64
ZipCodeType 41656 non-null object
City 41656 non-null object
State 41656 non-null object
LocationType 41656 non-null object
Lat 41656 non-null float64
Long 41656 non-null float64
Location 41656 non-null object
Decommisioned 41656 non-null bool
TaxReturnsFiled 28879 non-null float64
EstimatedPopulation 28879 non-null float64
TotalWages 28844 non-null float64
dtypes: bool(1), float64(5), int64(1), object(5)
memory usage: 3.1+ MB
The observations for which State is in the list drops have now been dropped.
However, when I try to do the same with another data set, the same .isin() code does not drop the observations for which State is in the list drops. (I have to create the State variable by splitting an included variable and adding it as a column, which I suspect may be causing my issue, but the resulting variable/column is still a non-null object.)
cty_nbrs = pd.read_excel('county_adjacency.xlsx')
cty_nbrs.fillna(method='ffill', inplace=True)
cty_nbrs = cty_nbrs.join(pd.DataFrame({'State': cty_nbrs.County.str.split(",").str.get(1)}))
cty_nbrs.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 22200 entries, 0 to 22199
Data columns (total 5 columns):
County 22200 non-null object
ctyfips 22200 non-null int32
Neighbours 22200 non-null object
nbrfips 22200 non-null int64
State 22200 non-null object
dtypes: int32(1), int64(1), object(3)
memory usage: 520.4+ KB
cty_nbrs = cty_nbrs[~cty_nbrs['State'].isin(drops)]
cty_nbrs.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 22200 entries, 0 to 22199
Data columns (total 5 columns):
County 22200 non-null object
ctyfips 22200 non-null int32
Neighbours 22200 non-null object
nbrfips 22200 non-null int64
State 22200 non-null object
dtypes: int32(1), int64(1), object(3)
memory usage: 520.4+ KB
The offending observations have not been dropped here. In case this is not evident:
cty_nbrs['State'].value_counts().tail()
AS 7
MP 6
DC 6
VI 5
GU 1
Name: State, dtype: int64
Apart from DC, those values of State are all elements of the list drops, which were dropped from zipcodes earlier using the very same code. What am I missing here?
zipcode.csv
county_adjacency.csv
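One thing worth checking (a guess the output above can't confirm): County.str.split(",").str.get(1) keeps the space that follows the comma, so the State values may actually be ' PR' rather than 'PR', and .isin() compares exact strings. A quick diagnostic-and-fix sketch:
print(cty_nbrs['State'].head().map(repr))   # repr() makes a leading space visible

cty_nbrs['State'] = cty_nbrs['State'].str.strip()   # remove surrounding whitespace
cty_nbrs = cty_nbrs[~cty_nbrs['State'].isin(drops)]
cty_nbrs.info()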
What is the difference between an h5 and an hdf file? Should I use one over the other? I tried timing the following two snippets, and each took about 3 minutes and 29 seconds per loop with a 240 MB file. I eventually got an error on the second snippet, but the file size on disk was above 300 MB.
hdf = pd.HDFStore('combined.h5')
hdf.put('table', df, format='table', complib='blosc', complevel=5, data_columns=True)
hdf.close()
df.to_hdf('combined.hdf', 'table', format='table', mode='w', complib='blosc', complevel=5)
Also, I got a warning message which said:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types
This is due to string columns, which are objects because of blank values. If I do .astype(str), all of the blanks are replaced with the string 'nan' (which even shows up in output files). Should I worry about the warning, fill in the blanks, and replace them with np.nan again later, or just ignore it?
Here is the df.info() output to show that there are some columns with nulls. I can't remove these rows, but I could temporarily fill them with something if required.
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1387276 entries, 0 to 657406
Data columns (total 12 columns):
date 1387276 non-null datetime64[ns]
start_time 1387276 non-null datetime64[ns]
end_time 313190 non-null datetime64[ns]
cola 1387276 non-null object
colb 1387276 non-null object
colc 1387276 non-null object
cold 476816 non-null object
cole 1228781 non-null object
colx 1185679 non-null object
coly 313190 non-null object
colz 1387276 non-null int64
colzz 1387276 non-null int64
dtypes: datetime64[ns](3), int64(2), object(7)
memory usage: 137.6+ MB
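As far as the extensions go, .h5 and .hdf are just different conventional suffixes for the same HDF5 format; pandas and PyTables treat them identically. For the warning, here is a sketch of the fill-and-restore round trip described above (a workaround, not a verdict on whether the pickling matters; the object column names come from the info() output):
import numpy as np
import pandas as pd

obj_cols = ['cola', 'colb', 'colc', 'cold', 'cole', 'colx', 'coly']

# Fill the blanks so PyTables sees uniform strings instead of mixed object data
df[obj_cols] = df[obj_cols].fillna('').astype(str)
df.to_hdf('combined.hdf', 'table', format='table', mode='w',
          complib='blosc', complevel=5)

# Restore the blanks to NaN after reading back
df2 = pd.read_hdf('combined.hdf', 'table')
df2[obj_cols] = df2[obj_cols].replace('', np.nan)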