I have a pandas DataFrame with two columns of click counts, 'video' and 'link', indexed by datetime. For some reason, when I use semilogy and boxplot with the 'video' series, I get the error
ValueError: Data has no positive values, and therefore can not be log-scaled.
but when I do the same on the 'link' series, the boxplot draws correctly.
I have verified that both the 'video' and 'link' series have NaN values as well as positive values.
Any thoughts on why this is occurring? Below is the sample code I used to verify this:
# get all the non-null values of 'video' to show that there are positive values
temp = a.types_pivot[a.types_pivot['video'].notnull()]
print(temp)

# get a count of all the NaN values to show both 'video' and 'link' have NaN
# (click counts are whole-number floats, so the only non-integer values are NaN)
count = 0
for item in a.types_pivot['video']:
    if not item.is_integer():
        count += 1
print("there is %s nan values in video" % (count))

# try to draw the plots
fig = plt.figure(figsize=(6, 6), dpi=50)
ax = fig.add_subplot(111)
ax.semilogy()
plt.boxplot(a.types_pivot['video'].values)
Here is the relevant output from the code for the 'video' series:
type link video
created_time
2011-02-10 15:00:51+00:00 NaN 5
2011-02-17 17:50:38+00:00 NaN 5
2011-03-22 14:04:56+00:00 NaN 5
there is 5463 nan values in video
I run the same exact code except I do
a.types_pivot['link']
and I am able to draw the boxplot.
Below is the relevant output from the link series
Index: 5269 entries, 2011-01-24 20:03:58+00:00 to 2012-06-22 16:56:30+00:00
Data columns:
link 5269 non-null values
photo 0 non-null values
question 0 non-null values
status 0 non-null values
swf 0 non-null values
video 0 non-null values
dtypes: float64(6)
there is 216 nan values in link
Using the describe function
a.types_pivot['video'].describe()
count 22.000000
mean 16.227273
std 15.275040
min 1.000000
25% 5.250000
50% 9.500000
75% 23.000000
max 58.000000
Take advantage of pandas' matplotlib helpers/wrappers by calling pd.DataFrame.boxplot(). I believe this will take care of the NaN values for you. It will also put both Series in the same plot so you can easily compare the data.
Example
Create a dataframe with some NaN values and negative values
In [7]: df = pd.DataFrame(np.random.rand(10, 5))
In [8]: df.loc[2:4, 3] = np.nan
In [9]: df.loc[2:3, 4] = -0.45
In [10]: df
Out[10]:
0 1 2 3 4
0 0.391882 0.776331 0.875009 0.350585 0.154517
1 0.772635 0.657556 0.745614 0.725191 0.483967
2 0.057269 0.417439 0.861274 NaN -0.450000
3 0.997749 0.736229 0.084077 NaN -0.450000
4 0.886303 0.596473 0.943397 NaN 0.816650
5 0.018724 0.459743 0.472822 0.598056 0.273341
6 0.894243 0.097513 0.691781 0.802758 0.785258
7 0.222901 0.292646 0.558909 0.220400 0.622068
8 0.458428 0.039280 0.670378 0.457238 0.912308
9 0.516554 0.445004 0.356060 0.861035 0.433503
Note that I can count the number of NaN values like so:
In [14]: df[3].isnull().sum() # Count NaNs in the 4th column
Out[14]: 3
A box plot is simply:
In [16]: df.boxplot()
You could create a semi-log boxplot, for example, by:
In [23]: np.log(df).boxplot()
Or, more generally, modify/transform to your heart's content, and then boxplot:
In [24]: df_mod = np.log(df).dropna()
In [25]: df_mod.boxplot()
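Applied back to the original question, a rough sketch (assuming the OP's a.types_pivot frame and matplotlib imported as plt; pandas' boxplot should drop the NaNs per column):
ax = a.types_pivot[['link', 'video']].boxplot()
ax.set_yscale('log')   # same intent as ax.semilogy() in the question
plt.show()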
pd.DataFrame([["Stress", "NaN"], ["NaN", "Pregnancy"], ["Alcohol", "Pregnancy"]], columns=['causes', 'causes.2'])
I have a sample dataset here. Technically, these columns should have been merged into one, but for some reason they weren't. Now I am tasked with making a pie chart; I do know how to do that with one column, hence I want to merge these columns into a single column with a distinct name.
I tried using df.stack().reset_index()
but that gives me a weird object I do not know how to manipulate:
level_0 level_1 0
0 0 causes Stress
1 0 causes.2 NaN
2 1 causes NaN
3 1 causes.2 Pregnancy
4 2 causes Alcohol
5 2 causes.2 Pregnancy
Anyone know how I could achieve this?
For the pie chart, I plan on using:
values = df["Cause of...."].value_counts()
ax = values.plot(kind="pie", autopct='%1.1f%%', shadow=True, legend=True, title="", ylabel='', labeldistance=None)
ax.legend(bbox_to_anchor=(1, 1.02), loc='upper left')
plt.show()
You can flatten using the underlying numpy array and create a new Series:
pd.Series(df.to_numpy().ravel(), name='causes')
Output:
0 Stress
1 NaN
2 NaN
3 Pregnancy
4 Alcohol
5 Pregnancy
Name: causes, dtype: object
If you have many columns, you need to select only the ones you want to flatten, for example selecting by name:
pd.Series(df.filter(like='causes').to_numpy().ravel(), name='causes')
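To feed this into the pie chart from the question, a minimal sketch (assuming the usual pandas/numpy/matplotlib imports, and that the "NaN" entries are real missing values; if they are literal "NaN" strings, convert them first with df = df.replace('NaN', np.nan)):
causes = pd.Series(df.to_numpy().ravel(), name='causes').dropna()
values = causes.value_counts()
ax = values.plot(kind="pie", autopct='%1.1f%%', shadow=True, legend=True, title="", ylabel='', labeldistance=None)
ax.legend(bbox_to_anchor=(1, 1.02), loc='upper left')
plt.show()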
I have the following dataframe (time-series of returns truncated for succinctness):
import pandas as pd
import numpy as np
df = pd.DataFrame({'return':np.array([np.nan, np.nan, np.nan, 0.015, -0.024, 0.033, 0.021, 0.014, -0.092])})
I'm trying to start the index (i.e., the "base-100" value) at the last NaN before the first return, while keeping the NaNs that precede the 100 in place (thinking in terms of appending to the existing DataFrame, and for graphing purposes).
I have only found a way to create such an index when there are no NaNs in the return vector:
df['index'] = 100*np.exp(np.nan_to_num(df['return'].cumsum()))
Any ideas? Thanks in advance!
If your initial array is
zz = np.array([np.nan, np.nan, np.nan, 0.015, -0.024, 0.033, 0.021, 0.014, -0.092])
Then you can obtain your desired output like this (although there's probably a more optimized way to do it):
np.concatenate((zz[:np.argmax(np.isfinite(zz))],
100*np.exp(np.cumsum(zz[np.isfinite(zz)]))))
Use Series.isna, reverse the order by indexing, and get the index of the last NaN with Series.idxmax:
idx = df['return'].isna().iloc[::-1].idxmax()
Pass it to DataFrame.loc, replace the missing value, and take the cumulative sum:
df['return'] = df.loc[idx:, 'return'].fillna(100).cumsum()
print (df)
return
0 NaN
1 NaN
2 100.000
3 100.015
4 99.991
5 100.024
6 100.045
7 100.059
8 99.967
You can use Series.isna with Series.cumsum and compare against the maximum, then replace the last NaN via Series.fillna, and finally take the cumulative sum:
s = df['return'].isna().cumsum()
df['return'] = df['return'].mask(s.eq(s.max()), df['return'].fillna(100)).cumsum()
print (df)
return
0 NaN
1 NaN
2 100.000
3 100.015
4 99.991
5 100.024
6 100.045
7 100.059
8 99.967
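If you want the compounded (exponential) form of the original formula from the question, while still keeping the leading NaNs in place, here is a minimal sketch (not from the answers above) that reuses the idxmax idea, assuming the df and imports from the question:
# label of the last leading NaN - the row that should become the 100 base
idx = df['return'].isna().iloc[::-1].idxmax()
df['index'] = np.nan
# fill that row's return with 0 so exp(cumsum) starts exactly at 100
df.loc[idx:, 'index'] = 100 * np.exp(df.loc[idx:, 'return'].fillna(0).cumsum())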
I've got a pandas DataFrame filled mostly with real numbers, but there are a few NaN values in it as well.
How can I replace the NaNs with the averages of the columns they are in?
This question is very similar to this one: numpy array: replace nan values with average of columns, but unfortunately the solution given there doesn't work for a pandas DataFrame.
You can simply use DataFrame.fillna to fill the NaNs directly:
In [27]: df
Out[27]:
A B C
0 -0.166919 0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540679 -0.680481
3 NaN -2.027325 1.533582
4 NaN NaN 0.461821
5 -0.788073 NaN NaN
6 -0.916080 -0.612343 NaN
7 -0.887858 1.033826 NaN
8 1.948430 1.025011 -2.982224
9 0.019698 -0.795876 -0.046431
In [28]: df.mean()
Out[28]:
A -0.151121
B -0.231291
C -0.530307
dtype: float64
In [29]: df.fillna(df.mean())
Out[29]:
A B C
0 -0.166919 0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540679 -0.680481
3 -0.151121 -2.027325 1.533582
4 -0.151121 -0.231291 0.461821
5 -0.788073 -0.231291 -0.530307
6 -0.916080 -0.612343 -0.530307
7 -0.887858 1.033826 -0.530307
8 1.948430 1.025011 -2.982224
9 0.019698 -0.795876 -0.046431
The docstring of fillna says that value should be a scalar or a dict; however, it seems to work with a Series as well. If you want to pass a dict, you could use df.mean().to_dict().
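A quick sketch of that dict form, which gives the same filled result as passing the Series:
df.fillna(df.mean().to_dict())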
Try:
sub2['income'].fillna((sub2['income'].mean()), inplace=True)
In [16]: df = DataFrame(np.random.randn(10,3))
In [17]: df.iloc[3:5,0] = np.nan
In [18]: df.iloc[4:6,1] = np.nan
In [19]: df.iloc[5:8,2] = np.nan
In [20]: df
Out[20]:
0 1 2
0 1.148272 0.227366 -2.368136
1 -0.820823 1.071471 -0.784713
2 0.157913 0.602857 0.665034
3 NaN -0.985188 -0.324136
4 NaN NaN 0.238512
5 0.769657 NaN NaN
6 0.141951 0.326064 NaN
7 -1.694475 -0.523440 NaN
8 0.352556 -0.551487 -1.639298
9 -2.067324 -0.492617 -1.675794
In [22]: df.mean()
Out[22]:
0 -0.251534
1 -0.040622
2 -0.841219
dtype: float64
Apply, per column, the mean of that column to fill the NaNs:
In [23]: df.apply(lambda x: x.fillna(x.mean()),axis=0)
Out[23]:
0 1 2
0 1.148272 0.227366 -2.368136
1 -0.820823 1.071471 -0.784713
2 0.157913 0.602857 0.665034
3 -0.251534 -0.985188 -0.324136
4 -0.251534 -0.040622 0.238512
5 0.769657 -0.040622 -0.841219
6 0.141951 0.326064 -0.841219
7 -1.694475 -0.523440 -0.841219
8 0.352556 -0.551487 -1.639298
9 -2.067324 -0.492617 -1.675794
Although the code below does the job, its performance takes a big hit when you are dealing with a DataFrame of ~100k records or more:
df.fillna(df.mean())
In my experience, one should replace NaN values (be it with the mean or the median) only where required, rather than applying fillna() over the whole DataFrame.
I had a DataFrame with 20 variables, and only 4 of them required NaN treatment (replacement). I tried the above code (Code 1), along with a slightly modified version of it (Code 2), where I ran it selectively, i.e. only on the variables which had NaN values:
#------------------------------------------------
#----(Code 1) Treatment on overall DataFrame-----
df.fillna(df.mean())
#------------------------------------------------
#----(Code 2) Selective Treatment----------------
#---Applying only on variables with NaN values:
#---df.isnull().any(axis=0) gives a True/False flag (a Boolean Series),
#---which, when applied on df.columns[], identifies the variables with NaN values
for i in df.columns[df.isnull().any(axis=0)]:
    df[i].fillna(df[i].mean(), inplace=True)
Below is the performance I observed as I kept increasing the number of records in the DataFrame:
DataFrame with ~100k records
Code 1: 22.06 Seconds
Code 2: 0.03 Seconds
DataFrame with ~200k records
Code 1: 180.06 Seconds
Code 2: 0.06 Seconds
DataFrame with ~1.6 Million records
Code 1: code kept running endlessly
Code 2: 0.40 Seconds
DataFrame with ~13 Million records
Code 1: --did not even try, after seeing performance on 1.6 Mn records--
Code 2: 3.20 Seconds
Apologies for the long answer! Hope this helps!
If you want to impute missing values with the mean and go column by column, then this will impute only with the mean of that column. It might also be a little more readable.
sub2['income'] = sub2['income'].fillna((sub2['income'].mean()))
# To read data from csv file
Dataset = pd.read_csv('Data.csv')
X = Dataset.iloc[:, :-1].values
# To calculate mean use imputer class
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer = imputer.fit(X[:, 1:3])
X[:, 1:3] = imputer.transform(X[:, 1:3])
Directly use df.fillna(df.mean()) to fill all the null values with the column means.
If you want to fill the null values of a single column with the mean of that column (suppose x = df['Item_Weight'], where Item_Weight is the column name; here we assign back to x the result of filling the null values of x with the mean of x):
df['Item_Weight'] = df['Item_Weight'].fillna(df['Item_Weight'].mean())
If you want to fill the null values with some string instead (here Outlet_Size is the column name):
df.Outlet_Size = df.Outlet_Size.fillna('Missing')
Pandas: How to replace NaN (nan) values with the average (mean), median or other statistics of one column
Say your DataFrame is df and you have one column called nr_items. This is: df['nr_items']
If you want to replace the NaN values of your column df['nr_items'] with the mean of the column:
Use method .fillna():
mean_value=df['nr_items'].mean()
df['nr_item_ave']=df['nr_items'].fillna(mean_value)
I have created a new column called nr_item_ave to store the column with the NaN values replaced by the mean value of the column.
You should be careful when using the mean: if you have outliers, it is usually better to use the median.
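For example, the median-based version is a one-line change (same hypothetical nr_items column as above):
median_value = df['nr_items'].median()
df['nr_item_ave'] = df['nr_items'].fillna(median_value)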
Another option besides those above is:
df = df.groupby(df.columns, axis = 1).transform(lambda x: x.fillna(x.mean()))
It's less elegant than the previous answers for the mean, but it can be shorter if you want to replace nulls with some other function of the column.
Using the sklearn SimpleImputer class:
from sklearn.impute import SimpleImputer
missingvalues = SimpleImputer(missing_values=np.nan, strategy='mean')
missingvalues = missingvalues.fit(x[:, 1:3])
x[:, 1:3] = missingvalues.transform(x[:, 1:3])
Note: in the newer SimpleImputer the missing_values parameter takes np.nan rather than the string 'NaN', and there is no axis parameter (imputation is always column-wise).
I use this method to fill missing values with the average of the column:
fill_mean = lambda col : col.fillna(col.mean())
df = df.apply(fill_mean, axis = 0)
You can also use value_counts to fill with the most frequent value; this works across different datatypes:
df = df.apply(lambda x:x.fillna(x.value_counts().index[0]))
Why do simple DataFrame-op-DataFrame operations result in a unioned DataFrame? The pandas documentation mentions the union in the context of alignment issues, but I don't see any alignment issues with df1 and df2. Aren't alignment issues about different shapes, different dtypes, or different indexes?
df1 = pd.DataFrame([[1,2],[3,4]],columns=list('AB'))
df2 = pd.DataFrame([[5,6],[7,8]],columns=list('CD'))
>> df1*df2
A B C D
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
Another source of alignment issues is non-matching column names. Here, alignment requires identical column names. Either make the column names the same or use .values. Using .values on just the right-hand DataFrame will retain the DataFrame type.
>> df1*df2.values
A B
0 5 12
1 21 32
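For example, a minimal sketch of the "make the column names the same" option, renaming df2's columns to match df1 before multiplying (this gives the same numbers as the .values version above):
>> df1 * df2.set_axis(df1.columns, axis=1)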
When reindexing, say, 1-minute data to daily data (e.g. an index for daily prices at 16:00), if there is no 1-minute data for the 16:00 timestamp on a given day, we would want to forward-fill from the last non-null 1-minute observation. In the case below, there is no 1-minute data before 16:00 on the 13th, and the last 1-minute data comes from the 10th.
When using reindex with method='ffill', wouldn't one expect the following code to fill in the value on the 13th at 16:00? Inspecting daily1 shows that it is missing, however.
import pandas as pd
import numpy as np
hf_index = pd.date_range(start='2013-05-09 9:00', end='2013-05-13 23:59', freq='1min')
hf_prices = np.random.rand(len(hf_index))
hf = pd.DataFrame(hf_prices, index=hf_index)
hf.loc['2013-05-10 18:00':'2013-05-13 18:00', :] = np.nan
hf.plot()
ind_daily = pd.date_range(start='2013-05-09 16:00', end='2013-05-13 16:00', freq='B')
print(ind_daily.values)
daily1 = hf.reindex(index=ind_daily, method='ffill')
To fill as one (or rather I) would expect, I need to do this:
daily2 = daily1.fillna(method='ffill')
If this is the case, what is the fill method in reindex actually doing? It is not clear to me from the pandas documentation alone. It seems to me I should not have to add the line above.
I'll post my comment from the GitHub issue here as well:
The current behavior, in my opinion, makes more sense. NaN values can be valid "actual" values in some scenarios, and an actual NaN should be treated differently from a NaN introduced by the changing index. If I have a DataFrame like this:
A B C
1 1.242 NaN 0.110
3 NaN -0.185 -0.209
5 -0.581 1.483 NaN
and I want to keep every NaN as NaN, it makes much more sense to have:
df.reindex( [2, 4, 6], method='ffill' )
A B C
2 1.242 NaN 0.110
4 NaN -0.185 -0.209
6 -0.581 1.483 NaN
That is, just take whatever value is there (NaN or not NaN) and fill it forward until the next available index. Reindexing should not enforce a mandatory fillna on the data.
This is completely different from
df.reindex( [2, 4, 6], method=None )
which produces
A B C
2 NaN NaN NaN
4 NaN NaN NaN
6 NaN NaN NaN
Here is an example: np.nan can simply mean "not applicable". Say I have hourly data, and on weekends some calculations are just not applicable, so I fill those columns with NaN for the weekends. Now, if I reindex to a finer index, say every minute, the reindex would pick the last value from Friday and fill it across the whole weekend. That is wrong.
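That said, one way to get the behavior the question expects (filling from the last non-null 1-minute observation) is to forward-fill the high-frequency frame explicitly before reindexing. A minimal sketch, assuming the hf and ind_daily variables from the question:
daily3 = hf.ffill().reindex(index=ind_daily, method='ffill')  # ffill the 1-minute data first, then pick the daily timestamps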