Pandas / Numpy conditional calculation with NaN values

I'm dealing with incomplete data and would like to assign scoring to different rows.
For example:
Bluetooth and WLAN hold non-numeric values, but I would like to score them as 1 if data is available and 0 if there is no data (NaN).
Samsung's score would be 1 + 1 + 4 = 6
Nokia's score would be 0 + 0 + 5 = 5
         Bluetooth  WLAN   Rating  Score
Apple    Class-A    USB-A  NaN
Samsung  Class-B    USB-B  4
Nokia    NaN        NaN    5
I'm using pandas at the moment, but I'm not sure whether pandas alone can do this or whether I need NumPy as well.
Thanks a lot!

import pandas as pd
import numpy as np

data = {'Bluetooth': ['class-A', 'class-B', np.nan],
        'WLAN': ['usb-A', 'usb-B', np.nan],
        'Rating': [np.nan, 4, 5]}
df = pd.DataFrame(data)

# Missing entries should score 0
df = df.replace(np.nan, 0)
# Coercion turns the remaining non-numeric strings into NaN, which then
# score 1; numeric values (Rating and the zeros above) pass through unchanged
df = df.apply(lambda x: pd.to_numeric(x, errors='coerce')).fillna(1)
df['score'] = df.sum(axis=1)
print(df.head())
Output:
   Bluetooth  WLAN  Rating  score
0        1.0   1.0     0.0    2.0
1        1.0   1.0     4.0    6.0
2        0.0   0.0     5.0    5.0
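As a side note, the same score can be computed in one pass without mutating the frame. A minimal sketch, assuming Bluetooth and WLAN are the only non-numeric columns:

import pandas as pd
import numpy as np

data = {'Bluetooth': ['class-A', 'class-B', np.nan],
        'WLAN': ['usb-A', 'usb-B', np.nan],
        'Rating': [np.nan, 4, 5]}
df = pd.DataFrame(data)

# notna() marks available entries as True (1) and missing ones as False (0)
df['score'] = df[['Bluetooth', 'WLAN']].notna().sum(axis=1) + df['Rating'].fillna(0)
print(df['score'])  # 2.0, 6.0, 5.0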

Try this:
import pandas as pd
import numpy as np

# Count NaNs only in the feature columns; counting across the whole frame
# would also pick up a missing Rating and throw the score off
df['Nan_count'] = df[['Bluetooth', 'WLAN']].isnull().sum(axis=1)
df['score'] = 2 - df['Nan_count'] + df['Rating'].replace(np.nan, 0)
With this solution we do not need to replace the NaNs in the original columns of the dataframe, and the computational cost is low.
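A quick check against the expected scores from the question (a sketch, assuming the df built above):

print(df['score'])
# 0    2.0   (Apple:   1 + 1 + 0)
# 1    6.0   (Samsung: 1 + 1 + 4)
# 2    5.0   (Nokia:   0 + 0 + 5)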

Related

Pandas pivot table with prefix to columns

I have a dataframe:
df =
C1  A1  A2  A3  Type
A   1   5   2   AG
A   7   3   8   SC
And I want to create:
df =
C1  A1_AG  A1_SC  A2_AG  A2_SC
A   1      7      5      3
How can it be done?
You can instead use melt and a transpose:
(df.melt('Type')
   .assign(col=lambda d: d['Type']+'_'+d['variable'])
   .set_index('col')[['value']].T
)
Output:
col    AG_A1  SC_A1  AG_A2  SC_A2  AG_A3  SC_A3
value      1      7      5      3      2      8
With additional column(s):
(df.melt(['C1', 'Type'])
   .assign(col=lambda d: d['Type']+'_'+d['variable'])
   .pivot(index=['C1'], columns='col', values='value')
   .reset_index()
)
Output:
col C1  AG_A1  AG_A2  AG_A3  SC_A1  SC_A2  SC_A3
0    A      1      5      2      7      3      8
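Note that both melt-based variants produce AG_A1-style names, while the question asked for A1_AG. To match that order, swap the concatenation in the assign step (a small tweak of the snippet above):
(df.melt(['C1', 'Type'])
   .assign(col=lambda d: d['variable']+'_'+d['Type'])
   .pivot(index='C1', columns='col', values='value')
   .reset_index()
)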
Use DataFrame.set_index with DataFrame.unstack:
df = df.set_index(['C1','Type']).unstack()
# flatten the MultiIndex columns into 'A1_AG' style names
df.columns = df.columns.map(lambda x: f'{x[0]}_{x[1]}')
df = df.reset_index()
print(df)
  C1  A1_AG  A1_SC  A2_AG  A2_SC  A3_AG  A3_SC
0  A    1.0    7.0    5.0    3.0    2.0    8.0
One convenience option with pivot_wider from pyjanitor:
# pip install pyjanitor
import pandas as pd
import janitor
df.pivot_wider(index='C1', names_from='Type')

  C1  A1_AG  A1_SC  A2_AG  A2_SC  A3_AG  A3_SC
0  A    1.0    7.0    5.0    3.0    2.0    8.0
Of course, you can skip the convenience function and use pivot directly:
result = df.pivot(index='C1', columns='Type')
# join the (value, Type) MultiIndex levels into flat names
result.columns = result.columns.map('_'.join)
result.reset_index()

  C1  A1_AG  A1_SC  A2_AG  A2_SC  A3_AG  A3_SC
0  A    1.0    7.0    5.0    3.0    2.0    8.0

Pandas: If condition on multiple columns having null values and fillna with 0

I have the dataframe below. My requirement is: if both columns are np.nan, make no change; if only one of the columns is empty, fill it with 0. I wrote this code, but why is it not working? Please suggest.
import pandas as pd
import numpy as np

data = {'Age': [np.nan, np.nan, 22, np.nan, 50, 99],
        'Salary': [217, np.nan, 262, 352, 570, np.nan]}
df = pd.DataFrame(data)
print(df)

cond1 = (df['Age'].isnull()) & (df['Salary'].isnull())
if cond1 is False:
    df['Age'] = df['Age'].fillna(0)
    df['Salary'] = df['Salary'].fillna(0)
print(df)
You can just assign it back with update:
c = ['Age', 'Salary']
# fill NaNs only on rows where not every column in c is NaN;
# update then writes the filled values back into df in place
df.update(df.loc[~df[c].isna().all(axis=1), c].fillna(0))
df
Out[341]:
    Age  Salary
0   0.0   217.0
1   NaN     NaN
2  22.0   262.0
3   0.0   352.0
4  50.0   570.0
5  99.0     0.0
c1 = df['Age'].isna()
c2 = df['Salary'].isna()
# build a boolean mask the same shape as df: True exactly where one
# column is NaN and the other is not, then assign 0 at those positions
df[np.c_[c1 & ~c2, ~c1 & c2]] = 0
df
    Age  Salary
0   0.0   217.0
1   NaN     NaN
2  22.0   262.0
3   0.0   352.0
4  50.0   570.0
5  99.0     0.0
This might be a bit less sophisticated than the other answers, but it worked for me:
- first save the row(s) containing NaN in both columns at the same time
- then fillna the original df as per normal
- then set np.nan back at the rows we saved
tmp = df.loc[df['Age'].isna() & df['Salary'].isna()]
df.fillna(0, inplace=True)
df.loc[tmp.index] = np.nan
Get the rows that are all nulls and use where to exclude them during the fill:
# rows where every column is NaN stay untouched; all other rows take
# the filled values
bools = df.isna().all(axis=1)
df.where(bools, df.fillna(0))

    Age  Salary
0   0.0   217.0
1   NaN     NaN
2  22.0   262.0
3   0.0   352.0
4  50.0   570.0
5  99.0     0.0
Your if statement won't work because you need to check each row for True or False: cond1 is a Series, not a single boolean. Comparing it to False with is only tests object identity, so it always evaluates to False, and the series can contain a mixture of True and False values anyway.
An inefficient way would be to traverse the rows:
for row, index in zip(cond1, df.index):
    if not row:
        df.loc[index] = df.loc[index].fillna(0)
Apart from the inefficiency, you have to keep track of index positions yourself; the pandas options abstract that away while remaining quite efficient, since the looping happens in C.
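If you want to keep your cond1 mask, a vectorized sketch that avoids the loop entirely (assuming df and cond1 as defined in the question):

# fill NaNs only on rows where the both-NaN condition is False
df.loc[~cond1] = df.loc[~cond1].fillna(0)
print(df)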

Plotting interval of data in dataframe

I'm a bit new to Python, so maybe the code could be improved.
I have a txt file with x and y values, separated by some NaN in between.
Data goes from -x to x and then comes back (x to -x) but with somewhat different values of y, say:
x = np.array([-0.02, -0.01, 0, 0.01, 0.02, np.nan, 1, np.nan, 0.02, 0.01, 0, -0.01, -0.02])
I would like to plot (with matplotlib) the data up to the first NaN in one format, the x=1 point in another, and the last set of data in a third format (different color, marker, linewidth...).
Of course the data I have is a bit more complex, but I guess is a simple useful approximation.
Any idea?
I'm using pandas as my data manipulation tool
You can create a group label taking the cumsum of where x is null. Then you can define a dictionary keyed by the label with values being a dictionary containing all of the plotting parameters. Use groupby to plot each group separately, unpacking all the parameters to set the arguments for that group.
Sample Data
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
x = np.array([-0.02, -0.01, 0, 0.01, 0.02, np.nan, 1, np.nan, 0.02, 0.01, 0, -0.01, -0.02])
df = pd.DataFrame({'x': x})
Code
# group label: the cumulative count of NaNs seen so far; the NaN rows
# themselves get no label, so they are excluded from the groupby
df['label'] = df.x.isnull().cumsum().where(df.x.notnull())
plot_params = {0: {'lw': 2, 'color': 'red', 'marker': 'o'},
               1: {'lw': 6, 'color': 'black', 'marker': 's'},
               2: {'lw': 9, 'color': 'blue', 'marker': 'x'}}
fig, ax = plt.subplots(figsize=(3, 3))
for label, gp in df.groupby('label'):
    gp.plot(y='x', **plot_params[label], ax=ax, legend=None)
plt.show()
This is what df looks like for reference after defining the group label
print(df)
        x  label
0   -0.02    0.0
1   -0.01    0.0
2    0.00    0.0
3    0.01    0.0
4    0.02    0.0
5     NaN    NaN
6    1.00    1.0
7     NaN    NaN
8    0.02    2.0
9    0.01    2.0
10   0.00    2.0
11  -0.01    2.0
12  -0.02    2.0

Iterating over rows and columns in Pandas

I am trying to fill all NaN values in each column with that column's mean.
import numpy as np
import pandas as pd

table = pd.DataFrame({'A': [1, 2, np.nan],
                      'B': [3, np.nan, np.nan],
                      'C': [4, 5, 6]})

def impute_missing_values(table):
    for column in table:
        for value in column:
            if value == 'NaN':
                value = column.mean(skipna=True)
            else:
                value = value

impute_missing_values(table)
table
Why am I getting an error for this code?
IIUC:
table.fillna(table.mean())
Output:
     A    B  C
0  1.0  3.0  4
1  2.0  3.0  5
2  1.5  3.0  6
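Note that fillna returns a new DataFrame rather than modifying table in place, so assign the result back if you want to keep it:

table = table.fillna(table.mean())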
Okay, I am adding this as another answer because it isn't something I recommend at all: pandas methods vectorize operations for better performance, and loops should be avoided where possible. However, here is a quick fix to your code:
import pandas as pd
import numpy as np
import math

table = pd.DataFrame({'A': [1, 2, np.nan],
                      'B': [3, np.nan, np.nan],
                      'C': [4, 5, 6]})

def impute_missing_values(df):
    for column in df:
        # items() yields (index, value) pairs for the column
        # (iteritems() was removed in pandas 2.0)
        for idx, value in df[column].items():
            if math.isnan(value):
                df.loc[idx, column] = df[column].mean(skipna=True)
    return df

impute_missing_values(table)
table
Output:
     A    B  C
0  1.0  3.0  4
1  2.0  3.0  5
2  1.5  3.0  6
You can try the SimpleImputer from scikit-learn (https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html#sklearn.impute.SimpleImputer) with the mean strategy.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

table = pd.DataFrame({'A': [1, 2, np.nan],
                      'B': [3, np.nan, np.nan],
                      'C': [4, 5, 6]})
print(table, '\n')

imp = SimpleImputer(missing_values=np.nan, strategy='mean')
# fit_transform returns a plain numpy array, so rebuild the DataFrame;
# pass the original columns here, as a set like {'C','B','A'} has no
# guaranteed order and can mislabel the columns
table_means = pd.DataFrame(imp.fit_transform(table), columns=table.columns)
print(table_means)
The print commands result in:
     A    B  C
0  1.0  3.0  4
1  2.0  NaN  5
2  NaN  NaN  6

     A    B    C
0  1.0  3.0  4.0
1  2.0  3.0  5.0
2  1.5  3.0  6.0
To correct your code:
def impute_missing_values(table):
    for column in table:
        # np.where picks the column mean wherever the value is NaN
        table.loc[:, column] = np.where(table[column].isna(),
                                        table[column].mean(),
                                        table[column])
    return table

How can I plot rows less than 0 as a different color?

I have a dataframe with one column of data. I'd like to visualize the data such that all the bars above the horizontal axis are blue, and those below it are red.
How can I accomplish this?
You can use where to select the values above and below 0 into new columns b and c:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
data = np.random.randn(10)
df = pd.DataFrame({'a':data})
df['b'] = df.a.where(df.a >= 0)
df['c'] = df.a.where(df.a < 0)
print(df)
          a         b         c
0  1.624345  1.624345       NaN
1 -0.611756       NaN -0.611756
2 -0.528172       NaN -0.528172
3 -1.072969       NaN -1.072969
4  0.865408  0.865408       NaN
5 -2.301539       NaN -2.301539
6  1.744812  1.744812       NaN
7 -0.761207       NaN -0.761207
8  0.319039  0.319039       NaN
9 -0.249370       NaN -0.249370
# plot both series on the same axes
ax = df.b.plot.bar(color='b')
df.c.plot.bar(ax=ax, color='r')
plt.show()
Using numpy.where you can get the indices at which the data is below 0 (np.where(x < 0)) and at or above 0 (np.where(x >= 0)). This gives two non-overlapping index arrays, which you can visualize using different colors.
Actually, a pandas frame has its own equivalent of numpy.where; see this question: pandas equivalent of np.where
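For completeness, a minimal sketch of that idea that builds a per-bar color array with np.where instead of splitting the data into two columns (assuming the same df as above):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

np.random.seed(1)
df = pd.DataFrame({'a': np.random.randn(10)})

# one color per bar: red for negative values, blue otherwise
colors = np.where(df['a'] < 0, 'red', 'blue')
plt.bar(df.index, df['a'], color=colors)
plt.axhline(0, color='black', lw=0.8)
plt.show()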