Pandas - manipulating a column based on multiple conditions on a dataframe

I have a Pandas dataframe with two columns: the fruit column has values 0 or 1, and the qty column holds float values with some missing entries.
Now I need to overwrite the qty column with a condition: if the fruit column has value 1 and qty is missing, replace qty with 0; otherwise keep the existing qty value.
Any help is appreciated.
Regards,
Dj

Use:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'qty': [np.nan, np.nan, 10, 23, np.nan],
    'fruit': [1, 0, 1, 0, 1]
})
print (df)
    qty  fruit
0   NaN      1
1   NaN      0
2  10.0      1
3  23.0      0
4   NaN      1
# rows where fruit is 1 and qty is missing
mask = (df['fruit'] == 1) & (df['qty'].isna())
df['qty'] = np.where(mask, 0, df['qty'])
Other solutions:
df.loc[mask, 'qty'] = 0
df['qty'] = df['qty'].mask(mask, 0)
print (df)
    qty  fruit
0   0.0      1
1   NaN      0
2  10.0      1
3  23.0      0
4   0.0      1

df1 = df[df.fruit == 1].fillna(0)
df.update(df1)
print(df)
Here df is your original dataframe.
You can add df.fruit = df.fruit.astype(int) before printing df if you like, since update may convert integer columns to float.
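Putting it together, a minimal end-to-end sketch of this update approach, reusing the sample frame from the first answer:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'qty': [np.nan, np.nan, 10, 23, np.nan],
    'fruit': [1, 0, 1, 0, 1]
})

df1 = df[df.fruit == 1].fillna(0)  # rows with fruit == 1, NaN replaced by 0
df.update(df1)                     # write those rows back into df
df.fruit = df.fruit.astype(int)    # update may upcast int columns to float
print (df)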

Pandas transformation, duplicate index values to column values

I have the following pandas dataframe:
   0
A  0
B  0
C  0
C  4
A  1
A  7
Now some index letters (A and C) appear multiple times. I want the values of these index letters in an extra column alongside, instead of in an extra row. The desired pandas dataframe looks like:
   0    1    2
A  0    1    7
B  0  NaN  NaN
C  0    4  NaN
Anything would help!
IIUC, you need to add a helper column:
(df.assign(group=df.groupby(level=0).cumcount())
   .set_index('group', append=True)[0]  # 0 is the name of the column here
   .unstack('group')
)
or:
(df.reset_index()
   .assign(group=lambda d: d.groupby('index').cumcount())
   .pivot(index='index', columns='group', values=0)  # col name here again
)
output:
group    0    1    2
A      0.0  1.0  7.0
B      0.0  NaN  NaN
C      0.0  4.0  NaN
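For reference, a sketch that reconstructs the input (the question does not show how its frame was built, so the construction below is an assumption) and runs the first solution:
import pandas as pd

# Assumed reconstruction: a single column named 0 with repeated index labels.
df = pd.DataFrame({0: [0, 0, 0, 4, 1, 7]},
                  index=['A', 'B', 'C', 'C', 'A', 'A'])

wide = (df.assign(group=df.groupby(level=0).cumcount())
          .set_index('group', append=True)[0]
          .unstack('group'))
print (wide)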

Sum columns in pandas containing strings and numbers

I need to sum column a and column b, which contain strings in the first row:
>>> df
   a  b
0  c  d
1  1  2
2  3  4
>>> df['sum'] = df.sum(1)
>>> df
   a  b sum
0  c  d  cd
1  1  2   3
2  3  4   7
I only need to add numeric values and get an output like
>>> df
   a  b  sum
0  c  d  "dummyString/NaN"
1  1  2  3
2  3  4  7
I need to add only some columns:
df['sum'] = df['a'] + df['b']
Solution for mixed data (numeric with strings):
I think the simplest is to convert non-numeric values to NaN after the sum with to_numeric:
df['sum'] = pd.to_numeric(df[['a','b']].sum(1), errors='coerce')
Or:
df['sum'] = pd.to_numeric(df['a']+df['b'], errors='coerce')
print (df)
   a  b  sum
0  c  d  NaN
1  1  2  3.0
2  3  4  7.0
EDIT:
Solution if the numbers are string representations - first convert to numeric and then sum:
df['sum'] = pd.to_numeric(df['a'], errors='coerce') + pd.to_numeric(df['b'], errors='coerce')
print (df)
   a  b  sum
0  c  d  NaN
1  1  2  3.0
2  3  4  7.0
Or:
df['sum'] = (df[['a', 'b']].apply(lambda x: pd.to_numeric(x, errors='coerce'))
.sum(axis=1, min_count=1))
print (df)
   a  b  sum
0  c  d  NaN
1  1  2  3.0
2  3  4  7.0
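A usage note on the last solution: min_count=1 makes sum return NaN for a row with no numeric values; with the default min_count=0, the all-string first row would come out as 0.0 instead of NaN.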

Warning with the loc function on a pandas dataframe

While working on an SO question I came across a warning when using loc; the precise details are as below:
DataFrame Samples:
First dataFrame df1 :
>>> data1 = {'Sample': ['Sample_A','Sample_D', 'Sample_E'],
... 'Location': ['Bangladesh', 'Myanmar', 'Thailand'],
... 'Year':[2012, 2014, 2015]}
>>> df1 = pd.DataFrame(data1)
>>> df1.set_index('Sample')
            Location  Year
Sample
Sample_A  Bangladesh  2012
Sample_D     Myanmar  2014
Sample_E    Thailand  2015
Second dataframe df2:
>>> data2 = {'Num': ['Value_1','Value_2','Value_3','Value_4','Value_5'],
... 'Sample_A': [0,1,0,0,1],
... 'Sample_B':[0,0,1,0,0],
... 'Sample_C':[1,0,0,0,1],
... 'Sample_D':[0,0,1,1,0]}
>>> df2 = pd.DataFrame(data2)
>>> df2.set_index('Num')
         Sample_A  Sample_B  Sample_C  Sample_D
Num
Value_1         0         0         1         0
Value_2         1         0         0         0
Value_3         0         1         0         1
Value_4         0         0         0         1
Value_5         1         0         1         0
>>> samples
['Sample_A', 'Sample_D', 'Sample_E']
When I use samples to select those columns as follows, it works, but at the same time it produces a warning:
>>> df3 = df2.loc[:, samples]
>>> df3
   Sample_A  Sample_D  Sample_E
0         0         0       NaN
1         1         0       NaN
2         0         1       NaN
3         0         1       NaN
4         1         0       NaN
Warnings:
indexing.py:1472: FutureWarning:
Passing list-likes to .loc or [] with any missing label will raise
KeyError in the future, you can use .reindex() as an alternative.
See the documentation here:
https://pandas.pydata.org/pandas-docs/stable/indexing.html#deprecate-loc-reindex-listlike
return self._getitem_tuple(key)
I would like to know how to handle this in a better way!
Use reindex like:
df3 = df2.reindex(columns=samples)
print (df3)
   Sample_A  Sample_D  Sample_E
0         0         0       NaN
1         1         0       NaN
2         0         1       NaN
3         0         1       NaN
4         1         0       NaN
Or if you want only the intersecting columns, use Index.intersection:
df3 = df2[df2.columns.intersection(samples)]
#alternative
#df3 = df2[np.intersect1d(df2.columns, samples)]
print (df3)
   Sample_A  Sample_D
0         0         0
1         1         0
2         0         1
3         0         1
4         1         0
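As another option, DataFrame.filter with items also keeps only the labels that actually exist, so a short sketch like this avoids both the warning and the all-NaN column:
df3 = df2.filter(items=samples)
print (df3)
   Sample_A  Sample_D
0         0         0
1         1         0
2         0         1
3         0         1
4         1         0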

Re-index to insert missing rows in a multi-indexed dataframe

I have a MultiIndexed DataFrame with three levels of indices. I would like to expand my third level to contain all values in a given range, but only for the existing values in the two upper levels.
For example, assume the first level is name, the second level is date and the third level is hour. I would like to have rows for all 24 possible hours (even if some are currently missing), but only for the already existing names and dates. The values in new rows can be filled with zeros.
So a simple example input would be:
>>> import pandas as pd
>>> df = pd.DataFrame([[1,1,1,3],[2,2,1,4], [3,3,2,5]], columns=['A', 'B', 'C','val'])
>>> df.set_index(['A', 'B', 'C'], inplace=True)
>>> df
       val
A B C
1 1 1    3
2 2 1    4
3 3 2    5
If the required values for C are [1, 2, 3], the desired output would be:
       val
A B C
1 1 1    3
    2    0
    3    0
2 2 1    4
    2    0
    3    0
3 3 1    0
    2    5
    3    0
I know how to achieve this using groupby and applying a defined function to each group, but I was wondering if there is a cleaner way of doing this with reindex (I couldn't make it work for a MultiIndex case, but perhaps I'm missing something).
Use:
partial_indices = [i[0:2] for i in df.index.values]  # existing (A, B) pairs
C_reqd = [1, 2, 3]
final_indices = [j + (i,) for j in partial_indices for i in C_reqd]
index = pd.MultiIndex.from_tuples(final_indices, names=['A', 'B', 'C'])
df2 = pd.DataFrame(pd.Series(0, index), columns=['val'])
df2.update(df)  # overwrite the zeros with the existing values
Output:
df2
       val
A B C
1 1 1  3.0
    2  0.0
    3  0.0
2 2 1  4.0
    2  0.0
    3  0.0
3 3 1  0.0
    2  5.0
    3  0.0
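For the reindex route the question asks about, here is a minimal sketch (assuming the df and C_reqd defined above) that builds the full target index and fills the new rows with zeros in one step:
pairs = df.index.droplevel('C').unique()  # existing (A, B) combinations
full_index = pd.MultiIndex.from_tuples(
    [p + (c,) for p in pairs for c in C_reqd],
    names=['A', 'B', 'C'])
df2 = df.reindex(full_index, fill_value=0)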

Pandas: Delete duplicated items in a specific column

I have a pandas dataframe (represented in the original post as an Excel screenshot).
Now I would like to delete all duplicates (the repeated value 1) in a specific column (B).
How can I do it?
For this example, the result would look like the sample output in the answer below.
You can use duplicated for a boolean mask and then set NaNs by loc, mask or numpy.where:
df.loc[df['B'].duplicated(), 'B'] = np.nan
df['B'] = df['B'].mask(df['B'].duplicated())
df['B'] = np.where(df['B'].duplicated(), np.nan, df['B'])
Alternatively, if you need to remove duplicate rows by column B:
df = df.drop_duplicates(subset=['B'])
Sample:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'B': [1, 2, 1, 3],
    'A': [1, 5, 7, 9]
})
print (df)
   A  B
0  1  1
1  5  2
2  7  1
3  9  3
df.loc[df['B'].duplicated(), 'B'] = np.nan
print (df)
   A    B
0  1  1.0
1  5  2.0
2  7  NaN
3  9  3.0
df = df.drop_duplicates(subset=['B'])
print (df)
   A  B
0  1  1
1  5  2
3  9  3
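One more usage note: duplicated flags every occurrence after the first by default; its keep parameter controls which occurrences are flagged:
# keep=False flags every occurrence of a duplicated value, including the
# first; keep='last' leaves only the last occurrence unflagged.
df.loc[df['B'].duplicated(keep=False), 'B'] = np.nan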