In pandas, how to reindex (fill 0) on level 2 of a MultiIndex

I have a dataframe pivot table with 2 levels of index: month and rating. The rating should be 1, 2, 3 (not to be confused with the columns 1, 2, 3). I found that for some months the rating can be missing. E.g., (Population, 2021-10) only has ratings 1, 2. I need every month to have ratings 1, 2, 3, so I need to fill 0 for the missing rating index.
tbl = pd.pivot_table(self.df, values=['ID'], index=['month', 'risk'],
                     columns=['Factor'], aggfunc='count', fill_value=0)
tbl = tbl.droplevel(None, axis=1).rename_axis(None, axis=1).rename_axis(
    index={'month': None, 'risk': 'Client Risk Rating'})
# show Low for rating 1, Moderate for rating 2, Potential High for rating 3
rating = {1: 'Low',
          2: 'Moderate',
          3: 'Potential High'}
pop = {'N': 'Refreshed Clients', 'Y': 'Population'}
tbl.rename(index={**rating, **pop}, inplace=True)
tbl = tbl.applymap(lambda x: x.replace(',', '')).astype(np.int64)
tbl = tbl.div(tbl.sum(axis=1), axis=0)
# client risk rating may be missing (e.g., only 1, 2);
# to draw, need to fill the missing client risk rating with 0
print("before", tbl)
tbl = tbl.reindex(pd.MultiIndex.from_product(tbl.index.levels), fill_value=0)
print("after pd.MultiIndex.from_product", tbl)
I have used pd.MultiIndex.from_product. It does not work when an index value is missing from the entire dataset. For example, Population has only Moderate, and 2021-03 and 2021-04 have Low and Moderate. After pd.MultiIndex.from_product, Population has Low and Moderate, but every group is still missing Potential High. My question is how to have every month with risk 1, 2, 3. It seems the index level values come only from the data.

You can use pd.MultiIndex.from_product to create a full index:
>>> df
                              1         2         3
(Population)        1  0.436954  0.897747  0.387058
                    2  0.464940  0.611953  0.133941
2021-08(Refreshed)  1  0.496111  0.282798  0.048384
                    2  0.163582  0.213310  0.504647
                    3  0.008980  0.651175  0.400103
>>> df.reindex(pd.MultiIndex.from_product(df.index.levels), fill_value=0)
                              1         2         3
(Population)        1  0.436954  0.897747  0.387058
                    2  0.464940  0.611953  0.133941
                    3  0.000000  0.000000  0.000000   # New record
2021-08(Refreshed)  1  0.496111  0.282798  0.048384
                    2  0.163582  0.213310  0.504647
                    3  0.008980  0.651175  0.400103
Update
I wonder why df = df.reindex([1, 2, 3], level='rating', fill_value=0) doesn't work: the new index values [1, 2, 3] cannot fill the missing values of the existing rating index. By using from_product, it creates the product of the two levels.
In fact it works; I mean it has an effect, just not the one you expect. The method reindexes the level, not the values. Let me show you:
# It seems there is no effect because you don't see 3 and 4 as expected?
>>> df.reindex([1, 2, 3, 4], level='ratings')
                                     0         1         2
                    ratings
(Population)        1         0.536154  0.671380  0.839362
                    2         0.729484  0.512379  0.440018
2021-08(Refreshed)  1         0.279990  0.295757  0.405536
                    2         0.864217  0.798092  0.144219
                    3         0.214566  0.407581  0.736905
# But yes, something happens
>>> df.reindex([1, 2, 3, 4], level='ratings').index.levels
FrozenList([['(Population)', '2021-08(Refreshed)'], [1, 2, 3, 4]])
The level has been reindexed ---^
# It's different from the values
>>> df.reindex([1, 2, 3, 4], level='ratings').index.get_level_values('ratings')
Int64Index([1, 2, 1, 2, 3], dtype='int64', name='ratings')
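Note that df.index.levels only contains values that appear somewhere in the data, so from_product built from the levels cannot restore a rating that is missing everywhere. A minimal sketch of the fix, assuming the ratings level should always be [1, 2, 3] regardless of the data, is to spell that level out explicitly:
# Build the full index from the observed outer level and the *desired* ratings
full_index = pd.MultiIndex.from_product(
    [df.index.levels[0], [1, 2, 3]],  # ratings fixed by hand, not taken from the data
    names=df.index.names)
df = df.reindex(full_index, fill_value=0)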

Generate a new column based on other columns' value

Here is my sample data, input and output:
df = pd.DataFrame({'A_flag': [1, 1, 1], 'B_flag': [1, 1, 0], 'C_flag': [0, 1, 0],
                   'A_value': [5, 3, 7], 'B_value': [2, 7, 4], 'C_value': [4, 2, 5]})
df1 = pd.DataFrame({'A_flag': [1, 1, 1], 'B_flag': [1, 1, 0], 'C_flag': [0, 1, 0],
                    'A_value': [5, 3, 7], 'B_value': [2, 7, 4], 'C_value': [4, 2, 5],
                    'Final': [3.5, 3, 7]})
I want to generate another column called 'Final', conditional on A_flag, B_flag and C_flag:
(a) If all three flags equal 1, then 'Final' = median of (A_value, B_value, C_value)
(b) If exactly two flags equal 1, then 'Final' = mean of those two values
(c) If only one flag equals 1, then 'Final' = that one value
For example, in row 1, A_flag=1 and B_flag=1, so 'Final' = (A_value + B_value) / 2 = (5 + 2) / 2 = 3.5.
In row 2, all three flags are 1, so 'Final' = median of (3, 7, 2) = 3.
In row 3, only A_flag=1, so 'Final' = A_value = 7.
I tried the following:
df.loc[df[['A_flag','B_flag','C_flag']].eq(1).sum(axis=1)==3, "Final"] = df[['A_value','B_value','C_value']].median(axis=1)
df.loc[df[['A_flag','B_flag','C_flag']].eq(1).sum(axis=1)==2, "Final"] =
df.loc[df[['A_flag','B_flag','C_flag']].eq(1).sum(axis=1)==1, "Final"] =
I don't know how to subset the columns for the second and third scenarios.
Assuming the order of the flag and value columns matches, you can filter the flag-like and value-like columns, mask the values where the flag is 0, then compute the median along axis=1. The median alone covers all three cases, since the median of two values is their mean and the median of one value is the value itself:
flag = df.filter(like='_flag')
value = df.filter(like='_value')
df['median'] = value.mask(flag.eq(0).to_numpy()).median(1)
   A_flag  B_flag  C_flag  A_value  B_value  C_value  median
0       1       1       0        5        2        4     3.5
1       1       1       1        3        7        2     3.0
2       1       0       0        7        4        5     7.0
When dealing with functions and dataframes, the easiest way to go is usually to define a function and then apply it to the dataframe, iterating over either the columns or the rows. I think in your case this might work:
import statistics

import pandas as pd

df = pd.DataFrame(
    {
        "A_flag": [1, 1, 1],
        "B_flag": [1, 1, 0],
        "C_flag": [0, 1, 0],
        "A_value": [5, 3, 7],
        "B_value": [2, 7, 4],
        "C_value": [4, 2, 5],
    }
)

def make_final_column(row):
    flags = [(row['A_flag'], row['A_value']), (row['B_flag'], row['B_value']), (row['C_flag'], row['C_value'])]
    met_condition = [value for flag, value in flags if flag == 1]
    # median covers all three rules: the median of two values is their mean,
    # and the median of one value is the value itself
    return statistics.median(met_condition)

df["Final"] = df.apply(make_final_column, axis=1)
df
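With the median in place, this should reproduce the Final column from df1 above:
   A_flag  B_flag  C_flag  A_value  B_value  C_value  Final
0       1       1       0        5        2        4    3.5
1       1       1       1        3        7        2    3.0
2       1       0       0        7        4        5    7.0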
With numpy:
import numpy as np

flags = df[["A_flag", "B_flag", "C_flag"]].to_numpy()
values = df[["A_value", "B_value", "C_value"]].to_numpy()

# Sort each row so that the 0 flags appear first (and the 1 flags last)
index = np.argsort(flags)
flags = np.take_along_axis(flags, index, axis=1)
# Rearrange the values to match the sorted flags
values = np.take_along_axis(values, index, axis=1)

# Result
np.select(
    [
        flags[:, 0] == 1,  # when all flags are 1
        flags[:, 1] == 1,  # when two flags are 1
        flags[:, 2] == 1,  # when one flag is 1
    ],
    [
        np.quantile(values, 0.5, axis=1),  # median of all 3 values
        np.mean(values[:, -2:], axis=1),   # mean of the two 1-flagged values
        values[:, 2],                      # value of the single 1-flag
    ],
    default=np.nan,
)
Quite interesting solutions already. I have used a masked approach.
Explanation:
With the flags given, it becomes easy to find which values matter just by multiplying the values by the flags. Thereafter, mask the entries that are zero in each row and take the median over the axis. (This assumes the flagged values themselves are never 0; a genuine 0 value with flag 1 would be masked as well.)
>>> import numpy as np
>>> t_arr = np.array((df.A_flag * df.A_value, df.B_flag * df.B_value, df.C_flag * df.C_value)).T
>>> maskArr = np.ma.masked_array(t_arr, mask=t_arr == 0)
>>> df["Final"] = np.ma.median(maskArr, axis=1)
>>> df
   A_flag  B_flag  C_flag  A_value  B_value  C_value  Final
0       1       1       0        5        2        4    3.5
1       1       1       1        3        7        2    3.0
2       1       0       0        7        4        5    7.0

Pandas - Merge data frames based on conditions

I would like to merge n data frames based on certain variables (external to the data frames).
Let me clarify the problem with an example.
We have two dataframes detailing the height and age of certain members of a population.
On top of that, we are given one array per data frame, containing one value per property (so the array length equals the number of numerical columns in the data frame).
Consider the following two data frames
df1 = pd.DataFrame({'Name': ['A', 'B', 'C', 'D', 'E'],
                    'Age': [3, 8, 4, 2, 5], 'Height': [7, 2, 1, 4, 9]})
df2 = pd.DataFrame({'Name': ['A', 'B', 'D'],
                    'Age': [4, 6, 4], 'Height': [3, 9, 2]})
looking as
(  Name  Age  Height
 0    A    3       7
 1    B    8       2
 2    C    4       1
 3    D    2       4
 4    E    5       9,
   Name  Age  Height
 0    A    4       3
 1    B    6       9
 2    D    4       2)
As mentioned, we also have two arrays, say
array1 = np.array([1, 5])
array2 = np.array([2, 3])
To make the example concrete, let us say each array contains the year in which the corresponding property was measured.
The output should be constructed as follows:
if an individual appears in only one dataframe, its properties are taken from said dataframe;
if an individual appears in more than one data frame, for each property take the values from the data frame whose associated array has the higher value. So, for property i, compare array1[[i]] and array2[[i]], and take the property values from df1 if array1[[i]] > array2[[i]], and vice versa.
In the context of the example, the rule translates as: take the property which has been measured most recently, if more than one measurement is available.
The output given the example data frames should look like
  Name  Age  Height
0    A    4       7
1    B    6       2
2    C    4       1
3    D    4       4
4    E    5       9
Indeed, for the first property "Age", as array1[[0]] < array2[[0]], values are taken from the second dataframe for the available individuals (A, B, D); the remaining values come from the first dataframe.
For the second property "Height", as array1[[1]] > array2[[1]], values come from the first dataframe, which already describes all the individuals.
At the moment I have some sort of solution based on looping over properties, but it is rather convoluted; I am wondering if any Pandas expert out there could help me towards an elegant solution.
Thanks for your support.
Your question is a bit confusing: array indexes start from 0, so in your example it should be [[0]] and [[1]] rather than [[1]] and [[2]].
You can first concatenate your dataframes to have all names listed, then loop over your columns and update the values where the corresponding array is greater (I added a Z row to df2 to show new rows are being added):
df1 = pd.DataFrame({'Name': ['A', 'B', 'C', 'D', 'E'],
                    'Age': [3, 8, 4, 2, 5], 'Height': [7, 2, 1, 4, 9]})
df2 = pd.DataFrame({'Name': ['A', 'B', 'D', 'Z'],
                    'Age': [4, 6, 4, 8], 'Height': [3, 9, 2, 7]})
array1 = np.array([1, 5])
array2 = np.array([2, 3])

df1.set_index('Name', inplace=True)
df2.set_index('Name', inplace=True)
df3 = pd.concat([df1, df2[~df2.index.isin(df1.index)]])
for i, col in enumerate(df1.columns):
    if array2[i] > array1[i]:
        df3[col].update(df2[col])
print(df3)
Note: You have to set Name as index in order to update the right rows
Output:
      Age  Height
Name
A       4       7
B       6       2
C       4       1
D       4       4
E       5       9
Z       8       7
If you have more than two dataframes in a list, you'll have to store your arrays in a list as well and iterate over the dataframe list while keeping track of the highest array values seen so far in a new array, as sketched below.
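A minimal sketch of that generalization, assuming each frame still has its Name column (the helper name merge_by_recency is made up for illustration):
import numpy as np
import pandas as pd

def merge_by_recency(dfs, arrays):
    # Index every frame by Name; start from the first frame and append unseen names
    dfs = [df.set_index('Name') for df in dfs]
    merged = dfs[0].copy()
    for df in dfs[1:]:
        merged = pd.concat([merged, df[~df.index.isin(merged.index)]])
    best = np.asarray(arrays[0], dtype=float)  # highest array value seen so far, per property
    for df, arr in zip(dfs[1:], arrays[1:]):
        for i, col in enumerate(merged.columns):
            if arr[i] > best[i]:
                merged[col].update(df[col])  # this frame is the most recent for property i
        best = np.maximum(best, arr)
    return merged
With the two frames above, merge_by_recency([df1, df2], [array1, array2]) should reproduce the table shown earlier.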

How can I select the rows which contain some specific value in a dataframe using python?

I am quite new to Python and coding, so sorry in advance if I am not so clear.
I have a dataframe where the rows correspond to IDs (f.ied) and the columns to several values (ICD10 codes). I want to select the rows which contain specific ICD10 codes.
However, I could not find the right way to do so. I tried with loc and set but no luck. Any help, please?
The dataframe is like this: each row corresponds to an f.ied (ID). I want to know which f.ied have the specific codes I20, I21, I22, I23, I24, I25.
df = pd.DataFrame({'feid': [2, 4, 8, 0],
                   'f42002': [2, 0, 0, 0],
                   'f42003': [10, 'I21', 1, 'J10']})
df = df.set_index('feid')
df
DataFrame
      f42002 f42003
feid
2          2     10
4          0    I21
8          0      1
0          0    J10
Desired items
mylist = ['I21', 'J10']
for i in mylist:
    print(df[(df['f42002']==i) | (df['f42003']==i)].index.values)
Result:
[4]
[0]
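If you want all matching rows in one shot rather than per code, a small sketch using isin (assuming any column may hold a code, and using the full code list from the question) could be:
codes = ['I20', 'I21', 'I22', 'I23', 'I24', 'I25']
mask = df.isin(codes).any(axis=1)  # True for rows containing any of the codes
print(df.index[mask].values)       # the matching feid values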

Plotting by groupby and average

I have a dataframe with multiple columns and rows. One column, say 'name', has several rows with names, the same name used multiple times. Other columns, say 'x', 'y', 'z', 'zz', have values. I want to group by name, get the mean of each column (x, y, z, zz) for each name, then plot it on a bar chart.
Using pandas.DataFrame.groupby is an important piece of data wrangling. Let's first make a dummy Pandas data frame.
df = pd.DataFrame({"name": ["John", "Sansa", "Bran", "John", "Sansa", "Bran"],
"x": [2, 3, 4, 5, 6, 7],
"y": [5, -3, 10, 34, 1, 54],
"z": [10.6, 99.9, 546.23, 34.12, 65.04, -74.29]})
>>>
name x y z
0 John 2 5 10.60
1 Sansa 3 -3 99.90
2 Bran 4 10 546.23
3 John 5 34 34.12
4 Sansa 6 1 65.04
5 Bran 7 54 -74.29
We can use a column label to group the data (here the label is "name"). Explicitly writing the by parameter can be omitted (cf. df.groupby("name")).
df.groupby(by = "name").mean().plot(kind = "bar")
which gives us a nice bar graph.
Transposing the groupby result using T (as also suggested by anky) yields a different visualization. We can also pass a dictionary as the by parameter to determine the groups; by can likewise be a function, a Pandas Series, or an ndarray.
df.groupby(by = {1: "Sansa", 2: "Bran"}).mean().T.plot(kind = "bar")
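For completeness, a small sketch of by as a function, grouping rows by the parity of their index (purely illustrative; numeric_only=True skips the non-numeric name column):
df.groupby(by=lambda idx: "even" if idx % 2 == 0 else "odd").mean(numeric_only=True).plot(kind="bar")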

Access Row Based on Column Value

I have the following pandas dataframe:
data = {'ID': [1, 2, 3], 'Neighbor': [3, 1, 2], 'x': [5, 6, 7]}
df = pd.DataFrame(data)
Now I want to create a new column 'y', which for each row is the value of the field x taken from the row referenced by the Neighbor column (i.e., the row whose ID equals the value of Neighbor). E.g., for row 0 (ID 1), 'Neighbor' is 3, thus 'y' should be 7.
So the resulting dataframe should have the column y = [7, 5, 6].
Can I solve this without using df.apply? (It is rather time-consuming for my big dataframes.)
I would like to use something like
df.loc[:, 'y'] = df.loc[df.Neighbor.eq(df.ID), 'x']
but this returns NaN.
We can build a dict from your ID and x columns, then map it into your new column.
your_dict_ = dict(zip(df['ID'],df['x']))
print(your_dict_)
{1: 5, 2: 6, 3: 7}
Then we can use .map to fill your new column, using the Neighbor column as the key.
df['Y'] = df['Neighbor'].map(your_dict_)
print(df)
   ID  Neighbor  x  Y
0   1         3  5  7
1   2         1  6  5
2   3         2  7  6
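The same lookup can be written without the intermediate dict, since Series.map also accepts a Series and aligns on its index (a one-line sketch):
df['Y'] = df['Neighbor'].map(df.set_index('ID')['x'])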