Translate a DataFrame using a crosswalk in Julia

I have a very large dataframe (original_df) with columns of codes
14 15
21 22
18 16
And a second dataframe (crosswalk) which maps 'old_codes' to 'new_codes'
14 104
15 105
16 106
18 108
21 201
22 202
Of course, the resultant df (resultant_df) that I would like would have values:
104 105
201 202
108 106
I am aware of two ways to accomplish this. First, I could iterate through each code in original_df, find the code in crosswalk, then rewrite the corresponding cell in original_df with the translated code from crosswalk. The faster and more natural option would be to leftjoin() each column of original_df on 'old_codes'. Unfortunately, it seems I have to do this separately for each column, and then delete each column after its conversion column has been created -- this feels unnecessarily complicated. Is there a simpler way to convert all of original_df at once using the crosswalk?

You can do the following (I am using column numbers as you have not provided column names):
using DataFrames
d = Dict(crosswalk[!, 1] .=> crosswalk[!, 2])  # old code => new code lookup
resultant_df = select(original_df, [i => ByRow(x -> d[x]) for i in 1:ncol(original_df)], renamecols=false)

Related

Taking the average of columns for similar rows

What I am trying to do: if I have rows with the same Prefix, FromMP, and ToMP,
then I take the average of TPCSpeed1 for those rows.
For example, I have
CF 116 117 54.8 56 50 50 50 50 50
CF 116 117 54.8 56 50 50 50 50 50
CF 116 117 54.8 56 50 50 50 50 50
So if the rows share the same FromMP, ToMP, Prefix, and Suffix, I want to take the average TPCSpeed1 of all the rows that share these values; for example, for 116 117 the TPCSpeed1 would be (54.8 + 54.8 + 54.8) / 3.
I want to take the average of the TPCSpeed1 column for all the rows which share the same info. If the rows do not share the same info, I just want the TPCSpeed1 as it is. Not sure how to do this; maybe it involves checking for duplicates.
I am not sure how to do this in pandas:
import pandas as pd
import numpy as np
result=pd.read_csv("result.csv")
a1=result.columns.get_loc("TPCSpeed1")
a2=result.columns.get_loc("TPCSpeed2")
a3=result.columns.get_loc("TPCSpeed3")
a4=result.columns.get_loc("TPCSpeed4")
a5=result.columns.get_loc("TPCSpeed5")
a6=result.columns.get_loc("TPCSpeed6")
a7=result.columns.get_loc("TPCSpeed7")
pre=result.columns.get_loc("Prefix")
suf=result.columns.get_loc("Suffix")
FromMp=result.columns.get_loc("FromMP")
ToMp=result.columns.get_loc("ToMP")
w1=[]
w2=[]
w3=[]
w4=[]
w5=[]
w6=[]
w7=[]
prefix=[]
suffix=[]
begin=[]
end=[]
for index,row in result.iterrows():
    print(index)
    c1=row[pre]
    c2=row[suf]
    c3=row[FromMp]
    c4=row[ToMp]
    prefix.append(c1)
    suffix.append(c2)
    begin.append(c3)
    end.append(c4)
    b1=row[a1]
    w1.append(b1)
    b2=row[a2]
    w2.append(b2)
    b3=row[a3]
    w3.append(b3)
    b4=row[a4]
    w4.append(b4)
    b5=row[a5]
    w5.append(b5)
    b6=row[a6]
    w6.append(b6)
    b7=row[a7]
    w7.append(b7)
This is a good use for groupby().agg().
At its simplest, you can try:
result.groupby(['Prefix', 'FromMP', 'ToMP', 'Suffix']).agg(np.mean)
This will collapse all rows that have the same values in all four named columns, and then replace them with a single row with the mean values in each of the other columns. You can use reset_index() to turn the group keys back into ordinary columns.
The agg (aka aggregate) function is fairly flexible. You can treat columns differently. It doesn't have to be the average for everything.
https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.aggregate.html
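As a sketch of how this could look end to end with the names from the question (result.csv, the Prefix/FromMP/ToMP/Suffix keys, and the TPCSpeed1..TPCSpeed7 columns are all taken from the asker's snippet):
import pandas as pd

result = pd.read_csv("result.csv")

speed_cols = [f"TPCSpeed{i}" for i in range(1, 8)]
key_cols = ["Prefix", "FromMP", "ToMP", "Suffix"]

# rows sharing all four key columns collapse to a single row with the mean speeds;
# a row with a unique key keeps its values (the mean of one value is itself)
averaged = result.groupby(key_cols, as_index=False)[speed_cols].mean()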

Pivoting with groupby?

I wonder if you can help me to find a solution for the following problem. Given a data frame df1 like this
import pandas as pd

d1={'L':['aaa','bbb','ccc','aaa','bbb','ddd'],
'w':[1,5,9,13,17,21],
'x':[2,6,10,14,18,22],
'y':[3,7,11,15,19,23],
'z':[4,8,12,16,20,24]}
df1=pd.DataFrame(d1)
and two dictionaries to define grouping over columns and rows
dctRowGroups={'aaa':'A','bbb':'B','ccc':'A','ddd':'B'}
dctColGroups={'w':'ALPHA','x':'BETA','y':'ALPHA','z':'BETA'}
I wanted to aggregate over columns as a first step. Applying
g2=df1.groupby(dctColGroups,axis=1)
g2.sum()
results in a frame containing only the summed ALPHA and BETA columns, but I wanted to keep the 'L' column for the next step, row-wise aggregation; i.e. the result should be a dataframe df2 that has 'L' alongside ALPHA and BETA.
What do I need to code to make this happen?
As a next step, I want to aggregate df2 over the rows using the dctRowGroups dictionary
g3=df2.groupby(dctRowGroups,axis=0)
g3.sum()
to get a final result with one row per group A and B.
In what way can I do all these steps in as few lines of code as possible?
Appreciate your advice on this.
Thanks a lot
Willfried.
You can do:
First, create df2 and insert the 'L' column using the insert() method:
df2=df1.groupby(dctColGroups,axis=1).sum()
df2.insert(0,'L',df1['L']) #use this only when the column order matters
#OR (use either insert or assign, not both)
df2=df2.assign(L=df1['L']) #otherwise use this
Finally, use the assign(), map() and groupby() methods:
result=df2.assign(L=df2['L'].map(dctRowGroups)).groupby('L').sum()
Outputs:
df2:
L ALPHA BETA
0 aaa 4 6
1 bbb 12 14
2 ccc 20 22
3 aaa 28 30
4 bbb 36 38
5 ddd 44 46
result:
ALPHA BETA
L
A 52 58
B 92 98
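If the goal really is as few lines of code as possible, the two steps can also be chained into a single expression. This is just a sketch using df1, dctColGroups and dctRowGroups from the question; note that groupby(..., axis=1) is deprecated in recent pandas versions, so it assumes a version where that still works:
result = (
    df1.groupby(dctColGroups, axis=1).sum()    # column-wise: ALPHA/BETA sums
       .assign(L=df1['L'].map(dctRowGroups))   # attach the row-group labels
       .groupby('L').sum()                     # row-wise: collapse into A and B
)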

Averaging dataframes with many string columns and displaying all the columns

I have struggled with this even after looking at the various past answers to no avail.
My data consists of numeric and non-numeric columns. I'd like to average the numeric columns and display my data on the GUI together with the information in the non-numeric columns. The non-numeric columns have info such as names, rollno, and stream, while the numeric columns contain students' marks for various subjects. It works well when dealing with one dataframe, but fails when I combine two or more dataframes: it returns only the average of the numeric columns and displays it, leaving the non-numeric columns undisplayed. Below is one of the codes I've tried so far.
df=pd.concat((df3,df5))
dfs =df.groupby(df.index,level=0).mean()
headers = list(dfs)
self.marks_table.setRowCount(dfs.shape[0])
self.marks_table.setColumnCount(dfs.shape[1])
self.marks_table.setHorizontalHeaderLabels(headers)
df_array = dfs.values
for row in range(dfs.shape[0]):
    for col in range(dfs.shape[1]):
        self.marks_table.setItem(row, col, QTableWidgetItem(str(df_array[row, col])))
A working code should return averages in something like this
STREAM ADM NAME KCPE ENG KIS
0 EAGLE 663 FLOYCE ATI 250 43 5
1 EAGLE 664 VERONICA 252 32 33
2 EAGLE 665 MACREEN A 341 23 23
3 EAGLE 666 BRIDGIT 286 23 2
Rather than
ADM KCPE ENG KIS
0 663.0 250.0 27.5 18.5
1 664.0 252.0 26.5 33.0
2 665.0 341.0 17.5 22.5
3 666.0 286.0 38.5 23.5
Sample data
Df1 = pd.DataFrame({
'STREAM':[NORTH,SOUTH],
'ADM':[437,238,439],
'NAME':[JAMES,MARK,PETER],
'KCPE':[233,168,349],
'ENG':[70,28,79],
'KIS':[37,82,79],
'MAT':[67,38,29]})
Df2 = pd.DataFrame({
'STREAM':[NORTH,SOUTH],
'ADM':[437,238,439],
'NAME':[JAMES,MARK,PETER],
'KCPE':[233,168,349],
'ENG':[40,12,56],
'KIS':[33,43,43],
'MAT':[22,58,23]})
Your question is not clear; however, I am guessing its intent based on the content. I have modified your dataframes, which were not well formed, by adding a stream called 'CENTRAL':
Df1 = pd.DataFrame({'STREAM':['NORTH','SOUTH', 'CENTRAL'],'ADM':[437,238,439], 'NAME':['JAMES','MARK','PETER'],'KCPE':[233,168,349],'ENG':[70,28,79],'KIS':[37,82,79],'MAT':[67,38,29]})
Df2 = pd.DataFrame({ 'STREAM':['NORTH','SOUTH','CENTRAL'],'ADM':[437,238,439], 'NAME':['JAMES','MARK','PETER'],'KCPE':[233,168,349],'ENG':[40,12,56],'KIS':[33,43,43],'MAT':[22,58,23]})
I have assumed you want to merge the two dataframes and find the average:
df3 = pd.concat([Df2, Df1])  # append() is deprecated in recent pandas
df3.groupby(['STREAM','ADM','NAME'], as_index=False).mean()
Outcome: one row per STREAM/ADM/NAME with the averaged marks beside them.
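As a runnable sketch of that idea (the df3/df5 frames here are small stand-ins for the asker's data, and the column names follow the question):
import pandas as pd

df3 = pd.DataFrame({'STREAM': ['NORTH', 'SOUTH'], 'ADM': [437, 238], 'NAME': ['JAMES', 'MARK'],
                    'KCPE': [233, 168], 'ENG': [70, 28], 'KIS': [37, 82]})
df5 = pd.DataFrame({'STREAM': ['NORTH', 'SOUTH'], 'ADM': [437, 238], 'NAME': ['JAMES', 'MARK'],
                    'KCPE': [233, 168], 'ENG': [40, 12], 'KIS': [33, 43]})

df = pd.concat([df3, df5])
# group on the non-numeric identifier columns so they stay visible in the result
dfs = df.groupby(['STREAM', 'ADM', 'NAME'], as_index=False).mean()
print(dfs)  # STREAM, ADM, NAME plus the averaged KCPE/ENG/KIS columns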

Creating a new column based on a condition and extracting the respective value from another column (pandas DataFrame)

I am relatively new to this field and am working with a data set to find meaningful insights into customer behavior. My dataset looks like:
customerId week first_trip_week rides
0 156 44 36 2
1 164 44 38 6
2 224 42 36 5
3 224 43 36 4
4 224 44 36 5
What I want to do is create new columns week 44, week 43, and week 42, and get the values in the "rides" column filled into the rows for the respective customerId. This is in the hope that I can eventually also make the customerId my index and can get denominations for different weeks. Help would be greatly appreciated!
Thank you!!
If I'm understanding you correctly, you want to create new columns in the same dataframe for weeks 44, 43, and 42, with the correct values for each customerId and NaN for those that don't have them. If your original dataframe has all the user data, I would first filter for the rows with the correct week number:
week42DF = dataset.loc[dataset['week']==42,['customerId','rides']].rename(columns={'rides':'week42Rides'})
getting only the rides and customerId and renaming the former here to make things a little easier for us. Then left join the old dataframe and the new one on customerId
dataset = pd.merge(dataset,week42DF,how='left',on='customerId')
The users that are missing from week42DF will have NaN in the week42Rides column of the merged dataset, which you can then replace with zeros using the .fillna(0) method. Do this for each week you require.
See Pandas' documentation on merge and the more general concatenate for more info.
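A sketch of the loop this describes, assuming the dataset frame and column names from the question (the sample rows below are copied from the question's table):
import pandas as pd

dataset = pd.DataFrame({'customerId': [156, 164, 224, 224, 224],
                        'week': [44, 44, 42, 43, 44],
                        'first_trip_week': [36, 38, 36, 36, 36],
                        'rides': [2, 6, 5, 4, 5]})

for week in (42, 43, 44):
    weekDF = (dataset.loc[dataset['week'] == week, ['customerId', 'rides']]
                     .rename(columns={'rides': f'week{week}Rides'}))
    dataset = pd.merge(dataset, weekDF, how='left', on='customerId')

dataset = dataset.fillna(0)  # customers with no trips in a given week get 0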

Unexpected groupby result: some rows are missing

I am facing an issue with transforming my data using Pandas' groupby. I have a table (several million rows and 3 variables) that I am trying to group by the "Date" variable.
Snippet from a raw table:
Date V1 V2
07_19_2017_17_00_06 10 5
07_19_2017_17_00_06 20 6
07_19_2017_17_00_08 15 3
...
01_07_2019_14_06_59 30 1
01_07_2019_14_06_59 40 2
The goal is to group rows with the same value of "Date", applying a mean function over V1 and a sum function over V2, so that the expected result resembles:
Date V1 V2
07_19_2017_17_00_06 15 11 # This row has changed
07_19_2017_17_00_08 15 3
...
01_07_2019_14_06_59 35 3 # and this one too!
My code:
df = df.groupby(['Date'], as_index=False).agg({'V1': 'mean', 'V2': 'sum'})
The output I am getting, however, is totally unexpected and I can't find a reasonable explanation for why it happens. It seems like Pandas is only processing data from 01_01_2018_00_00_01 to 12_31_2018_23_58_40, instead of 07_19_2017_17_00_06 to 01_07_2019_14_06_59.
Date V1 V2
01_01_2018_00_00_01 30 3
01_01_2018_00_00_02 20 4
...
12_31_2018_23_58_35 15 3
12_31_2018_23_58_40 16 11
If you have any clue, I would really appreciate your input. Thank you!
I suspect the issue is caused by Pandas not recognizing the date format that I've used. The solution turned out to be quite simple: convert all of the dates into UNIX time format, divide by 60, and then repeat the groupby procedure.
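A minimal sketch of that conversion, assuming the Date strings follow the month_day_year_hour_minute_second pattern shown in the snippets above:
import pandas as pd

# parse the underscore-separated timestamps into real datetimes
df['Date'] = pd.to_datetime(df['Date'], format='%m_%d_%Y_%H_%M_%S')

# UNIX time in minutes, as described above (nanoseconds -> seconds -> minutes)
df['Date'] = df['Date'].astype('int64') // 10**9 // 60

df = df.groupby('Date', as_index=False).agg({'V1': 'mean', 'V2': 'sum'})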