How can I iterate / transpose / append a data frame to another one? - pandas

for row in range(1, len(df)):
    try:
        df_out, orthogroup, len_group = HOG_get_group_stats(df.loc[row, "HOG"])
        temp_df = pd.DataFrame()
        for id in range(len(df_out)):
            print(" ")
            temp_df = pd.concat([df, pd.DataFrame(df_out.iloc[id, :]).T], axis=1)
            temp_df["HOG"] = orthogroup
            temp_df["len_group"] = len_group
            print(temp_df)
    except:
        print(row, "no")
Here I have a script that does the following:
Iterate over df, apply the HOG_get_group_stats function to the HOG column, and get 3 variables as output. (Basically, the function creates some stats as a data frame called df_out and extracts some information as two more columns called orthogroup and len_group.)
Create an empty template called temp_df
Transpose the df_out data frame into one single row, then concatenate it, as columns, with the df we used in the beginning.
Add the orthogroup and len_group columns to the end of temp_df
Problem:
It prints out the data; however, when I inspect temp_df afterwards it contains only a single row (probably the last one), which means my concatenation of several data frames doesn't work.
Questions:
How can I iterate and then append a data frame as columns?
Is there an easier way to iterate over a data frame? (e.g. iterrows)
Is there a better way to transpose rows to columns in a data frame? (e.g. pivot, melt)
Any help would be appreciated!!
You can find sample files for df, df_out, temp_df, and the expected output sample table here:
Sample_files
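For what it's worth, here is a minimal sketch of the accumulate-then-concat pattern that avoids reassigning temp_df on every iteration. It assumes HOG_get_group_stats returns (df_out, orthogroup, len_group) as described above; the stack-based flattening is one possible way to turn df_out into a single row and may need adjusting to the exact expected output.

pieces = []
for row in range(1, len(df)):
    try:
        df_out, orthogroup, len_group = HOG_get_group_stats(df.loc[row, "HOG"])
    except Exception:
        print(row, "no")
        continue
    # flatten df_out into one row: one column per (row, column) cell of df_out
    flat = df_out.stack()
    flat.index = [f"{col}_{i}" for i, col in flat.index]
    one_row = flat.to_frame().T
    one_row["HOG"] = orthogroup
    one_row["len_group"] = len_group
    pieces.append(one_row)
temp_df = pd.concat(pieces, ignore_index=True)  # concatenate once, after the loop

The key change is that each iteration's one-row frame goes into a plain list and pd.concat runs once after the loop; reassigning temp_df inside the loop keeps only the last result.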

Related

Compile a count of similar rows in a Pandas Dataframe based on multiple column values

I have two Dataframes, one containing my data read in from a CSV file and another that has the data grouped by all of the columns but the last and reindexed to contain a column for the count of the size of the groups.
df_k1 = pd.read_csv(filename, sep=';')
columns_for_groups = list(df_k1.columns)[:-1]
k1_grouped = df_k1.groupby(columns_for_groups).size().reset_index(name="Count")
I need to create a series such that every row(i) in the series corresponds to row(i) in my original Dataframe but the contents of the series need to be the size of the group that the row belongs to in the grouped Dataframe. I currently have this, and it works for my purposes, but I was wondering if anyone knew of a faster or more elegant solution.
size_by_row = []
for row in df_k1.itertuples():
    for group in k1_grouped.itertuples():
        if row[1:-1] == group[1:-1]:
            size_by_row.append(group[-1])
            break
group_size = pd.Series(size_by_row)
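For reference, a sketch of the usual vectorized idiom for this: groupby(...).transform('size') returns a Series already aligned with the original rows, so the nested loops are unnecessary (same df_k1 and columns_for_groups as above).

# size of each row's group, aligned to df_k1's index
group_size = df_k1.groupby(columns_for_groups)[df_k1.columns[-1]].transform('size')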

How to apply function to each column and row of dataframe pandas

I have two dataframes.
df1 has an index list made of strings like (row1,row2,..,rown) and a column list made of strings like (col1,col2,..,colm) while df2 has k rows and 3 columns (char_1,char_2,value). char_1 contains strings like df1 indexes while char_2 contains strings like df1 columns. I only want to assign the df2 value to df1 in the right position. For example if the first row of df2 reads ['row3','col1','value2'] I want to assign value2 to df1 in the position ([2,0]) (third row and first column).
I tried to use two functions to slide rows and columns of df1:
from functools import reduce  # used for the index intersection below

def func1(val):
    # first I convert the series to a dataframe
    val = val.to_frame()
    val = val.reset_index()
    val = val.set_index('index')  # I set the index so that it's the right column

    def func2(val2):
        try:  # maybe the combination doesn't exist
            idx1 = list(df2.index[df2['char_2'] == val2.name])  # val2.name reads the column name of df1
            idx2 = list(df2.index[df2['char_1'] == val2.index.values[0]])  # val2.index.values[0] reads the index name of df1
            idx = list(reduce(set.intersection, map(set, [idx1, idx2])))
            idx = int(idx[0])  # final index of df2 where I take the value to assign to df1
            check = 1
        except:
            check = 0
        if check == 1:  # if the index exists
            val2[0] = df2['value'][idx]  # assign the value to df1
        return val2

    val = val.apply(func2, axis=1)  # apply the function over the columns
    val = val.squeeze()  # convert back to a series
    return val

df1 = df1.apply(func1, axis=1)  # apply the function over the rows
I made the conversion inside func1 because without that step I wasn't able to work with a series that keeps its index and column names, so I couldn't find the index idx in func2.
Well, the problem is that it takes forever. df1's size is 3,600 × 20,000 and df2's is 500 × 3, so it's not too much. I really don't understand the problem. I ran the code for the first row and column to check the result; it's fine and takes about 1 second, but the entire process has now been running for hours and still isn't finished.
Is there a way to optimize it? As I wrote in the title, I only need to run a function that keeps column and index names and slides over the entire dataframe. Thanks in advance!
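One vectorized approach, sketched under the assumption that df2's columns are literally named 'char_1', 'char_2', and 'value' and that each (char_1, char_2) pair occurs at most once: pivot df2 into the same shape as df1 and let pandas align the labels.

# reshape df2 so its index/columns match df1's index/columns
pivoted = df2.pivot(index='char_1', columns='char_2', values='value')
# overwrite df1 cells wherever pivoted has a non-NaN value; alignment is by label
df1.update(pivoted)

This replaces the nested apply over all 3,600 × 20,000 cells with a single label-aligned assignment of the 500 known values.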

How to broadcast a list of data into dataframe (Or multiIndex )

I have a big dataframe, about 200k rows and 3 columns (x, y, z). Some rows don't have y, z values and just have an x value. I want to make a new column so that the first set of data with z values is numbered 1, the second 2, then 3, etc. Or make a MultiIndex in the same format.
The following image shows what I mean: [image]
I made a new column called "NO." and set zero as the initial value. Then
I recorded the indexes where I want the new column to get a new value, with the following code:
df = pd.read_fwf(path, header=None, names=['x','y','z'])
df['NO.']=0
index_NO_changed = df.index[df['z'].isnull()]
Then I loop through it and change the number:
for i in range(len(index_NO_changed)-1):
    df['NO.'].iloc[index_NO_changed[i]:index_NO_changed[i+1]] = i+1
df['NO.'].iloc[index_NO_changed[-1]:] = len(index_NO_changed)
But the problem is that I get the warning:
"A value is trying to be set on a copy of a slice from a DataFrame"
I was wondering:
Is there any better way? Is creating a MultiIndex instead of adding another column easier, considering the size of the dataframe?
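A sketch of a vectorized alternative, assuming (as above) that a missing z marks the start of each new block: a cumulative sum of the null mask numbers the blocks in one step, which also sidesteps the chained-indexing warning.

# each null z starts a new block; cumsum turns the boolean mask into block numbers,
# and rows before the first null keep 0, matching the loop above
df['NO.'] = df['z'].isnull().cumsum()

If you keep the loop instead, writing through a single df.loc[...] call (e.g. df.loc[start:end, 'NO.'] = i + 1) rather than the chained df['NO.'].iloc[...] assignment is what silences the SettingWithCopyWarning.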

Pandas: concatenating data frames in iterations

I want to concatenate data frames in a loop with pandas.concat. They have the same columns but different indexes and values, and they are generated within the loop, so the output dataframe will 'grow' over the iterations, starting from an empty data frame. For a list it would look like this:
a = []
for i in range(10):
    a.append(i**2)
However, I found it is not advisable to start from an empty data frame. Is the only solution to create the first data frame before the loop and then concat the 2nd, 3rd, ... data frames inside it?
Jarek
You could use append:
a = pd.DataFrame()
for i in range(10):
    <your code here>
    a = a.append(i)
Or concat:
a = pd.DataFrame()
for i in range(10):
    <your code here>
    a = pd.concat([a, i])

Delete rows from a pandas data frame where a list-valued column shares a value with a list column in another data frame

I want to delete rows from a pandas data frame in which one of the columns holds a list, whenever one of that list's values matches a value in a list column of another data frame.
Here is the first data frame column: [image]
and the other data frame column is here: [image]
I have tried a lot of code:
Revdf=Revdf.drop(lambda x: [i for i in Revdf.AffiliationHistory if i in Authdf.Affiliations.values], axis=1)
or
Revdf=Revdf[~(Revdf.AffiliationHistory.isin(Authdf.Affiliations.values))]
but these don't help.
There has to be an easier way, but I wrote a function for it and it works:
def remove_row(df1, x1, y1, df2, x2, y2):
    assert type(df1.loc[x1, y1]) == list, "type has to be list"
    assert type(df2.loc[x2, y2]) == list, "type has to be list"
    flag = False
    l1 = df1.loc[x1, y1]
    print(l1)
    l2 = df2.loc[x2, y2]
    print(l2)
    for i in l1:
        if i in l2:
            flag = True
            break
    if flag == True:
        return df1.drop(x1)
    else:
        return df1
x is the row index and y is the column name. I tried it on synthetic data and it works:
df1 = pd.DataFrame({'col1': [0, 0, 0, 0, 1],
                    'col2': [[1, 2, 3, 4], 0, 0, 0, 0]})
df2 = pd.DataFrame({'col1': [0, 0, 0, 0],
                    'col2': [[0, 0, 0, 4], 0, 0, 0]})
remove_row(df1, 0, 'col2', df2, 0, 'col2')
Also, I think a mistake you're making is this:
[1,2,3,4] in [0,1,2,3,4]
returns False, because in asks whether the left-hand list is an element of the right-hand list, not whether its values are a subset of it.
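A sketch of a vectorized version of the same check, assuming Revdf['AffiliationHistory'] and Authdf['Affiliations'] both hold Python lists as in the question:

# pool every affiliation value from Authdf into one set
auth_values = set().union(*Authdf['Affiliations'])
# keep only rows whose list shares no element with that set
keep = Revdf['AffiliationHistory'].apply(lambda lst: not set(lst) & auth_values)
Revdf = Revdf[keep]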