improve performance of double loop in pandas

I have a dataframe consisting of numeric and categorical fields:
import pandas as pd
import numpy as np

df2 = pd.DataFrame({'col1': [1, 2, 3, 4], 'col2': [5, 6, 7, 8], 'col3': ['cat', 'cat', 'dog', 'bird']})
df2
I am calculating how similar each row is with the following code:
# calculate distance matrix comparing how similar two rows are
vals = []
for i in range(len(df2)):
    for j in range(len(df2)):
        if j <= i:
            continue
        a = df2.iloc[i, :]
        b = df2.iloc[j, :]
        d0 = (a[0] - b[0]) ** 2
        d1 = (a[1] - b[1]) ** 2
        d2 = np.where(a[2] == b[2], 0, 10) ** 2
        row_values = (i, j, (d0 + d1 + d2) ** 0.5)
        vals.append(row_values)
new_df = pd.DataFrame(vals, columns=['Row1', 'Row2', 'Difference'])
new_df
This works fine for a small dataframe, but when I apply the same approach to a dataframe with 10k rows and 10 columns, it takes a very long time to compute.
Are there any suggestions on how to improve the performance of this code?
I start with:
col1 col2 col3
0 1 5 cat
1 2 6 cat
2 3 7 dog
3 4 8 bird
and end up with:
Row1 Row2 Difference
0 0 1 1.414214
1 0 2 10.392305
2 0 3 10.862780
3 1 2 10.099505
4 1 3 10.392305
5 2 3 10.099505
I am calculating the distance between each row of data.

This is a distance matrix problem, so we can use scipy's distance_matrix and numpy broadcasting. Note that this only works when your data is not too large, since the full n × n matrix is built in memory.
import numpy as np
from scipy.spatial import distance_matrix

# squared numeric distance on col1 and col2:
d01 = distance_matrix(df2[['col1', 'col2']].values, df2[['col1', 'col2']].values) ** 2
# category distance: unequal categories contribute 10**2 = 100
d2 = df2['col3'].values[:, None] != df2['col3'].values
# the full distance matrix
dist_mat = np.sqrt(d01 + d2 * 100)
# we only care about the distances with row != col
np.triu(dist_mat)
Output:
array([[ 0. , 1.41421356, 10.39230485, 10.86278049],
[ 0. , 0. , 10.09950494, 10.39230485],
[ 0. , 0. , 0. , 10.09950494],
[ 0. , 0. , 0. , 0. ]])
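If the long (Row1, Row2, Difference) layout from the question is still wanted, a short follow-up sketch (reusing dist_mat from above) can pull out the upper-triangle pairs:
i, j = np.triu_indices(len(df2), k=1)  # index pairs above the diagonal
pairs = pd.DataFrame({'Row1': i, 'Row2': j, 'Difference': dist_mat[i, j]})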

Related

Creating a dataframe using roll-forward window on multivariate time series

Based on the simplified sample dataframe
import pandas as pd
import numpy as np
timestamps = pd.date_range(start='2017-01-01', end='2017-01-05', inclusive='left')
values = np.arange(0, len(timestamps))
df = pd.DataFrame({'A': values, 'B': values * 2}, index=timestamps)
print(df)
A B
2017-01-01 0 0
2017-01-02 1 2
2017-01-03 2 4
2017-01-04 3 6
I want to use a roll-forward window of size 2 with a stride of 1 to create a resulting dataframe like
timestep_1 timestep_2 target
0 A 0 1 2
B 0 2 4
1 A 1 2 3
B 2 4 6
I.e., each window step should create a data item with the two values of A and B in this window and the A and B values immediately to the right of the window as target values.
My first idea was to use pandas rolling (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html), but that seems to only work in combination with aggregation functions such as sum, which is a different use case.
Any ideas on how to implement this rolling-window-based sampling approach?
Here is one way to do it:
window_size = 3  # two timesteps plus one target column
new_df = pd.concat(
    [
        df.iloc[i : i + window_size, :]
        .T.reset_index()
        .assign(other_index=i)
        .set_index(["other_index", "index"])
        .set_axis([f"timestep_{j}" for j in range(1, window_size)] + ["target"], axis=1)
        for i in range(df.shape[0] - window_size + 1)
    ]
)
new_df.index.names = ["", ""]
print(new_df)
# Output
timestep_1 timestep_2 target
0 A 0 1 2
B 0 2 4
1 A 1 2 3
B 2 4 6
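For larger frames, a loop-free sketch of the same reshaping (assuming NumPy 1.20+ for sliding_window_view) could look like this:
import numpy as np

window_size = 3  # two timesteps plus the target
windows = np.lib.stride_tricks.sliding_window_view(df.to_numpy(), window_size, axis=0)
# windows has shape (n_windows, n_columns, window_size)
out = pd.DataFrame(
    windows.reshape(-1, window_size),
    index=pd.MultiIndex.from_product([range(windows.shape[0]), df.columns]),
    columns=[f"timestep_{j}" for j in range(1, window_size)] + ["target"],
)
print(out)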

Pandas column to row transformation for a Range

I am looking to transform the df below based on the range given by range(merge_m, merge_end, merge_freq):
data={'cf':['cf1','cf2'],'Amount':['1000','2000'],
'merge_m':['1','3'],
'merge_end':['4','5'],
'merge_freq':['1','2']}
df=pd.DataFrame(data)
and I am looking to convert this into:
data_f={'m':[1,2,3,4,5],
'cf1':['1000','1000','1000','1000',0],
'cf2':[0,0,'2000',0,'2000']}
df_f=pd.DataFrame(data_f)
One way to solve this question, assuming I interpreted it correctly:
# convert to numeric dtypes:
df = df.transform(pd.to_numeric, errors="ignore")
# generate ranges based on the last three columns:
ranges = zip(df.merge_m, df.merge_end + df.merge_freq, df.merge_freq)
ranges = [
    range(start, end, freq)
    for start, end, freq in ranges
]
# Compute new dataframe:
(df
 .loc[:, ["cf", "Amount"]]
 .assign(m=ranges)
 .explode("m")
 .pivot(index="m", columns="cf", values="Amount")
 .fillna(0, downcast="infer")
 .rename_axis(columns=None)
 .reset_index()
)
m cf1 cf2
0 1 1000 0
1 2 1000 0
2 3 1000 2000
3 4 1000 0
4 5 0 2000
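A more explicit sketch of the same idea (assuming the numeric conversion above has already been applied) builds each cf column with a membership test over the full range of m:
out = pd.DataFrame({"m": range(df.merge_m.min(), df.merge_end.max() + 1)})
for _, row in df.iterrows():
    # same range construction as above, inclusive of merge_end
    r = range(row.merge_m, row.merge_end + row.merge_freq, row.merge_freq)
    out[row.cf] = out["m"].isin(r) * row.Amount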

Pandas Groupby -- efficient selection/filtering of groups based on multiple conditions?

I am trying to filter dataframe groups in Pandas based on multiple (any) conditions, but I cannot seem to get to a fast Pandas 'native' one-liner.
Here I generate an example dataframe of 2*n*n rows and 4 columns:
import itertools
import random

import pandas as pd

n = 100
lst = range(0, n)
df = pd.DataFrame({
    'A': list(itertools.chain.from_iterable(itertools.repeat(x, n * 2) for x in lst)),
    'B': list(itertools.chain.from_iterable(itertools.repeat(x, 1 * 2) for x in lst)) * n,
    'C': random.choices(list(range(100)), k=2 * n * n),
    'D': random.choices(list(range(100)), k=2 * n * n),
})
resulting in dataframes such as:
A B C D
0 0 0 26 49
1 0 0 29 80
2 0 1 70 92
3 0 1 7 2
4 1 0 90 11
5 1 0 19 4
6 1 1 29 4
7 1 1 31 95
I want to
group by A and B, and
filter the groups down to those where, within the group, some value in column C and some value in column D are greater than 50.
A "native" Pandas one-liner would be the following:
df.groupby([df.A, df.B]).filter(lambda x: ((x.C > 50).any() & (x.D > 50).any()))
which produces
A B C D
2 0 1 70 92
3 0 1 7 2
This is all fine for small dataframes (say n < 20).
But this solution takes quite long (for example, 4.58 s when n = 100) for large dataframes.
I have an alternative, step-by-step solution which achieves the same result, but runs much faster (28.1 ms when n = 100):
df_g = df.assign(key_C=df.C > 50, key_D=df.D > 50).groupby([df.A, df.B])
df_C_bool = df_g.key_C.transform('any')
df_D_bool = df_g.key_D.transform('any')
df[df_C_bool & df_D_bool]
but arguably a bit more ugly. My questions are:
Is there a better "native" Pandas solution for this task? , and
Is there a reason for the sub-optimal performance of my version of the "native" solution?
Bonus question:
In fact I only want to extract the groups themselves, not their data. I.e., I only need
A B
0 1
in the above example. Is there a way to do this with Pandas without going through the intermediate step I did above?
This is similar to your second approach, but chained together:
mask = (df[['C', 'D']].gt(50)           # use e.g. [50, 60] here if C and D need different thresholds
        .all(axis=1)                    # True where both C and D exceed the threshold on the same row
        .groupby([df['A'], df['B']])    # normal groupby
        .transform('max')               # 'any' instead of 'max' also works
        )
df.loc[mask]
If you don't want the data, you can forgo the transform:
mask = df[['C','D']].min(axis=1).gt(50).groupby([df['A'],df['B']]).any()
mask[mask].index
# out
# MultiIndex([(0, 1)],
# names=['A', 'B'])
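If the question's original semantics are wanted instead (some value of C above 50 anywhere in the group, and some value of D above 50 anywhere in the group, not necessarily on the same row), a sketch for extracting just the group keys could be:
keys = (df[['C', 'D']].gt(50)
        .groupby([df['A'], df['B']])
        .any()             # per group and per column: is any value above 50?
        .all(axis=1))      # require it for both C and D
keys[keys].index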

How to make pandas work for cross multiplication

I have 3 data frames:
df1
id,k,a,b,c
1,2,1,5,1
2,3,0,1,0
3,6,1,1,0
4,1,0,5,0
5,1,1,5,0
df2
name,a,b,c
p,4,6,8
q,1,2,3
df3
type,w_ave,vac,yak
n,3,5,6
v,2,1,4
From the multiplication, using pandas and numpy, I want the output in df1 to be:
id,k,a,b,c,w_ave,vac,yak
1,2,1,5,1,16,15,18
2,3,0,1,0,0,3,6
3,6,1,1,0,5,4,7
4,1,0,5,0,0,11,14
5,1,1,5,0,13,12,15
The conditions are as follows. The value of the new column will be (pseudo-code, not actual code):
df1["w_ave"][1] = df3["w_ave"]["v"]+ df1["a"][1]*df2["a"]["q"]+df1["b"][1]*df2["b"]["q"]+df1["c"][1]*df2["c"]["q"]
for output["w_ave"][1]= 2 +(1*1)+(5*2)+(1*3)
df3["w_ave"]["v"]=2
df1["a"][1]=1, df2["a"]["q"]=1 ;
df1["b"][1]=5, df2["b"]["q"]=2 ;
df1["c"][1]=1, df2["c"]["q"]=3 ;
Which means:
- a new column will be added to df1, named after each column of df3.
- for each row of df1, the values of a, b, c will be multiplied by the same-named values from df2's row q, and summed together with the corresponding value from df3.
- only the columns of df1 whose names match columns of df2 are multiplied; non-matching columns such as df1["k"] are not.
- however, if df1["a"] contains a 0, the corresponding output will be zero.
I am struggling with this, and it was tough to explain. My attempt is very naive and I know it will not work, but I have added it here:
import pandas as pd, numpy as np
data1 = "Sample_data1.csv"
data2 = "Sample_data2.csv"
data3 = "Sample_data3.csv"
folder = '~Sample_data/'
df1 =pd.read_csv(folder + data1)
df2 =pd.read_csv(folder + data2)
df3 =pd.read_csv(folder + data3)
df1= df2 * df1
Ok, so this will in no way resemble your desired output, but vectorizing the formula you provided:
df2 = df2.set_index("name")
df3 = df3.set_index("type")
df1["w_ave"] = (df3.loc["v", "w_ave"]
                + df1["a"].mul(df2.loc["q", "a"])
                + df1["b"].mul(df2.loc["q", "b"])
                + df1["c"].mul(df2.loc["q", "c"]))
Outputs:
id k a b c w_ave
0 1 2 1 5 1 16
1 2 3 0 1 0 4
2 3 6 1 1 0 5
3 4 1 0 5 0 12
4 5 1 1 5 0 13
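A sketch generalizing the same formula to all three df3 columns at once (assuming, as above, that df2 row "q" and df3 row "v" are the intended ones after the set_index calls, and ignoring the question's zero rule just as the answer does):
import numpy as np

# row-wise dot product of df1's a, b, c with df2's row "q"
dot = df1[["a", "b", "c"]].to_numpy() @ df2.loc["q", ["a", "b", "c"]].to_numpy()
for col in ["w_ave", "vac", "yak"]:
    df1[col] = df3.loc["v", col] + dot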

Pandas: Imputing Missing Values to Data Frame

Suppose I have a data frame with some missing values, as below:
import pandas as pd
df = pd.DataFrame([[1,3,'NA',2], [0,1,1,3], [1,2,'NA',1]], columns=['W', 'X', 'Y', 'Z'])
print(df)
The variable Y is missing two values. Say I run some imputation model and come up with an estimate of what the two values should be:
to_impute = [2,1]
What is the best way of replacing the two NA's with those two values? I know of ways that are fairly roundabout, e.g. looping over to_impute and using df.iloc to add each value. But I'm hoping there is a concise and non-iterative way.
(This is something that is easy in R, and I'm hoping it can be easy in Pandas.)
In pandas, missing values should be NaN rather than the string 'NA', so first replace it, then we can use fillna:
import numpy as np

df.Y = df.Y.replace('NA', np.nan)
df.Y = df.Y.fillna(pd.Series([1, 2], index=df.index[df.Y.isnull()]))
df
Out[1375]:
W X Y Z
0 1 3 1.0 2
1 0 1 1.0 3
2 1 2 2.0 1
Alternatively, treat your 'NA' as a string and assign to those positions directly:
df.loc[df.Y=='NA','Y']=[1,2]
df
Out[1380]:
W X Y Z
0 1 3 1 2
1 0 1 1 3
2 1 2 2 1
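To plug in the question's own to_impute list directly (a sketch, assuming the original df still holds the string 'NA' markers and that the list is ordered by row position):
# reuse the estimates from the question, in row order
to_impute = [2, 1]
df.loc[df.Y == 'NA', 'Y'] = to_impute
df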