To answer my own recently posed question, I am considering using a closure that holds the current state to compute the rolling window (which in my case is of width 2). Something along the lines of:
def test(init_value):
    def my_fcn(x, y):
        nonlocal init_value
        actual_value = (x + y) * init_value
        init_value = actual_value
        return init_value
    return my_fcn
where my_fcn is a dummy function used for testing. The function might then be initialised through actual_fcn = test(0), where we assume the initial value is zero, for example. Finally, one could use the function through ddf.apply (where ddf is the actual dask dataframe).
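For instance, a hypothetical wiring of the closure into apply might look like this (the column names x and y are made up):

actual_fcn = test(0)
result = ddf.apply(
    lambda row: actual_fcn(row['x'], row['y']),  # feed two columns into the stateful closure
    axis=1,
    meta=float,
)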
Now, the question: this would work only if the order of the computations is preserved; otherwise everything would be scrambled. I have not tested it, since even if a test passes, I cannot be 100% sure it will always preserve the order. So, the question is:
Does dask dataframe's apply method preserve rows order?
Any other ideas? Any help is highly appreciated.
Apparently yes. I am using dask 1.0.0.
The following code:
import numpy as np
import pandas as pd
import dask.dataframe as dd

number_of_components = 30
df = pd.DataFrame(np.random.randint(0, number_of_components, size=(number_of_components, 4)), columns=list('ABCD'))
my_data_frame = dd.from_pandas(df, npartitions=1)

def sumPrevious(previousState):
    def getValue(row):
        nonlocal previousState
        something = row['A'] - previousState
        previousState = row['A']
        return something
    return getValue

given_func = sumPrevious(1)
out = my_data_frame.apply(given_func, axis=1, meta=float).compute()
behaves as expected. There is a big caveat: if the previous state is provided by reference (i.e. it is an instance of some class), then the user should be careful when assigning to it inside the nested function to update the previous state, since the update will have side effects on the caller's object.
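A hypothetical illustration of that caveat, with the state held in a dict that the caller keeps a reference to:

state = {'prev': 1}  # mutable state, shared by reference

def sumPreviousRef(state):
    def getValue(row):
        something = row['A'] - state['prev']
        state['prev'] = row['A']  # mutates the caller's dict in place
        return something
    return getValue

_ = my_data_frame.apply(sumPreviousRef(state), axis=1, meta=float).compute()
print(state['prev'])  # no longer 1: the outer object was modified as a side effect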
Strictly speaking, this example does not prove that order is preserved under all circumstances, so I would still be interested to know whether I can rely on this assumption.
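For what it's worth, a minimal sanity check (not a proof) is to run the same stateful closure eagerly in pandas and compare; a fresh closure is needed per run because the state is mutated in place:

# Sketch of a sanity check, reusing df, my_data_frame and sumPrevious from above.
dask_out = my_data_frame.apply(sumPrevious(1), axis=1, meta=float).compute()
pandas_out = df.apply(sumPrevious(1), axis=1)
assert (dask_out == pandas_out).all()  # holds here, at least with npartitions=1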
Polars DataFrame currently does not provide a method to update the value of a single cell. Instead, we have to use methods like DataFrame.apply or DataFrame.apply_at_idx, which update a whole column / Series. This can be very expensive in situations where an algorithm repeatedly updates a few elements of some columns. Why is DataFrame designed this way? Looking into the code, it seems to me that Series does provide inner mutability via the method Series._get_inner_mut?
As of polars >= 0.15.9, mutation of any data backed by numbers (that is, numeric types, dates, and durations) has constant complexity O(1) if the data is not shared. If the data is shared, we first must copy it so that we become the sole owner.
import polars as pl
import matplotlib.pyplot as plt
from time import time

ts = []
ts_shared = []
clone_times = []
ns = []
for n in [1e3, 1e5, 1e6, 1e7, 1e8]:
    s = pl.zeros(int(n))

    t0 = time()
    # we are the only owner, so mutation is in place
    s[10] = 10
    t = time() - t0

    # store datapoints
    ts.append(t)
    ns.append(n)

    # clone is free
    t0 = time()
    s2 = s.clone()
    t = time() - t0
    clone_times.append(t)

    # now there are two owners of the memory;
    # we write to it, so we must copy all the data first
    t0 = time()
    s2[11] = 11
    t = time() - t0
    ts_shared.append(t)

plt.plot(ns, ts_shared, label="writing to shared memory")
plt.plot(ns, ts, label="writing to owned memory")
plt.plot(ns, clone_times, label="clone time")
plt.legend()
In Rust this dispatches to set_at_idx2, but that is not released yet. Note that when using the lazy engine, this will all be done implicitly for you.
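As a sketch of that lazy route (API names drift between polars versions; with_row_count, for instance, was later renamed with_row_index), a single-cell "update" can be expressed declaratively and left to the engine:

import polars as pl

df = pl.DataFrame({"a": [1, 2, 3, 4]})

# Declarative single-cell update: the engine decides whether the buffer
# can be mutated in place or must be copied first.
out = (
    df.lazy()
    .with_row_count("idx")
    .with_columns([
        pl.when(pl.col("idx") == 2).then(99).otherwise(pl.col("a")).alias("a")
    ])
    .drop("idx")
    .collect()
)
print(out)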
I am trying to calculate a value into a new column, taken from one column or another based on which one has data available. This is the code I have right now. It doesn't seem to notice when there is no data present and always goes to the "else" statement. My dataframe is an imported Excel file. Thanks for any advice!
def create_sulfide_col(row):
    if row["Sulphate-S(HCL Leachable)_%S"] is None:
        val = row["Total-S_%S"] - row["Sulphate-S(HCL Leachable)_%S"]
    else:
        val = ["Total-S_%S"] - df["Sulphate-S_%S"]
    return val

df["Sulphide-S(calc)-C_%S"] = df.apply(lambda row: create_sulfide_col(row), axis='columns')
This can be done by using numpy.where:

import numpy as np

df['newcol'] = np.where(
    df["Sulphate-S(HCL Leachable)_%S"].isna(),
    df["Total-S_%S"] - df["Sulphate-S(HCL Leachable)_%S"],
    df["Total-S_%S"] - df["Sulphate-S_%S"],
)
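As a quick, hypothetical sanity check of the branching (note that the first branch still subtracts the column that was just found to be missing, mirroring the question's logic, so it yields NaN there; swap the operands if the intent was different):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Total-S_%S": [1.0, 2.0, 3.0],
    "Sulphate-S(HCL Leachable)_%S": [0.2, np.nan, 0.5],
    "Sulphate-S_%S": [0.1, 0.3, 0.2],
})
df["newcol"] = np.where(
    df["Sulphate-S(HCL Leachable)_%S"].isna(),
    df["Total-S_%S"] - df["Sulphate-S(HCL Leachable)_%S"],
    df["Total-S_%S"] - df["Sulphate-S_%S"],
)
print(df["newcol"])  # row 1 is NaN because the subtracted column is missing there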
A simple line of code, df.iloc[100] = df.iloc[500], gets very slow on a large DataFrame with mixed types because pandas copies the entire columns (I found this in the source code). What I don't get is why this behaviour is necessary, and how to avoid it and force pandas to just update the relevant values when I know in advance that the dtypes are the same. When the DataFrame has a single dtype, the copying doesn't take place and values are modified in place.
I found a workaround that seems to have the desired effect, but it works only with positional row numbers:
for c in df.columns:
    df[c].array[100] = df[c].array[500]
It is literally 1000x faster than df.iloc[100] = df.iloc[500].
Here is how to reproduce the slowness of assignment:
import string
import itertools
import timeit
import numpy as np
import pandas as pd

data = list(itertools.product(range(200_000), string.ascii_uppercase))
df = pd.DataFrame(data, columns=['i', 'p'])
df['n1'] = np.random.randn(len(df))
df['n2'] = np.random.randn(len(df))
df['n3'] = np.random.randn(len(df))
df['n4'] = np.random.randn(len(df))

print(
    timeit.timeit('df.loc[100] = df.loc[500]', number=100, globals=globals()) / 100
)

df_o = df.copy()
# Remove mixed types
for c in df_o.columns:
    df_o[c] = df_o[c].astype('object')

print(
    timeit.timeit('df_o.loc[100] = df_o.loc[500]', number=100, globals=globals()) / 100
)
This example alone shows a 10x performance difference. I still don't fully understand why assigning a single row is quite slow even with non-mixed types.
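For completeness, the per-column workaround from above can be timed with the same script (a sketch; exact ratios vary with pandas version and frame size):

def assign_per_column(frame):
    # positional per-column assignment, as in the workaround above
    for c in frame.columns:
        frame[c].array[100] = frame[c].array[500]

print(
    timeit.timeit('assign_per_column(df)', number=100, globals=globals()) / 100
)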
While it's easy to use pandas' rolling method to apply standard formulas, I find it hard when the formula involves multiple columns and a limited number of past rows. The following code illustrates the problem:
import numpy as np
import pandas as pd

# create a dummy DataFrame
df = pd.DataFrame({'col1': np.arange(0, 25), 'col2': np.arange(100, 125), 'col3': np.nan})

def func1(shortdf):
    # dummy formula: add the last row of col1 to the sum of col2, then multiply by 3.14
    return (shortdf.col1.tail(1).values[0] + shortdf.col2.sum()) * 3.14

for idx, i in df.iterrows():
    if idx > 3:
        # only interested in the 3 rows before the current position
        df.loc[idx, 'col3'] = func1(df.iloc[idx-3:idx])
I currently use this iterrows method, which is needless to say extremely slow. Does anyone have a better suggestion?
Option 1
So shift is the solution here. You do have to use rolling for the summation, and then shift that series after the addition and multiplication.
df = pd.DataFrame({'col1':np.arange(0,25),'col2':np.arange(100,125),'col3':np.nan})
ans = ((df['col1'] + df['col2'].rolling(3).sum()) * 3.14).shift(1)
You can check that ans is the same as df['col3'] by using ans.eq(df['col3']). Once you see that all but the first few entries are the same, just assign ans to df['col3'] and you should be all set.
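A quick comparison sketch (the leading entries are NaN on one or both sides, so compare only where both are present):

both = ans.notna() & df['col3'].notna()
assert np.isclose(ans[both], df.loc[both, 'col3']).all()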
Option 2
Without additional information about the customized weight function it is hard to help, but this option may be a solution, as it separates out the rolling calculation at the cost of using more memory.
# df['col3'] = ((df['col1'] + df['col2'].rolling(3).sum()) * 3.14).shift(1)
s = df['col2']
stride = pd.DataFrame([s.shift(x).values[::-1][:3] for x in range(len(s))[::-1]])
res = pd.concat([df, stride], axis=1)
# here you can perform your custom weight function
res['final'] = ((res[0] + res[1] + res[2] + res['col1']) * 3.14).shift(1)
stride is adapted from this question, and the strided values are concatenated column-wise to the original dataframe. This way each row has the values needed to compute whatever it is you may need.
res['final'] is identical to option 1's ans.
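For example, a hypothetical custom weight function over the three strided columns might look like the following (the weights are made up, and which strided column holds the most recent lag depends on how stride was built, so verify the orientation on your data):

weights = [0.2, 0.3, 0.5]  # hypothetical weights for the three lagged values
weighted = res[[0, 1, 2]].mul(weights).sum(axis=1, skipna=False)
res['weighted_final'] = ((weighted + res['col1']) * 3.14).shift(1)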
I'm interested in performing linear interpolation using the scipy.interpolate library. The dataset looks somewhat like this:
[Image: DataFrame for interpolation between X, Y for different RUNs]
I'd like to use this interpolated function to find the missing Y from this dataset:
[Image: DataFrame to use the interpolation function]
The number of runs given here is just 3, but I'm running on a dataset with 1000s of runs. Hence, I would appreciate advice on how to build the interpolation functions iteratively, something like:
from scipy.interpolate import interp1d

for RUNNumber in range(TotalRuns):
    InterpolatedFunction[RUNNumber] = interp1d(X, Y)
As I understand it, you want a separate interpolation function defined for each run. Then you want to apply these functions to a second dataframe. I defined a dataframe df with columns ['X', 'Y', 'RUN'], and a second dataframe, new_df with columns ['X', 'Y_interpolation', 'RUN'].
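For concreteness, hypothetical frames matching that description might be built like this (the values are made up; max_runs is introduced for the loop below):

import pandas as pd
from scipy.interpolate import interp1d

df = pd.DataFrame({
    'RUN': [1, 1, 1, 2, 2, 2],
    'X':   [1.0, 2.0, 3.0, 1.0, 2.0, 3.0],
    'Y':   [10.0, 20.0, 30.0, 5.0, 15.0, 25.0],
})
new_df = pd.DataFrame({
    'RUN': [1, 2],
    'X':   [1.5, 2.5],  # X values whose Y we want to interpolate
})
max_runs = df['RUN'].max() + 1  # upper bound for the loop below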
interpolating_functions = dict()

for run_number in range(1, max_runs):
    run_data = df[df['RUN'] == run_number][['X', 'Y']]
    interpolating_functions[run_number] = interp1d(run_data['X'], run_data['Y'])
Now that we have interpolating functions for each run, we can use them to fill in the 'Y_interpolation' column of the new dataframe. This can be done using the apply function, which takes a function and applies it to each row of a dataframe. So let's define an interpolate function that takes a row of this new df and uses its X value and run number to calculate an interpolated Y value.
def interpolate(row):
    int_func = interpolating_functions[row['RUN']]
    interp_y = int_func._call_linear([row['X']])  # the _call_linear method
                                                  # expects and returns an array
    return interp_y[0]
Now we just use apply and our defined interpolate function.
new_df['Y_interpolation'] = new_df.apply(interpolate, axis=1)
I'm using pandas version 0.20.3, and this gives me a new_df with the 'Y_interpolation' column filled in as expected.