Calculating values that depend on the previous row - pandas

It seems that cumsum, cumprod and the other cumulative operations cannot express this transformation; at the moment the calculation apparently has to be done row by row in a loop.
The data is about 10 million rows, and each row's value depends on the previous row. Looping over that many rows is far too slow on my machine. What is a workable solution? Thank you.
The calculations needed are as follows:
for i in range(1, 10000000):
    df.iloc[i, 3] = df.iloc[i-1, 3] * df.iloc[i, 1] + df.iloc[i, 2]

There probably is no pythonic way to do it without looping in C/Java style.
Added: so just write the loop, or hack around it with a global variable, as follows:
import pandas as pd

prev_result = 0

def my_func(x):
    global prev_result
    prev_result = x.a * prev_result + x.b
    return prev_result

df = pd.DataFrame({"a": [1, 2, 3], "b": [1, 2, 3]})
df["c"] = df.apply(my_func, axis=1)
# df["c"] is now [1, 4, 15]
# 0 x 1 + 1 = 1; 1 x 2 + 2 = 4; 4 x 3 + 3 = 15;
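If 10 million rows make the pure-Python loop (or apply) too slow, one option is to compile the loop. A minimal sketch, assuming numba is available (numba is not part of the answer above, and the function name is illustrative):
import numpy as np
import pandas as pd
from numba import njit

@njit
def recurrence(a, b):
    # out[i] = out[i-1] * a[i] + b[i], starting from 0, compiled to machine code
    out = np.empty_like(a)
    prev = 0.0
    for i in range(a.shape[0]):
        prev = prev * a[i] + b[i]
        out[i] = prev
    return out

df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [1.0, 2.0, 3.0]})
df["c"] = recurrence(df["a"].to_numpy(), df["b"].to_numpy())
# df["c"] is again [1.0, 4.0, 15.0], but the compiled loop scales to millions of rows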
Edit: the following approaches are not cumulative and therefore do not answer the question as asked.
That being said, #pythonic833's solution:
df.shift(-1).iloc[:,3]*df.iloc[:,1]+df.iloc[:,2]
is quite a decent one.
If I were you, I'd just store df["third_column"].shift(-1) in a temporary column:
df["temp_column"] = df["third_column"].shift(-1)
df["third_column"] = df["temp_column"] * df["first_column"] + df["second_column"]
My proposed solution is a bit easier to read at the cost of memory for a column.

Related

How to do an advanced multiplication with panda dataframe

I have a dataframe1 of 1802 rows and 29 columns (in code as df) - each row is a person and each column is a number representing their answer to 29 different questions.
I have another dataframe2 of 29 different coefficients (in code as seg_1).
Each column needs to be multiplied by the corresponding coefficient and this needs to be repeated for each participant.
For example: 1802 iterations of q1 * coeff1, 1802 iterations of q2 * coeff2, and so on.
So I should end up with 1802 * 29 = 52,258 values,
but the result doesn't have that length and the values aren't what I expect - I think the loop is multiplying all of q1-q29 by coeff1, then repeating that for coeff2, which is not what I need.
questions = range(0, 28)
co = range(0, 28)
segment_1 = []
for a in questions:
    for b in co:
        answer = df.iloc[:, a] * seg_1[b]
        segment_1.append([answer])
Proper encoding of the coefficients as a pandas frame makes this a one-liner:
df_person['Answer'] = (df_person * df_coeffs.values).sum(1)
and circumvents slow for-loops. In addition, you don't need to remember the number of rows in the table (1802), and the code keeps working unchanged even if your data grows larger.
For a minimal working example:
import pandas as pd

# answer frame
df_person = pd.DataFrame({'Question_1': [10, 20, 15], 'Question_2': [4, 4, 2], 'Question_3': [2, -2, 1]})
# coefficient frame
seg_1 = [2, 4, -1]
N = len(df_person)
df_coeffs = pd.DataFrame({'C_1': [seg_1[0]] * N, 'C_2' : [seg_1[1]] * N, 'C_3' : [seg_1[2]] * N})
# elementwise multiplication & row-wise summation
df_person['Answer'] = (df_person * df_coeffs.values).sum(1)
giving the per-row Answer values for the coefficient table df_coeffs and the answer table df_person above (the output tables are not reproduced here).
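As a side note (not part of the answer above), the repeated coefficient frame is not strictly required: pandas broadcasts a list of per-column coefficients directly, so the same result can be obtained without building df_coeffs. A minimal sketch using the frames above:
# broadcast the per-column coefficients across all rows and sum row-wise
question_cols = ['Question_1', 'Question_2', 'Question_3']
df_person['Answer'] = (df_person[question_cols] * seg_1).sum(axis=1)
# for the sample data this gives [34, 58, 37]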

Pandas | How to effectively filter a column

I'm looking for a way to quickly and effectively filter through a dataframe column and remove values that don't meet a condition.
Say, I have a column with the numbers 4, 5 and 10. I want to filter the column and replace any numbers above 7 with 0. How would I go about this?
You're talking about two separate things - filtering and value replacement. They both have uses and end up being similar in nature but for filtering I'll point to this great answer.
Let's say our data frame is called df and looks like
    A   B
1   4  10
2   4   2
3  10   1
4   5   9
5  10   3
Column A fits your statement of a column only having values 4, 5, 10. If you wanted to replace numbers above 7 with 0, this would do it:
df["A"] = [0 if x > 7 else x for x in df["A"]]
If you read through the right-hand side, it cleanly explains what it is doing. It helps to include parentheses to separate the "what to do" from the "what you're doing it over":
df["A"] = [(0 if x > 7 else x) for x in df["A"]]
If you want to do a manipulation over multiple columns, then utilizing zip allows you to do it easily. For example, if you want the sum of columns A and B then:
df["sum"] = [x[0] + x[1] for x in zip(df["A"], df["B"])]
Take care when you overwrite data - this removes information. It's a good practice to have the transformed data in other columns so you can trace back when something inevitably goes wonky.
There are many options. One possibility for this kind of if/then replacement is np.where:
import pandas as pd
import numpy as np
df = pd.DataFrame({'x': [1, 200, 4, 5, 6, 11],
                   'y': [4, 5, 10, 24, 4, 3]})
df['y'] = np.where(df['y'] > 7, 0, df['y'])
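For completeness, the same in-place replacement can also be written with boolean indexing via .loc (an alternative not shown in the answers above):
# assign 0 wherever the condition holds, leaving the other rows untouched
df.loc[df['y'] > 7, 'y'] = 0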

how to avoid split and sum of pieces in pytorch or numpy

I want to split a long vector into smaller unequal pieces, do a summation on each piece and gather the results into a new vector.
I need to do this in pytorch but I am also interested to see how this is done with numpy.
This can easily be accomplished by splitting the vector.
import torch

sizes = [3, 7, 5, 9]
X = torch.ones(sum(sizes))
Y = torch.tensor([s.sum() for s in torch.split(X, sizes)])
or with np.ones and np.split.
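For instance, a minimal NumPy counterpart of the snippet above (reusing the same names; note that np.split expects split positions rather than piece lengths, hence the cumsum):
import numpy as np

X = np.ones(sum(sizes))
# cut X at the cumulative offsets 3, 10, 15 and sum each piece
Y = np.array([s.sum() for s in np.split(X, np.cumsum(sizes)[:-1])])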
Is there a more efficient way to do this?
Edit:
Inspired by the first comment:
indices = np.cumsum([0]+sizes)[:-1]
Y = np.add.reduceat(X, indices.tolist())
solves it for numpy. I am still looking for a solution with pytorch.
index_add_ is your friend!
# inputs
sizes = torch.tensor([3, 7, 5, 9], dtype=torch.long)
x = torch.ones(sizes.sum())
# prepare an index vector for summation (what elements of x are summed to each element of y)
ind = torch.zeros(sizes.sum(), dtype=torch.long)
ind[torch.cumsum(sizes, dim=0)[:-1]] = 1
ind = torch.cumsum(ind, dim=0)
# prepare the output
y = torch.zeros(len(sizes))
# do the actual summation
y.index_add_(0, ind, x)
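Since x is all ones here, each group sum simply counts the elements in that group, so the result can be sanity-checked against sizes:
print(y)                               # tensor([3., 7., 5., 9.])
print(torch.equal(y, sizes.float()))   # True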

Numpy index of the maximum with reduction - numpy.argmax.reduceat

I have a flat array a:
a = numpy.array([0, 1, 1, 2, 3, 1, 2])
And an array b of indices marking the start of each "chunk":
b = numpy.array([0, 4])
I know I can find the maximum in each "chunk" using a reduction:
m = numpy.maximum.reduceat(a,b)
>>> array([2, 3], dtype=int32)
But... Is there a way to find the index of the maximum within each chunk (like numpy.argmax), with vectorized operations (no lists, loops)?
Borrowing the idea from this post.
Steps involved:
Offset all elements in each group by a limit-offset, then sort globally. This keeps every group within its own value range while sorting the elements inside each group.
In the sorted order, the last element of each group is that group's max. Its position in the sort index, after subtracting the group start offsets, is the argmax within the group.
Thus, a vectorized implementation would be -
import numpy as np

def numpy_argmax_reduceat(a, b):
    n = a.max() + 1  # limit-offset
    grp_count = np.append(b[1:] - b[:-1], a.size - b[-1])
    shift = n * np.repeat(np.arange(grp_count.size), grp_count)
    sortidx = (a + shift).argsort()
    grp_shifted_argmax = np.append(b[1:], a.size) - 1
    return sortidx[grp_shifted_argmax] - b
As a minor tweak and possibly faster one, we could alternatively create shift with cumsum and thus have a variation of the earlier approach, like so -
def numpy_argmax_reduceat_v2(a, b):
    n = a.max() + 1  # limit-offset
    id_arr = np.zeros(a.size, dtype=int)
    id_arr[b[1:]] = 1
    shift = n * id_arr.cumsum()
    sortidx = (a + shift).argsort()
    grp_shifted_argmax = np.append(b[1:], a.size) - 1
    return sortidx[grp_shifted_argmax] - b
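For instance, on the example arrays from the question both versions return the within-chunk argmax positions:
a = np.array([0, 1, 1, 2, 3, 1, 2])
b = np.array([0, 4])
print(numpy_argmax_reduceat(a, b))   # [3 0] -> argmax of [0,1,1,2] is 3, argmax of [3,1,2] is 0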

Vectorized gather operation in numpy

Given this (sample) data
target_slots = np.array([1, 3, 1, 0, 8, 5, 8, 1, 1, 2])
dummy_elements = np.arange(10*D).reshape(10, D)
is there any way to express the following operation as a vectorized numpy expression?
gathered_results = np.zeros((num_slots, D))
for i, target in enumerate(target_slots):
    gathered_results[target] += dummy_elements[i]
this operation looks like a bincount but instead of counting we sum the elements of another array.
(It is implied that np.max(target_slots) < num_slots, np.min(target_slots) >= 0, and target_slots.shape[0] == dummy_elements.shape[0].)
Approach #1
You are performing intervaled summation: selecting rows of dummy_elements and adding them into specific rows of the output array. So, one obvious choice for a vectorized solution would be np.add.reduceat, like so -
sidx = target_slots.argsort()
out = np.zeros((num_slots, D))
unq, shift_idx = np.unique(target_slots[sidx],return_index=True)
out[unq] = np.add.reduceat(dummy_elements[sidx],shift_idx)
Approach #2
Alternatively, we can use np.bincount to perform these ID-based summations. One way is a loop that iterates along the columns of dummy_elements, which should be beneficial when the number of such columns is comparatively small. The implementation would look like this -
out = np.zeros((num_slots, D))
L = target_slots.size
for i in range(D):
    out[:, i] = np.bincount(target_slots, dummy_elements[:, i], minlength=L)
Approach #3
A vectorized version of the same would be like this -
L = target_slots.size
ids = (target_slots[:,None] + np.arange(D)*L).ravel('F')
out = np.bincount(ids,dummy_elements.ravel('F'),minlength=L*D).reshape(D,-1).T
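For completeness, np.add.at (not used in the approaches above) expresses the same scatter-add in a single unbuffered call; it tends to be slower than the bincount variants but is very readable. A minimal sketch with assumed values for D and num_slots:
import numpy as np

D, num_slots = 4, 9                      # assumed sizes for the sample data
target_slots = np.array([1, 3, 1, 0, 8, 5, 8, 1, 1, 2])
dummy_elements = np.arange(10 * D).reshape(10, D)

out = np.zeros((num_slots, D))
# unbuffered equivalent of: for i, t in enumerate(target_slots): out[t] += dummy_elements[i]
np.add.at(out, target_slots, dummy_elements)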