Mumbojumbo .rolling() .max() .groupby() combination in python pandas

I am looking to do a "rolling" .max()/.min() of column B, grouped by date (the column A values). The trick is that the window should restart on every row and end where that row's group ends, so I can't use something like df['MAX'] = df['B'].rolling(10).max().shift(-9) (the window has to stop at the group boundary, and every group can have a different number of rows), nor a plain groupby on column A. In other words, for row 1 column C is the max of rows 1-4 of column B, for row 2 it is the max of rows 2-4, for row 3 the max of rows 3-4, for row 4 the max of row 4 alone, and so on. I hope that makes sense; columns C and D are the desired results. Thank you all in advance.
A B C(max) D(min)
1 2016-01-01 0 7 0
2 2016-01-01 7 7 3
3 2016-01-01 3 4 3
4 2016-01-01 4 4 4
5 2016-01-02 2 5 1
6 2016-01-02 5 5 1
7 2016-01-02 1 1 1
8 2016-01-03 1 4 1
9 2016-01-03 3 4 2
10 2016-01-03 4 4 2
11 2016-01-03 2 2 2

# Reverse each date group, take the cumulative max/min, then reverse back,
# so every row gets the extreme of itself and all following rows in its group
df['C_max'] = df.groupby('A')['B'].transform(lambda x: x[::-1].cummax()[::-1])
df['D_min'] = df.groupby('A')['B'].transform(lambda x: x[::-1].cummin()[::-1])
A B C(max) D(min) C_max D_min
1 2016-01-01 0 7 0 7 0
2 2016-01-01 7 7 3 7 3
3 2016-01-01 3 4 3 4 3
4 2016-01-01 4 4 4 4 4
5 2016-01-02 2 5 1 5 1
6 2016-01-02 5 5 1 5 1
7 2016-01-02 1 1 1 1 1
8 2016-01-03 1 4 1 4 1
9 2016-01-03 3 4 2 4 2
10 2016-01-03 4 4 2 4 2
11 2016-01-03 2 2 2 2 2
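For reference, a minimal reproducible sketch of the same approach; the frame is rebuilt here from the question's table, so the literal values are just the example data:
import pandas as pd

df = pd.DataFrame({
    'A': ['2016-01-01'] * 4 + ['2016-01-02'] * 3 + ['2016-01-03'] * 4,
    'B': [0, 7, 3, 4, 2, 5, 1, 1, 3, 4, 2],
})

# Running max/min from each row to the end of its date group
df['C_max'] = df.groupby('A')['B'].transform(lambda x: x[::-1].cummax()[::-1])
df['D_min'] = df.groupby('A')['B'].transform(lambda x: x[::-1].cummin()[::-1])
print(df)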

Related

Fill in Data Frame based on Previous Data

I am working on a project with a retailer where we want to clean some data for reporting purposes.
The retailer has multiple stores, and every week the staff in each store scan the different items on different displays (they scan the display first to let us know which display they are talking about). They only scan displays that changed in that week; if a display was not changed, we assume it stayed the same.
Right now we are working with 2 dataframes:
Hierarchy Data Frame Example:
This table basically has weeks 1 to 52 for every end cap (display) in every store. Let's assume the company only has 2 stores and 3 end caps in each store. Also different stores could have different End Cap codes but that shouldn't matter for our purposes (I don't think).
Week Store End Cap
0 1 1 A
1 1 1 B
2 1 1 C
3 1 2 A
4 1 2 B
5 1 2 D
6 2 1 A
7 2 1 B
8 2 1 C
9 2 2 A
10 2 2 B
11 2 2 D
Next we have the historical file with actual changes to be used to update the End Caps.
Week Store End Cap UPC
0 1 1 A 123456
1 1 1 B 789456
2 1 1 B 546879
3 1 1 C 423156
4 1 2 A 231567
5 1 2 B 456123
6 1 2 D 689741
7 2 1 A 321654
8 2 1 C 852634
9 2 1 C 979541
10 2 2 A 132645
11 2 2 B 787878
12 2 2 D 615432
To merge the two dataframes I used:
merged_df = pd.merge(hierarchy, hist, how='left', left_on=['Week','Store', 'End Cap'], right_on = ['Week','Store', 'End Cap'])
Which gave me:
Week Store End Cap UPC
0 1 1 A 123456.0
1 1 1 B 789456.0
2 1 1 B 546879.0
3 1 1 C 423156.0
4 1 2 A 231567.0
5 1 2 B 456123.0
6 1 2 D 689741.0
7 2 1 A 321654.0
8 2 1 B NaN
9 2 1 C 852634.0
10 2 1 C 979541.0
11 2 2 A 132645.0
12 2 2 B 787878.0
13 2 2 D 615432.0
Except for the one instance where it shows NaN: End Cap B at Store 1 in Week 2 did not change and hence was not scanned, so it did not show up in the historical dataframe. In this case I would want to see the latest items that were scanned for that end cap at that store (see rows 2 & 3 of the historical dataframe). Technically that could also have been scanned in Week 52 of last year, but I just want to fill the NaN with the latest information to show that it did not change. How do I go about doing that?
The desired output would look like:
Week Store End Cap UPC
0 1 1 A 123456.0
1 1 1 B 789456.0
2 1 1 B 546879.0
3 1 1 C 423156.0
4 1 2 A 231567.0
5 1 2 B 456123.0
6 1 2 D 689741.0
7 2 1 A 321654.0
8 2 1 B 789456.0
9 2 1 B 546879.0
10 2 1 C 852634.0
11 2 1 C 979541.0
12 2 2 A 132645.0
13 2 2 B 787878.0
14 2 2 D 615432.0
Thank you!
EDIT:
Further to the above, I tried to sort the data and then forward fill which only partially fixed the issue I am having:
sorted_df = merged_df.sort_values(['End Cap', 'Store'], ascending=[True, True])
Week Store End Cap UPC
0 1 1 A 123456.0
7 2 1 A 321654.0
4 1 2 A 231567.0
11 2 2 A 132645.0
1 1 1 B 789456.0
2 1 1 B 546879.0
8 2 1 B NaN
5 1 2 B 456123.0
12 2 2 B 787878.0
3 1 1 C 423156.0
9 2 1 C 852634.0
10 2 1 C 979541.0
6 1 2 D 689741.0
13 2 2 D 615432.0
sorted_filled = sorted_df.fillna(method='ffill')
Gives me:
Week Store End Cap UPC
0 1 1 A 123456.0
7 2 1 A 321654.0
4 1 2 A 231567.0
11 2 2 A 132645.0
1 1 1 B 789456.0
2 1 1 B 546879.0
8 2 1 B 546879.0
5 1 2 B 456123.0
12 2 2 B 787878.0
3 1 1 C 423156.0
9 2 1 C 852634.0
10 2 1 C 979541.0
6 1 2 D 689741.0
13 2 2 D 615432.0
This output did add 546879 to Week 2, Store 1, End Cap B, but it did not add 789456, which I also need; it should add another row with that value as well.
You can also do it like this, creating a helper key with cumcount to handle duplicate UPCs per week/store/end cap.
idxcols = ['Week', 'Store', 'End Cap']
hist_idx = hist.set_index(idxcols + [hist.groupby(idxcols).cumcount()])
hier_idx = hierarchy.set_index(idxcols + [hierarchy.groupby(idxcols).cumcount()])

(hier_idx.join(hist_idx, how='right')
         .unstack('Week')
         .ffill(axis=1)
         .stack('Week')
         .reorder_levels([3, 0, 1, 2])
         .sort_index()
         .reset_index()
         .drop('level_3', axis=1))
Output:
Week Store End Cap UPC
0 1 1 A 123456.0
1 1 1 B 789456.0
2 1 1 B 546879.0
3 1 1 C 423156.0
4 1 2 A 231567.0
5 1 2 B 456123.0
6 1 2 D 689741.0
7 2 1 A 321654.0
8 2 1 B 789456.0
9 2 1 B 546879.0
10 2 1 C 852634.0
11 2 1 C 979541.0
12 2 2 A 132645.0
13 2 2 B 787878.0
14 2 2 D 615432.0
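The cumcount call is what lets two scans of the same end cap in the same week survive the reshape: it gives each duplicate its own index label. A minimal sketch of that helper level, using a hypothetical two-row slice of hist:
import pandas as pd

# Hypothetical slice of hist: End Cap B at Store 1 was scanned twice in Week 1
hist = pd.DataFrame({'Week': [1, 1], 'Store': [1, 1],
                     'End Cap': ['B', 'B'], 'UPC': [789456, 546879]})

idxcols = ['Week', 'Store', 'End Cap']
# cumcount numbers repeated (Week, Store, End Cap) combinations 0, 1, 2, ...
# so the two UPCs land on distinct MultiIndex entries instead of colliding
print(hist.set_index(idxcols + [hist.groupby(idxcols).cumcount()]))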
You could try something like this:
# New df without NaN values (reset the index so new rows can be appended safely)
df1 = merged_df[~merged_df["UPC"].isna()].reset_index(drop=True)
# New df with NaN values only
df2 = merged_df[merged_df["UPC"].isna()].copy()
# Set previous week
df2["Week"] = df2["Week"] - 1
# For each Week/Store/End Cap in df2, grab the corresponding UPC value(s) in df1
# and append new rows (shifted back to the current week) to df1
for week, store, cap in zip(df2["Week"], df2["Store"], df2["End Cap"]):
    mask = (
        (df1["Week"] == week)
        & (df1["Store"] == store)
        & (df1["End Cap"] == cap)
    )
    for upc in df1.loc[mask, "UPC"]:
        df1.loc[len(df1)] = [week + 1, store, cap, upc]
sorted_df = df1.sort_values(by=["Week", "Store", "End Cap"])

If a column value does not have a certain number of occurrences in a dataframe, how to duplicate rows at random until that count is met?

Say that this is what my dataframe looks like
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
I want every unique value in column B to occur at least 3 times, so none of the rows with a B value of 5 are duplicated, the row with a B value of 0 is duplicated twice, and the rest have one of their two rows duplicated at random.
Here is an example desired output
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
10 4 2
11 2 3
12 2 0
13 2 0
14 4 1
Edit:
The row chosen to be duplicated should be selected at random
To pick rows at random, I would use groupby + apply with sample on each group. The x in the lambda is each group of B, so I use repeats - x.shape[0] to find the number of rows that need to be created. Some B groups may already have 3 or more rows, so I use np.clip to force negative values to 0; sampling 0 rows is the same as ignoring that group. Finally, reset_index and append the result back to df.
import numpy as np

repeats = 3
# Sample (repeats - group size) extra rows from each B group, clipped at 0,
# with replacement so a single-row group can still be duplicated twice
df1 = (df.groupby('B')
         .apply(lambda x: x.sample(n=np.clip(repeats - x.shape[0], 0, np.inf).astype(int),
                                   replace=True))
         .reset_index(drop=True))
df_final = pd.concat([df, df1]).reset_index(drop=True)
Out[43]:
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
10 2 0
11 2 0
12 5 1
13 4 2
14 2 3
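If reproducible picks matter, DataFrame.sample also takes a random_state seed; a small hedged variant of the same idea (the sample frame is rebuilt from the question's data):
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 4, 3, 3, 3, 2, 4, 2, 4, 5],
                   'B': [5, 2, 5, 3, 2, 0, 5, 3, 1, 1]})
repeats = 3

# Same approach as above, but with a fixed seed so reruns duplicate the same rows
df1 = (df.groupby('B')
         .apply(lambda x: x.sample(n=np.clip(repeats - x.shape[0], 0, np.inf).astype(int),
                                   replace=True, random_state=0))
         .reset_index(drop=True))
df_final = pd.concat([df, df1]).reset_index(drop=True)
print(df_final)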

Comparing two dataframes and outputting the index of each duplicated row once

I need help with comparing two dataframes. For example:
The first dataframe is
df_1 =
0 1 2 3 4 5
0 1 1 1 1 1 1
1 2 2 2 2 2 2
2 3 3 3 3 3 3
3 4 4 4 4 4 4
4 2 2 2 2 2 2
5 5 5 5 5 5 5
6 1 1 1 1 1 1
7 6 6 6 6 6 6
The second dataframe is
df_2 =
0 1 2 3 4 5
0 1 1 1 1 1 1
1 2 2 2 2 2 2
2 3 3 3 3 3 3
3 4 4 4 4 4 4
4 5 5 5 5 5 5
5 6 6 6 6 6 6
May I know if there is a way (without using a for loop) to find the indices of the rows of df_1 that have the same row values as df_2? In the example above, my expected output is below.
index =
0
1
2
3
5
7
The "index" variable above should have as many entries as df_2 has rows.
If the same row of df_2 is repeated in df_1 more than once, I only need the index of the first appearance; that's why I don't need indices 4 and 6.
Please help. Thank you so much!
Tommy
Use DataFrame.merge with DataFrame.drop_duplicates, and DataFrame.reset_index to convert the index into a column so the original index values are not lost; last, select the column called index:
s = df_2.merge(df_1.drop_duplicates().reset_index())['index']
print (s)
0 0
1 1
2 2
3 3
4 5
5 7
Name: index, dtype: int64
Detail:
print (df_2.merge(df_1.drop_duplicates().reset_index()))
0 1 2 3 4 5 index
0 1 1 1 1 1 1 0
1 2 2 2 2 2 2 1
2 3 3 3 3 3 3 2
3 4 4 4 4 4 4 3
4 5 5 5 5 5 5 5
5 6 6 6 6 6 6 7
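For completeness, a minimal reproducible sketch of this approach; the two frames are reconstructed here from the question's values:
import pandas as pd

df_1 = pd.DataFrame([[v] * 6 for v in [1, 2, 3, 4, 2, 5, 1, 6]])
df_2 = pd.DataFrame([[v] * 6 for v in [1, 2, 3, 4, 5, 6]])

# drop_duplicates keeps only the first appearance of each repeated row in df_1,
# reset_index turns its original index into an 'index' column,
# and merging on the six value columns aligns df_2 with those first appearances
s = df_2.merge(df_1.drop_duplicates().reset_index())['index']
print(s)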
Check this solution:
df1 = pd.DataFrame({'0': [1, 2, 3, 4, 2, 5, 1, 6],
                    '1': [1, 2, 3, 4, 2, 5, 1, 6],
                    '2': [1, 2, 3, 4, 2, 5, 1, 6],
                    '3': [1, 2, 3, 4, 2, 5, 1, 6],
                    '4': [1, 2, 3, 4, 2, 5, 1, 6],
                    '5': [1, 2, 3, 4, 2, 5, 1, 6]})
df2 = pd.DataFrame({'0': [1, 2, 3, 4, 5, 6],
                    '1': [1, 2, 3, 4, 5, 66],
                    '2': [1, 2, 3, 4, 5, 6],
                    '3': [1, 2, 3, 4, 5, 66],
                    '4': [1, 2, 3, 4, 5, 6],
                    '5': [1, 2, 3, 4, 5, 6]})
df1[df1.isin(df2)].index.values.tolist()
### Output
[0, 1, 2, 3, 4, 5, 6, 7]

Which rows are duplicates of each other

I have a dataframe with a lot of columns. Some of the rows are duplicates (on a certain subset of columns).
Now I want to find out which row duplicates which row and put them together.
For instance, let's suppose that the data frame is
id A B C
0 0 1 2 0
1 1 2 3 4
2 2 1 4 8
3 3 1 2 3
4 4 2 3 5
5 5 5 6 2
and subset is
['A','B']
I expect something like this:
id A B C
0 0 1 2 0
1 3 1 2 3
2 1 2 3 4
3 4 2 3 5
4 2 1 4 8
5 5 5 6 2
Is there any function that can help me do this?
Thanks :)
Use DataFrame.duplicated with keep=False to build a mask of all dupes, then filter by boolean indexing, sort with DataFrame.sort_values, and join the pieces back together with concat:
L = ['A','B']
m = df.duplicated(L, keep=False)
df = pd.concat([df[m].sort_values(L), df[~m]], ignore_index=True)
print (df)
id A B C
0 0 1 2 0
1 3 1 2 3
2 1 2 3 4
3 4 2 3 5
4 2 1 4 8
5 5 5 6 2

Pandas count values inside dataframe

I have a dataframe that looks like this:
A B C
1 1 8 3
2 5 4 3
3 5 8 1
and I want to count how often each value occurs across the whole dataframe, so the result looks like this:
total
1 2
3 2
4 1
5 2
8 2
is it possible with pandas?
With np.unique -
In [332]: df
Out[332]:
A B C
1 1 8 3
2 5 4 3
3 5 8 1
In [333]: ids, c = np.unique(df.values.ravel(), return_counts=1)
In [334]: pd.DataFrame({'total':c}, index=ids)
Out[334]:
total
1 2
3 2
4 1
5 2
8 2
With pandas-series -
In [357]: pd.Series(np.ravel(df)).value_counts().sort_index()
Out[357]:
1 2
3 2
4 1
5 2
8 2
dtype: int64
You can also use stack() and groupby()
df = pd.DataFrame({'A':[1,8,3],'B':[5,4,3],'C':[5,8,1]})
print(df)
A B C
0 1 5 5
1 8 4 8
2 3 3 1
df1 = df.stack().reset_index(1)
df1.groupby(0).count()
level_1
0
1 2
3 2
4 1
5 2
8 2
Another alternative may be to use stack, followed by value_counts, then convert the result to a frame and finally sort the index:
count_df = df.stack().value_counts().to_frame('total').sort_index()
count_df
Result:
total
1 2
3 2
4 1
5 2
8 2
Using np.unique with return_counts=True and np.column_stack():
pd.DataFrame(np.column_stack(np.unique(df, return_counts=True)))
returns:
0 1
0 1 2
1 3 2
2 4 1
3 5 2
4 8 2