How to sum values under GroupBy with consecutive date conditions? - pandas

Given this table:

ID  LINE  SITE  DATE         UNITS  TOTAL
1   X     AAA   02-May-2017  12     30
2   X     AAA   03-May-2017  10     22
3   X     AAA   04-May-2017  22     40
4   Z     AAA   20-May-2017  15     44
5   Z     AAA   21-May-2017  8      30
6   Z     BBB   22-May-2017  10     32
7   Z     BBB   23-May-2017  25     52
8   K     CCC   02-Jun-2017  6      22
9   K     CCC   03-Jun-2017  4      33
10  K     CCC   12-Aug-2017  11     44
11  K     CCC   13-Aug-2017  19     40
12  K     CCC   14-Aug-2017  30     40
For each row whose ID, LINE, and SITE match the previous row (day), I need to calculate "Last day" and "Last 3 days" as shown below. Note that I need to ensure the dates are consecutive within each group of the ID, LINE, and SITE columns:
ID  LINE  SITE  DATE         UNITS  TOTAL  Last day  Last 3 days
1   X     AAA   02-May-2017  12     30     0         0
2   X     AAA   03-May-2017  10     22     12/30     12/30
3   X     AAA   04-May-2017  22     40     10/22     (10+12)/(30+22)
4   Z     AAA   20-May-2017  15     44     0         0
5   Z     AAA   21-May-2017  8      30     15/44     15/44
6   Z     BBB   22-May-2017  10     32     0         0
7   Z     BBB   23-May-2017  25     52     10/32     10/32
8   K     CCC   02-Jun-2017  6      22     0         0
9   K     CCC   03-Jun-2017  4      33     6/22      6/22
10  K     CCC   12-Aug-2017  11     44     4/33      0
11  K     CCC   13-Aug-2017  19     40     11/44     11/44
12  K     CCC   14-Aug-2017  30     40     19/40     (11+19)/(44+40)

In cases like this I usually do a for loop with groupby:
import pandas as pd
import numpy as np

# copied your table
table = pd.read_csv('/home/fm/Desktop/stackover.csv')
table.set_index('ID', inplace=True)
table['Last day'] = np.nan
table['Last 3 days'] = np.nan

for i, r in table.groupby(['LINE', 'SITE']):
    # First, find where the dates stop being consecutive
    limits_interval = pd.to_datetime(r['DATE']).diff() != pd.Timedelta('1 days')
    # The first element is a false positive: there is no previous day to compare
    limits_interval.iloc[0] = False
    ids_subset = r.index[limits_interval].to_list()
    ids_subset.append(r.index[-1] + 1)  # so the last run is included too
    id_start = 0
    for id_end in ids_subset:
        r_sub = r.loc[id_start:id_end - 1, :].copy()
        id_start = id_end
        # Shift everything one row; with one line per day this is "yesterday"
        r_shifted = r_sub.shift(1)
        r_sub['Last day'] = r_shifted['UNITS'] / r_shifted['TOTAL']
        # Sum over a rolling window so the look-back is capped at 3 days
        aux_units = r_shifted['UNITS'].rolling(3, min_periods=1).sum()
        aux_total = r_shifted['TOTAL'].rolling(3, min_periods=1).sum()
        r_sub['Last 3 days'] = aux_units / aux_total
        r_sub.fillna(0, inplace=True)
        table.loc[r_sub.index, :] = r_sub.copy()
You could wrap this in a function and apply it with groupby, which would be cleaner and more elegant (see: Apply function to pandas groupby).
Hope this helps, good luck.
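For reference, a more vectorized take on the same idea is the sketch below (mine, not tested against your CSV): build a run identifier that restarts whenever LINE/SITE change or the dates stop being consecutive, then let groupby handle the shift and the 3-day rolling sum. The date format string is an assumption based on the sample data.

import pandas as pd

df = table.reset_index()  # 'table' as loaded above, with ID back as a column
df['DATE'] = pd.to_datetime(df['DATE'], format='%d-%b-%Y')  # assumed format

# A new run starts whenever LINE/SITE change or the date gap is not one day
new_run = (df[['LINE', 'SITE']].ne(df[['LINE', 'SITE']].shift()).any(axis=1)
           | df['DATE'].diff().ne(pd.Timedelta(days=1)))
g = df.groupby(new_run.cumsum())

# Last day: yesterday's UNITS / TOTAL within the run
df['Last day'] = (g['UNITS'].shift() / g['TOTAL'].shift()).fillna(0)

# Last 3 days: shift one day, then sum over a rolling window of (up to) 3 days
units3 = g['UNITS'].transform(lambda s: s.shift().rolling(3, min_periods=1).sum())
total3 = g['TOTAL'].transform(lambda s: s.shift().rolling(3, min_periods=1).sum())
df['Last 3 days'] = (units3 / total3).fillna(0)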

Related

How to reshape, group by and rename a Julia dataframe?

I have the following DataFrame:
Police Product PV1 PV2 PV3 PM1 PM2 PM3
0 1 AA 10 8 14 150 145 140
1 2 AB 25 4 7 700 650 620
2 3 AA 13 22 5 120 80 60
3 4 AA 12 6 12 250 170 120
4 5 AB 10 13 5 500 430 350
5 6 BC 7 21 12 1200 1000 900
PV1 is the item PV for year 1, PV2 for year 2, and so on.
I would like to combine reshaping and group-by operations, plus some renaming, to obtain the DF below:
Product Item Year1 Year2 Year3
0 AA PV 35 36 31
1 AA PM 520 395 320
2 AB PV 35 17 12
3 AB PM 1200 1080 970
4 BC PV 7 21 12
5 BC PM 1200 1000 900
The idea is to group by product name and reshape the DF so that the item (PV/PM) becomes a column, with the sum of each item in new year columns.
I found a way to do it in Python, but I am now looking for a way to port my code to Julia.
The groupby operation is no problem, but I am struggling with the reshaping/renaming part.
If you have any ideas, I would be very grateful.
Thanks for any help.
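For reference, the pandas route mentioned above might look roughly like this sketch (my reconstruction with the sample data inlined, not the asker's actual code); it mirrors the melt/split/pivot logic of the Julia answer below:

import pandas as pd

df = pd.DataFrame({
    "Police": [1, 2, 3, 4, 5, 6],
    "Product": ["AA", "AB", "AA", "AA", "AB", "BC"],
    "PV1": [10, 25, 13, 12, 10, 7], "PV2": [8, 4, 22, 6, 13, 21],
    "PV3": [14, 7, 5, 12, 5, 12], "PM1": [150, 700, 120, 250, 500, 1200],
    "PM2": [145, 650, 80, 170, 430, 1000], "PM3": [140, 620, 60, 120, 350, 900],
})

# Long form: one row per (Product, item-year) value
long = df.drop(columns="Police").melt(id_vars="Product")
# Split e.g. "PV1" into Item "PV" and Year "1"
long[["Item", "Year"]] = long["variable"].str.extract(r"([^\d]+)(\d+)")
out = (long.pivot_table(index=["Product", "Item"], columns="Year",
                        values="value", aggfunc="sum")
           .rename(columns=lambda y: "Year" + y)
           .reset_index())
print(out)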
Edit:
As recommended, I have installed Julia 1.5 and updated the DataFrames package to version 0.22. As a result, the code runs well. The only remaining issue is the non-constant length of the column names in my real DF, which makes the transform part of the code not completely suitable. I will look for a way to split the characters from the digits with a regular expression.
Thanks a lot for your time, and sorry for the editing mistakes.
There are probably several ways to do it. Here is an example using built-in functions (it also takes advantage of several advanced features at once, so if you have any questions about the code, please comment and I can explain):
julia> using CSV, DataFrames, Chain
julia> str = """
Police Product PV1 PV2 PV3 PM1 PM2 PM3
1 AA 10 8 14 150 145 140
2 AB 25 4 7 700 650 620
3 AA 13 22 5 120 80 60
4 AA 12 6 12 250 170 120
5 AB 10 13 5 500 430 350
6 BC 7 21 12 1200 1000 900""";
julia> df = CSV.read(IOBuffer(str), DataFrame, ignorerepeated=true, delim=" ");

julia> @chain df begin
           groupby(:Product)
           combine(names(df, r"\d") .=> sum, renamecols=false)
           stack(Not(:Product))
           transform!(:variable => ByRow(x -> (first(x, 2), last(x, 1))) => [:Item, :Year])
           unstack([:Product, :Item], :Year, :value, renamecols = x -> Symbol("Year", x))
           sort!(:Product)
       end
6×5 DataFrame
Row │ Product Item Year1 Year2 Year3
│ String String Int64? Int64? Int64?
─────┼─────────────────────────────────────────
1 │ AA PV 35 36 31
2 │ AA PM 520 395 320
3 │ AB PV 35 17 12
4 │ AB PM 1200 1080 970
5 │ BC PV 7 21 12
6 │ BC PM 1200 1000 900
I used Chain.jl just to show how it can be employed in practice (of course it is not required).
You can add an @aside show(_) annotation after any stage of the pipeline to see the intermediate result of that processing step.
Edit:
Is this the regex you need (split non-digit characters followed by digit characters)?
julia> match(r"([^\d]+)(\d+)", "fsdfds123").captures
2-element Array{Union{Nothing, SubString{String}},1}:
"fsdfds"
"123"
Then just write:
ByRow(x -> match(r"([^\d]+)(\d+)", x).captures)
as your transformation

Preparing a dataframe with elements repeated from a list in Python

I have a list: primary = ['A', 'B', 'C', 'D']
and a DataFrame:
df2 = pd.DataFrame(data=dateRange, columns=['Date'])
which contains one date column running from 01-July-2020 to 31-Dec-2020.
I created another column 'DayNum' holding the weekday number of each date: 01-July-2020 is a Wednesday, so its 'DayNum' is 2, and so on.
Now, using the list, I want to create another column 'Primary' in which the elements of the list repeat week by week. Think of it as a roster showing the person on duty each week, where Monday is the start (day 0) and Sunday is the end (day 6).
The output should be like this:
Date DayNum Primary
0 01-Jul-20 2 A
1 02-Jul-20 3 A
2 03-Jul-20 4 A
3 04-Jul-20 5 A
4 05-Jul-20 6 A
5 06-Jul-20 0 B
6 07-Jul-20 1 B
7 08-Jul-20 2 B
8 09-Jul-20 3 B
9 10-Jul-20 4 B
10 11-Jul-20 5 B
11 12-Jul-20 6 B
12 13-Jul-20 0 C
13 14-Jul-20 1 C
14 15-Jul-20 2 C
15 16-Jul-20 3 C
16 17-Jul-20 4 C
17 18-Jul-20 5 C
18 19-Jul-20 6 C
19 20-Jul-20 0 D
20 21-Jul-20 1 D
21 22-Jul-20 2 D
22 23-Jul-20 3 D
23 24-Jul-20 4 D
24 25-Jul-20 5 D
25 26-Jul-20 6 D
26 27-Jul-20 0 A
27 28-Jul-20 1 A
28 29-Jul-20 2 A
29 30-Jul-20 3 A
30 31-Jul-20 4 A
First, compare the column against 0 with Series.eq and take the cumulative sum with Series.cumsum to get a group number for each week; then take the modulo by the number of values in the list with Series.mod, and finally map through a dictionary created by enumerate over the list using Series.map:
primary = ['A','B','C','D']
d = dict(enumerate(primary))
df['Primary'] = df['DayNum'].eq(0).cumsum().mod(len(primary)).map(d)
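Put together as a runnable sketch (the date range and the Monday-based DayNum are reconstructed from the question's description, so adjust them if your setup differs):

import pandas as pd

primary = ['A', 'B', 'C', 'D']
df = pd.DataFrame({'Date': pd.date_range('2020-07-01', '2020-12-31', freq='D')})
df['DayNum'] = df['Date'].dt.dayofweek  # Monday=0 ... Sunday=6

# Each Monday (DayNum == 0) starts a new week; cycle through the list weekly
d = dict(enumerate(primary))
df['Primary'] = df['DayNum'].eq(0).cumsum().mod(len(primary)).map(d)
print(df.head(8))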

cumulative product for specific groups of observations in pandas

I have a dataset of the following type
Date ID window var
0 1998-01-28 X -5 8.500e-03
1 1998-01-28 Y -5 1.518e-02
2 1998-01-29 X -4 8.005e-03
3 1998-01-29 Y -4 7.905e-03
4 1998-01-30 X -3 -5.497e-03
... ... ... ...
3339 2016-12-19 Y 3 -4.365e-04
3340 2016-12-20 X 4 3.628e-03
3341 2016-12-20 Y 4 6.608e-03
3342 2016-12-21 X 5 -2.467e-03
3343 2016-12-21 Y 5 -2.651e-03
My aim is to calculate the cumulative product of the variable var according to the variable window. For every date I have identified a window of 5 days around it (the variable window goes from -5 to 5), and I want the cumulative product computed within the window belonging to each date. For example, the first date (1998-01-28) has a window value of -5 and thus marks the starting point of the calculation. I want a new variable called cumprod that equals var on the date where window is -5, then the product of var at -5 and -4, and so on until window reaches 5. This defines cumprod for the first group of dates, where each group consists of consecutive dates over which window runs from -5 to 5. I then repeat this for every group of dates, obtaining something like
Date ID window var cumprod
0 1998-01-28 X -5 8.500e-03 8.500e-03
1 1998-01-28 Y -5 1.518e-02 1.518e-02
2 1998-01-29 X -4 8.005e-03 6.80425e-05
3 1998-01-29 Y -4 7.905e-03 0.00011999790000000002
4 1998-01-30 X -3 -5.497e-03
... ... ... ...
3339 2016-12-19 Y 3 -4.365e-04
3340 2016-12-20 X 4 3.628e-03
3341 2016-12-20 Y 4 6.608e-03
3342 2016-12-21 X 5 -2.467e-03
3343 2016-12-21 Y 5 -2.651e-03
where I have filled in example cumprod values for the first two dates.
How could I achieve this? I was thinking of attaching an identifier to every group of dates and then running some sort of cumprod() via .groupby(group_identifier), but I can't work out how to build that identifier. Would it be possible to simplify this with a rolling function on window? Any other approach is very welcome.
I suggest the following
import numpy as np
import pandas as pd

np.random.seed(123)
df = pd.DataFrame({"Date": pd.date_range("1998-01-28", freq="d", periods=22),
                   "window": np.concatenate([np.arange(-5, 6, 1), np.arange(-5, 6, 1)]),
                   "var": np.random.randint(1, 10, 22)})
My df is similar to yours:
Date window var
0 1998-01-28 -5 3
1 1998-01-29 -4 3
2 1998-01-30 -3 7
3 1998-01-31 -2 2
4 1998-02-01 -1 4
5 1998-02-02 0 7
6 1998-02-03 1 2
7 1998-02-04 2 1
8 1998-02-05 3 2
9 1998-02-06 4 1
10 1998-02-07 5 1
11 1998-02-08 -5 4
12 1998-02-09 -4 5
Then I create a grouping variable and transform var using cumprod (writing the result to a new cumprod column, so the var column is not duplicated):
df = df.sort_values("Date")  # my df is already sorted by Date given the way
                             # I created it, but this makes sure yours is too
df["group"] = (df["window"] == -5).cumsum()
df["cumprod"] = df.groupby("group")["var"].transform("cumprod")
And the result is :
Date window var group cumprod
0 1998-01-28 -5 3 1 3
1 1998-01-29 -4 3 1 9
2 1998-01-30 -3 7 1 63
3 1998-01-31 -2 2 1 126
4 1998-02-01 -1 4 1 504
5 1998-02-02 0 7 1 3528
6 1998-02-03 1 2 1 7056
7 1998-02-04 2 1 1 7056
8 1998-02-05 3 2 1 14112
9 1998-02-06 4 1 1 14112
10 1998-02-07 5 1 1 14112
11 1998-02-08 -5 4 2 4
12 1998-02-09 -4 5 2 20
13 1998-02-10 -3 1 2 20
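One caveat: in the question's data the X and Y rows are interleaved, so a single (df["window"] == -5).cumsum() over the whole frame would bump the counter twice at the start of every window. A sketch of how I would adapt it (using the question's column names, with the first four rows of its data as a stand-in):

import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["1998-01-28", "1998-01-28", "1998-01-29", "1998-01-29"]),
    "ID": ["X", "Y", "X", "Y"],
    "window": [-5, -5, -4, -4],
    "var": [8.500e-03, 1.518e-02, 8.005e-03, 7.905e-03],
})

df = df.sort_values(["Date", "ID"])
# Count window starts separately per ID, then multiply within each (ID, run)
df["group"] = df.groupby("ID")["window"].transform(lambda s: s.eq(-5).cumsum())
df["cumprod"] = df.groupby(["ID", "group"])["var"].transform("cumprod")
print(df)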

"'Series' objects are mutable, thus they cannot be hashed" when trying to sum columns (datatype is float)

I am trying to sum all values in a range of columns, from the third to the last of several thousand columns, using:
day3prep['D3counts'] = day3prep.sum(day3prep.iloc[:, 2:].sum(axis=1))
dataframe is formated as:
ID G1 Z1 Z2 ...ZN
0 50 13 12 ...62
1 51 62 23 ...19
dataframe with summed column:
ID G1 Z1 Z2 ...ZN D3counts
0 50 13 12 ...62 sum(Z1:ZN in row 0)
1 51 62 23 ...19 sum(Z1:ZN in row 1)
I've changed the NaNs to 0's. The datatype is float but I am getting the error:
'Series' objects are mutable, thus they cannot be hashed
The problem is the outer call: the row-sum Series ends up being passed as the axis argument of DataFrame.sum, which pandas then tries to hash. You only need the inner part:
day3prep['D3counts'] = day3prep.iloc[:, 2:].sum(axis=1)
With some random numbers:
import pandas as pd
import random
random.seed(42)
day3prep = pd.DataFrame({'ID': random.sample(range(10), 5),
                         'G1': random.sample(range(10), 5),
                         'Z1': random.sample(range(10), 5),
                         'Z2': random.sample(range(10), 5),
                         'Z3': random.sample(range(10), 5)})
day3prep['D3counts'] = day3prep.iloc[:, 2:].sum(axis=1)
Output:
> day3prep
ID G1 Z1 Z2 Z3 D3counts
0 1 2 0 8 8 16
1 0 1 9 0 6 15
2 4 8 1 3 3 7
3 9 4 7 5 7 19
4 6 3 6 6 4 16
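For what it's worth, here is a minimal sketch of why the original line fails: the row-sum Series lands in the axis parameter of the outer DataFrame.sum, and pandas raises when it tries to hash it as an axis label.

import pandas as pd

df = pd.DataFrame({'ID': [0, 1], 'G1': [50, 51], 'Z1': [13, 62], 'Z2': [12, 23]})
row_sums = df.iloc[:, 2:].sum(axis=1)  # the Series you actually want

# df.sum(row_sums) is df.sum(axis=row_sums): resolving the axis hashes the
# Series and raises "'Series' objects are mutable, thus they cannot be hashed"
try:
    df.sum(row_sums)
except TypeError as e:
    print(e)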

Inserting an empty line between every two elements of a column (data frame + pandas)

My data frame looks something like this:
Games
0 CAR 20
1 DEN 21
2 TB 31
3 ATL 24
4 SD 27
5 KC 33
6 CIN 23
7 NYJ 22
import pandas as pd
df = pd.read_csv('weekone.txt')
df.columns = ['Games']
I'm trying to put a blank line between every two elements (teams), so that it looks like this:
Games
0 CAR 20
1 DEN 21

2 TB 31
3 ATL 24

4 SD 27
5 KC 33

6 CIN 23
7 NYJ 22
But when I'm using this loop
for i in df2.index:
    if (df2.index[i]) % 2 == 1:
        df2.Games[i] = df2.Games[i] + '\n'
    else:
        df2.Games[i] = df2.Games[i]
I'm getting an output like this:
Games
0 CAR 20
1 DEN 21\n
2 TB 31
3 ATL 24\n
4 SD 27
5 KC 33\n
6 CIN 23
7 NYJ 22\n
What am I doing wrong? Thanks.
Appending '\n' to a cell only stores a newline inside the string; it does not create a new row, and the display shows it literally. You need to insert actual empty rows instead. You can do it this way:
In [172]: x
Out[172]:
Games
0 CAR 20
1 DEN 21
2 TB 31
3 ATL 24
4 SD 27
5 KC 33
6 CIN 23
7 NYJ 22
In [173]: %paste
empty_line = pd.DataFrame([''], columns=x.columns, index=[''])
rslt = x.loc[:1]
g = x.groupby(x.index // 2)
for i in range(1, len(g)):
    rslt = pd.concat([rslt, empty_line, g.get_group(i)])
## -- End pasted text --
In [174]: rslt
Out[174]:
    Games
0  CAR 20
1  DEN 21

2   TB 31
3  ATL 24

4   SD 27
5   KC 33

6  CIN 23
7  NYJ 22
the index's dtype is object now:
In [178]: rslt.index.dtype
Out[178]: dtype('O')
or having -1 as an index for empty lines:
In [175]: %paste
empty_line = pd.DataFrame([''], columns=x.columns, index=[-1])
rslt = x.loc[:1]
g = x.groupby(x.index // 2)
for i in range(1, len(g)):
    rslt = pd.concat([rslt, empty_line, g.get_group(i)])
## -- End pasted text --
In [176]: rslt
Out[176]:
Games
0 CAR 20
1 DEN 21
-1
2 TB 31
3 ATL 24
-1
4 SD 27
5 KC 33
-1
6 CIN 23
7 NYJ 22
index dtype:
In [181]: rslt.index.dtype
Out[181]: dtype('int64')
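A slightly shorter variant of the same idea, sketched under the assumption of the same single-column frame: collect each two-row pair plus a blank row in a list and concatenate once at the end, which avoids growing the frame inside the loop.

import pandas as pd

df = pd.DataFrame({'Games': ['CAR 20', 'DEN 21', 'TB 31', 'ATL 24',
                             'SD 27', 'KC 33', 'CIN 23', 'NYJ 22']})

blank = pd.DataFrame({'Games': ['']}, index=[''])
pieces = []
for _, pair in df.groupby(df.index // 2):
    pieces.extend([pair, blank])  # each two-team pair followed by a blank row
rslt = pd.concat(pieces[:-1])     # drop the trailing blank row
print(rslt)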