New Column With Repeated Value from a different column - pandas

I have the following code. I need to add a column deaths_last_tuesday which shows the deaths from last Tuesday, for each day.
import pandas as pd
data = {'date': ['2014-05-01', '2014-05-02', '2014-05-03', '2014-05-04', '2014-05-05', '2014-05-06', '2014-05-07',
'2014-05-08', '2014-05-09', '2014-05-10', '2014-05-11', '2014-05-12', '2014-05-13', '2014-05-14',
'2014-05-15', '2014-05-16', '2014-05-17', '2014-05-18', '2014-05-19', '2014-05-20'],
'battle_deaths': [34, 25, 26, 15, 15, 14, 26, 25, 62, 41, 23, 56, 23, 34, 23, 67, 54, 34, 45, 12]}
df = pd.DataFrame(data, columns=['date', 'battle_deaths'])
df['date'] = pd.to_datetime(df['date'])
df['day_of_week'] = df['date'].dt.dayofweek
df = df.set_index('date')
df = df.sort_index()
My desired output is:
battle_deaths day_of_week deaths_last_tuesday
date
2014-05-01 34 3
2014-05-02 25 4 24
2014-05-03 26 5 24
2014-05-04 15 6 24
2014-05-05 15 0 24
2014-05-06 14 1 24
2014-05-07 26 2 24
2014-05-08 25 3 24
2014-05-09 62 4 25
2014-05-10 41 5 25
2014-05-11 23 6 25
2014-05-12 56 0 25
I want to do this so that I can compare the deaths on each day with the deaths on the previous Tuesday.

Use:
df['deaths_last_tuesday'] = df['battle_deaths'].where(df['day_of_week'].eq(3)).ffill().shift()
print (df)
battle_deaths day_of_week deaths_last_tuesday
date
2014-05-01 34 3 NaN
2014-05-02 25 4 34.0
2014-05-03 26 5 34.0
2014-05-04 15 6 34.0
2014-05-05 15 0 34.0
2014-05-06 14 1 34.0
2014-05-07 26 2 34.0
2014-05-08 25 3 34.0
2014-05-09 62 4 25.0
2014-05-10 41 5 25.0
2014-05-11 23 6 25.0
2014-05-12 56 0 25.0
2014-05-13 23 1 25.0
2014-05-14 34 2 25.0
2014-05-15 23 3 25.0
2014-05-16 67 4 23.0
2014-05-17 54 5 23.0
2014-05-18 34 6 23.0
2014-05-19 45 0 23.0
2014-05-20 12 1 23.0
Explanation:
First, compare with eq (==):
print (df['day_of_week'].eq(3))
date
2014-05-01 True
2014-05-02 False
2014-05-03 False
2014-05-04 False
2014-05-05 False
2014-05-06 False
2014-05-07 False
2014-05-08 True
2014-05-09 False
2014-05-10 False
2014-05-11 False
2014-05-12 False
2014-05-13 False
2014-05-14 False
2014-05-15 True
2014-05-16 False
2014-05-17 False
2014-05-18 False
2014-05-19 False
2014-05-20 False
Name: day_of_week, dtype: bool
Then create missing values for the non-matching rows with where:
print (df['battle_deaths'].where(df['day_of_week'].eq(3)))
date
2014-05-01 34.0
2014-05-02 NaN
2014-05-03 NaN
2014-05-04 NaN
2014-05-05 NaN
2014-05-06 NaN
2014-05-07 NaN
2014-05-08 25.0
2014-05-09 NaN
2014-05-10 NaN
2014-05-11 NaN
2014-05-12 NaN
2014-05-13 NaN
2014-05-14 NaN
2014-05-15 23.0
2014-05-16 NaN
2014-05-17 NaN
2014-05-18 NaN
2014-05-19 NaN
2014-05-20 NaN
Name: battle_deaths, dtype: float64
Forward fill the missing values:
print (df['battle_deaths'].where(df['day_of_week'].eq(3)).ffill())
date
2014-05-01 34.0
2014-05-02 34.0
2014-05-03 34.0
2014-05-04 34.0
2014-05-05 34.0
2014-05-06 34.0
2014-05-07 34.0
2014-05-08 25.0
2014-05-09 25.0
2014-05-10 25.0
2014-05-11 25.0
2014-05-12 25.0
2014-05-13 25.0
2014-05-14 25.0
2014-05-15 23.0
2014-05-16 23.0
2014-05-17 23.0
2014-05-18 23.0
2014-05-19 23.0
2014-05-20 23.0
Name: battle_deaths, dtype: float64
And last, shift everything down by one row:
print (df['battle_deaths'].where(df['day_of_week'].eq(3)).ffill().shift())
date
2014-05-01 NaN
2014-05-02 34.0
2014-05-03 34.0
2014-05-04 34.0
2014-05-05 34.0
2014-05-06 34.0
2014-05-07 34.0
2014-05-08 34.0
2014-05-09 25.0
2014-05-10 25.0
2014-05-11 25.0
2014-05-12 25.0
2014-05-13 25.0
2014-05-14 25.0
2014-05-15 25.0
2014-05-16 23.0
2014-05-17 23.0
2014-05-18 23.0
2014-05-19 23.0
2014-05-20 23.0
Name: battle_deaths, dtype: float64
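As an alternative, a sketch with the same result, assuming the df built in the question above (with the day_of_week column and the date index): mark the day_of_week == 3 rows, form blocks with a cumulative sum of that flag, broadcast the first battle_deaths of each block, and shift down one row:
flag = df['day_of_week'].eq(3)
blocks = flag.cumsum()
df['deaths_last_tuesday'] = (
    df.groupby(blocks)['battle_deaths'].transform('first')  # broadcast each block's first value
      .where(blocks.gt(0))                                  # rows before the first flagged day stay NaN
      .shift()                                              # shift down one row, like the chain above
)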

Related

Filtering pandas dataframe on condition over NaN

I have a datetime dataframe in pandas like this:
date value1 value2 name
0 2020-08-27 07:30:00 28.0 27.0 A
1 2020-08-27 08:00:00 28.2 27.0 A
2 2020-08-27 09:00:00 NaN 27.5 A
3 2020-08-27 09:30:00 29.0 NaN A
4 2020-08-27 10:30:00 NaN NaN A
5 2020-08-27 11:00:00 29.8 27.0 A
6 2020-08-27 11:30:00 30.0 27.0 A
7 2020-08-27 12:00:00 30.0 27.0 A
8 2020-08-27 12:30:00 30.0 27.0 A
9 2020-08-27 13:30:00 30.0 27.0 A
10 2020-08-27 07:30:00 28.0 27.0 B
11 2020-08-27 08:00:00 28.2 27.0 B
12 2020-08-27 09:00:00 NaN 27.5 B
13 2020-08-27 09:30:00 29.0 NaN B
14 2020-08-27 10:30:00 NaN NaN B
15 2020-08-27 11:00:00 29.8 NaN B
16 2020-08-27 11:30:00 30.0 27.0 B
17 2020-08-27 12:00:00 30.0 27.0 B
18 2020-08-27 12:30:00 30.0 27.0 B
19 2020-08-27 13:30:00 30.0 27.0 B
I wish to remove all entries for any name for which the number of NaN values in any column is 3 or more. I am able to count the NaN values in each column:
df.drop(columns='name').isna().groupby(df.name, sort=False).sum().reset_index()
How can I use this to filter df?
My expected output is:
date value1 value2 name
0 2020-08-27 07:30:00 28.0 27.0 A
1 2020-08-27 08:00:00 28.2 27.0 A
2 2020-08-27 09:00:00 NaN 27.5 A
3 2020-08-27 09:30:00 29.0 NaN A
4 2020-08-27 10:30:00 NaN NaN A
5 2020-08-27 11:00:00 29.8 27.0 A
6 2020-08-27 11:30:00 30.0 27.0 A
7 2020-08-27 12:00:00 30.0 27.0 A
8 2020-08-27 12:30:00 30.0 27.0 A
9 2020-08-27 13:30:00 30.0 27.0 A
>>> df.set_index("name") \
.loc[df[["value1", "value2"]] \
.isna() \
.groupby(df["name"]) \
.sum() \
.max(axis="columns") < 3]
date value1 value2
name
A 2020-08-27 07:30:00 28.0 27.0
A 2020-08-27 08:00:00 28.2 27.0
A 2020-08-27 09:00:00 NaN 27.5
A 2020-08-27 09:30:00 29.0 NaN
A 2020-08-27 10:30:00 NaN NaN
A 2020-08-27 11:00:00 29.8 27.0
A 2020-08-27 11:30:00 30.0 27.0
A 2020-08-27 12:00:00 30.0 27.0
A 2020-08-27 12:30:00 30.0 27.0
A 2020-08-27 13:30:00 30.0 27.0
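A sketch of an equivalent filter with groupby.filter, which keeps the original row index and reads close to the requirement ("drop a name when any column has 3 or more NaNs"); the frame below is a small hypothetical reconstruction rather than the full data from the question:
import numpy as np
import pandas as pd

# Hypothetical small frame: name B has three NaNs in value1, so all of its rows should be dropped.
df = pd.DataFrame({
    "date": pd.to_datetime(["2020-08-27 07:30", "2020-08-27 09:00",
                            "2020-08-27 10:30", "2020-08-27 11:00"] * 2),
    "value1": [28.0, np.nan, np.nan, 29.8, 28.0, np.nan, np.nan, np.nan],
    "value2": [27.0, 27.5, np.nan, 27.0, 27.0, 27.5, np.nan, np.nan],
    "name": ["A"] * 4 + ["B"] * 4,
})

# Keep only the names whose worst column has fewer than 3 NaNs.
out = df.groupby("name").filter(
    lambda g: g[["value1", "value2"]].isna().sum().max() < 3
)
print(out)  # only the name 'A' rows remain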

Daily calculations in intraday data

Let's say I have a DataFrame with date_time index:
date_time a b
2020-11-23 04:00:00 10 5
2020-11-23 05:00:00 11 5
2020-11-23 06:00:00 12 5
2020-11-24 04:30:00 13 6
2020-11-24 05:30:00 14 6
2020-11-24 06:30:00 15 6
2020-11-25 06:00:00 16 7
2020-11-25 07:00:00 17 7
2020-11-25 08:00:00 18 7
"a" column is intraday data (every row - different value). "b" column - DAILY data - same data during the current day.
I need to make some calculations with "b" (daily) column and create "c" column with the result. For example, sum for two last days.
Result:
date_time a b c
2020-11-23 04:00:00 10 5 NaN
2020-11-23 05:00:00 11 5 NaN
2020-11-23 06:00:00 12 5 NaN
2020-11-24 04:30:00 13 6 11
2020-11-24 05:30:00 14 6 11
2020-11-24 06:30:00 15 6 11
2020-11-25 06:00:00 16 7 13
2020-11-25 07:00:00 17 7 13
2020-11-25 08:00:00 18 7 13
I guess I should use something like
df['c'] = df.resample('D').b.rolling(3).sum ...
but I get NaN values in "c".
Could you help me? Thanks!
One thing you can do is to drop duplicates on the date and work on that:
# get the dates
df['date'] = df['date_time'].dt.normalize()
df['c'] = (df.drop_duplicates('date')['b']  # drop duplicates on dates
             .rolling(2).sum()              # rolling sum
          )
df['c'] = df['c'].ffill() # fill the missing data
Output:
date_time a b date c
0 2020-11-23 04:00:00 10 5 2020-11-23 NaN
1 2020-11-23 05:00:00 11 5 2020-11-23 NaN
2 2020-11-23 06:00:00 12 5 2020-11-23 NaN
3 2020-11-24 04:30:00 13 6 2020-11-24 11.0
4 2020-11-24 05:30:00 14 6 2020-11-24 11.0
5 2020-11-24 06:30:00 15 6 2020-11-24 11.0
6 2020-11-25 06:00:00 16 7 2020-11-25 13.0
7 2020-11-25 07:00:00 17 7 2020-11-25 13.0
8 2020-11-25 08:00:00 18 7 2020-11-25 13.0
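An alternative sketch, assuming date_time is a column as in the output above: compute the two-day rolling sum on one row per day and map it back onto every intraday row by its date, which avoids the extra ffill step:
import pandas as pd

# Hypothetical reconstruction of the example frame.
df = pd.DataFrame({
    "date_time": pd.to_datetime([
        "2020-11-23 04:00", "2020-11-23 05:00", "2020-11-23 06:00",
        "2020-11-24 04:30", "2020-11-24 05:30", "2020-11-24 06:30",
        "2020-11-25 06:00", "2020-11-25 07:00", "2020-11-25 08:00",
    ]),
    "a": [10, 11, 12, 13, 14, 15, 16, 17, 18],
    "b": [5, 5, 5, 6, 6, 6, 7, 7, 7],
})

# One row per day, then a rolling 2-day sum of the daily b value.
daily = df.assign(date=df["date_time"].dt.normalize()).drop_duplicates("date")
two_day_sum = daily.set_index("date")["b"].rolling(2).sum()

# Map the daily result back onto every intraday row via its normalized date.
df["c"] = df["date_time"].dt.normalize().map(two_day_sum)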

7 days hourly mean with pandas

I need some help calculating a 7-day mean for every hour.
The time series has an hourly resolution and I need the 7-day mean for each hour of the day, e.g. for 13:00:
date, x
2020-07-01 13:00 , 4
2020-07-01 14:00 , 3
.
.
.
2020-07-02 13:00 , 3
2020-07-02 14:00 , 7
.
.
.
I tried it with pandas and a rolling mean, but a plain rolling window over the last 7 days mixes all hours together.
Thanks for any hints!
Add a new hour column, group by that hour column, and then take a rolling mean of 7 values within each group. Because each group holds one value per day for that hour, the average is calculated over 7 days, which matches the intent of the question.
df['hour'] = df.index.hour
df = df.groupby(df.hour)['x'].rolling(7).mean().reset_index()
df.head(35)
hour level_1 x
0 0 2020-07-01 00:00:00 NaN
1 0 2020-07-02 00:00:00 NaN
2 0 2020-07-03 00:00:00 NaN
3 0 2020-07-04 00:00:00 NaN
4 0 2020-07-05 00:00:00 NaN
5 0 2020-07-06 00:00:00 NaN
6 0 2020-07-07 00:00:00 48.142857
7 0 2020-07-08 00:00:00 50.285714
8 0 2020-07-09 00:00:00 60.000000
9 0 2020-07-10 00:00:00 63.142857
10 1 2020-07-01 01:00:00 NaN
11 1 2020-07-02 01:00:00 NaN
12 1 2020-07-03 01:00:00 NaN
13 1 2020-07-04 01:00:00 NaN
14 1 2020-07-05 01:00:00 NaN
15 1 2020-07-06 01:00:00 NaN
16 1 2020-07-07 01:00:00 52.571429
17 1 2020-07-08 01:00:00 48.428571
18 1 2020-07-09 01:00:00 38.000000
19 2 2020-07-01 02:00:00 NaN
20 2 2020-07-02 02:00:00 NaN
21 2 2020-07-03 02:00:00 NaN
22 2 2020-07-04 02:00:00 NaN
23 2 2020-07-05 02:00:00 NaN
24 2 2020-07-06 02:00:00 NaN
25 2 2020-07-07 02:00:00 46.571429
26 2 2020-07-08 02:00:00 47.714286
27 2 2020-07-09 02:00:00 42.714286
28 3 2020-07-01 03:00:00 NaN
29 3 2020-07-02 03:00:00 NaN
30 3 2020-07-03 03:00:00 NaN
31 3 2020-07-04 03:00:00 NaN
32 3 2020-07-05 03:00:00 NaN
33 3 2020-07-06 03:00:00 NaN
34 3 2020-07-07 03:00:00 72.571429
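An equivalent sketch that keeps the original datetime index instead of the (hour, level_1) layout above, assuming a DataFrame with an hourly DatetimeIndex and an 'x' column, using groupby(...).transform:
import numpy as np
import pandas as pd

# Hypothetical hourly series over ten days.
idx = pd.date_range("2020-07-01", periods=24 * 10, freq="h")
df = pd.DataFrame({"x": np.random.default_rng(0).integers(0, 100, size=len(idx))}, index=idx)

# For each hour of the day, take the rolling mean over the last 7 values in that
# group, i.e. the same clock hour across the last 7 days, aligned to the original index.
df["x_7d_hourly_mean"] = (
    df.groupby(df.index.hour)["x"].transform(lambda s: s.rolling(7).mean())
)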

Dataframe rolling mean replaces the column - how do I keep the original column and add the rolling mean as a new column?

I have a DF:
tbname stat_day count
0 calc_10 2020-05-01 0
1 calc_10 2020-05-02 0
2 calc_10 2020-05-03 0
<snip>
49 calc_10 2020-06-19 361
50 calc_10 2020-06-20 506
51 calc_10 2020-06-21 0
52 calc_10 2020-06-22 0
53 calc_12 2020-05-01 0
54 calc_12 2020-05-02 0
<snip>
73 calc_12 2020-05-21 0
74 calc_12 2020-05-22 0
75 calc_12 2020-05-23 0
<snip>
I then group it and get a rolling mean:
gp=df_tsd.groupby(['tbname'])
df_gp=gp.rolling(30,on='stat_day').mean()
I'd like to keep the count column and add an RMA column, but rolling().mean() replaces the count column with the rolling value:
stat_day count
tbname
calc_10 0 2020-05-01 NaN
1 2020-05-02 NaN
2 2020-05-03 NaN
<snip> 41 2020-06-11 0.000000
42 2020-06-12 249.533333
43 2020-06-13 777.333333
44 2020-06-14 1310.333333
45 2020-06-15 1841.700000
46 2020-06-16 2235.933333
47 2020-06-17 2259.933333
48 2020-06-18 2283.200000
49 2020-06-19 2295.233333
50 2020-06-20 2312.100000
51 2020-06-21 2312.100000
52 2020-06-22 2312.100000
Update:
Your code works (naturally!); I added some tweaks:
df_tsd['RDA']=df_tsd.groupby('tbname')['count'].transform(lambda x: x.rolling(7).mean())
print(df_tsd.groupby('tbname').tail(30).round({'RDA':0}).to_string(index=False))
And this is the output:
tbname stat_day count RDA
calc_10 2020-05-24 0 0.0
calc_10 2020-05-25 0 0.0
calc_10 2020-05-26 0 0.0
calc_10 2020-05-27 0 0.0
calc_10 2020-05-28 0 0.0
calc_10 2020-05-29 0 0.0
calc_10 2020-05-30 0 0.0
<snip>
calc_10 2020-06-12 7486 1069.0
calc_10 2020-06-13 15834 3331.0
calc_10 2020-06-14 15990 5616.0
calc_10 2020-06-15 15941 7893.0
calc_10 2020-06-16 11827 9583.0
calc_10 2020-06-17 720 9685.0
<snip>
calc_12 2020-06-02 1959 280.0
calc_12 2020-06-03 1582 506.0
calc_12 2020-06-04 0 506.0
My code (which doesn't quite work) ends up without the new rolling column, but the output is neat; note the control break on tbname:
stat_day count
tbname
calc_10 23 2020-05-24 0.0
24 2020-05-25 0.0
25 2020-05-26 0.0
26 2020-05-27 0.0
<snip>
calc_12 70 2020-05-18 0.0
71 2020-05-19 0.0
72 2020-05-20 0.0
<snip>
88 2020-06-05 506.0
89 2020-06-06 506.0
You don't have to do it in two steps. You can use the transform function to do it in one line and just add the moving average column to your base dataframe.
I changed your data a bit to make an example. See below for base data, the command, and the output that I think you're looking for. I used a 3 day moving average to make it easier to display.
Then if you want to display it with the grouping, do what you originally did, but with a 1-day moving window.
>>> df
tbname statday count
0 calc_10 2020-05-01 1
1 calc_10 2020-05-02 2
2 calc_10 2020-05-03 3
3 calc_10 2020-05-04 4
4 calc_10 2020-05-05 5
5 calc_10 2020-05-06 6
6 calc_10 2020-05-07 7
7 calc_10 2020-05-08 8
8 Calc_11 2020-05-01 1
9 Calc_11 2020-05-02 2
10 Calc_11 2020-05-03 3
11 Calc_11 2020-05-04 4
12 Calc_11 2020-05-05 5
13 Calc_11 2020-05-06 6
14 Calc_11 2020-05-07 7
15 Calc_12 2020-05-01 1
16 Calc_12 2020-05-02 2
17 Calc_12 2020-05-03 3
18 Calc_12 2020-05-04 4
19 Calc_12 2020-05-05 5
20 Calc_12 2020-05-06 6
21 Calc_12 2020-05-07 7
22 Calc_12 2020-05-08 8
23 Calc_12 2020-05-09 9
>>> df['ma'] = df.groupby('tbname')['count'].transform(lambda x: x.rolling(3).mean())
>>> df
tbname statday count ma
0 calc_10 2020-05-01 1 NaN
1 calc_10 2020-05-02 2 NaN
2 calc_10 2020-05-03 3 2.0
3 calc_10 2020-05-04 4 3.0
4 calc_10 2020-05-05 5 4.0
5 calc_10 2020-05-06 6 5.0
6 calc_10 2020-05-07 7 6.0
7 calc_10 2020-05-08 8 7.0
8 Calc_11 2020-05-01 1 NaN
9 Calc_11 2020-05-02 2 NaN
10 Calc_11 2020-05-03 3 2.0
11 Calc_11 2020-05-04 4 3.0
12 Calc_11 2020-05-05 5 4.0
13 Calc_11 2020-05-06 6 5.0
14 Calc_11 2020-05-07 7 6.0
15 Calc_12 2020-05-01 1 NaN
16 Calc_12 2020-05-02 2 NaN
17 Calc_12 2020-05-03 3 2.0
18 Calc_12 2020-05-04 4 3.0
19 Calc_12 2020-05-05 5 4.0
20 Calc_12 2020-05-06 6 5.0
21 Calc_12 2020-05-07 7 6.0
22 Calc_12 2020-05-08 8 7.0
23 Calc_12 2020-05-09 9 8.0
>>> gp = df.groupby(['tbname'])
>>> df_gp=gp.rolling(1,on='statday').mean()
>>> df_gp
statday count ma
tbname
Calc_11 8 2020-05-01 1.0 NaN
9 2020-05-02 2.0 NaN
10 2020-05-03 3.0 2.0
11 2020-05-04 4.0 3.0
12 2020-05-05 5.0 4.0
13 2020-05-06 6.0 5.0
14 2020-05-07 7.0 6.0
Calc_12 15 2020-05-01 1.0 NaN
16 2020-05-02 2.0 NaN
17 2020-05-03 3.0 2.0
18 2020-05-04 4.0 3.0
19 2020-05-05 5.0 4.0
20 2020-05-06 6.0 5.0
21 2020-05-07 7.0 6.0
22 2020-05-08 8.0 7.0
23 2020-05-09 9.0 8.0
calc_10 0 2020-05-01 1.0 NaN
1 2020-05-02 2.0 NaN
2 2020-05-03 3.0 2.0
3 2020-05-04 4.0 3.0
4 2020-05-05 5.0 4.0
5 2020-05-06 6.0 5.0
6 2020-05-07 7.0 6.0
7 2020-05-08 8.0 7.0
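If you prefer the two-step groupby/rolling route but want the result aligned back onto the original frame rather than indexed by tbname, a sketch assuming the example df above: drop the group level from the result's MultiIndex so it lines up with the original row index:
df['ma'] = (
    df.groupby('tbname')['count']
      .rolling(3)
      .mean()
      .reset_index(level=0, drop=True)  # drop the tbname level so the values align on the original index
)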

For each CohortGroup assign the Proper CohortPeriod Count

I am trying to assign the proper cohort period count to the 'Cohort Period' column for each cohort group. I believe showing what I am trying to achieve makes more sense.
For loops seem like the way to go, but I was wondering whether the same can be achieved with some nifty pandas function.
Out[7]:
OrderPeriod CohortGroup Cohort Period
0 1/1/2017 1/1/2017 NaN
1 1/1/2017 1/1/2017 NaN
2 1/1/2017 1/1/2017 NaN
3 1/1/2017 1/1/2017 NaN
4 1/1/2017 1/1/2017 NaN
5 1/1/2017 1/1/2017 NaN
6 1/1/2017 1/1/2017 NaN
7 1/1/2017 1/1/2017 NaN
8 4/1/2017 1/1/2017 NaN
9 6/1/2017 1/1/2017 NaN
10 8/1/2017 1/1/2017 NaN
11 9/1/2017 1/1/2017 NaN
12 9/1/2017 1/1/2017 NaN
13 11/1/2017 1/1/2017 NaN
14 4/1/2018 1/1/2017 NaN
15 6/1/2018 1/1/2017 NaN
16 12/1/2018 1/1/2017 NaN
17 1/1/2019 1/1/2017 NaN
18 5/1/2019 1/1/2017 NaN
19 2/1/2017 2/1/2017 NaN
20 3/1/2017 3/1/2017 NaN
21 3/1/2017 3/1/2017 NaN
22 3/1/2017 3/1/2017 NaN
23 3/1/2017 3/1/2017 NaN
24 3/1/2017 3/1/2017 NaN
25 4/1/2017 3/1/2017 NaN
If CohortGroup and OrderPeriod are the same, the row is assigned a 1; the count then increments for each new OrderPeriod within the group, and that number is assigned to Cohort Period. Once a new CohortGroup starts, the counting begins again.
Out[7]:
OrderPeriod CohortGroup Cohort Period
0 1/1/2017 1/1/2017 1.0
1 1/1/2017 1/1/2017 1.0
2 1/1/2017 1/1/2017 1.0
3 1/1/2017 1/1/2017 1.0
4 1/1/2017 1/1/2017 1.0
5 1/1/2017 1/1/2017 1.0
6 1/1/2017 1/1/2017 1.0
7 1/1/2017 1/1/2017 1.0
8 4/1/2017 1/1/2017 2.0
9 6/1/2017 1/1/2017 3.0
10 8/1/2017 1/1/2017 4.0
11 9/1/2017 1/1/2017 5.0
12 9/1/2017 1/1/2017 5.0
13 11/1/2017 1/1/2017 6.0
14 4/1/2018 1/1/2017 7.0
15 6/1/2018 1/1/2017 8.0
16 12/1/2018 1/1/2017 9.0
17 1/1/2019 1/1/2017 10.0
18 5/1/2019 1/1/2017 11.0
19 2/1/2017 2/1/2017 1.0
20 3/1/2017 3/1/2017 1.0
21 3/1/2017 3/1/2017 1.0
22 3/1/2017 3/1/2017 1.0
23 3/1/2017 3/1/2017 1.0
24 3/1/2017 3/1/2017 1.0
25 4/1/2017 3/1/2017 2.0
I would use rank:
df=df.apply(pd.to_datetime)
df['Cohort Period']=df.groupby('CohortGroup')['OrderPeriod'].rank('dense')
df
OrderPeriod CohortGroup Cohort Period
0 2017-01-01 2017-01-01 1.0
1 2017-01-01 2017-01-01 1.0
2 2017-01-01 2017-01-01 1.0
3 2017-01-01 2017-01-01 1.0
4 2017-01-01 2017-01-01 1.0
5 2017-01-01 2017-01-01 1.0
6 2017-01-01 2017-01-01 1.0
7 2017-01-01 2017-01-01 1.0
8 2017-04-01 2017-01-01 2.0
9 2017-06-01 2017-01-01 3.0
10 2017-08-01 2017-01-01 4.0
11 2017-09-01 2017-01-01 5.0
12 2017-09-01 2017-01-01 5.0
13 2017-11-01 2017-01-01 6.0
14 2018-04-01 2017-01-01 7.0
15 2018-06-01 2017-01-01 8.0
16 2018-12-01 2017-01-01 9.0
17 2019-01-01 2017-01-01 10.0
18 2019-05-01 2017-01-01 11.0
19 2017-02-01 2017-02-01 1.0
20 2017-03-01 2017-03-01 1.0
21 2017-03-01 2017-03-01 1.0
22 2017-03-01 2017-03-01 1.0
23 2017-03-01 2017-03-01 1.0
24 2017-03-01 2017-03-01 1.0
25 2017-04-01 2017-03-01 2.0
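For reference, a minimal sketch of what method='dense' means here: equal OrderPeriod values within a group share a rank, and the rank increases by 1 for each new distinct value:
import pandas as pd

s = pd.Series(pd.to_datetime(['2017-01-01', '2017-01-01', '2017-04-01', '2017-06-01']))
print(s.rank(method='dense'))
# 0    1.0
# 1    1.0
# 2    2.0
# 3    3.0
# dtype: float64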
First we build the CohortGroup groups by checking where the value changes with shift.
Then we use groupby.apply to check where OrderPeriod is not the same as CohortGroup and accumulate with cumsum:
groups = df['CohortGroup'].ne(df['CohortGroup'].shift()).cumsum()
cohort_period = (
    df.groupby(groups)
      .apply(lambda x: (x['OrderPeriod'].ne(x['CohortGroup'])
                        & x['OrderPeriod'].ne(x['OrderPeriod'].shift(-1)))
                       .cumsum().add(1))
      .values
)
df['Cohort Period'] = cohort_period
output
OrderPeriod CohortGroup Cohort Period
0 1/1/2017 1/1/2017 1
1 1/1/2017 1/1/2017 1
2 1/1/2017 1/1/2017 1
3 1/1/2017 1/1/2017 1
4 1/1/2017 1/1/2017 1
5 1/1/2017 1/1/2017 1
6 1/1/2017 1/1/2017 1
7 1/1/2017 1/1/2017 1
8 4/1/2017 1/1/2017 2
9 6/1/2017 1/1/2017 3
10 8/1/2017 1/1/2017 4
11 9/1/2017 1/1/2017 4
12 9/1/2017 1/1/2017 5
13 11/1/2017 1/1/2017 6
14 4/1/2018 1/1/2017 7
15 6/1/2018 1/1/2017 8
16 12/1/2018 1/1/2017 9
17 1/1/2019 1/1/2017 10
18 5/1/2019 1/1/2017 11
19 2/1/2017 2/1/2017 1
20 3/1/2017 3/1/2017 1
21 3/1/2017 3/1/2017 1
22 3/1/2017 3/1/2017 1
23 3/1/2017 3/1/2017 1
24 3/1/2017 3/1/2017 1
25 4/1/2017 3/1/2017 2
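Another sketch of the same idea, assuming rows are already in chronological order within each CohortGroup (as in the question): factorize the OrderPeriod values within each group, which numbers the distinct values 1, 2, 3, ... in order of appearance:
import pandas as pd

# Hypothetical reconstruction of a few rows from the question.
df = pd.DataFrame({
    'OrderPeriod': ['1/1/2017', '1/1/2017', '4/1/2017', '6/1/2017',
                    '2/1/2017', '3/1/2017', '3/1/2017', '4/1/2017'],
    'CohortGroup': ['1/1/2017', '1/1/2017', '1/1/2017', '1/1/2017',
                    '2/1/2017', '3/1/2017', '3/1/2017', '3/1/2017'],
})

# Number the distinct OrderPeriod values within each CohortGroup, starting at 1.
df['Cohort Period'] = (
    df.groupby('CohortGroup')['OrderPeriod']
      .transform(lambda s: pd.factorize(s)[0] + 1)
)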