EdChum provided an elegant answer to a question almost like this one. The difference between that question and this one is that now the capping needs to be applied to data that has had a "GroupBy" performed.
Original Data:
Symbol DTE Spot Strike Vol
AAPL 30.00 100.00 80.00 14.58
AAPL 30.00 100.00 85.00 16.20
AAPL 30.00 100.00 90.00 18.00
AAPL 30.00 100.00 95.00 20.00
AAPL 30.00 100.00 100.00 22.00
AAPL 30.00 100.00 105.00 25.30
AAPL 30.00 100.00 110.00 29.10
AAPL 30.00 100.00 115.00 33.46
AAPL 30.00 100.00 120.00 38.48
AAPL 50.00 102.00 80.00 13.08
AAPL 50.00 102.00 85.00 14.70
AAPL 50.00 102.00 90.00 16.50
AAPL 50.00 102.00 95.00 18.50
AAPL 50.00 102.00 100.00 20.50
AAPL 50.00 102.00 105.00 23.80
AAPL 50.00 102.00 110.00 27.60
AAPL 50.00 102.00 115.00 31.96
AAPL 50.00 102.00 120.00 36.98
IBM 30.00 170.00 150.00 7.29
IBM 30.00 170.00 155.00 8.10
IBM 30.00 170.00 160.00 9.00
IBM 30.00 170.00 165.00 10.00
IBM 30.00 170.00 170.00 11.00
IBM 30.00 170.00 175.00 12.65
IBM 30.00 170.00 180.00 14.55
IBM 30.00 170.00 185.00 16.73
IBM 30.00 170.00 190.00 19.24
IBM 60.00 171.00 150.00 5.79
IBM 60.00 171.00 155.00 6.60
IBM 60.00 171.00 160.00 7.50
IBM 60.00 171.00 165.00 8.50
IBM 60.00 171.00 170.00 9.50
IBM 60.00 171.00 175.00 11.15
IBM 60.00 171.00 180.00 13.05
IBM 60.00 171.00 185.00 15.23
IBM 60.00 171.00 190.00 17.74
I then create a few new variables:
import numpy as np

df['ATM_dist'] = abs(df['Spot'] - df['Strike'])
imin = df.groupby(['DTE', 'Symbol'])['ATM_dist'].transform('idxmin')
# ATMvol has to exist before NormStrike, since NormStrike uses it
df['ATMvol'] = df.loc[imin, 'Vol'].values
df['NormStrike'] = np.log(df['Strike'] / df['Spot']) / (((df['DTE'] / 365) ** .5) * df['ATMvol'] / 100)
The results are below:
Symbol DTE Spot Strike Vol ATM_dist ATMvol NormStrike
0 AAPL 30 100 80 14.58 20 22.0 -3.537916
1 AAPL 30 100 85 16.20 15 22.0 -2.576719
2 AAPL 30 100 90 18.00 10 22.0 -1.670479
3 AAPL 30 100 95 20.00 5 22.0 -0.813249
4 AAPL 30 100 100 22.00 0 22.0 0.000000
5 AAPL 30 100 105 25.30 5 22.0 0.773562
6 AAPL 30 100 110 29.10 10 22.0 1.511132
7 AAPL 30 100 115 33.46 15 22.0 2.215910
8 AAPL 30 100 120 38.48 20 22.0 2.890688
9 AAPL 50 102 80 13.08 22 20.5 -3.201973
10 AAPL 50 102 85 14.70 17 20.5 -2.402955
11 AAPL 50 102 90 16.50 12 20.5 -1.649620
12 AAPL 50 102 95 18.50 7 20.5 -0.937027
13 AAPL 50 102 100 20.50 2 20.5 -0.260994
14 AAPL 50 102 105 23.80 3 20.5 0.382049
15 AAPL 50 102 110 27.60 8 20.5 0.995172
16 AAPL 50 102 115 31.96 13 20.5 1.581035
17 AAPL 50 102 120 36.98 18 20.5 2.141961
18 IBM 30 170 150 7.29 20 11.0 -3.968895
19 IBM 30 170 155 8.10 15 11.0 -2.929137
20 IBM 30 170 160 9.00 10 11.0 -1.922393
21 IBM 30 170 165 10.00 5 11.0 -0.946631
22 IBM 30 170 170 11.00 0 11.0 0.000000
23 IBM 30 170 175 12.65 5 11.0 0.919188
24 IBM 30 170 180 14.55 10 11.0 1.812480
25 IBM 30 170 185 16.73 15 11.0 2.681295
26 IBM 30 170 190 19.24 20 11.0 3.526940
27 IBM 60 171 150 5.79 21 9.5 -3.401827
28 IBM 60 171 155 6.60 16 9.5 -2.550520
29 IBM 60 171 160 7.50 11 9.5 -1.726243
30 IBM 60 171 165 8.50 6 9.5 -0.927332
31 IBM 60 171 170 9.50 1 9.5 -0.152273
32 IBM 60 171 175 11.15 4 9.5 0.600317
33 IBM 60 171 180 13.05 9 9.5 1.331704
34 IBM 60 171 185 15.23 14 9.5 2.043051
35 IBM 60 171 190 17.74 19 9.5 2.735427
I wish to cap the values of 'Vol' at the level where another column, 'NormStrike', hits a trigger (in this case abs(NormStrike) >= 2). The capped values should go into a new column, 'Desired_Level', leaving the 'Vol' column unchanged. The first cap should set the value at index location 0 to 16.2, because the cap was triggered at index location 1, where NormStrike hit -2.576719.
Added clarification:
I am looking for a generic solution that works outward from the lowest abs(NormStrike) level in both directions, hitting both the -2 and the +2 trigger. If a trigger is not hit (which it might not be), then the desired level is just the original level.
An additional note: abs(NormStrike) will always keep growing as you move away from the min(abs(NormStrike)) row, since it is a function of the absolute distance from spot to strike.
The code that EdChum provided (before I brought GroupBy into the mix) is below:
clip = 4
lower = df.loc[df['NS'] <= -clip, 'Vol'].idxmax()
upper = df.loc[df['NS'] >= clip, 'Vol'].idxmin()
df['Original_level'] = df['Original_level'].clip(df.loc[lower,'Original_level'], df.loc[upper, 'Original_level'])
There are two issues: first, it does not work after the groupby, and second, if a particular group has no NS value that exceeds the clip value, it raises an error. The ideal outcome in that case would be to leave the Vol level for that Symbol/DTE group untouched.
Ed suggested implementing a reset_index() but I am not sure how to use that to solve the issue.
I hope this was not too convoluted a question. Thank you for any assistance.
You can try this and see whether it works out. I assume that where the clip has been triggered, NaN is put in first and then filled; you can replace that with your own choice.
import pandas as pd
import numpy as np
# use np.where(criterion, x, y) to do a vectorized statement like if criterion is True, then set it to x, else set it to y
def func(group):
    group['Triggered'] = np.where((group['NormStrike'] >= 2) | (group['NormStrike'] <= -4), 'Yes', 'No')
    group['Desired_Level'] = np.where((group['NormStrike'] >= 2) | (group['NormStrike'] <= -4), np.nan, group['Vol'])
    group = group.fillna(method='ffill').fillna(method='bfill')
    return group
df = df.groupby(['Symbol', 'DTE']).apply(func)
Out[410]:
Symbol DTE Spot Strike Vol ATM_dist ATMvol NormStrike Triggered Desired_Level
0 AAPL 30 100 80 14.58 20 22 -3.5379 No 14.58
1 AAPL 30 100 85 16.20 15 22 -2.5767 No 16.20
2 AAPL 30 100 90 18.00 10 22 -1.6705 No 18.00
3 AAPL 30 100 95 20.00 5 22 -0.8132 No 20.00
4 AAPL 30 100 100 22.00 0 22 0.0000 No 22.00
5 AAPL 30 100 105 25.30 5 22 0.7736 No 25.30
6 AAPL 30 100 110 29.10 10 22 1.5111 No 29.10
7 AAPL 30 100 115 33.46 15 22 2.2159 Yes 29.10
8 AAPL 30 100 120 38.48 20 22 2.8907 Yes 29.10
9 AAPL 50 102 80 14.58 22 22 -3.5379 No 14.58
10 AAPL 50 102 85 16.20 17 22 -2.5767 No 16.20
11 AAPL 50 102 90 18.00 12 22 -1.6705 No 18.00
12 AAPL 50 102 95 20.00 7 22 -0.8132 No 20.00
13 AAPL 50 102 100 22.00 2 22 0.0000 No 22.00
14 AAPL 50 102 105 25.30 3 22 0.7736 No 25.30
15 AAPL 50 102 110 29.10 8 22 1.5111 No 29.10
16 AAPL 50 102 115 33.46 13 22 2.2159 Yes 29.10
17 AAPL 50 102 120 38.48 18 22 2.8907 Yes 29.10
18 AAPL 30 170 150 14.58 20 22 -3.5379 No 14.58
19 AAPL 30 170 155 16.20 15 22 -2.5767 No 16.20
20 AAPL 30 170 160 18.00 10 22 -1.6705 No 18.00
21 AAPL 30 170 165 20.00 5 22 -0.8132 No 20.00
22 AAPL 30 170 170 22.00 0 22 0.0000 No 22.00
23 AAPL 30 170 175 25.30 5 22 0.7736 No 25.30
24 AAPL 30 170 180 29.10 10 22 1.5111 No 29.10
25 AAPL 30 170 185 33.46 15 22 2.2159 Yes 29.10
26 AAPL 30 170 190 38.48 20 22 2.8907 Yes 29.10
27 AAPL 60 171 150 14.58 21 22 -3.5379 No 14.58
28 AAPL 60 171 155 16.20 16 22 -2.5767 No 16.20
29 AAPL 60 171 160 18.00 11 22 -1.6705 No 18.00
30 AAPL 60 171 165 20.00 6 22 -0.8132 No 20.00
31 AAPL 60 171 170 22.00 1 22 0.0000 No 22.00
32 AAPL 60 171 175 25.30 4 22 0.7736 No 25.30
33 AAPL 60 171 180 29.10 9 22 1.5111 No 29.10
34 AAPL 60 171 185 33.46 14 22 2.2159 Yes 29.10
35 AAPL 60 171 190 38.48 19 22 2.8907 Yes 29.10
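As a follow-up, the original clip-based approach can also be adapted to run per group while guarding against groups where the trigger is never hit. The sketch below is only an illustration: it assumes a trigger of 2 in both directions, that Vol is monotonic on each wing (as in the sample data, so max/min Vol among the triggered rows is the row closest to the money, matching EdChum's idxmax/idxmin trick), and it writes the result to a new Desired_Level column while leaving Vol unchanged:

import numpy as np
import pandas as pd

def cap_vol(group, clip=2):
    # rows past the trigger on the low-strike and high-strike side
    lo = group['NormStrike'] <= -clip
    hi = group['NormStrike'] >= clip
    # cap levels: Vol at the triggered row closest to the money on each side, if any
    lower = group.loc[lo, 'Vol'].max() if lo.any() else None
    upper = group.loc[hi, 'Vol'].min() if hi.any() else None
    # None on a side means the trigger was never hit there, so no clipping on that side
    group['Desired_Level'] = group['Vol'].clip(lower=lower, upper=upper)
    return group

df = df.groupby(['Symbol', 'DTE'], group_keys=False).apply(cap_vol)

On the sample data this should set Desired_Level at index 0 to 16.20 (the Vol where NormStrike first passes -2) while groups that never cross the trigger keep their original levels.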
Related
I have a dataset that looks something like this. I wish to calculate a modified moving average (column Mod_MA) for the sales column based on the following logic:
If there is no event, then Mod_MA = ST; else Mod_MA = the average of the last 4 dates.
Date        Item  Event  ST   Mod_MA
2022-10-01  ABC          100  100
2022-10-02  ABC          110  110
2022-10-03  ABC          120  120
2022-10-04  ABC          130  130
2022-10-05  ABC   EV1    140  115
2022-10-06  ABC   EV1    150  119
2022-10-07  ABC          160  160
2022-10-08  ABC          170  170
2022-10-09  ABC          180  180
2022-10-10  ABC   EV2    190  157
2022-10-11  ABC   EV2    200  167
2022-10-12  ABC   EV2    210  168
2022-10-01  XYZ          100  100
2022-10-02  XYZ          110  110
2022-10-03  XYZ          120  120
2022-10-04  XYZ          130  130
2022-10-05  XYZ   EV3    140  115
2022-10-06  XYZ   EV3    150  119
2022-10-07  XYZ   EV3    160  121
2022-10-08  XYZ          170  170
2022-10-09  XYZ          180  180
2022-10-10  XYZ   EV4    190  147
2022-10-11  XYZ   EV4    200  155
2022-10-12  XYZ          210  210
Hopefully the table above clarifies what I am going for.
I have tried LAG and AVG ... OVER (ORDER BY ...), but since I don't have an exact number of iterations I need to run, these don't work.
Calculation Formulae
Would appreciate any help.
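The recursion appears to be what blocks a plain window function: each Mod_MA value on an event day depends on previously computed Mod_MA values, not on ST. As an illustration of that logic (the question targets SQL, where a recursive CTE or a procedural loop would be needed for the same reason), here is a rough pandas sketch. It assumes columns named Date, Item, Event and ST as in the table above, with Event blank or NaN on non-event days:

import pandas as pd

def mod_ma(group):
    out = []
    for event, st in zip(group['Event'], group['ST']):
        if pd.isna(event) or event == '':
            out.append(st)                      # no event: take ST as-is
        else:
            window = out[-4:]                   # previous (up to) 4 Mod_MA values
            out.append(sum(window) / len(window) if window else st)
    group['Mod_MA'] = out
    return group

df = df.sort_values('Date').groupby('Item', group_keys=False).apply(mod_ma)

The sample output appears to round the averages to whole numbers; add a round() if that presentation is wanted.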
My analysis subjects resemble Netflix subscribers. Users subscribe on a certain date (e.g. 2021-04-25) and unsubscribe on another date (e.g. 2022-01-15), or subscription_end is null if the user is still subscribed:
user_id subscription_start subscription_end
1231 2021-03-24 2021-04-07
1232 2021-05-06 2021-05-26
1234 2021-05-28 null
1235 2021-05-30 2021-06-19
1236 2021-06-01 2021-07-07
1237 2021-06-24 2021-07-09
1238 2021-07-06 null
1239 2021-08-14 null
1240 2021-09-12 null
How could I, using SQL, extract weekly cohort data on user retention? E.g. 2021-03-22 (Monday) to 2021-03-28 (Sunday) is the first cohort, which had a single subscriber on 2021-03-24. This user stayed with the service until 2021-04-07, that is, for 3 weekly cohorts, and should be displayed as active in weeks 1, 2 and 3.
The end result should look like (dummy data):
Subscribed   Week 1  Week 2  Week 3  Week 4  Week 5  Week 6  Week 7
2021-03-22      100      98      97      82      72      53      21
2021-03-29      100      97      88      88      76      44      22
2021-04-05      100      87      86      86      86      83      81
2021-04-12      100     100     100      99      98      97      96
2021-04-19      100     100      99      89      79      79      79
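The question asks for SQL, but to make the intended computation concrete, here is a pandas sketch of the same cohort logic (column names as in the table above; still-subscribed users are assumed active through today). A SQL version would follow the same steps: truncate subscription_start to the Monday of its week to get the cohort, count how many weekly buckets each user spans, then count distinct users per cohort and week number.

import pandas as pd

# assumes subscription_start / subscription_end are already datetimes,
# with NaT for still-subscribed users, and a unique default index
end = df['subscription_end'].fillna(pd.Timestamp('today').normalize())

# cohort = Monday of the signup week (weeks run Monday..Sunday)
cohort = df['subscription_start'].dt.to_period('W-SUN').dt.start_time
weeks_active = (end.dt.to_period('W-SUN').dt.start_time - cohort).dt.days // 7 + 1

# one row per user per active week, then count users per cohort/week
expanded = df.assign(cohort=cohort).loc[df.index.repeat(weeks_active.to_numpy())].copy()
expanded['week'] = expanded.groupby(level=0).cumcount() + 1
retention = expanded.pivot_table(index='cohort', columns='week',
                                 values='user_id', aggfunc='nunique')

This produces user counts per cohort and week; dividing each row by its week-1 value would turn them into percentages like the dummy output.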
How do I group one DataFrame by another possibly-non-periodic Series? Mock-up below:
This is the DataFrame to be split:
i = pd.date_range(end="today", periods=20, freq="d").normalize()
v = np.random.randint(0,100,size=len(i))
d = pd.DataFrame({"value": v}, index=i)
>>> d
value
2021-02-06 48
2021-02-07 1
2021-02-08 86
2021-02-09 82
2021-02-10 40
2021-02-11 22
2021-02-12 63
2021-02-13 37
2021-02-14 41
2021-02-15 57
2021-02-16 30
2021-02-17 69
2021-02-18 63
2021-02-19 27
2021-02-20 23
2021-02-21 46
2021-02-22 66
2021-02-23 10
2021-02-24 91
2021-02-25 43
This is the splitting criteria, grouping by the Series dates. A group consists of any ordered dataframe value v such that {v} intersects [s, s+1), i.e. rows whose dates fall between consecutive values of the Series; as with resampling, it would be nice to control the inclusion parameters.
s = pd.date_range(start="2019-10-14", freq="2W", periods=52).to_series()
s = s.drop(np.random.choice(s.index, 10, replace=False))
s = s.reset_index(drop=True)
>>> s[25:29]
25 2021-01-24
26 2021-02-07
27 2021-02-21
28 2021-03-07
dtype: datetime64[ns]
And this is the example output... or something like it. Index is taken from the series rather than the dataframe.
>>> ???.sum()
value
...
2021-01-24 47
2021-02-07 768
2021-02-21 334
...
Internally the groups would have this structure:
...
2021-01-10
sum: 0
2021-01-24
2021-02-06 47
sum: 47
2021-02-07
2021-02-07 52
2021-02-08 56
2021-02-09 21
2021-02-10 39
2021-02-11 86
2021-02-12 30
2021-02-13 20
2021-02-14 76
2021-02-15 91
2021-02-16 70
2021-02-17 34
2021-02-18 73
2021-02-19 41
2021-02-20 79
sum: 768
2021-02-21
2021-02-21 90
2021-02-22 75
2021-02-23 12
2021-02-24 70
2021-02-25 87
sum: 334
2021-03-07
sum: 0
...
Looks like you can do:
bucket = pd.cut(d.index, bins=s, labels=s[:-1], right=False)
d.groupby(bucket).sum()
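Note that right=False makes each interval half-open on the right, i.e. [s, s+1), matching the stated criteria, and dates falling before the first or after the last value of s get no bucket and are dropped by the groupby. Because pd.cut returns a categorical, the empty intervals (the sum: 0 groups sketched above) can be kept by passing observed=False:

d.groupby(bucket, observed=False).sum()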
I have the following table, which is a clipping from my DB. I have 2 types of contracts.
I: the client pays $60 for the first 6 months, then $120 for the next 6 months (client 111).
II: the client pays $60 for the first 6 months, but if they want to keep paying $60 the contract is extended by 6 months; the whole contract is 18 months (client 321, who is still paying).
ID_Client | Amount | Amount_charge | Lenght | Date_from | Date_to | Reverse
--------------------------------------------------------------------------------
111 60 60 12 2015-01-01 2015-01-31 12
111 60 60 12 2015-02-01 2015-02-28 11
111 60 60 12 2015-03-01 2015-03-31 10
111 60 60 12 2015-04-01 2015-04-30 9
111 60 60 12 2015-05-01 2015-05-31 8
111 60 60 12 2015-06-01 2015-06-30 7
111 120 60 12 2015-07-01 2015-07-31 6
111 120 60 12 2015-08-01 2015-08-31 5
111 120 60 12 2015-09-01 2015-09-30 4
111 120 60 12 2015-10-01 2015-10-31 3
111 120 60 12 2015-11-01 2015-11-30 2
111 120 60 12 2015-12-01 2015-12-31 1
111 120 60 12 2016-01-01 2015-01-31 0
111 120 60 12 2016-02-01 2015-02-29 0
321 60 60 12 2015-01-01 2015-01-31 12
321 60 60 12 2015-02-01 2015-02-28 11
321 60 60 12 2015-03-01 2015-03-31 10
321 60 60 12 2015-04-01 2015-04-30 9
321 60 60 12 2015-05-01 2015-05-31 8
321 60 60 12 2015-06-01 2015-06-30 7
321 60 60 12 2015-07-01 2015-07-31 6
321 60 60 12 2015-08-01 2015-08-31 5
321 60 60 12 2015-09-01 2015-09-30 4
321 60 60 12 2015-10-01 2015-10-31 3
321 60 60 12 2015-11-01 2015-11-30 2
321 60 60 12 2015-12-01 2015-12-31 1
321 60 60 12 2016-01-01 2016-01-30 0
321 60 60 12 2016-02-01 2016-02-31 0
321 60 60 12 2016-03-01 2016-03-30 0
321 60 60 12 2016-04-01 2016-04-31 0
I need to add a status column:
A - normal period of agreement
D - where the agreement is doubled after 6 months; after 12 months the status becomes E (end of agreement)
E - where the contract is finished
L - where the contract was extended after 6 months; after 18 months the status becomes E
For client 321, after 12 months the length of the contract was updated from 12 to 18.
I have a lot of clients, so I think it would be better to use a loop to go over all clients?
ID_Client | Amount | Amount_charge | Lenght | Date_from | Date_to | Reverse | Status
-----------------------------------------------------------------------------------------
111 60 60 12 2015-01-01 2015-01-31 12 A
111 60 60 12 2015-02-01 2015-02-28 11 A
111 60 60 12 2015-03-01 2015-03-31 10 A
111 60 60 12 2015-04-01 2015-04-30 9 A
111 60 60 12 2015-05-01 2015-05-31 8 A
111 60 60 12 2015-06-01 2015-06-30 7 A
111 120 60 12 2015-07-01 2015-07-31 6 D
111 120 60 12 2015-08-01 2015-08-31 5 D
111 120 60 12 2015-09-01 2015-09-30 4 D
111 120 60 12 2015-10-01 2015-10-31 3 D
111 120 60 12 2015-11-01 2015-11-30 2 D
111 120 60 12 2015-12-01 2015-12-31 1 D
111 120 60 12 2016-01-01 2015-01-31 0 E
111 120 60 12 2016-02-01 2015-02-29 0 E
321 60 60 12 2015-01-01 2015-01-31 12 A
321 60 60 12 2015-02-01 2015-02-28 11 A
321 60 60 12 2015-03-01 2015-03-31 10 A
321 60 60 12 2015-04-01 2015-04-30 9 A
321 60 60 12 2015-05-01 2015-05-31 8 A
321 60 60 12 2015-06-01 2015-06-30 7 A
321 60 60 12 2015-07-01 2015-07-31 6 L
321 60 60 12 2015-08-01 2015-08-31 5 L
321 60 60 12 2015-09-01 2015-09-30 4 L
321 60 60 12 2015-10-01 2015-10-31 3 L
321 60 60 12 2015-11-01 2015-11-30 2 L
321 60 60 12 2015-12-01 2015-12-31 1 L
321 60 60 18 2016-01-01 2016-01-30 0 L
321 60 60 18 2016-02-01 2016-02-31 0 L
321 60 60 18 2016-03-01 2016-03-30 0 L
321 60 60 18 2016-04-01 2016-04-31 0 L
If the Reverse column is what I think it is:
update table1 a
set "Status"=
CASE
WHEN A."Reverse" > 6 THEN
'A'
WHEN A."Reverse" > 0 THEN
DECODE (A."Amount", A."Amount_charge", 'L', 'D')
ELSE
CASE
WHEN A."Amount" <> A."Amount_charge" THEN
'E'
ELSE
CASE WHEN ADD_MONTHS ( (SELECT b."Date_from" FROM table1 b WHERE a."ID_Client" = b."ID_Client" AND b."Reverse" = 1),6) > a."Date_from" THEN 'L'
ELSE
'E'
END
END
END
Better is to calculate the sums. The amount per month comes from the first payment. Something like this:
DECLARE
CURSOR c2
IS
SELECT ID_CLIENT, --AMOUNT, AMOUNT_CHARGE, LENGTH, DATE_FROM, DATE_TO, REVERSE, STATUS,
FIRST_VALUE (amount_charge) OVER (PARTITION BY id_client ORDER BY date_from) first_amount_charge,
SUM (amount) OVER (PARTITION BY id_client ORDER BY date_from) sum_amount,
SUM (amount_charge) OVER (PARTITION BY id_client ORDER BY date_from) sum_amount_charge
FROM TABLE2
FOR UPDATE NOWAIT;
BEGIN
FOR c1 IN c2
LOOP
UPDATE table2
SET status = CASE WHEN c1.sum_amount <= 6 * c1.first_amount_charge THEN 'A'
WHEN c1.sum_amount > 18 * c1.first_amount_charge THEN 'E'
WHEN c1.sum_amount > c1.sum_amount_charge THEN 'D'
ELSE 'L'
END
WHERE CURRENT OF c2;
END LOOP;
END;
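If it helps to sanity-check those rules outside the database, the same cumulative-sum classification can be sketched in pandas (column names as in the sample table; this is only an illustration of the CASE logic above, not a replacement for the PL/SQL):

import numpy as np
import pandas as pd

df = df.sort_values(['ID_Client', 'Date_from'])
g = df.groupby('ID_Client')
first_charge = g['Amount_charge'].transform('first')
cum_amount = g['Amount'].cumsum()
cum_charge = g['Amount_charge'].cumsum()

# same priority order as the CASE expression: A, then E, then D, else L
df['Status'] = np.select(
    [cum_amount <= 6 * first_charge,
     cum_amount > 18 * first_charge,
     cum_amount > cum_charge],
    ['A', 'E', 'D'],
    default='L')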
The file contains information about products per day, and I need to calculate average values per month for each product.
Source data looks like this:
A B C D
id date rating price
1 1 2014/01/01 2 20
2 1 2014/01/02 2 20
3 1 2014/01/03 2 20
4 1 2014/01/04 1 20
5 1 2014/01/05 1 20
6 1 2014/01/06 1 20
7 1 2014/01/07 1 20
8 3 2014/01/01 5 99
9 3 2014/01/02 5 99
10 3 2014/01/03 5 99
11 3 2014/01/04 5 99
12 3 2014/01/05 5 120
13 3 2014/01/06 5 120
14 3 2014/01/07 5 120
Need to get:
   A   B       C
   id  rating  price
1  1   1.42    20
2  3   5       108
How do I do that? I need some advanced formula or VBScript.
Update: I have data for a long period - about 2 years. I need to calculate average values for each product for each week, and then for each month.
Source data example:
id date rating
4 2013-09-01 445
4 2013-09-02 446
4 2013-09-03 447
4 2013-09-04 448
4 2013-09-05 449
4 2013-09-06 450
4 2013-09-07 451
4 2013-09-08 452
4 2013-09-09 453
4 2013-09-10 454
4 2013-09-11 455
4 2013-09-12 456
4 2013-09-13 457
4 2013-09-14 458
4 2013-09-15 459
4 2013-09-16 460
4 2013-09-17 461
4 2013-09-18 462
4 2013-09-19 463
4 2013-09-20 464
4 2013-09-21 465
4 2013-09-22 466
4 2013-09-23 467
4 2013-09-24 468
4 2013-09-25 469
4 2013-09-26 470
4 2013-09-27 471
4 2013-09-28 472
4 2013-09-29 473
4 2013-09-30 474
4 2013-10-01 475
4 2013-10-02 476
4 2013-10-03 477
4 2013-10-04 478
4 2013-10-05 479
4 2013-10-06 480
4 2013-10-07 481
4 2013-10-08 482
4 2013-10-09 483
4 2013-10-10 484
4 2013-10-11 485
4 2013-10-12 486
4 2013-10-13 487
4 2013-10-14 488
4 2013-10-15 489
4 2013-10-16 490
4 2013-10-17 491
4 2013-10-18 492
4 2013-10-19 493
4 2013-10-20 494
4 2013-10-21 495
4 2013-10-22 496
4 2013-10-23 497
4 2013-10-24 498
4 2013-10-25 499
4 2013-10-26 500
4 2013-10-27 501
4 2013-10-28 502
4 2013-10-29 503
4 2013-10-30 504
4 2013-10-31 505
7 2013-09-01 1445
7 2013-09-02 1446
7 2013-09-03 1447
7 2013-09-04 1448
7 2013-09-05 1449
7 2013-09-06 1450
7 2013-09-07 1451
7 2013-09-08 1452
7 2013-09-09 1453
7 2013-09-10 1454
7 2013-09-11 1455
7 2013-09-12 1456
7 2013-09-13 1457
7 2013-09-14 1458
7 2013-09-15 1459
7 2013-09-16 1460
7 2013-09-17 1461
7 2013-09-18 1462
7 2013-09-19 1463
7 2013-09-20 1464
7 2013-09-21 1465
7 2013-09-22 1466
7 2013-09-23 1467
7 2013-09-24 1468
7 2013-09-25 1469
7 2013-09-26 1470
7 2013-09-27 1471
7 2013-09-28 1472
7 2013-09-29 1473
7 2013-09-30 1474
7 2013-10-01 1475
7 2013-10-02 1476
7 2013-10-03 1477
7 2013-10-04 1478
7 2013-10-05 1479
7 2013-10-06 1480
7 2013-10-07 1481
7 2013-10-08 1482
7 2013-10-09 1483
7 2013-10-10 1484
7 2013-10-11 1485
7 2013-10-12 1486
7 2013-10-13 1487
7 2013-10-14 1488
7 2013-10-15 1489
7 2013-10-16 1490
7 2013-10-17 1491
7 2013-10-18 1492
7 2013-10-19 1493
7 2013-10-20 1494
7 2013-10-21 1495
7 2013-10-22 1496
7 2013-10-23 1497
7 2013-10-24 1498
7 2013-10-25 1499
7 2013-10-26 1500
7 2013-10-27 1501
7 2013-10-28 1502
7 2013-10-29 1503
7 2013-10-30 1504
7 2013-10-31 1505
This is a job for a pivot table, and it takes about 30 seconds to do.
Update:
As per your update, put the date into the Report Filter and modify to suit.
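If the data ever gets pulled out of Excel, the same weekly and monthly averages are also a short groupby in pandas (a sketch, assuming columns named id, date and rating as in the updated sample; a price column could be averaged the same way):

import pandas as pd

df['date'] = pd.to_datetime(df['date'])
monthly = df.groupby(['id', df['date'].dt.to_period('M')])['rating'].mean()
weekly = df.groupby(['id', df['date'].dt.to_period('W')])['rating'].mean()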