Pandas df print output

How do I sort this dataframe by the 'Sex' column so that F and M alternate?
Name Sex Age Height Weight
0 Alfred M 14 69.0 112.5
1 Alice F 13 56.5 84.0
2 Barbara F 13 65.3 98.0
3 Carol F 14 62.8 102.5
4 Henry M 14 63.5 102.5
5 James M 12 57.3 83.0
6 Jane F 12 59.8 84.5
7 Janet F 15 62.5 112.5
8 Jeffrey M 13 62.5 84.0
9 John M 12 59.0 99.5
10 Joyce F 11 51.3 50.5
11 Judy F 14 64.3 90.0
12 Louise F 12 56.3 77.0
13 Mary F 15 66.5 112.0
14 Philip M 16 72.0 150.0
15 Robert M 12 64.8 128.0
16 Ronald M 15 67.0 133.0
17 Thomas M 11 57.5 85.0
18 William M 15 66.5 112.0
Goal: print the rows so that the Sex column alternates F and M.

IIUC, try this:
(df.assign(sortkey=df.groupby('Sex').cumcount())
   .sort_values(['sortkey', 'Sex'])
   .drop('sortkey', axis=1))
Output:
Name Sex Age Height Weight
1 Alice F 13 56.5 84.0
0 Alfred M 14 69.0 112.5
2 Barbara F 13 65.3 98.0
4 Henry M 14 63.5 102.5
3 Carol F 14 62.8 102.5
5 James M 12 57.3 83.0
6 Jane F 12 59.8 84.5
8 Jeffrey M 13 62.5 84.0
7 Janet F 15 62.5 112.5
9 John M 12 59.0 99.5
10 Joyce F 11 51.3 50.5
14 Philip M 16 72.0 150.0
11 Judy F 14 64.3 90.0
15 Robert M 12 64.8 128.0
12 Louise F 12 56.3 77.0
16 Ronald M 15 67.0 133.0
13 Mary F 15 66.5 112.0
17 Thomas M 11 57.5 85.0
18 William M 15 66.5 112.0
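The trick works because groupby('Sex').cumcount() numbers each row within its own group, so sorting by that counter first and Sex second interleaves the two groups. A minimal, self-contained sketch of the mechanism (first four names taken from the frame above):
import pandas as pd

df = pd.DataFrame({'Name': ['Alfred', 'Alice', 'Barbara', 'Henry'],
                   'Sex': ['M', 'F', 'F', 'M']})

# cumcount yields 0, 0, 1, 1: each row's position inside its Sex group.
out = (df.assign(sortkey=df.groupby('Sex').cumcount())
         .sort_values(['sortkey', 'Sex'])
         .drop('sortkey', axis=1))
print(out)
#       Name Sex
# 1    Alice   F
# 0   Alfred   M
# 2  Barbara   F
# 3    Henry   M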

Related

add Rank Column to pandas Dataframe based on column condition

What I have is below.
DOG       Date        Steps
Tiger     2021-11-01  164
Oakley    2021-11-01  76
Piper     2021-11-01  65
Millie    2021-11-01  188
Oscar     2021-11-02  152
Foster    2021-11-02  191
Zeus      2021-11-02  101
Benji     2021-11-02  94
Lucy      2021-11-02  186
Rufus     2021-11-02  65
Hank      2021-11-03  98
Olive     2021-11-03  122
Ellie     2021-11-03  153
Thor      2021-11-03  152
Nala      2021-11-03  181
Mia       2021-11-03  48
Bella     2021-11-03  23
Izzy      2021-11-03  135
Pepper    2021-11-03  22
Diesel    2021-11-04  111
Dixie     2021-11-04  34
Emma      2021-11-04  56
Abbie     2021-11-04  32
Guinness  2021-11-04  166
Kobe      2021-11-04  71
What I want is below: rank by the value of the ['Steps'] column within each Date.
DOG       Date        Steps  Rank
Tiger     2021-11-01  164    2
Oakley    2021-11-01  76     3
Piper     2021-11-01  65     4
Millie    2021-11-01  188    1
Oscar     2021-11-02  152    3
Foster    2021-11-02  191    1
Zeus      2021-11-02  101    4
Benji     2021-11-02  94     5
Lucy      2021-11-02  186    2
Rufus     2021-11-02  65     6
Hank      2021-11-03  98     6
Olive     2021-11-03  122    5
Ellie     2021-11-03  153    2
Thor      2021-11-03  152    3
Nala      2021-11-03  181    1
Mia       2021-11-03  48     7
Bella     2021-11-03  23     8
Izzy      2021-11-03  135    4
Pepper    2021-11-03  22     9
Diesel    2021-11-04  111    2
Dixie     2021-11-04  34     5
Emma      2021-11-04  56     4
Abbie     2021-11-04  32     6
Guinness  2021-11-04  166    1
Kobe      2021-11-04  71     3
I tried the below, but it failed:
df['Rank'] = df.groupby('Date')['Steps'].rank(ascending=False)
First, your solution is working for me. You may just need method='dense' and a cast to integers:
df['Rank'] = df.groupby('Date')['Steps'].rank(ascending=False, method='dense').astype(int)
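For reference, a minimal, self-contained run of that line (frame rebuilt from the first date group in the question):
import pandas as pd

df = pd.DataFrame({'DOG': ['Tiger', 'Oakley', 'Piper', 'Millie'],
                   'Date': ['2021-11-01'] * 4,
                   'Steps': [164, 76, 65, 188]})

# Rank steps within each date, highest first; method='dense' leaves no gaps
# after ties, and astype(int) turns rank's float output into integers.
df['Rank'] = df.groupby('Date')['Steps'].rank(ascending=False, method='dense').astype(int)
print(df)
#       DOG        Date  Steps  Rank
# 0   Tiger  2021-11-01    164     2
# 1  Oakley  2021-11-01     76     3
# 2   Piper  2021-11-01     65     4
# 3  Millie  2021-11-01    188     1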

Groupby filter based on count, calculate duration, penultimate status

I have a dataframe as shown below.
ID Status Date Cost
0 1 F 2017-06-22 500
1 1 M 2017-07-22 100
2 1 P 2017-10-22 100
3 1 F 2018-06-22 600
4 1 M 2018-08-22 150
5 1 P 2018-10-22 120
6 1 F 2019-03-22 750
7 2 M 2017-06-29 200
8 2 F 2017-09-29 600
9 2 F 2018-01-29 500
10 2 M 2018-03-29 100
11 2 P 2018-08-29 100
12 2 M 2018-10-29 100
13 2 F 2018-12-29 500
14 3 M 2017-03-20 300
15 3 F 2018-06-20 700
16 3 P 2018-08-20 100
17 3 M 2018-10-20 250
18 3 F 2018-11-20 100
19 3 P 2018-12-20 100
20 3 F 2019-03-20 600
21 3 M 2019-05-20 200
22 4 M 2017-08-10 800
23 4 F 2018-06-10 100
24 4 P 2018-08-10 120
25 4 F 2018-10-10 500
26 4 M 2019-01-10 200
27 4 F 2019-06-10 600
28 5 M 2018-10-10 200
29 5 F 2019-06-10 500
30 6 F 2019-06-10 600
31 7 M 2017-08-10 800
32 7 F 2018-06-10 100
33 7 P 2018-08-10 120
34 7 M 2019-01-10 200
35 7 F 2019-06-10 600
where
F = Failure
M = Maintenance
P = Planned
Step 1 - Select the data of IDs that have at least two status rows (F, M, or P) before the last failure.
Step 2 - Ignore any rows after the last F per ID (i.e., drop trailing rows whose status is not F). The expected output after this is shown below.
ID Status Date Cost
0 1 F 2017-06-22 500
1 1 M 2017-07-22 100
2 1 P 2017-10-22 100
3 1 F 2018-06-22 600
4 1 M 2018-08-22 150
5 1 P 2018-10-22 120
6 1 F 2019-03-22 750
7 2 M 2017-06-29 200
8 2 F 2017-09-29 600
9 2 F 2018-01-29 500
10 2 M 2018-03-29 100
11 2 P 2018-08-29 100
12 2 M 2018-10-29 100
13 2 F 2018-12-29 500
14 3 M 2017-03-20 300
15 3 F 2018-06-20 700
16 3 P 2018-08-20 100
17 3 M 2018-10-20 250
18 3 F 2018-11-20 100
19 3 P 2018-12-20 100
20 3 F 2019-03-20 600
22 4 M 2017-08-10 800
23 4 F 2018-06-10 100
24 4 P 2018-08-10 120
25 4 F 2018-10-10 500
26 4 M 2019-01-10 200
27 4 F 2019-06-10 600
31 7 M 2017-08-10 800
32 7 F 2018-06-10 100
33 7 P 2018-08-10 120
34 7 M 2019-01-10 200
35 7 F 2019-06-10 600
Now the last status of each ID is a failure. From the above df I would like to prepare the dataframe below:
ID No_of_F No_of_M No_of_P SLS NoDays_to_SLS NoDays_SLS_to_LS
1 3 2 2 P 487 151
2 3 3 2 M 487 61
3 3 2 2 P 640 90
4 3 1 1 M 518 151
7 2 1 1 M 518 151
SLS = Second Last Status
LS = Last Status
I tried the following code to calculate the duration.
df['Date'] = pd.to_datetime(df['Date'])
df = df.sort_values(['ID', 'Date', 'Status'])
df['D'] = df.groupby('ID')['Date'].diff().dt.days
We can create a mask with groupby + bfill that allows us to perform both selections.
import numpy as np

# True on 'F' rows, NaN elsewhere; bfill propagates True backwards within each
# ID, so m flags every row up to and including the last failure.
m = df.Status.eq('F').replace(False, np.nan).groupby(df.ID).bfill()
# Keep only IDs with more than two flagged rows (i.e. at least two statuses
# before the last failure), and only the flagged rows themselves.
df = df.loc[m.groupby(df.ID).transform('sum').gt(2) & m]
ID Status Date Cost
0 1 F 2017-06-22 500
1 1 M 2017-07-22 100
2 1 P 2017-10-22 100
3 1 F 2018-06-22 600
4 1 M 2018-08-22 150
5 1 P 2018-10-22 120
6 1 F 2019-03-22 750
7 2 M 2017-06-29 200
8 2 F 2017-09-29 600
9 2 F 2018-01-29 500
10 2 M 2018-03-29 100
11 2 P 2018-08-29 100
12 2 M 2018-10-29 100
13 2 F 2018-12-29 500
14 3 M 2017-03-20 300
15 3 F 2018-06-20 700
16 3 P 2018-08-20 100
17 3 M 2018-10-20 250
18 3 F 2018-11-20 100
19 3 P 2018-12-20 100
20 3 F 2019-03-20 600
22 4 M 2017-08-10 800
23 4 F 2018-06-10 100
24 4 P 2018-08-10 120
25 4 F 2018-10-10 500
26 4 M 2019-01-10 200
27 4 F 2019-06-10 600
31 7 M 2017-08-10 800
32 7 F 2018-06-10 100
33 7 P 2018-08-10 120
34 7 M 2019-01-10 200
35 7 F 2019-06-10 600
The second part is a bit more annoying. There's almost certainly a smarter way to do this, but here's the straightforward way:
s = df.Date.diff().dt.days
res = pd.concat(
    [df.groupby('ID').Status.value_counts().unstack().add_prefix('No_of_'),
     df.groupby('ID').Status.apply(lambda x: x.iloc[-2]).to_frame('SLS'),
     (s.where(s.gt(0)).groupby(df.ID).apply(lambda x: x.cumsum().iloc[-2])
       .to_frame('NoDays_to_SLS')),
     s.groupby(df.ID).apply(lambda x: x.iloc[-1]).to_frame('NoDays_SLS_to_LS')],
    axis=1)
Output:
No_of_F No_of_M No_of_P SLS NoDays_to_SLS NoDays_SLS_to_LS
ID
1 3 2 2 P 487.0 151.0
2 3 3 1 M 487.0 61.0
3 3 2 2 P 640.0 90.0
4 3 2 1 M 518.0 151.0
7 2 2 1 M 518.0 151.0
Here's my attempt (note: I am using pandas 0.25):
df = pd.read_clipboard()
df['Date'] = pd.to_datetime(df['Date'])
df_1 = df.groupby('ID', group_keys=False)\
         .apply(lambda x: x[(x['Status'] == 'F')[::-1].cumsum().astype(bool)])
df_2 = df_1[df_1.groupby('ID')['Status'].transform('count') > 2]
g = df_2.groupby('ID')
df_Counts = g['Status'].value_counts().unstack().add_prefix('No_of_')
df_SLS = g['Status'].agg(lambda x: x.iloc[-2]).rename('SLS')
df_dates = g['Date'].agg(NoDays_to_SLS=lambda x: x.iloc[-2] - x.iloc[0],
                         NoDays_to_SLS_LS=lambda x: x.iloc[-1] - x.iloc[-2])
pd.concat([df_Counts, df_SLS, df_dates], axis=1).reset_index()
Output:
ID No_of_F No_of_M No_of_P SLS NoDays_to_SLS NoDays_to_SLS_LS
0 1 3 2 2 P 487 days 151 days
1 2 3 3 1 M 487 days 61 days
2 3 3 2 2 P 640 days 90 days
3 4 3 2 1 M 518 days 151 days
4 7 2 2 1 M 518 days 151 days
This code relies on named aggregation in .agg(), one of the enhancements added in 0.25.
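If the named-aggregation syntax is new to you, here is a tiny standalone sketch (toy frame, not from the question):
import pandas as pd

toy = pd.DataFrame({'ID': [1, 1, 2], 'Cost': [500, 100, 200]})

# keyword=aggfunc: each keyword becomes an output column name (pandas 0.25+).
out = toy.groupby('ID')['Cost'].agg(total='sum', biggest='max')
print(out)
#     total  biggest
# ID
# 1     600      500
# 2     200      200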

how to encode only categorical data in a dataframe

How do I encode only the categorical columns in a dataframe like the one below?
Income Length of Residence Median House Value Number of Vehicles Percentage Asian Percentage Black Percentage English Speaking Percentage Hispanic Percentage White MakeDescr SeriesDescr Msrp
1 90000 15.0 F 4 1 1 71 6 81 HYUNDAI Sonata-4 Cyl. 19395.0
2 125000 7.0 H 1 11 1 91 1 81 JEEP Grand Cherokee-V6 29135.0
3 90000 8.0 F 1 1 1 71 6 86 JEEP Liberty 20700.0
4 125000 8.0 F 3 1 1 86 6 86 VOLKSWAGEN Passat-V6 28750.0
5 90000 8.0 F 1 1 1 71 6 81 JEEP Wrangler 20210.0
6 110000 7.0 G 5 6 6 71 6 76 HYUNDAI Santa Fe-V6 25645.0
7 110000 7.0 G 3 11 6 71 6 71 HYUNDAI Sonata-4 Cyl. 15999.0
8 125000 8.0 G 1 1 11 81 6 76 HYUNDAI Santa Fe-V6 23645.0
9 125000 9.0 G 1 6 1 91 1 86 CHEVROLET TRUCK Trailblazer EXT 32040.0
10 110000 8.0 E 2 6 46 81 16 26 JEEP Wrangler-V6 18660.0
11 125000 11.0 G 3 6 1 76 1 86 CHEVROLET TRUCK Silverado 2500 HD 31775.0
12 125000 12.0 G 2 11 6 66 1 71 CHEVROLET Cobalt 13675.0
13 125000 13.0 G 2 1 16 95 6 71 HYUNDAI Veracruz-V6 28600.0
15 110000 11.0 F 5 6 41 61 11 41 HYUNDAI Santa Fe 22499.0
16 125000 9.0 F 2 1 6 91 1 81 HYUNDAI Santa Fe 22499.0
17 125000 8.0 G 2 11 11 66 1 66 MITSUBISHI Endeavor-V6 32602.0
18 110000 12.0 E 1 6 46 81 16 26 HYUNDAI Accent-4 Cyl. 10899.0
19 90000 9.0 F 4 1 6 71 6 81 JEEP Grand Cherokee-6 Cyl. 29080.0
21 125000 8.0 G 1 6 1 76 1 86 MITSUBISHI Endeavor-V6 29302.0
22 110000 12.0 F 2 6 26 66 11 51 HYUNDAI Santa Fe 22499.0
23 90000 9.0 F 1 6 6 66 6 76 HYUNDAI Santa Fe-V6 20995.0
24 125000 9.0 H 1 6 1 91 1 81 HYUNDAI Sonata-V6 18799.0
25 90000 14.0 F 2 1 6 71 11 81 HYUNDAI Elantra-4 Cyl. 13299.0
26 125000 9.0 G 3 1 11 81 6 76 JEEP Grand Cherokee-6 Cyl. 29080.0
27 125000 8.0 H 5 6 1 91 1 81 CHEVROLET TRUCK Trailblazer 29395.0
28 110000 12.0 E 4 6 41 61 11 36 HYUNDAI Sonata-4 Cyl. 15999.0
29 110000 10.0 E 1 6 41 61 11 36 HYUNDAI Santa Fe-V6 20995.0
30 125000 10.0 F 2 6 1 71 6 86 CHEVROLET TRUCK Tahoe 37000.0
32 90000 10.0 F 1 1 1 71 6 86 MITSUBISHI Galant-V6 19997.0
33 125000 12.0 F 1 1 1 86 6 86 CHEVROLET TRUCK Trailblazer 28175.0
... ... ... ... ... ... ... ... ... ... ... ... ...
4451 110000 9.0 F 3 6 41 61 11 36 NISSAN Sentra-4 Cyl. 17990.0
4452 125000 11.0 G 2 1 11 81 6 76 CHEVROLET TRUCK Tahoe 39515.0
4453 125000 8.0 H 1 6 1 91 1 81 HYUNDAI Elantra-4 Cyl. 15195.0
4454 110000 10.0 F 3 6 41 61 11 41 HYUNDAI Genesis-4 Cyl. 26750.0
4455 125000 7.0 H 4 11 1 76 1 76 HYUNDAI Sonata-4 Cyl. 19695.0
4456 125000 9.0 G 5 6 1 76 1 86 NISSAN Altima 22500.0
4457 110000 11.0 E 1 6 46 81 16 26 GMC LIGHT DUTY Denali 51935.0
4458 125000 6.0 H 1 11 1 76 1 76 JEEP Liberty-V6 24865.0
4459 125000 12.0 G 3 1 16 95 6 71 HONDA Accord-V6 26700.0
4460 125000 7.0 F 1 1 1 86 6 86 HYUNDAI Veloster-4 Cyl. 17300.0
4461 90000 10.0 F 2 6 11 66 6 71 CADILLAC SRX-V6 42210.0
4463 110000 8.0 F 3 6 26 61 11 56 GMC LIGHT DUTY Acadia 42390.0
4468 125000 8.0 G 1 1 1 91 1 86 HONDA Pilot-V6 40820.0
4469 125000 10.0 H 5 11 1 91 1 81 TOYOTA Highlander-V6 30695.0
4470 110000 12.0 F 1 6 41 61 11 41 HYUNDAI Elantra-4 Cyl. 15195.0
4473 110000 13.0 F 1 6 21 66 6 61 ACURA TSX 32910.0
4476 125000 9.0 G 1 6 1 76 1 86 BMW X3 36750.0
4482 125000 10.0 H 1 6 1 91 1 81 SUBARU Forester-4 Cyl. 21195.0
4486 125000 11.0 H 2 6 1 91 1 81 GMC LIGHT DUTY Yukon XL 44315.0
4492 125000 10.0 H 2 6 1 91 1 81 BMW 5 Series 53400.0
4493 110000 12.0 G 2 6 6 71 6 76 ACURA TL 33725.0
4494 125000 12.0 F 3 1 1 86 6 86 ACURA TL 33725.0
4495 125000 12.0 F 3 1 1 86 6 86 ACURA TL 33725.0
4496 125000 7.0 G 5 1 11 81 6 76 ACURA TL 33325.0
4497 125000 9.0 G 1 6 1 76 1 86 ACURA TL 33725.0
4498 125000 12.0 G 3 1 11 81 6 76 ACURA TL 33725.0
4499 110000 14.0 G 8 11 6 71 6 71 ACURA TL 33725.0
4501 125000 9.0 G 3 11 6 66 1 71 FORD Taurus-V6 20050.0
4502 110000 2.0 G 4 11 6 71 6 71 DODGE Stratus-4 Cyl. 15910.0
4503 125000 8.0 F 1 1 1 86 6 86 DODGE Stratus-4 Cyl. 19145.0
# Using the standard scikit-learn label encoder.
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()

# Encode all string columns, assuming all categoricals are of type str.
for c in df.select_dtypes(['object']):
    print("Encoding column " + c)
    df[c] = le.fit_transform(df[c])
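Note that LabelEncoder assigns arbitrary integer codes, which downstream models may read as an ordering. If that is a concern, one-hot encoding the object columns is a common alternative; a minimal sketch, assuming df is the frame above:
import pandas as pd

# One indicator column per category; numeric columns pass through untouched.
encoded = pd.get_dummies(df, columns=df.select_dtypes('object').columns.tolist())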

pandas element wise conditional return index

I want to find abnormal values and replace each of them with the value from the corresponding day of the next week.
year week day v1 v2
2001 1 1 46 9999
2001 1 2 60 9335
2001 1 3 9999 9318
2001 1 4 47 9999
2001 1 5 57 9373
2001 1 6 9999 9384
2001 1 7 72 9444
2001 2 1 75 73
2001 2 2 74 63
2001 2 3 79 377
2001 2 4 70 361
2001 2 5 75 73
2001 2 6 77 64
2001 2 7 76 57
I can carry this out column by column; code as follows:
index_row = df[df['v1'] == 9999].index
for i in index_row:
    df['v1'][i] = df['v1'][i + 7]  # i + 7 is the index of the same day next week
How can I do this element-wise over the whole dataframe, with something like pd.applymap? And how do I get the column name and row number of the values caught by the condition?
The target df I want is as follows (* marks values replaced with the corresponding day of the next week):
year week day v1 v2
2001 1 1 46 *73
2001 1 2 60 9335
2001 1 3 *79 9318
2001 1 4 47 *361
2001 1 5 57 9373
2001 1 6 *77 9384
2001 1 7 72 9444
2001 2 1 75 *73
2001 2 2 74 63
2001 2 3 *79 377
2001 2 4 70 *361
2001 2 5 75 73
2001 2 6 *77 64
2001 2 7 76 57
1. create d1 with set_index on columns ['year', 'week', 'day']
2. create d2 with the same index as d1, except with 1 subtracted from week
3. mask d1 where it equals 9999, with d2 as the other
cols = ['year', 'week', 'day']
d1 = df.set_index(cols)
d2 = df.assign(week=df.week - 1).set_index(cols)
d1.mask(d1.eq(9999), d2).reset_index()
year week day v1 v2
0 2001 1 1 46 73
1 2001 1 2 60 9335
2 2001 1 3 79 9318
3 2001 1 4 47 361
4 2001 1 5 57 9373
5 2001 1 6 77 9384
6 2001 1 7 72 9444
7 2001 2 1 75 73
8 2001 2 2 74 63
9 2001 2 3 79 377
10 2001 2 4 70 361
11 2001 2 5 75 73
12 2001 2 6 77 64
13 2001 2 7 76 57
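The key is that DataFrame.mask(cond, other) fills masked cells from other by index alignment, so relabelling d2 one week earlier makes each week read its replacement from the week after it. A toy sketch of that alignment (values invented for illustration):
import pandas as pd

d1 = pd.DataFrame({'v1': [9999, 5]}, index=pd.Index([1, 2], name='week'))
# The same data relabelled one week earlier: week 2's value now sits at label 1.
d2 = d1.set_index(d1.index - 1)

# Where d1 is 9999, pull the value aligned at the same label in d2,
# i.e. the value from the following week; week 1's 9999 becomes 5.
print(d1.mask(d1.eq(9999), d2))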
Old answer
One approach is to set up d1 with an index of ['year', 'week', 'day'] and manipulate that to shift by a week, then mask where equal to 9999 and fillna:
d1 = df.set_index(['year', 'week', 'day'])
s1 = d1.unstack(['year', 'day']).shift(-1).stack(['year', 'day']).swaplevel(0, 1)
d1.mask(d1==9999).fillna(s1).reset_index()
year week day v1 v2
0 2001 1 1 46.0 73.0
1 2001 1 2 60.0 9335.0
2 2001 1 3 79.0 9318.0
3 2001 1 4 47.0 361.0
4 2001 1 5 57.0 9373.0
5 2001 1 6 77.0 9384.0
6 2001 1 7 72.0 9444.0
7 2001 2 1 75.0 73.0
8 2001 2 2 74.0 63.0
9 2001 2 3 79.0 377.0
10 2001 2 4 70.0 361.0
11 2001 2 5 75.0 73.0
12 2001 2 6 77.0 64.0
13 2001 2 7 76.0 57.0
You can work with a DatetimeIndex and set values by masking with shifted rows:
# http://strftime.org/
a = (df['year'].astype(str).add('-').add(df['week'].astype(str))
     .add('-').add(df['day'].sub(1).astype(str)))
df.index = pd.to_datetime(a, format='%Y-%U-%w')

df2 = df.shift(-1, freq='7D')
df = df.mask(df.eq(9999), df2).reset_index(drop=True)
print(df)
year week day v1 v2
0 2001 1 1 46 73
1 2001 1 2 60 9335
2 2001 1 3 79 9318
3 2001 1 4 47 361
4 2001 1 5 57 9373
5 2001 1 6 77 9384
6 2001 1 7 72 9444
7 2001 2 1 75 73
8 2001 2 2 74 63
9 2001 2 3 79 377
10 2001 2 4 70 361
11 2001 2 5 75 73
12 2001 2 6 77 64
13 2001 2 7 76 57
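A note on the format string: %U is the week of the year (with weeks starting on Sunday) and %w the weekday as 0-6, which is why the code subtracts 1 from day before formatting. A quick sanity check of the conversion:
import pandas as pd

# Year 2001, week 1, weekday 0 (Sunday in strftime's numbering).
print(pd.to_datetime('2001-1-0', format='%Y-%U-%w'))
# Timestamp('2001-01-07 00:00:00')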

Calculate average values for rows with different ids in MS Excel

The file contains information about products per day, and I need to calculate monthly average values for each product.
Source data looks like this:
A B C D
id date rating price
1 1 2014/01/01 2 20
2 1 2014/01/02 2 20
3 1 2014/01/03 2 20
4 1 2014/01/04 1 20
5 1 2014/01/05 1 20
6 1 2014/01/06 1 20
7 1 2014/01/07 1 20
8 3 2014/01/01 5 99
9 3 2014/01/02 5 99
10 3 2014/01/03 5 99
11 3 2014/01/04 5 99
12 3 2014/01/05 5 120
13 3 2014/01/06 5 120
14 3 2014/01/07 5 120
Need to get:
A B C D
id date rating price
1 1 1.42 20
2 3 5 108
How do I do that? I expect it needs an advanced formula or a VB script.
Update: I have data for a long period, about two years. I need to calculate average values for each product for each week, and then for each month.
Source data example:
id date rating
4 2013-09-01 445
4 2013-09-02 446
4 2013-09-03 447
4 2013-09-04 448
4 2013-09-05 449
4 2013-09-06 450
4 2013-09-07 451
4 2013-09-08 452
4 2013-09-09 453
4 2013-09-10 454
4 2013-09-11 455
4 2013-09-12 456
4 2013-09-13 457
4 2013-09-14 458
4 2013-09-15 459
4 2013-09-16 460
4 2013-09-17 461
4 2013-09-18 462
4 2013-09-19 463
4 2013-09-20 464
4 2013-09-21 465
4 2013-09-22 466
4 2013-09-23 467
4 2013-09-24 468
4 2013-09-25 469
4 2013-09-26 470
4 2013-09-27 471
4 2013-09-28 472
4 2013-09-29 473
4 2013-09-30 474
4 2013-10-01 475
4 2013-10-02 476
4 2013-10-03 477
4 2013-10-04 478
4 2013-10-05 479
4 2013-10-06 480
4 2013-10-07 481
4 2013-10-08 482
4 2013-10-09 483
4 2013-10-10 484
4 2013-10-11 485
4 2013-10-12 486
4 2013-10-13 487
4 2013-10-14 488
4 2013-10-15 489
4 2013-10-16 490
4 2013-10-17 491
4 2013-10-18 492
4 2013-10-19 493
4 2013-10-20 494
4 2013-10-21 495
4 2013-10-22 496
4 2013-10-23 497
4 2013-10-24 498
4 2013-10-25 499
4 2013-10-26 500
4 2013-10-27 501
4 2013-10-28 502
4 2013-10-29 503
4 2013-10-30 504
4 2013-10-31 505
7 2013-09-01 1445
7 2013-09-02 1446
7 2013-09-03 1447
7 2013-09-04 1448
7 2013-09-05 1449
7 2013-09-06 1450
7 2013-09-07 1451
7 2013-09-08 1452
7 2013-09-09 1453
7 2013-09-10 1454
7 2013-09-11 1455
7 2013-09-12 1456
7 2013-09-13 1457
7 2013-09-14 1458
7 2013-09-15 1459
7 2013-09-16 1460
7 2013-09-17 1461
7 2013-09-18 1462
7 2013-09-19 1463
7 2013-09-20 1464
7 2013-09-21 1465
7 2013-09-22 1466
7 2013-09-23 1467
7 2013-09-24 1468
7 2013-09-25 1469
7 2013-09-26 1470
7 2013-09-27 1471
7 2013-09-28 1472
7 2013-09-29 1473
7 2013-09-30 1474
7 2013-10-01 1475
7 2013-10-02 1476
7 2013-10-03 1477
7 2013-10-04 1478
7 2013-10-05 1479
7 2013-10-06 1480
7 2013-10-07 1481
7 2013-10-08 1482
7 2013-10-09 1483
7 2013-10-10 1484
7 2013-10-11 1485
7 2013-10-12 1486
7 2013-10-13 1487
7 2013-10-14 1488
7 2013-10-15 1489
7 2013-10-16 1490
7 2013-10-17 1491
7 2013-10-18 1492
7 2013-10-19 1493
7 2013-10-20 1494
7 2013-10-21 1495
7 2013-10-22 1496
7 2013-10-23 1497
7 2013-10-24 1498
7 2013-10-25 1499
7 2013-10-26 1500
7 2013-10-27 1501
7 2013-10-28 1502
7 2013-10-29 1503
7 2013-10-30 1504
7 2013-10-31 1505
This is a job for a pivot table, and it takes about 30 seconds to do.
Update:
As per your update, put the date into the Report Filter and modify to suit.
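If you'd rather avoid a pivot table, AVERAGEIF/AVERAGEIFS can produce the same numbers. A sketch, assuming the first layout sits in A2:D15 (id, date, rating, price) with headers in row 1:

Average rating for product id 1:
    =AVERAGEIF($A$2:$A$15, 1, $C$2:$C$15)

Average price for product id 3:
    =AVERAGEIF($A$2:$A$15, 3, $D$2:$D$15)

Monthly average rating for id 4 in September 2013 (updated layout: id in A, date in B, rating in C):
    =AVERAGEIFS($C:$C, $A:$A, 4, $B:$B, ">="&DATE(2013,9,1), $B:$B, "<"&DATE(2013,10,1))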