WCF: Moving from IIS 7 to IIS 8

I have moved my WCF service from IIS 7 to IIS 8. I can browse to the .svc file, but I cannot reach any of the service methods through GET or POST; every call fails with:
The server encountered an error processing the request. See server logs for more details.
The log file is shown below:
#Software: Microsoft Internet Information Services 8.5
#Version: 1.0
#Date: 2014-12-17 04:25:48
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken
2014-12-17 04:25:48 (ipaddress) GET /service - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko - 301 0 0 120
2014-12-17 04:25:48 (ipaddress) GET /service/ - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko - 200 0 0 3
2014-12-17 04:25:53 (ipaddress) GET /service/MposService.svc - 786 - (ipaddress) Mozilla/5.0+(Windows+NT+6.3;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko (ipaddress):786/service/ 200 0 0 904
2014-12-17 04:27:42 (ipaddress) GET /service/MposService.svc - 786 - publicip Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 200 0 0 628
2014-12-17 04:27:42 (ipaddress) GET /favicon.ico - 786 - public ip Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 404 0 2 470
2014-12-17 04:28:24 (ipaddress) GET /service/MposService.svc/getCustomer section=s1 786 - 117.213.26.161 Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/39.0.2171.95+Safari/537.36 - 400 0 0 640
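For reference, a REST-style GET such as /service/MposService.svc/getCustomer?section=s1 only resolves when the endpoint uses webHttpBinding together with the webHttp endpoint behavior, and that serviceModel configuration has to be carried over to the IIS 8 machine. A minimal sketch of the shape such a web.config section takes (the service and contract names are guesses from the URLs above; this illustrates the pattern, it is not a diagnosis of the 400):

```xml
<system.serviceModel>
  <services>
    <!-- Names below are hypothetical; match them to the actual service class -->
    <service name="MposService">
      <endpoint address=""
                binding="webHttpBinding"
                contract="IMposService"
                behaviorConfiguration="restBehavior" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="restBehavior">
        <!-- webHttp enables URI-mapped GET/POST dispatch -->
        <webHttp />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```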


How to select data for specific time intervals after using Pandas' resample function?

I used Pandas' resample function to calculate the sales of a list of products every 6 months.
I called resample('6M') followed by apply({'column-name': 'sum'}).
Now I'd like to create a table with the sum of the sales for the first six months.
How can I extract the sum of the first 6 months, given that all products have records for more than 3 years and none of them share the same start date?
Thanks in advance for any suggestions.
Here is an example of the data:
Product      Date        sales
Product 1    6/30/2017      20
             12/31/2017     60
             6/30/2018      50
             12/31/2018    100
Product 2    1/31/2017      30
             7/31/2017     150
             1/31/2018     200
             7/31/2018     300
             1/31/2019     100
While waiting for your data, I worked on this. See if this is something that will be helpful for you.
import pandas as pd

df = pd.DataFrame({'Date': ['2018-01-10', '2018-02-15', '2018-03-18',
                            '2018-07-10', '2018-09-12', '2018-10-14',
                            '2018-11-16', '2018-12-20', '2019-01-10',
                            '2019-04-15', '2019-06-12', '2019-10-18',
                            '2019-12-02', '2020-01-05', '2020-02-25',
                            '2020-03-15', '2020-04-11', '2020-07-22'],
                   'Sales': [200, 300, 100, 250, 150, 350, 150, 200, 250,
                             200, 300, 100, 250, 150, 350, 150, 200, 250]})
# First break the data down into yearly quarters
df['YQtr'] = pd.PeriodIndex(pd.to_datetime(df.Date), freq='Q')
# Next create a column identifying the half year: H1 for Jan-Jun, H2 for Jul-Dec
df.loc[df['YQtr'].astype(str).str[-2:].isin(['Q1', 'Q2']), 'HYear'] = df['YQtr'].astype(str).str[:-2] + 'H1'
df.loc[df['YQtr'].astype(str).str[-2:].isin(['Q3', 'Q4']), 'HYear'] = df['YQtr'].astype(str).str[:-2] + 'H2'
# Do a cumulative sum within each half year to get sales by H1 & H2 of each year
df['HYear_cumsum'] = df.groupby('HYear')['Sales'].cumsum()
# Now keep only the rows holding the max value: that's the H1 & H2 sales figure
df1 = df[df.groupby('HYear')['HYear_cumsum'].transform('max') == df['HYear_cumsum']]
print(df)
print(df1)
The output of this will be:
Source Data + Half Year cumulative sum:
Date Sales YQtr HYear HYear_cumsum
0 2018-01-10 200 2018Q1 2018H1 200
1 2018-02-15 300 2018Q1 2018H1 500
2 2018-03-18 100 2018Q1 2018H1 600
3 2018-07-10 250 2018Q3 2018H2 250
4 2018-09-12 150 2018Q3 2018H2 400
5 2018-10-14 350 2018Q4 2018H2 750
6 2018-11-16 150 2018Q4 2018H2 900
7 2018-12-20 200 2018Q4 2018H2 1100
8 2019-01-10 250 2019Q1 2019H1 250
9 2019-04-15 200 2019Q2 2019H1 450
10 2019-06-12 300 2019Q2 2019H1 750
11 2019-10-18 100 2019Q4 2019H2 100
12 2019-12-02 250 2019Q4 2019H2 350
13 2020-01-05 150 2020Q1 2020H1 150
14 2020-02-25 350 2020Q1 2020H1 500
15 2020-03-15 150 2020Q1 2020H1 650
16 2020-04-11 200 2020Q2 2020H1 850
17 2020-07-22 250 2020Q3 2020H2 250
The rows carrying the final cumulative sum, i.e. the total for each half year:
Date Sales YQtr HYear HYear_cumsum
2 2018-03-18 100 2018Q1 2018H1 600
7 2018-12-20 200 2018Q4 2018H2 1100
10 2019-06-12 300 2019Q2 2019H1 750
12 2019-12-02 250 2019Q4 2019H2 350
16 2020-04-11 200 2020Q2 2020H1 850
17 2020-07-22 250 2020Q3 2020H2 250
I will look at your sample data and work on it later tonight.
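In the meantime, here is a minimal sketch against data shaped like the sample in the question: since it has already been resampled into 6-month buckets, the first-six-months figure for each product is just its chronologically first row (the column names Product/Date/Sales and the hard-coded values are assumptions taken from the sample above):

```python
import pandas as pd

# Data shaped like the sample in the question: per-product 6-month sums
df = pd.DataFrame({
    'Product': ['Product 1'] * 4 + ['Product 2'] * 5,
    'Date': pd.to_datetime(['2017-06-30', '2017-12-31', '2018-06-30', '2018-12-31',
                            '2017-01-31', '2017-07-31', '2018-01-31', '2018-07-31',
                            '2019-01-31']),
    'Sales': [20, 60, 50, 100, 30, 150, 200, 300, 100],
})

# Each product starts on its own date, so sort by date and take
# the chronologically first 6-month bucket per product
first6 = (df.sort_values('Date')
            .groupby('Product', as_index=False)
            .first())
print(first6)
```

Because every product has its own start date, this avoids having to pick a shared calendar window.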

Future dates calculating incorrectly in FBProphet - make_future_dataframe method

I'm trying to do a weekly forecast in FBProphet for just 5 weeks ahead. The make_future_dataframe method doesn't seem to be working right: it makes the correct one-week intervals except for the step between Jul 3 and Jul 5, which is only two days; every other interval is correct at 7 days. Code and output below:
INPUT DATAFRAME
ds y
548 2010-01-01 3117
547 2010-01-08 2850
546 2010-01-15 2607
545 2010-01-22 2521
544 2010-01-29 2406
... ... ...
4 2020-06-05 2807
3 2020-06-12 2892
2 2020-06-19 3012
1 2020-06-26 3077
0 2020-07-03 3133
CODE
future = m.make_future_dataframe(periods=5, freq='W')
future.tail(9)
OUTPUT
ds
545 2020-06-12
546 2020-06-19
547 2020-06-26
548 2020-07-03
549 2020-07-05
550 2020-07-12
551 2020-07-19
552 2020-07-26
553 2020-08-02
All you need to do is create a dataframe with the dates you need for the predict method; using the make_future_dataframe method is not necessary.
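The gap itself is most likely pandas' weekly anchoring rather than a Prophet bug: freq='W' means "weekly, anchored on Sunday" (W-SUN), and since the history ends on Friday 2020-07-03, the first generated date snaps to the next Sunday, 2020-07-05. Building the future frame by hand with a plain 7-day frequency sidesteps the anchor; a sketch, with the last training date taken from the output above:

```python
import pandas as pd

last_date = pd.Timestamp('2020-07-03')  # last 'ds' in the training data

# A plain 7-day step keeps every interval at exactly one week,
# unlike 'W', which snaps dates to the weekly anchor (Sunday)
future = pd.DataFrame({
    'ds': pd.date_range(start=last_date + pd.Timedelta(days=7),
                        periods=5, freq='7D')
})
print(future)
```

The resulting frame can then be passed straight to m.predict(future).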

Pandas: add date field to parsed timestamp

I have several date-specific text files (for example 20150211.txt) that look like
TopOfBook 0x21 60 07:15:00.862 101 85 5 109 500 24 +
TopOfBook 0x21 60 07:15:00.882 101 91 400 109 500 18 +
TopOfBook 0x21 60 07:15:00.890 101 91 400 105 80 14 +
TopOfBook 0x21 60 07:15:00.914 101 93.3 400 105 80 11.7 +
where the 4th column contains the timestamp.
If I read this into pandas with automatic parsing
df_top = pd.read_csv('TOP_20150210.txt', sep='\t', names=hdr_top, parse_dates=[3])
I get:
0 TopOfBook 0x21 60 2015-05-17 07:15:00.862000 101 85.0 5 109.0 500 24.0 +
1 TopOfBook 0x21 60 2015-05-17 07:15:00.882000 101 91.0 400 109.0 500 18.0 +
2 TopOfBook 0x21 60 2015-05-17 07:15:00.890000 101 91.0 400 105.0 80 14.0 +
The time part is correct, but how do I set the correct date part of this timestamp (2015-02-11)? Thank you.
After parsing the dates, column 3 has dtype <M8[ns]. This is the NumPy datetime64 dtype with nanosecond resolution. You can do fast date arithmetic by adding or subtracting NumPy timedelta64 values.
So, for example, subtracting 6 days from df[3] yields
In [139]: df[3] - np.array([6], dtype='<m8[D]')
Out[139]:
0 2015-05-11 07:15:00.862000
1 2015-05-11 07:15:00.882000
2 2015-05-11 07:15:00.890000
3 2015-05-11 07:15:00.914000
Name: 3, dtype: datetime64[ns]
To find the correct number of days to subtract you could use
today = df.iloc[0,3]
date = pd.Timestamp(re.search(r'\d+', filename).group())
n = (today-date).days
import numpy as np
import pandas as pd
import re
filename = '20150211.txt'
df = pd.read_csv(filename, sep='\t', header=None, parse_dates=[3])
today = df.iloc[0,3]
date = pd.Timestamp(re.search(r'\d+', filename).group())
n = (today-date).days
df[3] -= np.array([n], dtype='<m8[D]')
print(df)
yields
0 1 2 3 4 5 6 7 8 \
0 TopOfBook 0x21 60 2015-02-11 07:15:00.862000 101 85.0 5 109 500
1 TopOfBook 0x21 60 2015-02-11 07:15:00.882000 101 91.0 400 109 500
2 TopOfBook 0x21 60 2015-02-11 07:15:00.890000 101 91.0 400 105 80
3 TopOfBook 0x21 60 2015-02-11 07:15:00.914000 101 93.3 400 105 80
9
0 24.0
1 18.0
2 14.0
3 11.7
Alternatively, you could use apply and construct each datetime from your desired date values, copying the time portion into the constructor:
In [9]:
import datetime as dt
df[3] = df[3].apply(lambda x: dt.datetime(2015,2,11,x.hour,x.minute,x.second,x.microsecond))
df
Out[9]:
0 1 2 3 4 5 6 7 8 \
0 TopOfBook 0x21 60 2015-02-11 07:15:00.862000 101 85.0 5 109 500
1 TopOfBook 0x21 60 2015-02-11 07:15:00.882000 101 91.0 400 109 500
2 TopOfBook 0x21 60 2015-02-11 07:15:00.890000 101 91.0 400 105 80
3 TopOfBook 0x21 60 2015-02-11 07:15:00.914000 101 93.3 400 105 80
9 10
0 24.0 +
1 18.0 +
2 14.0 +
3 11.7 +
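For completeness, the same replacement can be written without spelling out each datetime field: normalizing the parsed timestamps to midnight leaves just the time-of-day offset, which can be added to the date taken from the filename. A sketch with hard-coded sample values standing in for the parsed column:

```python
import pandas as pd

# Timestamps parsed with the wrong (run-day) date but the correct time of day
ts = pd.Series(pd.to_datetime(['2015-05-17 07:15:00.862',
                               '2015-05-17 07:15:00.882']))
date = pd.Timestamp('2015-02-11')  # date extracted from the filename

# ts - ts.dt.normalize() is the time-of-day as a Timedelta; add it to the date
fixed = date + (ts - ts.dt.normalize())
print(fixed)
```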

How to choose rows with time-stamps in a complicated range?

My data frame has a time stamp column (dtype: datetime64[ns]) like this
ID TIMESTAMP
1 2014-08-14 17:57:17
2 2014-08-14 17:50:11
3 2014-08-14 17:49:28
4 2014-08-14 17:58:10
5 2014-08-14 17:59:37
6 2014-08-14 17:25:46
7 2014-08-14 17:54:06
8 2014-08-14 17:55:48
9 2014-08-14 17:49:23
10 2014-08-14 17:40:21
...
301 2014-12-21 14:11:52
302 2014-12-21 14:22:22
303 2014-12-21 14:29:19
304 2014-12-21 14:27:37
305 2014-12-21 14:22:33
306 2014-12-21 14:26:25
307 2014-12-21 14:11:13
308 2014-12-21 11:41:54
309 2014-12-21 13:18:44
310 2014-12-21 14:26:31
Now suppose I want to find the rows from 2014-08-04 to 2014-08-24, restricted to 17:55 to 18:00 of each day in that period. How can I do this with pandas? I think I should use Timedelta, but I can't find any Timedelta functionality for hours only. Thank you.
Ugly but should work for you:
In [113]:
import datetime as dt
df[(df['TIMESTAMP'] > '2014-08-04') & (df['TIMESTAMP'] < '2014-08-24') & (df['TIMESTAMP'].dt.time > dt.time(17,55)) & (df['TIMESTAMP'].dt.time < dt.time(18))]
Out[113]:
ID TIMESTAMP
0 1 2014-08-14 17:57:17
3 4 2014-08-14 17:58:10
4 5 2014-08-14 17:59:37
7 8 2014-08-14 17:55:48
So we can use date strings for comparing the dates, but for the time portion you'll have to construct a time object.
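A tidier variant of the same filter: with TIMESTAMP as a DatetimeIndex, the date window becomes a .loc string slice and the time-of-day window becomes between_time. A sketch using a few of the rows above:

```python
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 4, 5, 8, 301],
    'TIMESTAMP': pd.to_datetime(['2014-08-14 17:57:17', '2014-08-14 17:58:10',
                                 '2014-08-14 17:59:37', '2014-08-14 17:55:48',
                                 '2014-12-21 14:11:52'])})

# between_time works on a DatetimeIndex, so move TIMESTAMP into the index;
# the sorted index then supports partial-string date slicing with .loc
result = (df.set_index('TIMESTAMP')
            .sort_index()
            .loc['2014-08-04':'2014-08-24']
            .between_time('17:55', '18:00'))
print(result)
```

Note that between_time includes both endpoints by default, so 18:00:00 exactly would also match.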

From 15-minute intervals to hourly interval counts

I am using an Excel sheet to display data from SQL with this query:
SELECT itable.Timestamp, itable.Time,
    SUM(itable.CallsOffered) AS CallsOffered,
    SUM(itable.CallsAnswered) AS CallsAnswered,
    SUM(itable.CallsAnsweredAftThreshold) AS CallsAnsweredAftThreshold,
    SUM(CallsAnsweredDelay) AS CallsAnsweredDelay
FROM tablename itable
WHERE (itable.Timestamp >= ?) AND (itable.Timestamp <= ?)
  AND (itable.Application IN ('1', '2', '3', '4'))
GROUP BY itable.Timestamp, itable.Time
ORDER BY itable.Timestamp, itable.Time
and I get data at 15-minute intervals, like this:
Timestamp Time CallsOffered CallsAnswered CallsAnsweredAftThreshold CallsAnsweredDelay
6/1/2014 0:00 00:00 0 1 1 52
6/1/2014 0:15 00:15 3 1 1 23
6/1/2014 0:30 00:30 3 3 2 89
6/1/2014 0:45 00:45 0 0 0 0
6/1/2014 1:00 01:00 0 0 0 0
6/1/2014 1:15 01:15 4 1 1 12
6/1/2014 1:30 01:30 1 1 1 39
6/1/2014 1:45 01:45 0 0 0 0
6/1/2014 2:00 02:00 2 1 0 7
6/1/2014 2:15 02:15 1 1 1 80
6/1/2014 2:30 02:30 3 2 2 75
6/1/2014 2:45 02:45 0 0 0 0
6/1/2014 3:00 03:00 0 0 0 0
and I want to convert the interval from 15 minutes to hourly, like this:
2014-07-01 00:00:00.000
2014-07-01 01:00:00.000
2014-07-01 02:00:00.000
2014-07-01 03:00:00.000
2014-07-01 04:00:00.000
2014-07-01 05:00:00.000
2014-07-01 06:00:00.000
2014-07-01 07:00:00.000
2014-07-01 08:00:00.000
2014-07-01 09:00:00.000
2014-07-01 10:00:00.000
2014-07-01 11:00:00.000
2014-07-01 12:00:00.000
2014-07-01 13:00:00.000
2014-07-01 14:00:00.000
The query I came up with is:
SELECT
    timestamp = DATEADD(hour, DATEDIFF(hour, 0, app.Timestamp), 0),
    SUM(app.CallsOffered) AS CallsOffered,
    SUM(app.CallsAnswered) AS CallsAnswered,
    SUM(app.CallsAnsweredAftThreshold) AS CallsAnsweredAftThreshold,
    SUM(CallsAnsweredDelay) AS CallsAnsweredDelay,
    MAX(MaxCallsAnsDelay) AS MaxCallsAnsDelay,
    MAX(app.MaxCallsAbandonedDelay) AS MaxCallsAbandonedDelay
FROM tablename app
WHERE Timestamp >= '2014-7-1' AND Timestamp <= '2014-7-2'
  AND (app.Application IN ('1', '2', '3', '4'))
GROUP BY DATEADD(hour, DATEDIFF(hour, 0, Timestamp), 0)
ORDER BY Timestamp;
I get the result I want when I run it in Microsoft SQL Server Management Studio, but when I try running the same query through Microsoft Query in Excel it gives me a long error, roughly that I can't start with timestamp, and it also complains about DATEADD and DATEDIFF.
So is there something I should change in my query, or anything else I can do to get an hourly count interval instead of the 15-minute interval shown above? Thank you in advance.
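One thing worth trying, offered as a sketch rather than a confirmed fix: the alias = expression form in the SELECT list is T-SQL-specific shorthand that Microsoft Query's own parser tends to reject (which would explain the complaint about starting with timestamp), and the IN list in the original query was also missing its closing parenthesis. A rewrite using standard AS aliases keeps the same hourly grouping; HourStamp is an arbitrary alias chosen here to avoid the word timestamp:

```sql
SELECT
    DATEADD(hour, DATEDIFF(hour, 0, app.Timestamp), 0) AS HourStamp,
    SUM(app.CallsOffered) AS CallsOffered,
    SUM(app.CallsAnswered) AS CallsAnswered,
    SUM(app.CallsAnsweredAftThreshold) AS CallsAnsweredAftThreshold,
    SUM(app.CallsAnsweredDelay) AS CallsAnsweredDelay,
    MAX(app.MaxCallsAnsDelay) AS MaxCallsAnsDelay,
    MAX(app.MaxCallsAbandonedDelay) AS MaxCallsAbandonedDelay
FROM tablename app
WHERE app.Timestamp >= '2014-07-01'
  AND app.Timestamp <= '2014-07-02'
  AND app.Application IN ('1', '2', '3', '4')
GROUP BY DATEADD(hour, DATEDIFF(hour, 0, app.Timestamp), 0)
ORDER BY 1;
```

If Microsoft Query still rejects DATEADD and DATEDIFF at design time, executing the statement as a pass-through query, so that SQL Server rather than Excel parses it, is another avenue to try.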