What makes Fiddler accelerate my iOS app REST call? - wcf

My iOS app makes REST calls to my WCF web service.
The response is very slow, taking over 3 minutes.
However, when I set up Fiddler as a proxy to monitor the iOS traffic, the call finished in about 1 second.
What makes Fiddler magically accelerate the REST call from iOS?
P.S. Fiddler is set up on a Windows PC on the same network as the iOS app.
An example REST call (captured in Fiddler):
Request
GET https://xxxx.xxxx.com/Deals HTTP/1.1
Host: xxx.xxxx.com
Proxy-Connection: keep-alive
Accept-Encoding: gzip
Content-Type: application/json
Cookie: ASPXAUTH=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Connection: keep-alive
User-Agent: Natural xxxx x.x.x (iPad; iPhone OS 7.0.2; en_US)
Response
HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 891437
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/7.5
LastFetchDateTimeUTC: 2014-02-14T16:52:43.5465273Z
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Fri, 14 Feb 2014 16:52:45 GMT
The response body is a large JSON document (about 2 MB).
P.S.
Besides Fiddler, we also installed Wireshark and used it to capture traffic on the Mac while running the app in the simulator.
We see a lot of DUP ACKs; I guess that's causing TCP retransmissions.
P.S.
We also pinged the WCF web service from iOS; there is no delay.
Help!
UPDATE:
We found a clue: the response time appears to grow with the length of the body. Does that mean anything?

The Wireshark logs should give you plenty of information about what happens in each case. When Fiddler "magically" makes things faster, it's typically due to:
Better connection reuse (e.g. Fiddler may reuse connections better than the client does)
Better buffer sizes (e.g. not using tiny buffers for read/write)
Non-broken proxy determination behavior
I wrote a bit about these in this blog post.
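The buffer-size point is easy to see with a raw socket: reading a multi-megabyte response a few hundred bytes at a time multiplies the number of reads and any per-read latency. A minimal sketch of the idea in Python (example.com is a stand-in host, not the service above):

import socket, time

def fetch(host, path, bufsize):
    # Download one response, reading at most bufsize bytes per recv() call.
    s = socket.create_connection((host, 80))
    s.sendall(("GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % (path, host)).encode())
    start = time.monotonic()
    total = 0
    while True:
        chunk = s.recv(bufsize)
        if not chunk:
            break
        total += len(chunk)
    s.close()
    return total, time.monotonic() - start

# A tiny buffer means many more reads and more chances to stall;
# a proxy like Fiddler reads from the server with large buffers.
for size in (256, 65536):
    n, secs = fetch("example.com", "/", size)
    print("buffer=%6d: %d bytes in %.3fs" % (size, n, secs))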

We solved this problem by proving that the server itself was a bad one. We deployed the same service on another VM and it works. It must be a broken network card.

Related

CouchDB Proxy Authentication Doesn't work

When I send an HTTP request to my CouchDB server as shown in the docs here (CouchDB Proxy Authentication), it doesn't give the response shown in the docs, just empty data. What am I doing wrong?
Also, am I able to start a session with this Proxy Auth? If I try POST /_session, I get a 500 error code.
GET /_session HTTP/1.1
Host: 127.0.0.2:5984
User-Agent: curl/7.51.0
Accept: application/json
Content-Type: application/json; charset=utf-8
X-Auth-CouchDB-UserName: john
X-Auth-CouchDB-Roles: blogger
< HTTP/1.1 200 OK
< Cache-Control: must-revalidate
< Content-Length: 132
< Content-Type: application/json
< Date: Sun, 06 Nov 2016 01:10:58 GMT
< Server: CouchDB/2.0.0 (Erlang OTP/17)
<
{"ok":true,"userCtx":{"name":null,"roles":[]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"]}}
I found in the CouchDB issue tracker that Proxy Authentication is broken in version 2.0.0. Either that, or the docs aren't updated to indicate that it only works with clusters or something. I changed back to version 1.6.1 and everything works fine. I must say that the documentation for how Proxy Authentication works is very poor.
How it works: your third-party authentication server needs to know the "[couch_httpd_auth] secret", and when a client authenticates you generate an HMAC-SHA1 token from the username and the secret. Then, on any HTTP request you make from the client to the CouchDB server, if you include all of the headers:
X-Auth-CouchDB-Roles
X-Auth-CouchDB-UserName
X-Auth-CouchDB-Token
that request will be authenticated as that user.
Also, it is not mentioned in the docs, but a POST to the /_session API using these headers does nothing.
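For reference, a minimal sketch of the token generation in Python (the secret value is a placeholder and must match the [couch_httpd_auth] secret on the server):

import hmac, hashlib

secret = b"92de07df7e7a76cfe39066b38a1d0fca"   # placeholder; must match the server's secret
username = "john"

# The token is the hex HMAC-SHA1 of the username, keyed with the secret.
token = hmac.new(secret, username.encode("utf-8"), hashlib.sha1).hexdigest()

headers = {
    "X-Auth-CouchDB-UserName": username,
    "X-Auth-CouchDB-Roles": "blogger",
    "X-Auth-CouchDB-Token": token,
}
print(headers)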
It's not the Proxy Authentication itself that is broken in CouchDB 2.0; it's just that in the current release there's no way to configure the authentication handlers as there was in the old 1.6 days.
There are some patches mentioned in the issue tracker which add proxy authentication to the list of authentication handlers. Furthermore there was a pull request which was accepted and merged which brings back configurability to CouchDB 2.0.
However in order to take advantage of those I'm afraid you either have to wait until the next release, or build CouchDB 2.0 yourself from the sources.
Proxy authentication is fixed as of CouchDB 2.1.1. The latest (>2.1.1) documentation shows how to configure proxy authentication again, along with the important proxy_use_secret option.
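The configuration in the current docs looks roughly like the following (the secret is a placeholder, and the exact handler list should be checked against the docs for your version):

[chttpd]
authentication_handlers = {chttpd_auth, proxy_authentication_handler}, {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}

[couch_httpd_auth]
proxy_use_secret = true
secret = 92de07df7e7a76cfe39066b38a1d0fca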

Possible to emulate flow control in the HTTP protocol? (raw HTTP)

I have hundreds of hardware devices at customer sites which need to send HTTP data through a telnet interface.
The destination is an Apache 2 web server with a PHP script waiting for the data.
This is already working; however, we found that the hardware involved cannot handle hardware flow control, which means that once the buffer fills (around 250 bytes) it can overflow, resulting in data corruption.
Fixing the hardware flow control is not an option: the "modem" firmware is closed and can no longer be modified by the vendor, as it's quite old hardware.
Normally we'd use this:
POST / HTTP/1.1
Host: api.server
User-Agent: P8
Content-Type: application/x-www-form-urlencoded
Accept: */*
Content-Length: 767
VARIABLE=URLENCODED_DATA(total length 767 bytes)
This would work perfectly fine with flow control, but in my case the 767 bytes are too much.
After around 200 bytes, buffers get overwritten and some bytes are lost.
The only way we currently get it working is by adding a delay when sending to the "modem" so it can empty its buffers in time. However, in the field this will not work, due to unstable internet connections with unpredictable timings.
I am not an expert in HTTP; I just hope it is possible to fragment a request.
I thought about using "Connection: keep-alive" or something similar.
My main question:
Is there a way to send POST data ($VARIABLE) to an Apache 2 server in smaller chunks, in a way that makes the HTTP server combine them into one stream internally?
Pseudo code:
POST / HTTP/1.1
Host: api.server
User-Agent: P8
Content-Type: application/x-www-form-urlencoded
Accept: */*
Content-Length: 400
Connection: keep-alive
VARIABLE=URLENCODED_DATA(200 bytes)
END\n\n
Once received, the server responds in the TCP stream with "OK".
Next chunk is sent:
VARIABLE=URLENCODED_DATA(200 bytes)
Connection is closed.
Once 400 bytes have been reached, the process is complete and Apache forwards VARIABLE to the PHP script's POST input.
So, like HTTP flow control within an open TCP connection.
Maybe there is an HTTP feature built for that purpose, or something that can be "ab"used to act that way; keep-alive was just a guess.
If current HTTP protocols do not have such a feature, the only way I can think of to solve my issue is to implement flow control on the PHP side.
I hope for a better way than that though.
Update:
Meanwhile I found two interesting parameters:
Expect: 100-continue
Transfer-Encoding: chunked
What I would need is a mix of both.
A chunked transfer encoding which expects a 100-continue after each chunk!
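Chunked transfer encoding alone already gets close to this: the client may send the body in arbitrarily small, self-delimiting pieces, and Apache reassembles them into one request body before PHP sees it. A minimal sketch in Python (api.server and the pacing delay are placeholders):

import socket, time

HOST, PATH = "api.server", "/"          # placeholder host from the example above
body = b"VARIABLE=" + b"x" * 758        # 767 bytes total, as in the example

s = socket.create_connection((HOST, 80))
s.sendall((
    "POST %s HTTP/1.1\r\n"
    "Host: %s\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Transfer-Encoding: chunked\r\n"
    "\r\n" % (PATH, HOST)
).encode())

# Send the body 200 bytes at a time; each chunk is prefixed with its
# size in hex and terminated with CRLF.
for i in range(0, len(body), 200):
    chunk = body[i:i + 200]
    s.sendall(("%x\r\n" % len(chunk)).encode() + chunk + b"\r\n")
    time.sleep(0.1)                     # crude pacing so the modem can drain its buffer

s.sendall(b"0\r\n\r\n")                 # zero-length chunk terminates the body
print(s.recv(4096).decode(errors="replace"))
s.close()

Note this gives pacing but no per-chunk acknowledgement: HTTP has no standard ACK between chunks, which is exactly the missing piece the update above asks for.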
This is a very interesting question, and it really has nothing to do with HTTP but with TCP.
The way to solve this is to use an intermediary proxy that takes care of spoon-feeding your devices. Ideally, this proxy will set the window size on its TCP ACKs to match the device's buffer size; that window will shrink to zero when the device cannot handle any more. If you do this, you will be using TCP's built-in flow control and solving the problem in a simple way.
Another thing you can do is keep this entirely in the application layer and have the intermediary proxy buffer all of the data from the response. For most normal HTTP responses this will be okay.
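A sketch of the second, application-layer variant in Python (hosts, ports, slice size, and pacing are all assumptions; a production version would rely on TCP window sizing rather than sleep-based pacing):

import socket, time

DEVICE_BUF = 200                        # what the device can absorb at once (assumption)
LISTEN = ("0.0.0.0", 8080)
UPSTREAM = ("api.server", 80)           # placeholder origin server

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(LISTEN)
srv.listen(1)

while True:
    dev, _ = srv.accept()
    request = dev.recv(65536)           # assume the whole request arrives in one read
    up = socket.create_connection(UPSTREAM)
    up.sendall(request)
    response = b""
    while True:                         # buffer the entire upstream response locally
        chunk = up.recv(65536)
        if not chunk:
            break
        response += chunk
    up.close()
    for i in range(0, len(response), DEVICE_BUF):
        dev.sendall(response[i:i + DEVICE_BUF])
        time.sleep(0.05)                # dribble it out at a pace the device can take
    dev.close()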

OPTIONS preflight request doesn't reach IIS hosted service

I've got a localhost website and an IIS (7.5) hosted WCF service implemented like this, with the Visual Studio debugger attached. Whenever I make a CORS request to my service, I get the following 404 response along with the standard ASP.NET error page:
OPTIONS http://192.168.200.44/api/values HTTP/1.1
Host: 192.168.200.44
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:23.0) Gecko/20100101 Firefox/23.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Origin: http://localhost:51946
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Response >>
HTTP/1.1 404 Not Found
Content-Type: text/html
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Thu, 26 Sep 2013 09:38:49 GMT
Content-Length: 1245
Originally I was receiving erroneous 200 responses which didn't hit any of my WCF application's numerous breakpoints, so the requests were being handled by IIS itself. I then followed this and this SO post, removed the OPTIONSVerbHandler from the site, and added "OPTIONS, PUT, POST & DELETE" as allowed HTTP verbs in the IIS Manager UI (rather than web.config), which got me to my 404 message. I've looked into WebDAV, which is highlighted as a problem, but I haven't disabled/removed it because I don't know how, and I have read that it only affects "PUT & DELETE" operations, whereas my "POST" ops are also failing.
GET requests work as expected, so the service definitely exists and works in IIS; just the OPTIONS preflight isn't reaching my service.
TY
Although the OP managed to find a solution that worked in their specific case, the same issue has been driving me round the bend for the past few hours.
In my case I knew it wasn't a code / web.config issue as the same code ran perfectly fine on my local IIS, but not on the production version.
After reading numerous posts and trying everything they suggested (including the OP's answer) I actually got round to looking at the IIS logs (should have done that first!) and found this:
2015-03-09 14:43:39 <IP Addr> GET /Rejected-By-UrlScan ~/Service.svc/H2dbImportTxtFile <Port> - <IP Addr>
This led me to:
http://www.pressthered.com/rejected-by-urlscan_404_errors/
And ultimately to the [AllowVerbs] section of:
\system32\inetsrv\urlscan\UrlScan.ini
I just needed to add OPTIONS.
And now the OPTIONS request gets through, and everything works.
Hope that helps someone else who is being driven mad.
EDIT:
As stated in another SO post (I'm afraid I've lost it now), as mine was a WCF service I also had to change the interface of the listening service from:
[WebInvoke(Method = "POST", etc]
to
[WebInvoke(Method = "*", etc]
In Handler Mappings in the IIS GUI, I had to restore the defaults to get back the OPTIONSVerbHandler I'd previously deleted. Once done, I edited it to be an IsapiModule rather than a ProtocolSupportModule; you also need to set the executable to "%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_isapi.dll". This allowed me to make POST requests in addition to GET requests.
PUT and DELETE were still reporting 404 until I re-enabled the OPTIONSVerbHandler; then they began reporting 200 (OK), though I couldn't see them hitting my web services when I attached a debugger.
There are then three handlers called "ExtensionlessUrlHandler-*": two for 32/64-bit and one for Integrated. I only changed the 32/64-bit variants (running on a classic app pool), which worked for me, but anyone using an integrated app pool may need to experiment here.
Anyway, I edited the 32/64-bit variants: under the "Request Restrictions" button, on the Verbs tab, I added "PUT,DELETE" so it read "GET,HEAD,POST,DEBUG,PUT,DELETE", and now everything in my CORS-ready service is working.
I had <remove name="OPTIONSVerbHandler" /> in the handlers section of my web.config. I commented it out to fix the problem!
<system.webServer>
<handlers>
<!--<remove name="OPTIONSVerbHandler" />-->
</handlers>
</system.webServer>

NSURLConnection and authenticating to web services behind SSL?

I'm currently trying to connect to a web service at https://xxx.xxx.xx/myapp
It has anonymous access and SSL enabled for testing purposes at the moment.
While trying to connect over the 3G network, I get status 403: "Access denied. You do not have permission to view this directory or page using the credentials that you supplied."
I get these headers when connecting to the web service locally:
Headers
Request URL:https://xxx.xxx.xx/myapp
Request Method:GET
Status Code:200 OK
Request Headers
GET /myapp/ HTTP/1.1
Host: xxx.xxx.xxx
Connection: keep-alive
Authorization: Basic amViZTAyOlE3ZSVNNHNB
User-Agent: Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: sv-SE,sv;q=0.8,en-US;q=0.6,en;q=0.4
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Response Headers
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Server: Microsoft-IIS/7.0
X-Powered-By: ASP.NET
Date: Thu, 16 Feb 2012 12:26:13 GMT
Content-Length: 622
But when accessing it from outside the local network, we get the big ol' 403, which in turn wants credentials to grant the user access to the web service.
However, I've tried using the ASIHTTPRequest library without success; that project has been abandoned, and its authors suggest going back to NSURLConnection.
And I have no clue where to start, not even which direction to take.
- (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
The above NSURLConnection delegate method doesn't even trigger, so I have no idea whatsoever how to authenticate myself.
All I get is the parsed result of the XML elements of the 403 page.
I seriously need help here, please!
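Before digging into NSURLConnection, it can help to replay the same request from outside the LAN to see whether the 403 involves the client at all; a minimal sketch with the Python requests package (URL and credentials are placeholders):

import requests

resp = requests.get(
    "https://xxx.xxx.xx/myapp",
    auth=("username", "password"),   # the same Basic credentials the browser sent
    verify=False,                    # assumption: the test box uses a self-signed certificate
)
# If this also returns 403, the problem is server-side configuration,
# not the app's authentication handling.
print(resp.status_code, resp.reason)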
This was all just a major f-up.
The site had SSL required and enabled, and setting "require SSL" on the virtual directories does some kind of super-duper meta-blocking.
So, by disabling "require SSL" on the virtual directories, it runs over SSL and no longer blocks 3G access.

Terrible Apache Bench results on Custom CMS

Please note: this is not a complaint about a shoddy CMS.
I was just toying with Apache Bench and got terrible results with our custom CMS; more exactly, I got:
Requests per second: 0.37 [#/sec] (mean)
When i run another test with a plain php file i got:
Requests per second: 4786.07 [#/sec] (mean)
Another test with a previous version of the CMS:
Requests per second: 6068.66 [#/sec] (mean)
The website(s) are working fine with no problems detected; Google's Webmaster Tools reports our sites as faster than 80% of pages, which is fine, I think.
The test was:
ab -t 30 -c 10 http://example.com/
Maybe some kind of Apache problem? Bad .htaccess config, or similar?
Update:
I just ran a simple test with sockets and the results are similar. The page loads very, very slowly. If I run my script against another website, everything is fine.
Also, there's a small hint of a chunk-length problem. (Bad Apache headers, or line endings?)
The site is gzipped, and with verbose logging turned on I see these lines in the response:
LOG: Response code = 200
LOG: header received:
HTTP/1.1 200 OK
Date: Tue, 04 Oct 2011 13:10:49 GMT
Server: Apache
Set-Cookie: PHPSESSID=ibnfoqir9fee2koirfl5mhm633; path=/
Expires: Sat, 26 Jul 1997 05:00:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Cache-Control: post-check=0, pre-check=0
Vary: Accept-Encoding
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
2ef6
It always appears at the same place, in the middle of the HTML source, and then <!DOCTYPE HTML> appears again.
Please, help.
Update #2:
Just checked my HTTP headers with Rex Swain's HTTP Viewer and got these results:
HTTP/1.1·200·OK(CR)(LF)
Date:·Wed,·05·Oct·2011·08:33:51·GMT(CR)(LF)
Server:·Apache(CR)(LF)
Set-Cookie:·PHPSESSID=n88g3qcvv9p6irm1fo0qfse8m2;·path=/(CR)(LF)
Expires:·Sat,·26·Jul·1997·05:00:00·GMT(CR)(LF)
Cache-Control:·no-store,·no-cache,·must-revalidate(CR)(LF)
Pragma:·no-cache(CR)(LF)
Cache-Control:·post-check=0,·pre-check=0(CR)(LF)
Vary:·Accept-Encoding(CR)(LF)
Connection:·close(CR)(LF)
Transfer-Encoding:·chunked(CR)(LF)
Content-Type:·text/html;·charset=UTF-8(CR)(LF)
(CR)(LF)
Do you notice anything unusual?
If it works well with ordinary web browsers (as you mentioned in the comments), then the CMS handles requests from Apache Bench differently.
A quick checklist:
AFAIK, Apache Bench just sends simple requests without any cookie handling, so try setting -C with a valid cookie (copy the values from a web browser).
Try to send exactly the same headers to the CMS as the web browser sends. Save a dump of a valid request with netcat, HttpFox, or a packet sniffer, and set the missing headers with -H (see the sketch after this list).
Profile the CMS on the server while you're sending it a request with Apache Bench; maybe you'll find the bottleneck. Two poor-man's error_log calls with a timestamp in the first and last lines of index.php (or the tested script's entry point) can show how fast the PHP script is and help calculate the overhead of the Apache HTTP Server and the network.
If you run socket tests and browser tests from different machines, it could be a DNS issue (turn off HostnameLookups in Apache). Try to run them from the same machine.
Try ab -k ... or ab -H "Connection: close" ....
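As an alternative to ab's -C and -H flags, here is a sketch of the same replay idea in Python (the URL, user agent, and session id are placeholders to be copied from a real browser session):

import time
import urllib.request

req = urllib.request.Request(
    "http://example.com/",                             # the tested site
    headers={
        "User-Agent": "Mozilla/5.0",                   # browser-like UA
        "Accept-Encoding": "gzip",
        "Cookie": "PHPSESSID=paste-a-real-session-id", # reuse the browser's session
    },
)
start = time.monotonic()
urllib.request.urlopen(req).read()
print("%.2fs with browser-like headers" % (time.monotonic() - start))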
My guess is that the CMS does some costly initialization when it creates the session, and that happens when it processes the first request. Since Apache Bench does not send the cookies back, the CMS creates a new session for every request, and that causes the slow answers.
A second guess is that the CMS handles the incoming HTTP headers differently, and the headers sent by Apache Bench (or the lack of them) trigger some costly/slow processing. That looks more plausible given the report from Google's Webmaster Tools.
Apache Bench sends an HTTP 1.0 request, for example:
GET / HTTP/1.0
Host: localhost:9100
User-Agent: ApacheBench/2.3
Accept: */*
It looks to me like your server does not send any HTTP header about keep-alive settings, but assumes the client uses keep-alive when the client uses HTTP 1.0. That is not RFC-compliant behaviour:
From RFC 2616, 19.6.2 Compatibility with HTTP/1.0 Persistent Connections:
Some clients and servers might wish to be compatible with some
previous implementations of persistent connections in HTTP/1.0
clients and servers. Persistent connections in HTTP/1.0 are
explicitly negotiated as they are not the default behavior.
By default, Apache Bench doesn't use keep-alive, so after the response arrives it waits for the socket to close. The server closes it after 15 seconds idle. Downloading the main page with wget also takes 15 seconds; wget also uses HTTP 1.0 for the request.
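You can test this hypothesis with a raw socket: time how long a bare HTTP 1.0 request takes until the server closes the connection, with and without an explicit Connection: close. A sketch in Python (example.com stands in for the affected host):

import socket, time

def time_request(host, extra_header=b""):
    # Send a minimal HTTP 1.0 request and measure the time until the
    # server closes the connection (when ab considers the response done).
    s = socket.create_connection((host, 80))
    s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n" + extra_header + b"\r\n")
    start = time.monotonic()
    while s.recv(65536):
        pass                      # drain until the server closes the socket
    return time.monotonic() - start

HOST = "example.com"              # stand-in for the affected CMS host
print("bare HTTP 1.0:     %.2fs" % time_request(HOST))
print("Connection: close: %.2fs" % time_request(HOST, b"Connection: close\r\n"))

On the buggy server, the first call should hang for the roughly 15-second idle timeout while the second returns immediately.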
I think it's a bug in the PHP code of the CMS, since ab works well on the same server with a plain PHP file. Anyway, you can work around it by using keep-alive connections (-k):
ab -k -t 30 -c 10 http://example.com/
or with explicitly disabling persistent connections:
ab -H "Connection: close" -t 30 -c 10 http://example.com/
but it's still a server-side issue, and your original ab command was right.
Please note that this bug probably only affects HTTP 1.0 clients (like Apache Bench and wget); users with regular browsers will not notice it.