Boost SSL client/server shutdown timing

I have both a Boost SSL client and server. Part of my testing is to use the client to send a small test file (~40K) to the server many times, using the pattern socket_connect, async_send, socket_shutdown each time the file is sent. I have noticed on both the client and the server that the ssl_socket.shutdown() call can take up to 10 milliseconds to complete. Is this typical behavior?
The interesting part is that the 10 millisecond completion time does not appear until I have executed the connect/send/shutdown pattern about 20 times.
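For reference, the pattern boils down to roughly the following sketch (synchronous calls shown for brevity; my real code uses async_send, and the host name, port, and payload are placeholders). Note that ssl_socket.shutdown() performs the TLS close_notify exchange, so its completion time includes at least one network round trip:
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <chrono>
#include <iostream>
#include <string>

int main() {
    namespace asio = boost::asio;
    namespace ssl = boost::asio::ssl;
    asio::io_context io;
    ssl::context ctx(ssl::context::tls_client);
    asio::ip::tcp::resolver resolver(io);

    for (int i = 0; i < 30; ++i) {  // repeat the connect/send/shutdown pattern
        ssl::stream<asio::ip::tcp::socket> sock(io, ctx);
        asio::connect(sock.lowest_layer(), resolver.resolve("testserver", "443"));
        sock.handshake(ssl::stream_base::client);

        std::string payload(40 * 1024, 'x');  // stand-in for the ~40K test file
        asio::write(sock, asio::buffer(payload));

        auto t0 = std::chrono::steady_clock::now();
        boost::system::error_code ec;
        sock.shutdown(ec);  // exchanges TLS close_notify alerts with the peer
        auto t1 = std::chrono::steady_clock::now();
        std::cout << "shutdown " << i << ": "
                  << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
                  << " us" << std::endl;
    }
}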

Related

Latency from JMeter between servers in the same network

I am testing an API in JMeter from a Linux load generator machine. When executed in non-GUI mode, I am seeing a latency of 35 seconds. But when I ran a ping command from the load generator to the app server, the time was just a few milliseconds.
The View Results Tree listener shows the 35-second latency.
Both servers are on the same network, so why is there so much latency?
You're looking at two different metrics.
Ping sends an ICMP packet, which just indicates success or failure in communicating between two machines.
Latency includes:
Time to establish connection
Time to send the request
Time required for the server to process the request
Time to get 1st byte of the response
In other words, latency is time to first byte; if your server needs 35 seconds to process the request, that indicates a server-side issue rather than a network issue.
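To make the difference concrete, here is a minimal sketch (Boost.Asio, with placeholder host and path) that measures latency the way JMeter defines it, time to first byte, rather than the ICMP round trip that ping measures:
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <string>

int main() {
    namespace asio = boost::asio;
    asio::io_context io;
    asio::ip::tcp::resolver resolver(io);
    asio::ip::tcp::socket sock(io);

    auto t0 = std::chrono::steady_clock::now();
    asio::connect(sock, resolver.resolve("app-server", "80"));  // connection time
    std::string req = "GET /api/status HTTP/1.1\r\n"
                      "Host: app-server\r\nConnection: close\r\n\r\n";
    asio::write(sock, asio::buffer(req));                       // request send time

    char first;                                 // server processing time ends when
    asio::read(sock, asio::buffer(&first, 1));  // the first response byte arrives
    auto ttfb = std::chrono::steady_clock::now() - t0;
    std::cout << "latency (time to first byte): "
              << std::chrono::duration_cast<std::chrono::milliseconds>(ttfb).count()
              << " ms" << std::endl;
}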
More information:
JMeter Glossary
Understanding Your Reports: Part 1 - What are KPIs?

Failure tolerance counter workaround/alternatives

I use monit to monitor my daemon with the HTTP API and restart it if needed. In addition to checking that the process is not dead, I also added an HTTP check (if failed port 80 protocol http request "/api/status") with a failure tolerance counter (for N cycles). I use the counter to avoid restarting the daemon on singular failed requests (e.g. due to high load). The problem is that the failure counter does not seem to reset after the daemon is successfully restarted. That is, consider the following scenario:
1. Monit and the daemon are started.
2. The daemon locks up (e.g. due to a software bug) and stops responding to HTTP requests.
3. Monit waits for N consecutive HTTP request failures and restarts the daemon.
4. The first monit HTTP request after the daemon restart fails again (e.g. because the daemon needs some time to come online and start serving requests).
5. Monit restarts the daemon again. Go to item 4.
This seems to be a bug, and indeed there is issue 64 (fixed) and issue 787 (open). Since the second issue has been open for a year already, I do not have much hope of it being fixed soon, so I would like to know whether there is a good workaround for this case.
While not exactly what I needed, I ended up with the following alternative:
Use a large enough value for the timeout parameter of the start program token to give the server enough time to come online. Monit does not perform the connection tests during this time.
Use the retry parameter in the if failed port clause to tolerate singular failures. Unfortunately, the retries are done immediately (after a request failure or timeout), not in the next poll cycle.
Use the for N cycles parameter to improve failure tolerance at least partially.
Basically, I have the following monitrc structure:
set daemon 5
check process server ...
  start program = "..." with timeout 60 seconds
  stop program = "..."
  if failed port 80 protocol http request "/status"
     with retry 10 and timeout 5 seconds for 5 cycles
  then restart
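If I read the monit documentation correctly, this works out as follows: with set daemon 5, monit polls every 5 seconds; a poll's HTTP check counts as failed only after its 10 immediate retries (each with a 5-second timeout) are exhausted; and the restart fires only after 5 such consecutive failed cycles, i.e. after roughly 25 seconds of sustained unresponsiveness. The 60-second start timeout then keeps monit from re-testing the daemon while it is still coming online.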

HTTP server - slow read

I am trying to simulate a slow HTTP read attack against an Apache server running on my localhost.
But it seems like the server does not complain and simply waits forever for the client to read.
This is what I do:
Request a huge file (say ~1 MB) from the HTTP server
Read the response from the server in a loop, waiting 100 seconds between successive reads
Since the file is huge and the client receive buffer is small, the server has to send the file in multiple chunks. But at the client side, I wait 100 seconds between successive reads. As a result, the server often polls the client and finds that the client's receive window size is zero, since the client has not yet read the receive buffer.
But it looks like the server does not bother to break the connection; it silently keeps polling the client. The server sends data whenever the client window size is > 0 and then goes back to waiting for the client.
I want to know whether there are any Apache config parameters I can set to break the connection from the server side after waiting some time for the client to read the data.
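For concreteness, my client logic boils down to roughly the following sketch (Boost.Asio; the file path, port, and buffer sizes are placeholders):
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

int main() {
    namespace asio = boost::asio;
    asio::io_context io;
    asio::ip::tcp::socket sock(io);
    sock.open(asio::ip::tcp::v4());
    // Shrink the kernel receive buffer before connecting, so the advertised
    // TCP window fills up almost immediately once reads stall.
    sock.set_option(asio::socket_base::receive_buffer_size(256));
    sock.connect({asio::ip::make_address("127.0.0.1"), 80});

    std::string req = "GET /big.bin HTTP/1.1\r\n"
                      "Host: localhost\r\nConnection: close\r\n\r\n";
    asio::write(sock, asio::buffer(req));

    char buf[128];
    boost::system::error_code ec;
    for (;;) {
        std::this_thread::sleep_for(std::chrono::seconds(100));  // stall between reads
        std::size_t n = sock.read_some(asio::buffer(buf), ec);
        if (ec) break;  // the server closed the connection, or the transfer ended
        std::cout << "read " << n << " bytes" << std::endl;
    }
}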
Perhaps this would be more useful to you (it is simpler and saves you time): http://ha.ckers.org/slowloris/. Slowloris is a Perl script that sends partial HTTP requests; the Apache server leaves each such connection open, making it unavailable to new users. If executed in a Linux environment (Linux does not limit threads beyond hardware capability), you can effectively tie up all open sockets and, in turn, prevent other users from accessing the server. It uses minimal bandwidth because it does not "flood" the server with requests; it simply takes the sockets hostage, slowly. You can download the script here: http://ha.ckers.org/slowloris/slowloris.pl
To prevent (well, mitigate) an attack like this, see here: https://serverfault.com/questions/32361/how-to-best-defend-against-a-slowloris-dos-attack-against-an-apache-web-server
You could also use a load-balancer or round-robin setup.
Try slowhttptest to test the slow read attack you're describing. (It can also be used to test slow sending of headers.)
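If I remember its flags correctly, a slow-read run against a local server looks something like this (double-check against slowhttptest -h for your version; the URL is a placeholder):
slowhttptest -X -u http://localhost/big.bin -c 1000
Here -X selects the slow read test and -c sets the number of connections.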

How can I increase the amount of time apache waits before timing out an HTTP request?

Occasionally when a user tries to connect to the myPHP web interface on one of our web servers, the request will time out before they're prompted to log in.
Is the timeout time configured on the server side or within their web browser?
Can you tell me how to increase the amount of time it waits before timing out when this happens?
Also, what logs can I look at to see why their request takes so long from time to time?
This happens on all browsers. They are connecting to myPHP in a LAMP configuration on CentOS 5.6.
Normally when you hit a limit on execution time with LAMP, it's actually PHP's own execution timeout that needs to be adjusted, since both Apache's default and the browsers' defaults are much higher.
Edit: There are a couple more settings of interest to avoid certain other problems regarding memory use and parsing time; they can be found at this link.
Typically speaking, if PHP is timing out on the defaults, you have larger problems than the timeout itself (problems connecting to the server itself, or poor coding with long loops).
Joachim is right about the PHP timeouts, though: you'll need to edit php.ini to increase PHP's own timeout before troubleshooting anything else on the server. However, I would suggest trying to find out why people are hitting the timeout in the first place.
max_execution_time = 30
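As a sketch, assuming a stock php.ini (30 seconds is the shipped default shown above), raise the value and reload Apache; 120 here is an arbitrary placeholder:
max_execution_time = 120
A script can also extend its own limit at runtime with PHP's set_time_limit() function.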

Server benchmarking: how many http requests can the server process?

I am creating an application which sends measurement data every 200 ms from a mobile phone to the server. The data has to be processed in real time, but I noticed that the difference between the send time and the time processing starts keeps growing, so I have to find the point where the requests get stuck.
I am sending the requests as an HttpWebRequest (http://testserver/submitdata?ax=value1&ay=value2&az=value3), and on the server I am using a RESTful service created in WCF.
Anyway, is there any benchmarking tool that could test how many requests the server can handle, or any other practical way to determine the maximum number of requests per second it can handle without causing delay?
Thanks!
The Apache benchmarking tool (ab) might be a good way to do this (it works with any HTTP server, not just Apache).
ab is indeed a decent solution. In its simplest form you can run
ab -c 10 -n 100 http://my.page.com/
in order to call the server 100 times, keeping 10 requests running concurrently. I have an example at my blog.
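When the run finishes, ab prints a "Requests per second" figure plus a percentile table ("Percentage of the requests served within a certain time"); raising -c until those percentiles degrade gives a rough estimate of the request rate your server can sustain without queueing delay.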