Does `phantomjs` `resourceTimeout` actually work?

I use PhantomJS with a slow proxy, and I set resourceTimeout so that PhantomJS waits for resources to finish downloading instead of failing.
I set resourceTimeout = 120000 (2 minutes). However, PhantomJS fires onResourceError after only a few seconds of waiting for the proxy (I am 100% sure the proxy doesn't interrupt the connection; PhantomJS does).
So, given that PhantomJS fires onResourceError long before the configured timeout could apply, the question is: does resourceTimeout actually work, and what does this setting actually mean?
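For reference, here is a minimal sketch of how resourceTimeout is usually wired up in a PhantomJS script (the URL is a placeholder). Note that onResourceTimeout fires only when the timeout elapses, while onResourceError also fires for network-level failures (such as a refused proxy connection) that can occur much earlier:

var page = require('webpage').create();

// per-resource timeout in milliseconds; must be set before page.open()
page.settings.resourceTimeout = 120000;

page.onResourceTimeout = function (request) {
    console.log('Timed out: ' + request.url);
};

page.onResourceError = function (resourceError) {
    // also fires for failures unrelated to the timeout
    console.log('Error ' + resourceError.errorCode + ': ' + resourceError.errorString);
};

page.open('http://example.com/', function (status) {
    console.log('Load status: ' + status);
    phantom.exit();
});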

Related

Apache Request/Response Too Slow

I have an Apache server with 16 GB of RAM. The script cap.php returns a very small chunk of data (500 B). It opens a MySQL connection and makes a simple query.
However, the response from the server is, in my opinion, too slow.
I attach a screenshot of the Developer Tools panel in Chrome.
Besides SSL and the TTFB, there is a strange 300 ms delay (Stalled).
If I try a curl from the web server:
curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -k -H 'miyazaki' https://127.0.0.1/ui/cap.php
Lookup time: 0.000
Connect time: 0.000
PreXfer time: 0.182
StartXfer time: 0.266
Total time: 0.266
Does anyone know what that is?
Eventually, I found that when you use SSL it really does matter to switch on the KeepAlive directive in Apache. See the picture below.
According to the Chrome documentation:
Stalled/Blocking
Time the request spent waiting before it could be sent. This time is inclusive of any time spent in proxy negotiation. Additionally, this time will include when the browser is waiting for an already established connection to become available for re-use, obeying Chrome's maximum six TCP connection per origin rule.
So this appears to be a client issue with Chrome talking to the network rather than a server config issue. As you are only making one request, I think we can rule out the TCP limit per origin (unless you have lots of other tabs using up these connections), so I would guess either limitations on your PC (network card, RAM, CPU) or infrastructure issues (e.g. you connect via a proxy and it takes time to set up that connection).
Your curl request doesn't seem to show this delay, as it has just a 0.182 wait time to send the request (easily explained by the HTTPS negotiation) and then a 0.266 total time to download (including the 0.182). That compares with 0.700 seconds when using Chrome, so I don't understand why you say "total time is similar" when to me it clearly is not.
Finally, I do not understand your follow-up answer. It looks to me like you have made a request, presumably shortly after another recent request, as it has skipped the whole network connection stage (including any grey stalling, blue DNS lookup, orange initial connection and purple HTTPS connection). So of course this is quicker. But it's not comparing like for like with the first screenshot in your question, and it is not addressing your question.
But yes, you absolutely should be using keep-alives (they are on by default in most web servers, so it usually takes extra effort to turn them off) and HTTPS session resumption (not on by default unless you explicitly add it to your HTTPS config) to benefit any additional requests sent shortly after the first. But these will not benefit the first connection of the session.
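For concreteness, a minimal sketch of the Apache directives involved (values and the cache path are illustrative, not tuned recommendations):

# keep-alives: reuse the TCP connection for requests sent shortly after the first
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5

# mod_ssl session cache: enables TLS session resumption so a returning client
# can skip the full handshake
SSLSessionCache shmcb:/var/run/apache2/ssl_scache(512000)
SSLSessionCacheTimeout 300

As noted above, neither directive helps the very first request of a session; they only cheapen the follow-up requests.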

What happens AFTER Apache says "Script timed out before returning headers" to the running script?

I have a Perl web app served by Apache httpd using plain mod_cgi or optionally mod_perl with PerlHandler ModPerl::Registry. Recently the app encountered the error Script timed out before returning headers on some invocations and behaved differently afterwards: while some requests seemed to be processed successfully in the background after httpd sent status 504 to the client, others didn't.
So how exactly does httpd behave AFTER it has reached its configured timeout and sent the error to the client? The request/response cycle is finished at that point, so I guess things like KeepAlive come into play to decide whether the TCP connection stays alive or not. But what happens to the running script, and in which environment, e.g. mod_cgi vs. mod_perl?
Especially with mod_cgi, where a new process is started for each request, I would have guessed that httpd simply keeps the processes running. Because all our Perl files have a shebang, I'm not even sure whether httpd is able to track those processes, and whether it does so. That could be completely different with mod_perl, because in that case httpd is aware of the interpreters and what they are doing. In fact, the same operation that timed out using plain mod_cgi succeeded using mod_perl without any timeout, but even with a timeout under mod_cgi at least one request succeeded afterwards as well.
I find this question interesting for runtimes other than Perl as well, because they share the concept of plain mod_cgi vs. some persistent runtime embedded into the httpd processes or running as external daemons.
So, my question is NOT about how to make the error message go away. Instead, I want to understand how httpd behaves AFTER the error occurs, because I don't seem to find much information on that topic. Everything out there is about increasing configuration values and trying to avoid the problem in the first place, which is fine, but not what I need to know right now.
Thanks!
Both mod_cgi and mod_cgid set a cleanup function on the request scope to kill the child process, but they do it in slightly different ways. This happens shortly after the timeout is reported (allowing a little time for mod_cgi to return control, the error response to be written, the request to be logged, etc.).
mod_cgi uses a core facility in httpd that sends SIGTERM, sleeps for 3 seconds, then sends SIGKILL.
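As an illustration only (this is not the actual httpd source), the terminate-then-kill pattern described above looks roughly like this in C:

#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of the cleanup httpd performs on a timed-out CGI child:
 * ask politely first, then force it after a short grace period. */
static void cleanup_cgi_child(pid_t pid)
{
    kill(pid, SIGTERM);                   /* polite request to exit */
    sleep(3);                             /* grace period */
    if (waitpid(pid, NULL, WNOHANG) == 0) /* child still running? */
        kill(pid, SIGKILL);               /* force termination */
}

So under mod_cgi a timed-out script does not keep running indefinitely in the background; it gets at most a few seconds after the 504 is sent before it is killed.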

ORA-29273: HTTP request failed intermittent error using the utl_http package

I'm using the utl_http package to make HTTP GET requests to an IIS site on the same server (local) as Oracle. Sometimes it works and I get the response, but more often than not it hangs for about 15 seconds and then I get this error:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1722
ORA-29263: HTTP protocol error
As a test, I've got a small static text file in the IIS site, so this is how I'm testing it:
select utl_http.request('http://domain.com/test.txt') from dual
I get the same problem if I run it in Oracle Apex instead of directly on the database.
The other thing I've tried is to create a package of my own that does the HTTP request using the long-form utl_http.begin_request() method instead of the utl_http.request() shortcut. This gives exactly the same problem (works sometimes but mostly errors, with the same error).
The pattern I'm seeing is that if I wait a while and then try, it works for the first 2-10 times, and then it begins erroring. When it works, I get the response instantly; when it errors, there is always the delay before the error.
If I request the text file URL (or any other resource in the site) using a remote web browser then I get the correct response every time.
I have tried setting a timeout like below, but it doesn't have any effect. For example, instead of timing out after 3 seconds, it continues for 10 or 15 seconds before the error is shown.
UTL_HTTP.set_transfer_timeout(3);
I think I can rule out ACLs because it works sometimes.
Does anyone know what might cause this behaviour?
Possible reasons:
-> You may have a problem with your TNS listener.
From a command prompt, try running TNSPING service_name several times in quick succession and check whether some of the attempts fail.
I once had a similar problem; reconfiguring the TNS listener fixed it.
There should also be an option to put an IP address in the TNS listener definition, which sometimes solves these kinds of problems.
-> IIS problem.
Read about SET_PERSISTENT_CONN_SUPPORT Procedure:
https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_http.htm#i1027673
Using: utl_http.set_persistent_conn_support(true, 30);
Could you be exceeding the limit of concurrent HTTP connections? I vaguely remember running into a similar problem when I forgot to close the HTTP connection.
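If leaked connections are the culprit, one way to rule that out is to use the long-form API and make sure the response is always closed. A sketch, using the same test URL and timeout as the question:

DECLARE
  l_req  utl_http.req;
  l_resp utl_http.resp;
  l_line VARCHAR2(32767);
BEGIN
  utl_http.set_transfer_timeout(3);
  l_req  := utl_http.begin_request('http://domain.com/test.txt');
  l_resp := utl_http.get_response(l_req);
  BEGIN
    LOOP
      utl_http.read_text(l_resp, l_line, 32767);
      dbms_output.put_line(l_line);
    END LOOP;
  EXCEPTION
    WHEN utl_http.end_of_body THEN
      utl_http.end_response(l_resp);  -- always release the connection
  END;
END;
/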

jmeter hangs up and won't return

I am running 340 concurrent users to load test a server using JMeter.
But in most cases JMeter hangs and won't return; even if I try to close the connection it just hangs, and eventually I have to close the application.
Any idea how to check what is holding the requests, how to inspect the requests sent by JMeter, and how to find the bottleneck?
Got the following message on stopping the threads:
Shutting down thread, please be patient
I've hit this several times over the past few years. In each of my cases (yours may differ) the issue was with the load balancer (F5) I was sending my traffic through. Basically a property called OneConnect was holding the connections in a TIME_WAIT state and never killing the connection.
Run a packet capture tool like Wireshark and see what's happening with the requests.
Try distributed testing; 340 concurrent users is not a big deal, but you can still try it and see whether that decreases your pain. Also take a look at the following link:
http://jmeter.apache.org/usermanual/best-practices.html#lean_mean
First check that your script is OK with one user.
Ensure you use assertions.
Then run your test following JMeter best practices:
no GUI
no costly listeners
You should then be able to see the longest requests in the CSV output and be able to fix your issue.
I also encountered this problem when I ran JMeter on my laptop (Core 2 Duo 1.5 GHz); it always hung in the middle of processing. I tried running it on another PC that is more powerful than my laptop, and it now works smoothly. So JMeter runs more effectively if your PC or laptop has better specs.
Note: It is also advisable to run JMeter in non-GUI mode.
Example of running JMeter on a Linux box:
$ ./jmeter -t test.jmx -n -l /Users/home/test.jtl
I had the
one or more test threads won't exit
message because a firewall was blocking some requests. So I had to wait out the firewall's timeout for all the blocked requests... then it returned.
You are probably getting this error because the JVM is not capable of running so many threads. If you take a look at your terminal, you will see the exception:
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread. See log file for details.
You can solve this by doing remote (distributed) testing and having multiple load generators running instead of one.
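Before moving to distributed testing, you can also give the JVM more headroom. A sketch using the JVM_ARGS hook honoured by the stock jmeter startup script (values are illustrative, not recommendations):

# a larger heap plus a smaller per-thread stack helps when the JVM reports
# "unable to create new native thread"
JVM_ARGS="-Xms1g -Xmx4g -Xss256k" ./jmeter -n -t test.jmx -l results.jtl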

Apache taking 20sec to start running CGI script

Any ideas as to why my Apache/2.0.49 server always waits 20 seconds between receiving a request for a cgi-bin script and starting to run that script?
The server responds immediately to normal HTTP requests that only use static files, but always takes 20 seconds to respond to cgi-bin requests. I've used tcpdump to time the arrival of the request and printed the time at the beginning of the script to determine that the delay is between those two events.
I can't see anything in the configuration that relates to 20 seconds. The server runs Red Hat Linux 3.3.3-7, and I'm pretty sure that it used to respond to CGI scripts immediately, but I'm unsure when it started being slow and what might have changed to cause that.
Thanks in advance for any help/suggestions you may be able to provide.
We had a similar problem with our Apache server running PHP, but only with a specific client. In the end it turned out that the PHP code was trying to fetch a file from a remote server but didn't have connectivity; after 20 seconds PHP "gave up" and the flow completed. You should search your code for something similar.
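If you can reproduce the delay, one way to confirm a blocked outbound call like that is to attach strace to the httpd child handling the request and watch where the 20 seconds go (the PID is a placeholder):

# look for a connect() or DNS lookup that sits idle for ~20 seconds
# before the CGI script is executed
strace -f -tt -e trace=network,process -p <httpd_pid>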