Not able to configure readTimeout to be more than 60 seconds in Karate

One of my APIs takes more than 2 minutes to respond (I know that's a bad response time) and I am using Karate to test it. I tried to increase the default 30-second timeout with * configure readTimeout = 90000, but it still only waits for 60 seconds. I am using Karate v0.7.0. Any suggestions?

You can try switching from karate-apache to karate-jersey or vice versa. Also try the other timeout setting, connectTimeout. Otherwise, upgrade to 0.9.1 in case this bug-fix is involved: https://github.com/intuit/karate/issues/586

Related

Karate - "Multi threaded access requested" issue

I have 100+ tests covered in 25+ feature files, and my karate-config.js has 3 "karate.callSingle" invocations, as below.
config.weatherParams = karate.callSingle(
"file:src/test/java/utils/AvailableForecasts.feature",
config
);
config.routingParams = karate.callSingle(
"file:src/test/java/utils/CalculationInput.feature",
config
);
config.vesselParams = karate.callSingle(
"file:src/test/java/utils/VesselStatus.feature",
config
);
Same issue when I use classpath inside callSingle.
When I run all the tests at once with parallel (tried randomly 1-100 threads) enabled, I get the following error:
org.graalvm.polyglot.PolyglotException: Multi threaded access requested by thread Thread[pool-2-thread-8,5,main] but is not allowed for language(s) js.
- com.oracle.truffle.polyglot.PolyglotEngineException.illegalState(PolyglotEngineException.java:132)
- com.oracle.truffle.polyglot.PolyglotContextImpl.throwDeniedThreadAccess(PolyglotContextImpl.java:727)
- com.oracle.truffle.polyglot.PolyglotContextImpl.checkAllThreadAccesses(PolyglotContextImpl.java:627)
- com.oracle.truffle.polyglot.PolyglotContextImpl.enterThreadChanged(PolyglotContextImpl.java:526)
- com.oracle.truffle.polyglot.PolyglotEngineImpl.enter(PolyglotEngineImpl.java:1857)
- com.oracle.truffle.polyglot.HostToGuestRootNode.execute(HostToGuestRootNode.java:104)
- com.oracle.truffle.polyglot.PolyglotMap.entrySet(PolyglotMap.java:119)
After playing around with multiple combinations, surprisingly, when I have only 2 "callSingle" invocations in karate-config.js (commenting out VesselStatus.feature), it works fine.
All 3 "callSingle" invocations call 3 different services and set variables that the other tests need, so all 3 are critical.
Is there a way we can re-optimize or take a different approach to avoid the above issue?
This is a known issue that should be fixed in 1.1.0.RC2
Details here: https://github.com/intuit/karate/issues/1558
It would be good if you could confirm.
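For context, the kind of parallel run that triggers this is normally launched from a JUnit runner along the lines below. This is only a sketch based on the standard Karate 1.x Runner API; the classpath, tag and thread count are illustrative rather than taken from the question.
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ParallelRunnerTest {
    @Test
    void runAllFeaturesInParallel() {
        // every worker thread evaluates karate-config.js, which is where the
        // "Multi threaded access requested" error surfaces on affected versions
        Results results = Runner.path("classpath:features")
                .tags("~@ignore")
                .parallel(20); // thread count is illustrative
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}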
I faced this issue in my Karate implementation, @peter-thomas. I have an easy workaround, since we know that the GraalVM JS engine doesn't support multi-threaded access to karate-config.js.
The workaround is to wait for a certain number of milliseconds, with the number generated randomly.
Please have a look at the code below, which goes inside karate-config.js:
function fn() {
  // karate-config essential coding goes here
  // random delay between 1000 and 5000 ms so each thread reaches this point at a different time
  var random_millis = Math.floor(Math.random() * (5000 - 1000 + 1)) + 1000;
  java.lang.Thread.sleep(random_millis);
  return something; // return your usual config object
}
With the above piece of code, I ran my 100+ feature files with 20 parallel threads on Karate 1.2.0.RC1 and it worked fine.
How it works: all 20 threads start together and reach karate-config.js at the same time, but if we apply a random delay of between 1 and 5 seconds, the threads wait for different amounts of time, avoiding the multi-threading issue.
I know that with values between 1000 and 5000 milliseconds there is still maybe a 1% chance that two threads pick the same number, but until we get a concrete solution for this issue, I guess we can use this workaround.
Thanks,
Saurabh

Scrapy timeout not controlling Twisted timeout

I keep getting this when I run my Scrapy spider: raise TimeoutError("Getting %s took longer than %s seconds." % (url, timeout))
twisted.internet.error.TimeoutError: User timeout caused connection failure: Getting https://www.exampletest.com/test took longer than 190 seconds..
I have set the following settings, but it didn't help:
'AUTOTHROTTLE_ENABLED':False,
'DOWNLOAD_TIMEOUT':20,
'RETRY_ENABLED': False,
How can I make the spider simply skip or ignore a website if it doesn't respond within 30 seconds?
190 is a weird default, so I’ll go ahead and assume that you are using scrapy-crawlera.
If that is the case, know that scrapy-crawlera overrides DOWNLOAD_TIMEOUT, because Crawlera requires higher timeout values, as requests through Crawlera can take much longer.
If you want to decrease the timeout value nonetheless, change CRAWLERA_DOWNLOAD_TIMEOUT instead.

Choose CORS Access-Control-Max-Age value

My API is configured to cache CORS preflight requests by using the HTTP header Access-Control-Max-Age. The value is set to 600 seconds. I chose this value because, according to the Mozilla documentation, this is the maximum allowed by Chrome:
Maximum number of seconds the results can be cached.
Firefox caps this at 24 hours (86400 seconds) and Chromium at 10 minutes (600 seconds). Chromium also specifies a default value of 5 seconds.
A value of -1 will disable caching, requiring a preflight OPTIONS check for all calls.
What is a recommended Access-Control-Max-Age value and how to choose it?
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Max-Age
Firefox caps this at 24 hours (86400 seconds).
Chromium (prior to v76) caps at 10 minutes (600 seconds).
Chromium (starting in v76) caps at 2 hours (7200 seconds).
Chromium also specifies a default value of 5 seconds.
We use 86400 seconds.
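For illustration, here is a minimal sketch (assuming the javax.servlet 4.0 API) of a filter that answers preflight requests with this header. The class name, allowed origin, methods and headers are placeholders, and the max-age value should respect the browser caps quoted above.
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CorsPreflightFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        if ("OPTIONS".equalsIgnoreCase(request.getMethod())) {
            // browsers clamp this anyway: Firefox at 86400s, Chromium at 7200s (600s before v76)
            response.setHeader("Access-Control-Max-Age", "86400");
            response.setHeader("Access-Control-Allow-Origin", "https://app.example.com");
            response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE");
            response.setHeader("Access-Control-Allow-Headers", "Content-Type, Authorization");
            response.setStatus(HttpServletResponse.SC_NO_CONTENT);
            return; // preflight handled, don't pass it down the chain
        }
        chain.doFilter(req, res);
    }
}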

JMeter and apachetop - why do I see different values?

The explanation is probably simple, but I couldn't find an answer to my question:
I am running a JMeter test from one VM (worker) against another (target). On the worker I have JMeter with 100 threads (100 users). On the target I have an API that runs on Apache. When I run "apachetop -f access_log" on the target, I see only about 7 req/s.
Can someone explain why I don't see 100 req/s on the target?
In the JMeter test results I always see 200 OK, so all requests are hitting the target, and the target always responds, so I am not dropping any requests. Network bandwidth between the machines is 1G. What am I missing here?
Thanks,
Daddy
100 users doesn't necessarily mean 100 requests per second; in fact, that is highly unlikely.
According to JMeter glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Roughly, if JMeter is able to get a response from the server in 1 second, you will get 100 requests/second. If the response time is 2 seconds, throughput will be 50 requests/second; if it is 4 seconds, 25 requests/second; and so on.
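To make that relationship concrete, here is a toy calculation (a sketch with illustrative numbers, not measurements from this test):
public class ThroughputEstimate {
    public static void main(String[] args) {
        // with no think time, throughput is roughly threads / average response time
        int threads = 100;
        double[] avgResponseTimesSec = {1.0, 2.0, 4.0, 14.2};
        for (double rt : avgResponseTimesSec) {
            System.out.printf("response time %.1f s -> ~%.1f requests/second%n", rt, threads / rt);
        }
    }
}
Note that 100 threads with a response time of about 14.2 seconds works out to roughly 7 requests/second, which matches what apachetop is reporting.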
JMeter configuration also matters. If you don't provide enough loops, you may run into a situation where some threads have already finished and some have not even started. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more detailed explanation.
Your target load = 100 threads (you are assuming it should generate 100 req/sec as per your plan)
Your actual load = 7 req/sec = 7 * 3600 = 25200 requests/hour
Per-thread throughput = 25200 / 100 threads = 252 iterations/thread/hour
Per-transaction time = 3600 / 252 = 14.2 secs
This means JMeter is actually sending one request roughly every 14.2 secs per thread, i.e., 100 requests every 14.2 secs.
Now, analyze your JMeter summary report for the transaction timers to find out where the remaining 13.2 secs are being spent.
Possible issues are
1. High DNS resolution time (DNS issue)
2. High connection setup time (indicates load balancer issues)
3. High Request send time (indicates n/w or firewall throttling issues)
4. High request receive time (same as #3)
Now, the time that you see in the Apache logs is mostly visible to JMeter as the time to first byte. I am not sure about the machine you are running your tests on, but if your worker supports curl, use curl to find the timing components of a single request.
echo 'request payload for POST' |
  curl -X POST -H 'User-Agent: myBrowser' -H 'Content-Type: application/json' -d @- -s \
  -w '\nDNS time:\t%{time_namelookup}\nTCP Connect time:\t%{time_connect}\nAppCon Protocol time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' \
  http://mytest.test.com
If the above output indicates no such issues, then the time must be spent within JMeter itself, and you should tune your JMeter implementation, for example by reviewing your use of Beanshell / JSR223 scripting.

Setting a timeout on webservice consumer built with org.apache.axis.client.Call and running on Domino

I'm maintaining an antediluvian Notes application which connects to an SAP back-end via a hand-rolled 'web service'.
The server is running Domino Release 7.0.4FP2 HF97.
The web service is not the more recent Web Service Consumer, but a large Java agent which uses the Apache soap.jar (org.apache.soap). Below is an example of the calling code.
private Call setupSOAPCall() {
    Call call = new Call();
    SOAPHTTPConnection conn = new SOAPHTTPConnection();
    call.setSOAPTransport(conn);
    call.setEncodingStyleURI(Constants.NS_URI_SOAP_ENC);
    // ... the target URI, method name and parameters are set elsewhere ...
    return call;
}
There has been a change in the SAP system, and the call now takes 8 minutes to complete (verified by the SAP team).
I'm getting an error message as follows:
[SOAPException: faultCode=SOAP-ENV:Client; msg=For input string: "906 "; targetException=java.lang.NumberFormatException: For input string: "906 "]
I found a blog article describing the error message quite closely:
https://thejavablog.wordpress.com/category/jmeter/
and I've come to the hypothesis that a timeout message is being returned to my Call object and that this timeout message is being incorrectly parsed, hence the NumberFormatException.
Looking at my logs I can see that there is a time difference of 62 seconds between my call and the response.
I recommended that the server setting in the server document, tab Internet Protocols/HTTP/Timeouts/Request timeouts be changed from 60 seconds to 600 seconds, and the http task restarted with
tell http restart
I've re-run the tests and I am getting the same error, and the time difference is still slightly more than 60 seconds, which is not what I was expecting.
I read Michael Ruhnau's blog entry
http://www.mruhnau.net/2014/06/how-to-overcome-domino-webservice.html
which points to this APAR
http://www-01.ibm.com/support/docview.wss?uid=swg1LO48272
but I'm not convinced that this applies in this case, since there is no way that IBM would know that my Java agent is in fact making a SOAP call.
My current hypothesis is that I have to use either the setTimeout() method on
org.apache.axis.client.Call
https://axis.apache.org/axis/java/apiDocs/org/apache/axis/client/Call.html
or on the org.apache.soap.transport.http.SOAPHTTPConnection
https://docs.oracle.com/cd/B13789_01/appdev.101/b12024/org/apache/soap/transport/http/SOAPHTTPConnection.html
and that the timeout value is an Apache library default, not something that is controlled by the Domino server.
I'd be grateful for any help.
I understand your approach, and I hope this is the correct one to solve your problem.
Add a debug statement (a console write would be fine) that displays the default timeout, then try to increase it to 10 minutes:
SOAPHTTPConnection conn = new SOAPHTTPConnection();
System.out.println("time out is :" + conn.getTimeout());
conn.setTimeout(600000);//10 min in ms
System.out.println("after setting it, time out is :" + conn.getTimeout());
call.setSOAPTransport(conn);
Now keep in mind that Domino also has a 'Max LotusScript/Java execution time' setting; check this value and (at least as a test) change it: http://www.ibm.com/support/knowledgecenter/SSKTMJ_9.0.1/admin/othr_servertasksagentmanagertab_r.html (it's the version 9 help, but this part should be identical).
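For completeness, if the agent were moved to org.apache.axis.client.Call (the other class mentioned in the question), the per-call timeout is set in a similar way. This is only a sketch against the Axis 1.x API; the endpoint URL, namespace and operation name are placeholders, not the real SAP service.
import java.net.URL;
import javax.xml.namespace.QName;
import org.apache.axis.client.Call;
import org.apache.axis.client.Service;

public class AxisCallTimeoutSketch {
    public static Call createCallWithTimeout() throws Exception {
        Service service = new Service();
        Call call = (Call) service.createCall();
        call.setTargetEndpointAddress(new URL("http://sap.example.com/ws")); // placeholder endpoint
        call.setOperationName(new QName("urn:example", "someOperation"));    // placeholder operation
        call.setTimeout(Integer.valueOf(600000)); // 10 minutes in milliseconds
        return call;
    }
}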
I've since discovered that it wasn't my code generating the error; the default timeout for the Apache SOAP SOAPHTTPConnection is 0, i.e. no timeout.