In my work, we implemented a lot of features that call another feature, because we reuse scenarios in many places.
But when I look at the HTML report, it shows 5 minutes of execution time, while the console says 2.5 minutes.
We found in the Surefire reports that, in a feature that calls a 'son' feature, the step that calls the web service takes 30 ms, but the step that calls the son feature is also reported as 30 ms. So that is 60 ms.
feature parent
  call (featureSon.feature)    30 ms
feature son (this is the son)
  Given url                     0 ms
  Then status 200              30 ms
feature report
  duration column              60 ms
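In other words, here is a little JavaScript model of what the report seems to be doing (illustrative only; the numbers are the ones from the Surefire output above):

// The call step's duration already includes the called feature's steps,
// so a report that sums every step counts the called time twice.
var sonStepsMs = [0, 30];                   // "Given url" + "Then status 200"
var sonTotal = sonStepsMs.reduce(function (a, b) { return a + b; }, 0); // 30 ms
var callStepMs = sonTotal;                  // the parent's call step also reports 30 ms
var reportDuration = callStepMs + sonTotal; // 60 ms in the duration column
console.log(reportDuration);                // twice the real 30 ms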
Excuse my bad English. Thanks for any help.
2 things:
If you use the parallel runner, you will see two different times (actual / elapsed).
When you call features, just focus on the time reported by the parent.
Can you refer to this video, so you can troubleshoot better: https://twitter.com/KarateDSL/status/1049321708241317888
Related
I have 100+ tests covered in 25+ feature files, and I have a karate-config.js with 3 "karate.callSingle" calls, as below.
config.weatherParams = karate.callSingle(
"file:src/test/java/utils/AvailableForecasts.feature",
config
);
config.routingParams = karate.callSingle(
"file:src/test/java/utils/CalculationInput.feature",
config
);
config.vesselParams = karate.callSingle(
"file:src/test/java/utils/VesselStatus.feature",
config
);
The same issue occurs when I use classpath inside callSingle.
When I run all the tests at once with parallel execution enabled (I tried anywhere from 1 to 100 threads), I get the following error:
org.graalvm.polyglot.PolyglotException: Multi threaded access requested by thread Thread[pool-2-thread-8,5,main] but is not allowed for language(s) js.
- com.oracle.truffle.polyglot.PolyglotEngineException.illegalState(PolyglotEngineException.java:132)
- com.oracle.truffle.polyglot.PolyglotContextImpl.throwDeniedThreadAccess(PolyglotContextImpl.java:727)
- com.oracle.truffle.polyglot.PolyglotContextImpl.checkAllThreadAccesses(PolyglotContextImpl.java:627)
- com.oracle.truffle.polyglot.PolyglotContextImpl.enterThreadChanged(PolyglotContextImpl.java:526)
- com.oracle.truffle.polyglot.PolyglotEngineImpl.enter(PolyglotEngineImpl.java:1857)
- com.oracle.truffle.polyglot.HostToGuestRootNode.execute(HostToGuestRootNode.java:104)
- com.oracle.truffle.polyglot.PolyglotMap.entrySet(PolyglotMap.java:119)
After playing around with multiple combinations, surprisingly, when I have only 2 "callSingle" calls in karate-config.js (commenting out VesselStatus.feature), it works fine.
All 3 "callSingle" calls invoke 3 different services and set variables that the other tests need, so all 3 are critical.
Is there a way we can optimize this, or a different approach we can take, to avoid the above issue?
This is a known issue that should be fixed in 1.1.0.RC2
Details here: https://github.com/intuit/karate/issues/1558
It would be good if you can confirm.
I faced this issue in my Karate implementation @peter-thomas. I just got an easy workaround for it, since we know the GraalVM JS engine doesn't support multi-threaded access to karate-config.js.
The workaround is to wait for a certain number of milliseconds, and that number has to be generated randomly.
Have a look at the code below, which goes inside karate-config.js:
function fn() {
  var config = {}; // karate-config essential configuration goes here
  // sleep for a random duration between 1000 and 5000 ms
  var random_millis = Math.floor(Math.random() * (5000 - 1000 + 1)) + 1000;
  java.lang.Thread.sleep(random_millis);
  return config;
}
With the above piece of code, I ran my 100+ feature files on 20 parallel threads with Karate 1.2.0.RC1 and it worked fantastically fine.
How it works: all 20 threads start together and reach karate-config.js at the same time, but if we apply some delay, random between 1 and 5 seconds (in millis), all threads will wait for different times, avoiding the multi-threaded access issue.
I also know that between 1 and 5000 milliseconds there is still maybe a 1% chance that two threads get the same number, but until we get a concrete solution to this issue, I guess we can use this workaround.
Thanks,
Saurabh
I have a test that was alerting because it was taking extra time for an asset to load. We changed from waitForNoRequest to a pause (at Catchpoint's suggestion). That did not seem to have the expected effect of waiting for things to load. We increased the pause from 3000 to 12000, which helped the page load and stopped the alert. We noticed some more alerts, so I tried to increase the pause to something like 45000, and it would not allow me to pause for that long.
So the main question here is: what functionality do these two commands provide? What do I gain by pausing instead of waiting, if anything?
Here's the test, with data changed to protect company-specific info. Step 3 is where we had some failures and switched between pause and wait.
// Step - 1
open("https://website.com/")
waitForNoRequest("2000")
click("//*[@id=\"userid\"]")
type("//*[@id=\"userid\"]", "${username}")
setStepName("Step1-Login-")
// Step - 2
clickMouseAndWait("//*[@id=\"continue\"]")
waitForVisible("//*[@id=\"challenge-password\"]")
click("//*[@id=\"challenge-password\"]")
type("//*[@id=\"challenge-password\"]", "${password}")
setStepName("Step2-Login-creds")
// Step - 3
clickMouseAndWait("//*[@id=\"signIn\"]")
setStepName("Step3-dashboard")
waitForTitle("Dashboard")
waitForNoRequest("3000")
click("//*[@id=\"account-header-wrapper\"]")
waitForVisible("//*[@id=\"logout-link\"]")
click("//*[@id=\"logout-link\"]")
// Step - 4
clickAndWait("//*[text()=\"Sign Out\"]")
waitForTitle("Login - ")
verifyTextPresent("You have been logged out.")
setStepName("Step5-Logout")
Rachana here; I'm a member of the Technical Service Team at Catchpoint, and I'll be happy to answer your questions.
Please find below the differences between the waitForNoRequest and pause commands:
Pause
Purpose: This command pauses script execution for a specified amount of time, whether or not HTTP/s requests are downloading. The time value is provided in milliseconds and can range from 100 to 30,000 ms.
Explanation: This command is used when the agent needs to wait a set amount of time before proceeding to the next step or command, regardless of how requests are loading. The time value is the only parameter required for this action.
WaitForNoRequest
Purpose: This command waits until there have been no HTTP/s requests downloading for a specified amount of time. The wait-time parameter can range from 1,000 to 5,000 ms.
Explanation: The only parameter for this action is the wait time. The agent will wait for a gap of that length in network activity before moving on to the next step/command, which in turn allows necessary requests more time to load after document complete.
For instance, when you add waitForNoRequest(5000), the agent initially waits 5000 ms after document complete for any network activity. If there is any network activity during that period, the agent waits another 5000 ms after that activity ends, and the process repeats until no request loads within the specified timeframe (5000 ms).
A pause command with 12000 ms gives the page exactly 12 seconds to load. After 12 seconds, script execution continues to the next command whether or not the page has loaded.
Since waitForNoRequest has a maximum value of 5,000 ms, the most you can tell the agent to do is wait for a 5-second gap in network activity. In this case, the page had no network activity for 3 seconds and the agent hence proceeded to the next action; the page was not completely loaded, and the script failed.
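A minimal sketch of the difference, reusing the commands from the script above (the URL is illustrative):

open("https://website.com/")
pause("12000")           // always waits the full 12 s, whether or not the page has loaded
waitForNoRequest("5000") // proceeds as soon as there is a 5 s gap with no network activity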
I tried to increase the pause to something like 45000 and it would not allow me to pause for that long.
We allow a maximum pause time of 30 seconds, hence 45 seconds will not work.
Please reach out to our support team and we’ll be glad to connect you with our scripting SMEs and help you with any scripting needs you might have.
The explanation is probably simple, but I couldn't find the answer to my question:
I am running a JMeter test from one VM (worker) against another (target). On the worker I have JMeter with 100 threads (100 users). On the target I have an API that runs on Apache. When I run "apachetop -f access_log" on the target, I see only about 7 req/s.
Can someone explain to me why I don't see 100 req/s on the target?
In the JMeter test results I always see 200 OK, so all requests are hitting the target, and moreover the target always responds. So I am not dropping any requests here. Network bandwidth between the machines is 1G. What am I missing here?
Thanks,
Daddy
100 users doesn't necessarily mean 100 requests per second; in fact, it is highly unlikely.
According to JMeter glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Roughly, if JMeter is able to get a response from the server in 1 second, you will get 100 requests/second. If the response time is 2 seconds, throughput will be 50 requests/second; if the response time is 4 seconds, 25 requests/second; and so on.
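As a quick sanity check, that relationship can be sketched in a few lines of JavaScript (illustrative only, not a JMeter API):

// Ideal throughput of a closed-loop test: threads / response time.
// Real JMeter throughput also depends on timers, ramp-up and loop counts.
function expectedThroughput(threads, responseTimeSec) {
  return threads / responseTimeSec; // requests per second
}
console.log(expectedThroughput(100, 1));    // 100 req/s
console.log(expectedThroughput(100, 2));    // 50 req/s
console.log(expectedThroughput(100, 14.2)); // ~7 req/s, the rate observed here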
Also, JMeter configuration matters. If you don't provide enough loops you may run into a situation where some threads have already finished and some have not even started. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more detailed explanation.
Your target load = 100 threads (you are assuming this should generate 100 req/sec, as per your plan).
Your actual load = 7 req/sec = 7 * 3600 = 25,200 requests/hour.
Per-thread throughput = 25,200 / 100 threads = 252 iterations/thread/hour.
Per-transaction time = 3600 / 252 = 14.2 secs.
This means each JMeter thread is actually sending one request every ~14.2 secs, i.e., 100 requests every 14.2 secs.
Now, analyze the transaction timers in your JMeter summary report to find out where the remaining 13.2 secs are being spent.
Possible issues are
1. High DNS resolution time (DNS issue)
2. High connection setup time (indicates load balancer issues)
3. High request send time (indicates network or firewall throttling issues)
4. High request receive time (same as #3)
Now, the time that you see in the Apache logs is mostly visible to JMeter as the time to first byte. I am not sure about the machine you are running your tests on, but if your worker has curl available, use curl to break down the timing components for a single request.
echo 'request payload for POST' \
  | curl -X POST -H 'User-Agent: myBrowser' -H 'Content-Type: application/json' -d @- -s \
    -w '\nDNS time:\t%{time_namelookup}\nTCP Connect time:\t%{time_connect}\nAppCon Protocol time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' \
    http://mytest.test.com
If the above output indicates no such issues, then the time must be spent within JMeter itself. You should tune your JMeter implementation using the various options available, like Beanshell / JSR223, etc.
I have prepared a script in JMeter with an Ultimate Thread Group configured as: Start Threads Count: 10, Initial Delay: 0, Startup Time: 10, Hold Load: 30, Shutdown Time: 10. I have added an Aggregate Report listener. When I execute the script, the sample count goes above 10 for each sampler. Does this mean more than 10 users are running?
Nope. You will have only 10 (concurrent) users. JMeter never adds more threads than what you specify in the thread group.
This is what is happening: once a user finishes the test (one loop), the same test is repeated for that user, because, as you mentioned, the test should hold the load for 30 seconds. It stops automatically after 30 seconds. This is why you might see more than 10 login requests or something similar.
If you do not want your test to behave this way, use a simple Thread Group and set the loop count to 1.
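A rough back-of-the-envelope model in JavaScript (the 5-second iteration time is an assumption, just to show why the sample count exceeds the thread count):

// 10 threads held for 30 seconds; if one iteration takes ~5 s, each thread
// repeats the test about 6 times, so the Aggregate Report counts ~60 samples
// even though only 10 users ever run concurrently.
var threads = 10;
var holdLoadSeconds = 30;
var iterationSeconds = 5; // assumed duration of one pass through the test
var iterationsPerThread = Math.floor(holdLoadSeconds / iterationSeconds); // 6
console.log(threads * iterationsPerThread); // ~60 samples from only 10 users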
Starting a few weeks ago, compiling a project (VB.NET, .NET 2.0, VS 2010) has taken several times as long as before. In Task Manager, I noticed ResXtoResources.exe taking lots of CPU for a while. I've finally been able to get some data on this using MSBuild's 'Diagnostic' output setting, and comparing that output to what I see in a branch from a few months back. Most striking are the final lines, which give timings. Before:
Target Performance Summary:
[..]
1395 ms CoreResGen 1 calls
1930 ms CompileLicxFiles 1 calls
2135 ms GenerateApplicationManifest 1 calls
2844 ms CoreCompile 1 calls
Task Performance Summary:
[..]
1391 ms GenerateResource 1 calls
1929 ms LC 1 calls
2134 ms GenerateApplicationManifest 1 calls
2843 ms Vbc 1 calls
Build succeeded.
Time Elapsed 00:00:09.50
========== Rebuild All: 5 succeeded, 0 failed, 0 skipped ==========
After:
Target Performance Summary:
1348 ms CompileLicxFiles 1 calls
1747 ms GenerateApplicationManifest 1 calls
2595 ms CoreCompile 1 calls
39575 ms CoreResGen 1 calls
Task Performance Summary:
1347 ms LC 1 calls
1745 ms GenerateApplicationManifest 1 calls
2593 ms Vbc 1 calls
39570 ms GenerateResource 1 calls
Build succeeded.
Time Elapsed 00:00:47.34
========== Rebuild All: 5 succeeded, 0 failed, 0 skipped ==========
Both projects were compiled on the same system with the same settings. We've made numerous changes, to be sure, but nothing of the order of magnitude that would justify such a change in timings (and only for this one task!). I assume resource generation is getting stuck on something: a circular reference, a missing one, etc. I have been unable, however, to find anything useful on how to trace such a problem down to what I assume is just a single resource file.
Short of looking through thousands of checkins or temporarily removing some forms (and thus, their resource files) from the project, is there anything else I can do to figure out the issue? I can't seem to find individual per-resource file timings.
Findings so far:
I've created a new, empty project with all the same .resx files in place.
The issue is not reproducible in .NET 4.0: compiling the exact same test project takes less than a second.
The issue is reproducible in .NET 2.0 as soon as I also add one of the forms from the original project; apparently, it will otherwise not compile the resources "properly".
Removing individual .resx files reduces the timings 'proportionally'; that is, I have unfortunately not found a single file that is the culprit.
Looks like this blog entry gives the answer.
In a nutshell: search your .resx files for assembly references that don't actually exist (such as System.Windows.Forms, Version 4.0.0.0), and replace them with ones that do (Version 2.0.0.0). I used grepWin to accomplish this.
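For example, the offending entry and its fix look something like this (illustrative; the exact alias, culture and token will match what is already in your designer-generated files):

<!-- Before: references an assembly version that does not exist for a .NET 2.0 build -->
<assembly alias="System.Windows.Forms" name="System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
<!-- After: point at the 2.0 assembly that actually exists -->
<assembly alias="System.Windows.Forms" name="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />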
My CoreResGen / GenerateResource timings are now roughly what they used to be. CruiseControl.NET says the build time is down from 92 seconds to 40. :)
I found the reason here... the resources contained a PNG file saved in Adobe Fireworks' special format. I exported the file to plain PNG (without layer information) and now the compile takes 6 seconds.