JMeter test does not stop when using Module Controllers

I am using JMeter 3.2.1 with JDK 8.
I have a test plan with four sub-sequences:
1: Login
2: Discover
3: List
4: Create new element
I initially had these as a single monolithic Simple Controller, and it worked fine.
I then modularized it into 4 modules, with the main thread group ("Main Test")
invoking each module in sequence via a Module Controller.
After I created the Module Controllers, the test no longer stops, although
it is not doing anything else (according to my Debug PostProcessor).
What am I missing?
Thanks,
R

I solved the problem by adding a Test Action sampler with Action "Stop" and Target "All Threads".
However, I do not understand why this is required once my last module exits.

Related

Browser not quitting when called from another scenario using Karate

I have a scenario which is a series of REST API calls, but in the middle is a section that executes a few steps within a Chrome browser. The browser steps are common to another scenario, so I tried to extract them into a separate feature that could then be called from multiple scenarios.
When the main scenario executes, it runs the browser feature but fails to auto-close the browser afterwards. I read in the documentation that "Karate will close the browser automatically after a Scenario unless the driver instance was created before entering the Scenario". The configure driver code is in the callable scenario.
I also tried calling quit(), but this resulted in the error: "The forked VM terminated without properly saying goodbye. VM crash or System.exit called?"
Does anyone know how I can ensure the browser closes in this circumstance?
UPDATE: As suggested by @PeterThomas, I started to craft a full example to replicate this, and discovered that replicating it is actually quite simple.
If the UI feature is called like this then the browser is closed after execution:
* call read('classpath:/ui/callable/GoogleSearch.feature')
If called like this then the browser remains open:
* def result = call read('classpath:/ui/callable/GoogleSearch.feature')
My UI scenario scrapes a value from a web page, which I then store in a '* def ticket' within the called feature. I was hoping to access it via result.ticket. As I am unable to do this, I am successfully using the following workaround:
* def extractedTicket = { value: '' }
* call read('classpath:/ui/callable/GoogleSearch.feature')
* def ticket = extractedTicket.value
And within the called feature:
* set extractedTicket.value = karate.extract(val, '.ticket=(.*?)&', 1)
First, I think you should provide a way to replicate this, so that we can investigate and fix this for everyone. Please follow this process: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue
That said, for just getting Chrome to do a few steps, maybe you should use the Java API instead - you can call it from wherever you want, even within a feature file, using Java interop: https://github.com/karatelabs/karate#java-api
Also see if this answer gives you any pointers: https://stackoverflow.com/a/60387907/143475
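For illustration, a rough sketch of that Java API approach (the package and method names below are taken from the Karate docs but may vary between Karate versions, so treat this as a sketch to adapt, not a drop-in solution):

// Hypothetical sketch: driving Chrome directly via Karate's Java API,
// so the browser lifecycle stays fully under your control.
// Package name assumed from the Karate docs; verify it for your version.
import com.intuit.karate.driver.chrome.Chrome;

public class TicketScraper {

    public static String scrapeTicket() {
        Chrome chrome = Chrome.startHeadless(); // you own this instance now
        try {
            chrome.setUrl("https://www.google.com"); // placeholder URL for the page to scrape
            return chrome.html("body");              // grab whatever markup you need
        } finally {
            chrome.quit(); // guaranteed to close the browser, even on failure
        }
    }
}

Because the driver instance is created by you rather than by Karate's Scenario lifecycle, the quit() in the finally block is what guarantees the browser closes.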

Retry Failed Automation Test Case from a Logical Point for E2E Automation

We are trying to automate E2E test cases for a booking application, each involving around 60+ steps. Whenever there is a failure at one of the final steps, the traditional retry option is very time-consuming, since the test case is executed from step 1 again.
The application has some logical steps which can be marked somehow, and we would like to resume the test case from the logical point just before the failed step. For example, suppose every 10th step among the 60 is such a logical point. If the failure is at step 43, then with the help of the booking reference number the test could be resumed from step 41, since validation has been completed up to step 40 (step 40 being a logical closure point).
You might suggest splitting the test case into smaller modules, but that will not work for me, since it is an E2E test case for the application which we want to keep in a single Geb Spec. The framework is built using Geb & Spock for web application automation. Please share your thoughts on how we can build recovery scenarios for this case. Thanks for your support!
As of now I have not been able to find any solution to this kind of problem.
Below are a few things which can be done to achieve this, but before we talk about solutions, we should also talk about the issues this will create. You are running E2E test cases, and if they fail at step 10, they should be restarted from scratch, not from step 10, because otherwise you can miss important integration defects which occur when you perform step 10 after running the first 9 steps.
For example, if you create an account and then immediately search for a hotel, your application might throw an error because it is a newly created account; but if you retry from the step where you are just searching for hotel rooms, it might work because of the time that passed between the test failure and the restart, and you will not notice the issue.
Now, if you must achieve this:
Create a log every time you reach a checkpoint. This can be a simple text file recording the test case name and checkpoint number. Then use a retry analyzer to run the failed tests; inside the test, look for the text file with the test case name, and if it exists, simply skip ahead to the checkpoint mentioned in the file. The same idea can be used in different ways: for example, if your E2E test goes through 3 applications, the file can hold the test case name and the last passed application name; or, if you use page objects, you can write the last successful page object name to the file and use that to continue the test.
The above is just an idea, because I don't think there are any existing solutions for this issue.
Hope this gives you an idea of how to start working on this problem.
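To make the checkpoint-file idea concrete, here is a minimal, hypothetical Java sketch (the class name, file-naming scheme and methods are invented for illustration; they are not part of any existing library):

// Hypothetical helper: persists the last checkpoint a test reached,
// so a retry can skip straight to it instead of starting from step 1.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckpointLog {

    private static Path fileFor(String testName) {
        return Paths.get(testName + ".checkpoint"); // one file per test case
    }

    // Call this each time the test passes a logical closure point.
    public static void save(String testName, int checkpoint) throws IOException {
        Files.write(fileFor(testName), String.valueOf(checkpoint).getBytes());
    }

    // Returns 0 if there is no checkpoint file, i.e. run from the beginning.
    public static int lastCheckpoint(String testName) throws IOException {
        Path f = fileFor(testName);
        if (!Files.exists(f)) {
            return 0;
        }
        return Integer.parseInt(new String(Files.readAllBytes(f)).trim());
    }

    // Delete the checkpoint once the test finally passes end to end.
    public static void clear(String testName) throws IOException {
        Files.deleteIfExists(fileFor(testName));
    }
}

On a retry, the test would read lastCheckpoint(...) at start-up and, together with stored state such as the booking reference, jump to the matching logical point instead of step 1.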
A possible solution to your problem is to first define the way in which you want to write your tests.
I would recommend treating one test Spec (class) as one E2E test containing multiple feature methods.
I would also recommend the open-source Spock Retry project available on GitHub; after applying RetryOnFailure,
your final code should look like this:
import spock.lang.Shared
// @RetryOnFailure comes from the Spock Retry project mentioned above

@RetryOnFailure(times = 2) // times is the number of retry attempts, default = 0
class MyEndtoEndTest1 extends GebReportingSpec {

    @Shared // Spock creates a new spec instance per feature method, so the value must be shared
    def bookingRefNumber

    def "First Feature block which covers the Test till a logical step"() {
        // your test steps here
        bookingRefNumber = "..." // assign your booking ref here
    }

    def "Second Feature which covers a set of subsequent logical steps"() {
        // use the bookingRefNumber generated in the First Feature block
    }

    def "Third set of Logical Steps"() {
        // your test steps here
    }

    def "End of the E2E test"() {
        // your final test steps here
    }
}
The passing of all the Feature blocks (methods) will signify a successful E2E test execution.
It sounds like your end-to-end test case is too big and too brittle. What's the reasoning behind needing it all in one script?
You've already stated you can use the booking reference to continue at a later step if it fails; this seems like a logical place to split your tests.
Do the first bit and output the booking reference to a file (see the sketch below). Read the booking reference in the second test and complete the journey; if it fails, a retry won't take anywhere near as long.
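As a sketch of that handoff (the file name and property key are illustrative; this just uses plain java.util.Properties):

// Illustrative sketch: the first test writes the booking reference,
// and the second test reads it back to continue the journey.
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class BookingHandoff {

    private static final String FILE = "booking.properties"; // illustrative path

    public static void saveReference(String bookingRef) throws IOException {
        Properties p = new Properties();
        p.setProperty("bookingRef", bookingRef);
        try (FileOutputStream out = new FileOutputStream(FILE)) {
            p.store(out, "handoff between the split E2E tests");
        }
    }

    public static String loadReference() throws IOException {
        Properties p = new Properties();
        try (FileInputStream in = new FileInputStream(FILE)) {
            p.load(in);
        }
        return p.getProperty("bookingRef");
    }
}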
If you're using your tests to provide quick feedback after a build and your tests keep failing, then I would look to split the journey into smaller smoke tests and, if required, run some overnight end-to-end tests with as many retries as you like.
The fact that it keeps failing suggests your tests, environment or build are brittle.

Error running multiple tests in SpecFlow/Selenium

I have an existing project that uses SpecFlow and SpecRun to run some tests against Sauce Labs. I have a BeforeScenario hook that creates a RemoteWebDriver and an AfterScenario hook that closes it down.
I've now moved this into another project (copied the files over, just changed the namespace), and the first test runs fine, but then I get the following error:
An exception of type 'OpenQA.Selenium.WebDriverException' occurred in WebDriver.dll but was not handled in user code
Additional information: Unexpected error. The command you just sent (POST element) has no session ID.
This is generally caused by testing frameworks trying to run commands after the conclusion of a test.
For example, you may be trying to capture a screenshot or retrieve server logs
after selenium.stop() or driver.quit() was called in a tearDown method.
Please make sure this process happens before the session is ended.
I've compared the projects and they're using the same version of SpecFlow and the same .NET version. I can't see any difference between the two.
In my steps I have the following line:
public static IWebDriver driver = (IWebDriver)ScenarioContext.Current["driver"];
which I think is the issue: instead of getting a new instance from the ScenarioContext, it is using the previous test's version, which has now been disposed.
But I can't see why this works in the other project?
I am using the SpecFlow example from GitHub here.
UPDATE
Looks like I've found the issue. In the Default.srprofile, the testThreadCount was 1, whereas the value in the working solution was 10. I've now updated this to match and it works.
I'm not sure what this value should be, though. I assume it shouldn't be the same as the number of tests, but then how do I get around my original issue of the shared driver context?
TestThreadCount specifies the number of threads used by SpecFlow+ Runner (aka SpecRun) to execute the tests.
The threads are isolated from each other. The default is AppDomain isolation, so every thread runs in a separate AppDomain.
In the Sauce Labs example there are 7 scenarios and the runner is configured to use 10 threads. This means every scenario is executed in a different thread with its own AppDomain. As no thread executes a second scenario, you do not get this error in the example.
With only one thread, your thread executes more than one scenario, and you get this issue.
The easiest fix is to remove the static keyword from the field. You get a new instance of the binding class for every scenario, so you do not need to keep it in a static field.
For a better example of how to use Selenium with SpecFlow & SpecFlow+, have a look here: https://github.com/techtalk/SpecFlow.Plus.Examples/tree/master/SeleniumWebTest
You have to adjust the WebDriver class to use Sauce Labs over the RemoteWebDriver.

Error in starting second JVM when one is already started

I am developing client-server software in which the server is developed in Python. I want to call a group of methods from a Java program in Python. All the Java methods exist in one JAR file, which means I do not need to load different JARs.
For this purpose I used JPype. For each request from a client, I invoke a Python function which looks like this:
def test(self, userName, password):
    classpath = "/home/DataSource/DMP.jar"
    jpype.startJVM(
        "/usr/local/java/jdk1.7.0_60/jre/lib/amd64/server/libjvm.so",
        "-ea",
        "-Xmx512m",  # note: no space between "-" and the option name
        "-Djava.class.path=%s" % classpath)
    NCh = jpype.JClass("Common.NChainInterface")
    n = NCh(self._DB_ipAddress, self._DB_Port, self._XML_SCHEMA_PATH, self._DSTDir)
    jpype.shutdownJVM()
It works for the first call, but on the second call it cannot start the JVM.
I have seen a lot of complaints about this, but I could not find any solution. I would appreciate any help.
If JPype has a problem with starting the JVM multiple times, is there any way to start and stop the JVM only once? The server is deployed on an Ubuntu virtual machine, but I do not have enough knowledge to write, for example, a script for this purpose. Could you please provide a link or an example?
Check isJVMStarted() before startJVM().
If the JVM is already running, it returns True, otherwise False.

import jpype

def init_jvm(jvmpath=None):
    # start the JVM only once per process; later calls become no-ops
    if jpype.isJVMStarted():
        return
    jpype.startJVM(jvmpath or jpype.getDefaultJVMPath())
For a real example, see here.
I have solved it by adding these lines when defining the connection:
if not jpype.isJVMStarted():
jpype.startJVM(jvmPath, args)
This issue is not resolved by et9's answer above.
The problem is explained here.
Effectively, you need to start and stop the JVM at the server/module level, not per request.
I have had success with multiple calls using this method in unit tests.

Some of my unit tests are not finishing in Xcode 4.4

I have seen people posting about this here and elsewhere, but I haven't found any solution that works. I am using Xcode 4.4 and have a bunch of unit tests set up. I have run them all before on this project, so I know that they pass/fail as they should when they are actually run.
I have about 15 test suites, and each one contains 1-7 tests. On most attempts, all of the test suites finish (and pass) except for one (FooTests). It gives the warning:
FooTests did not finish
testFoo did not finish
Xcode will report that testing was successful, regardless of what happens in unfinished tests. Another thing to note: sometimes it is a different test that will not finish, and sometimes multiple suites will not finish. I have not seen a case where all tests do finish, but judging by this seemingly random behaviour I believe it is possible.
So, is this a bug in Xcode? I can't think of any other reason that tests randomly don't finish and then cause Xcode to report that everything was successful. Are there any solutions?
I am on Xcode 4.5.2. For application unit tests, if your test suites finish so quickly that the main application has not been fully loaded first, you will get this warning. You can avoid the problem simply by adding a sleep at the end of your test, like the following. This doesn't happen for logic unit tests.
[NSThread sleepForTimeInterval:1.0];
I've just had this problem with Xcode 4.5 DP4.
I had a test which does some work in a loop and does nothing else after it falls out of the loop, and I was getting the "did not finish" error.
In an attempt to prove that the test was finishing, I added this as the very last line:
NSLog(@"done");
Not only does "done" get printed to the output - now Xcode says that the test has finished.
Seems to be a workaround... go figure.
I'm using Xcode 4.6 DP3 and I've just resolved this problem in my tests. I have several tests that start a web server and then execute HTTP calls against it; at the end of the test the web server is stopped. In the last week these tests had begun to produce the "did not finish" warning.
For me it was enough to add the following sleep at the end of these tests (more precisely, I added it in tearDown):
- (void)tearDown {
    [self.httpServer stop];
    [NSThread sleepForTimeInterval:1.0];
    self.httpServer = nil;
    self.urlComposer = nil;
}
The problem seems to be that your tests terminate too quickly for Xcode to receive and parse the log messages that indicate failure or success. Sleeping for 1 second in the last test case worked reliably for me, where 0.0 didn't. To see which test case is the last one, check the test action in the Scheme dialog.
I created a separate WorkaroundForTestsFinishingTooFast test case, with a single method:
- (void)testThatMakesSureWeDontFinishTooFast
{
    [NSThread sleepForTimeInterval:1.0];
}
The problem is that as you add more test cases, you'll have to ensure this test remains the last method run. That means removing and re-adding this class in the Test action, since reordering of test cases is not allowed. On the other hand, you're only delaying your entire test bundle by 1 second.
I had the same warnings in the Log Navigator. Here is how I fixed it.
In my project I have two schemes, one for running the project and one for the unit tests.
Go to Product --> Edit Scheme...
Select the UnitTest scheme in the scheme picker.
Select the "Test" icon on the left.
Change the debugger from LLDB to GDB and press OK.
Tests should finish now. (For me it worked fine.)
For me the solution was to slim down the logging output from the parts of the app that the tests were exercising. I think Xcode couldn't parse the test output in time because of the other output I had throughout the app.
I had the same problem running with Xcode 4.6. The reason, in my case, was an inconsistency between the scheme and the actual unit tests in the test suites.
In the scheme I had some suites checked, but in their .m files some unit tests were commented out.
To solve the problem: either uncomment the tests or deselect the file/suite in the scheme, and all shall become green again :)
For people like me who forgot how to reach the scheme, these are the required steps:
Right-click on the project name in the scheme section (right of the stop button).
Choose Edit Scheme.
Choose the Test action.
Click on the triangle next to the unit test project; next to each file you have a checkbox.
Uncheck the files in which you commented out unit tests.
Hope this helps!