Elasticsearch throws 'ElasticsearchIllegalStateException' part way through tests

I've got a large Groovy application with a lot of JUnit integration tests (256), most of which use 'com.github.tlrx.elasticsearch-test', version: '1.2.1' to run Elasticsearch locally.
Part way through running all of the test classes, all the tests that use Elasticsearch start throwing an 'ElasticsearchIllegalStateException' with the message 'Failed to obtain node lock, is the following location writable?: [./target/elasticsearch-test/data/cluster-test-kiml42s-MacBook-Pro.local]'.
If I run any of these classes alone, it works fine.
This is my initialising code, run in every @Before:
esSetup = new EsSetup();
CreateIndex createIndex = createIndex(index)
// register a mapping for each document type before creating the index
for (int i = 0; i < types.size(); i++) {
    createIndex.withMapping(types[i], fromClassPath(mappings[i]))
}
esSetup.execute(deleteAll(), createIndex)
client = esSetup.client()
And this is my teardown code, run in every @After:
client.admin().indices().prepareDelete(index).get()
This problem doesn't seem to happen on our build server, so it's only annoying and inconvenient rather than a serious problem, but any help would be most appreciated. Thanks.

This problem seems to have been caused by leaving the test nodes active while JUnit ran through all the tests - eventually it stopped being able to create new nodes. The solution is to call esSetup.terminate() in the @After to destroy the nodes at the end of each test.
Here's an example of it being used correctly: https://gist.github.com/tlrx/4117854
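A minimal sketch of the combined teardown, assuming esSetup, client and index are the same fields initialised in the @Before above:

@After
void tearDown() {
    // delete the per-test index, then shut the embedded node down so its
    // node lock is released before the next test tries to start a node
    client.admin().indices().prepareDelete(index).get()
    esSetup.terminate()
}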

Related

Karate - Results from two series of tests aren't merged anymore after upgrading version from 0.9.3 to 1.2.0

We are facing an issue with test results after upgrading our Karate project from v0.9.3 to v1.2.0. We are testing an API after the execution of two batches. So we have a first series of tests (Runner 1) executed against our API after the first batch run, then a second series of tests (Runner 2, on new feature files) executed after the second batch.
On the version we were using, the test results were merged, but on the updated version we cannot get all the results into the same report: the results of the first run are deleted, so we're left with only the results of the second run.
Previously working code:
Results results1 = Runner.parallel(
        Arrays.asList("@tag1,@tag2", "@ignore"),
        Collections.singletonList("classpath:features"),
        5,
        "target/sources-rapports");
int totalFailCount = results1.getFailCount();
Results results2 = Runner.parallel(
        Arrays.asList("@tag3,@tag4", "@ignore"),
        Collections.singletonList("classpath:features"),
        5,
        "target/sources-rapports");
totalFailCount += results2.getFailCount();
generateReport(results2.getReportDir());
The report used to contain all the test features from results1 and results2, whereas now each execution seems to remove the previous Karate JSON files before generating the new ones.
New, non-working code, with the following syntax:
Runner.path("classpath:features")
.tags(Arrays.asList("#tag1,#tag2", "#ignore"))
.outputCucumberJson(true)
.parallel(5);
I'm looking for help to solve this problem. Do not hesitate to ask for more information if you need it.
Try this change:
Runner.path("classpath:features")
.tags(Arrays.asList("#tag1,#tag2", "#ignore"))
.outputCucumberJson(true)
.backupReportDir(false)
.parallel(5);
For further info: https://stackoverflow.com/a/66685944/143475
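Applied to the two-runner flow from the question, the fix might look like this sketch. The reportDir call is an assumption carried over from the old code's "target/sources-rapports" argument; both runs then write their JSON into the same directory, which backupReportDir(false) preserves between runs:

Results results1 = Runner.path("classpath:features")
        .tags(Arrays.asList("@tag1,@tag2", "@ignore"))
        .outputCucumberJson(true)
        .backupReportDir(false)   // keep the JSON files from the previous run
        .reportDir("target/sources-rapports")
        .parallel(5);
int totalFailCount = results1.getFailCount();

Results results2 = Runner.path("classpath:features")
        .tags(Arrays.asList("@tag3,@tag4", "@ignore"))
        .outputCucumberJson(true)
        .backupReportDir(false)
        .reportDir("target/sources-rapports")
        .parallel(5);
totalFailCount += results2.getFailCount();
generateReport(results2.getReportDir());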

Codeception test returning [Facebook\WebDriver\Exception\UnknownErrorException] InternalError: too much recursion

A test fails with [Facebook\WebDriver\Exception\UnknownErrorException] InternalError: too much recursion when trying to execute $I->see() on a specific piece of text.
When analyzing the source code I see there are 250k lines of HTML, so it's causing some sort of bottleneck. I was wondering if there's a way to identify this issue without having the whole test fail?
I tried wrapping it in a try/catch, but it didn't help.
UPDATE
Browser: Geckodriver
Version: Codeception PHP Testing Framework v2.5.6
Code
try {
    $this->see($text);
    $isFound = true;
} catch (\PHPUnit_Framework_ExpectationFailedException $e) {
    $isFound = false;
}
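One likely reason the try/catch above doesn't help: the "too much recursion" failure is raised as a WebDriver exception, not as a PHPUnit assertion failure, so the catch clause never matches. A sketch that also catches the WebDriver exception class spelled out in the error message above:

try {
    $this->see($text);
    $isFound = true;
} catch (\PHPUnit_Framework_ExpectationFailedException $e) {
    // assertion failed: the text genuinely isn't on the page
    $isFound = false;
} catch (\Facebook\WebDriver\Exception\UnknownErrorException $e) {
    // driver-level failure (e.g. "too much recursion" on huge markup)
    $isFound = false;
}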
Updating Codeception fixed the issue - version 2.5.6 is three years old. The markup was over 10 MB per page, which is abnormally high.

Static Hangfire RecurringJob methods in LINQPad are not behaving

I have a script in LINQPad that looks like this:
var serverMode = EnvironmentType.EWPROD;
var jobToSchedule = JobType.ABC;
var hangfireCs = GetConnectionString(serverMode);
JobStorage.Current = new SqlServerStorage(hangfireCs);

Action<string, string, XElement> createOrReplaceJob =
    (jobName, cronExpression, inputPackage) =>
    {
        RecurringJob.RemoveIfExists(jobName);
        RecurringJob.AddOrUpdate(
            jobName,
            () => new BTR.Evolution.Hangfire.Schedulers.JobInvoker().Invoke(
                jobName,
                inputPackage,
                null,
                JobCancellationToken.Null),
            cronExpression, TimeZoneInfo.Local);
    };

// pseudo code to prepare inputPackage for client ABC...
createOrReplaceJob("ABC.CustomReport.SurveyResults", "0 2 * * *", inputPackage);

JobStorage.Current.GetConnection().GetRecurringJobs()
    .Where(j => j.Id.StartsWith(jobToSchedule.ToString()))
    .Dump("Scheduled Jobs");
I have to schedule in both QA and PROD. To do that, I toggle the serverMode variable and run the script once for EWPROD and once for EWQA. This all worked fine until recently; unfortunately, I don't know exactly when it changed, because I don't always have to run in both environments.
I did purchase/install LINQPad 7 two days ago to look at some C# 10 features, and I'm not sure if that affected it.
But here is the problem/flow:
Run it for EWQA and everything works.
Run it for EWPROD and the script (the Hangfire components) seems to run in a mix of QA and PROD.
When I'm running it the 'second time' in EWPROD I've confirmed:
The hangfireCs (connection string) is right (pointing to PROD) and it is assigned to JobStorage.Current
The query at the end of the script, JobStorage.Current.GetConnection().GetRecurringJobs() uses the right connection.
The RecurringJob.* methods inside the createOrReplaceJob Action use the connection from the previous run (i.e. EWQA). If I monitor my QA Hangfire db, I see the job removed and added.
Temporary workaround:
Run it for EWQA and everything works.
Restart LINQPad or use the 'Cancel and Reset All Queries' option
Run it for EWPROD and now everything works.
So I'm at a loss as to where the issue might lie. I feel like my upgrade/install of LINQPad 7 might be causing problems, but I'm not sure if there is a different way to make the RecurringJob.* static methods use the updated connection string.
Any ideas on why the restart or reset is now needed?
LINQPad - 5.44.02
Hangfire.Core - 1.7.17
Hangfire.SqlServer - 1.7.17
This is caused by your script (or a library that it calls) caching something statically and not cleaning up between executions.
Either clear/dispose the cached objects when you're done (e.g., JobStorage.Current?) or tell LINQPad not to re-use the process between executions by adding Util.NewProcess = true; to your script.
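A minimal sketch of the second option, placed at the top of the script (everything else stays as in the question):

// Tell LINQPad not to re-use this process for the next execution, so
// static caches (like Hangfire's JobStorage.Current) can't leak between runs.
Util.NewProcess = true;

var serverMode = EnvironmentType.EWPROD;
var hangfireCs = GetConnectionString(serverMode);
JobStorage.Current = new SqlServerStorage(hangfireCs);
// ... rest of the script unchanged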

Selenium cucumber tests taking a long time while executing in parallel?

So far I'm able to run cucumber-jvm tests in parallel by using multiple runner classes, but as the project grows and new features are added each time, it's getting really difficult to optimise execution time.
So my question is: what is the best approach to optimise execution time?
Is it adding a new runner class for each feature, or limiting the run to a certain number of threads and updating the runner classes with new tags?
So far I'm using a thread count of 8, and I have 8 runner classes.
This approach was fine until now, but one of the features recently had more scenarios added and is taking much longer to finish :( so how can I optimise execution time here?
Any help much appreciated!!
This worked for me:
Courgette-JVM
It has added capabilities to run cucumber tests in parallel at feature level or at scenario level. Scenario-level parallelism is what addresses the problem above: a single feature with many scenarios no longer ties up one thread, because each scenario is scheduled independently.
It also provides an option to automatically re-run failed scenarios.
Usage
@RunWith(Courgette.class)
@CourgetteOptions(
        threads = 10,
        runLevel = CourgetteRunLevel.SCENARIO,
        rerunFailedScenarios = true,
        showTestOutput = true,
        cucumberOptions = @CucumberOptions(
                features = "src/test/resources/features",
                glue = "steps",
                tags = {"@regression"},
                plugin = {
                        "pretty",
                        "json:target/courgette-report/courgette.json",
                        "html:target/courgette-report/courgette.html"}))
public class RegressionTestSuite {
}

No specs found when looping tests on Protractor

I'm using Protractor with Selenium on macOS and Chrome.
I'm trying to run the same test using an array of elements to provide the test data, as I read here: Looping on a protractor test with parameters.
I was trying that solution, but when I run it, my test is not even found:
// loop condition fixed: the original read 'casos.length', which never compares i
for (var i = 0; i < casos.length; i++) {
    (function (cotizacion) {
        it('obtener $1,230.00 de la cotizacion', function () {
            browser.get('https://mifel-danielarias.c9users.io');
            login(user, pass);
            fillVidaCotizadorForm(formData);
            //browser.sleep(5000);
            var primaTotal = element(by.binding('vida.primaTotal'));
            browser.wait(EC.visibilityOf(primaTotal), 4000);
            expect(primaTotal.getText()).toBe(cotizacion);
        });
    })(casos[i].Fallecimiento);
}
Output message:
[14:13:01] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
[14:13:01] I/launcher - Running 1 instances of WebDriver
Started

No specs found
Finished in 0.003 seconds
This loop is inside my describe function, and if I run the test normally, without any loop, it runs flawlessly.
As @OptimWorks mentioned in his comment, a data-driven approach was exactly what I was looking for, and this question provided several good answers.
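For illustration, a sketch of that data-driven pattern applied to the spec above, reusing the question's own code (casos, login, user, pass, fillVidaCotizadorForm and formData are assumed to be defined in the surrounding describe):

// one spec per data row, generated inside describe()
casos.forEach(function (caso) {
    it('obtener ' + caso.Fallecimiento + ' de la cotizacion', function () {
        browser.get('https://mifel-danielarias.c9users.io');
        login(user, pass);
        fillVidaCotizadorForm(formData);
        var primaTotal = element(by.binding('vida.primaTotal'));
        browser.wait(EC.visibilityOf(primaTotal), 4000);
        expect(primaTotal.getText()).toBe(caso.Fallecimiento);
    });
});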