Retry mechanism in Karate testing framework [duplicate]

This question already has an answer here:
Karate framework retry until not working as expected
(1 answer)
Closed 2 years ago.
How can I retry tests on failure in the Karate testing framework, the way JUnit and TestNG allow? Something like this:
public class Retry implements IRetryAnalyzer {

    private int count = 0;
    private static int maxTry = 3;

    @Override
    public boolean retry(ITestResult iTestResult) {
        if (!iTestResult.isSuccess()) {                     // check if the test did not succeed
            if (count < maxTry) {                           // check if the maxTry count has been reached
                count++;                                    // increase the retry count by 1
                iTestResult.setStatus(ITestResult.FAILURE); // mark the test as failed
                return true;                                // tell TestNG to re-run the test
            } else {
                iTestResult.setStatus(ITestResult.FAILURE); // maxTry reached, test stays marked as failed
            }
        } else {
            iTestResult.setStatus(ITestResult.SUCCESS);     // test passed, TestNG marks it as passed
        }
        return false;
    }
}

It works for me on version 0.9.5.RC5, but maybe this is one of the previously mentioned "workarounds"?
All you do is something like this, which defaults to 3 attempts:
* retry until responseStatus == 404
When method get

As of now this is an open feature request: https://github.com/intuit/karate/issues/247
But there are multiple workarounds. You may get some ideas if you look at the polling example: https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/polling/polling.feature

Related

How to re-execute failed automation scenarios from Jenkins

I am running Cucumber tests in the TestNG framework using a Maven command. I execute the test cases daily from Jenkins and generate the Cucumber report there (using the Cucumber report plugin).
I am looking for a solution to re-run the failed test cases in Jenkins and produce a final report.
Please suggest an approach to achieve this.
One of the simple ways is to use IRetryAnalyzer in TestNG. It will re-run the failed test cases.
In the final report, if the re-run passes, the test is displayed as passed (the initial failure is displayed as skipped); if the re-run also fails, the test is marked as failed.
Example:
public class Retry implements IRetryAnalyzer {

    private int count = 0;
    private static int maxTry = 3;

    @Override
    public boolean retry(ITestResult iTestResult) {
        if (!iTestResult.isSuccess()) {                     // check if the test did not succeed
            if (count < maxTry) {                           // check if the maxTry count has been reached
                count++;                                    // increase the retry count by 1
                iTestResult.setStatus(ITestResult.FAILURE); // mark the test as failed
                return true;                                // tell TestNG to re-run the test
            } else {
                iTestResult.setStatus(ITestResult.FAILURE); // maxTry reached, test stays marked as failed
            }
        } else {
            iTestResult.setStatus(ITestResult.SUCCESS);     // test passed, TestNG marks it as passed
        }
        return false;
    }
}
Add it in the testng.xml file (as a listener). You can also add it to a test directly:
@Test(retryAnalyzer = Retry.class)
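The answer above does not show the listener itself, so here is a hedged sketch (not from the answer) of one common way to apply the Retry analyzer to every test via an IAnnotationTransformer registered under <listeners> in testng.xml. The class name RetryListener is illustrative:

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

// Hypothetical listener that wires the Retry analyzer into every @Test method.
// Register it in testng.xml: <listeners><listener class-name="RetryListener"/></listeners>
public class RetryListener implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        annotation.setRetryAnalyzer(Retry.class); // same Retry class as in the answer above
    }
}

With such a listener in place, annotating each individual test with @Test(retryAnalyzer = Retry.class) becomes optional.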

TestNG RetryAnalyzer not working

I was following several different websites explaining how to use RetryAnalyzer (they all say basically the same thing, but I checked several to see whether there was any difference). I implemented it as they did in the samples and deliberately caused a failure on the first run (which ended up being the only run). Even though it failed, the test was not repeated. I even put a breakpoint inside the analyzer at the first line (res = false), which never got hit. I tell it to try 3 times, but it did not retry at all. Am I missing something?
My sample is below. Is it something to do with setting counter = 0? But the "res = false" line at least should get hit?
public class RetryAnalyzer implements IRetryAnalyzer {

    int counter = 0;

    @Override
    public boolean retry(ITestResult result) {
        boolean res = false;
        if (!result.isSuccess() && counter < 3) {
            counter++;
            res = true;
        }
        return res;
    }
}
and
@Test(dataProvider = "dp", retryAnalyzer = RetryAnalyzer.class)
public void testA(TestContext tContext) throws IOException {
    genericTest("A", "83701");
}
The test usually passes. I deliberately caused it to fail but it did not do a retry. Am I missing something?
===============================================
Default suite
Total tests run: 1, Failures: 1, Skips: 0
Try adding alwaysRun = true to your test method annotation:
@Test(dataProvider = "dp", retryAnalyzer = RetryAnalyzer.class, alwaysRun = true)
public void testA(TestContext tContext) throws IOException {
    genericTest("A", "83701");
}
Also, before retrying, you may want to restart your driver instance, so that you start clean with your test. Otherwise, your second run will execute in the same browser instance.
Just do a driver.quit() followed by a re-instantiation of the browser driver.
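A minimal sketch of that idea, assuming a shared WebDriver held in a helper class and a Chrome browser; the class and method names below are illustrative, not from the answer:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Hypothetical driver holder used by the tests.
public class DriverManager {

    private static WebDriver driver;

    public static WebDriver getDriver() {
        if (driver == null) {
            driver = new ChromeDriver();
        }
        return driver;
    }

    // Call this before a retry so the re-run starts with a clean browser.
    public static void restartDriver() {
        if (driver != null) {
            driver.quit();           // close the old browser session
        }
        driver = new ChromeDriver(); // start a fresh instance
    }
}

You could call restartDriver() from the retry analyzer, or from an ITestListener's onTestFailure hook, before TestNG re-runs the test.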
The RetryAnalyzer class has to be public. Also, if it is a nested (inner) class, it should be static. TestNG silently ignores retryAnalyzer otherwise.
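For illustration only (a sketch, not from the answer above), the nested-class shape that TestNG will actually pick up looks like this:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.Test;

public class MyTests {

    // Must be public and static when nested, otherwise TestNG ignores it silently.
    public static class RetryAnalyzer implements IRetryAnalyzer {

        private int counter = 0;

        @Override
        public boolean retry(ITestResult result) {
            return !result.isSuccess() && counter++ < 3;
        }
    }

    @Test(retryAnalyzer = RetryAnalyzer.class)
    public void testA() {
        // test body goes here
    }
}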

I need the answer of one JADE agent to depend on information from others and don't know how to do it

I'm new to JADE and I have 5 agents in Eclipse plus a formula for finding an average. The question is: how do I send information from an agent to this formula for calculation?
I'd be glad if someone could help me with this.
For example, here is one of my agents. There is no formula in it yet, because I don't know how to represent it. This is the mathematical expression: n += alfa(y(1,2) - y(1,1))
import java.io.IOException;

import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import jade.util.Logger;

public class FirstAgent extends Agent {

    private Logger myLogger = Logger.getMyLogger(getClass().getName());

    public class WaitInfoAndReplyBehaviour extends CyclicBehaviour {

        public WaitInfoAndReplyBehaviour(Agent a) {
            super(a);
        }

        public void action() {
            ACLMessage msg = myAgent.receive();
            if (msg != null) {
                ACLMessage reply = msg.createReply();
                if (msg.getPerformative() == ACLMessage.REQUEST) {
                    String content = msg.getContent();
                    if ((content != null) && (content.indexOf("What is your number?") != -1)) {
                        myLogger.log(Logger.INFO, "Agent " + getLocalName() + " - Received Info Request from " + msg.getSender().getLocalName());
                        reply.setPerformative(ACLMessage.INFORM);
                        try {
                            reply.setContentObject(7); // autoboxed to Integer, which is Serializable
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    } else {
                        myLogger.log(Logger.INFO, "Agent " + getLocalName() + " - Unexpected request [" + content + "] received from " + msg.getSender().getLocalName());
                        reply.setPerformative(ACLMessage.REFUSE);
                        reply.setContent("( UnexpectedContent (" + content + "))");
                    }
                } else {
                    myLogger.log(Logger.INFO, "Agent " + getLocalName() + " - Unexpected message [" + ACLMessage.getPerformative(msg.getPerformative()) + "] received from " + msg.getSender().getLocalName());
                    reply.setPerformative(ACLMessage.NOT_UNDERSTOOD);
                    reply.setContent("( (Unexpected-act " + ACLMessage.getPerformative(msg.getPerformative()) + ") )");
                }
                send(reply);
            } else {
                block();
            }
        }
    }
}
So from what I can make out, you want to (1) send a formula/task to multiple platforms, (2) have it performed locally, and (3) have the results communicated back.
I think there are at least two ways of doing this:
The first is sending an object in an ACLMessage using Java serialisation. This is a more OOP approach and not very "agenty".
The second is cloning, or creating, a local task agent.
Using Java Serialization (Solution 1)
Create an object for the calculation:
class CalculationTask implements Serializable {

    int n;

    void calculate() {
        // 'alfa' and 'y' are placeholders for the formula from the question
        n += alfa(y(1, 2) - y(1, 1));
    }
}
Send the calculation object via an ACLMessage from the sender agent:
request.setContentObject(new CalculationTask());
Receive the calculation object in the receiver agent, perform the calculation on it, and then reply, setting the completed task as the content of the response:
CalculationTask myTask = (CalculationTask) request.getContentObject();
myTask.calculate();
ACLMessage response = request.createReply();
response.setContentObject(myTask);
response.setPerformative(ACLMessage.INFORM);
send(response);
The sender agent then receives the completed job:
ACLMessage inform = blockingReceive(); // wait for the INFORM reply
CalculationTask completeTask = (CalculationTask) inform.getContentObject();
completeTask.process(); // placeholder: do whatever you need with the result
Creating local Task Agents (Solution 2)
The agent-oriented way of doing it would be to launch a task agent on each platform, have each task agent complete the task, and have it respond appropriately.
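Tying this back to the original question (averaging a value across several agents), here is a hedged sketch of a collector agent that requests each agent's number and averages the INFORM replies. The worker agent names and the request string are assumptions for illustration; the workers are expected to reply with an Integer via setContentObject, as in the FirstAgent code above:

import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.OneShotBehaviour;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.MessageTemplate;
import jade.lang.acl.UnreadableException;

public class AverageCollectorAgent extends Agent {

    // Hypothetical local names of the agents that hold the numbers.
    private static final String[] WORKERS = {"first", "second", "third", "fourth", "fifth"};

    @Override
    protected void setup() {
        addBehaviour(new OneShotBehaviour(this) {
            @Override
            public void action() {
                // Ask every worker agent for its number.
                ACLMessage request = new ACLMessage(ACLMessage.REQUEST);
                request.setContent("What is your number?");
                for (String name : WORKERS) {
                    request.addReceiver(new AID(name, AID.ISLOCALNAME));
                }
                send(request);

                // Collect the INFORM replies and compute the average.
                // blockingReceive() stops this agent until a reply arrives (fine for a one-shot sketch).
                MessageTemplate informOnly = MessageTemplate.MatchPerformative(ACLMessage.INFORM);
                double sum = 0;
                int received = 0;
                while (received < WORKERS.length) {
                    ACLMessage reply = blockingReceive(informOnly);
                    try {
                        sum += ((Integer) reply.getContentObject()).doubleValue();
                    } catch (UnreadableException e) {
                        e.printStackTrace();
                    }
                    received++;
                }
                System.out.println("Average = " + (sum / received));
            }
        });
    }
}

In a real system you would more likely use a non-blocking receive() inside a behaviour together with block(), or the JADE interaction protocol classes, but the blocking sketch keeps the message flow easy to follow.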

Retry only specific Failed tests with TestNG

How can I execute only specific failed tests? Using IRetryAnalyzer we can re-run the failed tests x number of times, as mentioned here: Restart failed test case automatically. I have also implemented ITestListener to make the test count more meaningful, following: Retry only failed tests and update the test run count by implementing ITestListener.
Is there any way to re-run ONLY specific failed tests?
Example: we need to re-run only the tests that failed because of NoSuchElementException or TimeoutException.
In my run, 8 tests failed in total; 6 of them failed because of NoSuchElementException (1) and TimeoutException (5).
Please help.
You can try checking the result of your tests, like this:
@Override
public boolean retry(ITestResult result) {
    try {
        // Retry only for a specific reason of failure
        if (result.getThrowable().toString().contains("NoSuchElementException")) {
            if (retryCount < maxRetryCount) {
                retryCount++;
                return true;
            }
        }
        return false;
    } catch (Exception e) {
        return false;
    }
}
Since every result holds the throwable when an exception has occurred (the m_throwable field, exposed via getThrowable()), you can use it to get your task done in the Retry class.
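As a slightly fuller, hedged sketch along the same lines (not from the answer above), you can also check the exception type itself instead of matching on the string; the class name, field names and retry limit below are illustrative:

import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.TimeoutException;
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryOnSpecificExceptions implements IRetryAnalyzer {

    private int retryCount = 0;
    private static final int MAX_RETRY_COUNT = 3;

    @Override
    public boolean retry(ITestResult result) {
        Throwable cause = result.getThrowable(); // may be null if there was no exception
        boolean retriable = cause instanceof NoSuchElementException
                || cause instanceof TimeoutException;
        if (retriable && retryCount < MAX_RETRY_COUNT) {
            retryCount++;
            return true;   // re-run only the "flaky" failures
        }
        return false;      // any other failure is final
    }
}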

Problem using Selenium to automate a postback link that is inside an ASP.NET UpdatePanel [duplicate]

This question already has answers here:
Selenium IDE click() timeout
(2 answers)
Closed 3 years ago.
I have a GridView control with sorting enabled inside an UpdatePanel. I used Selenium IDE to record a test that clicks on the sort link of the table, but when I try to execute the test it gets stuck on the click command. Looking at the log I see:
[info] Executing: |click | link=Name | |
[error] Timed out after 30000ms
I haven't tried it with Selenium RC yet, so I don't know if it will be any different. I don't want Selenium to wait for anything. Any ideas on how to work around it?
Thanks!
When using Selenium with Ajax (or when the page just gets refreshed under certain conditions), I usually use:
selenium.WaitForCondition
or the following code, which I created recently (the page uses frames):
public bool AccessElementsOnDynamicPage(string frame, Predicate<SeleniumWrapper> condition)
{
    DateTime currentTime = DateTime.Now;
    DateTime timeOutTime = currentTime.AddMinutes(6);
    while (currentTime < timeOutTime)
    {
        try
        {
            SelectSubFrame(frame);
            if (condition(this))
                return true;
        }
        catch (SeleniumException)
        {
            //TODO: log exception
        }
        finally
        {
            currentTime = DateTime.Now;
        }
    }
    return false;
}

public bool WaitUntilIsElementPresent(string frame, string locator)
{
    return AccessElementsOnDynamicPage(frame, delegate(SeleniumWrapper w)
    {
        return w.IsElementPresent(locator);
    });
}

public bool WaitUntilIsTextPresent(string frame, string pattern)
{
    return AccessElementsOnDynamicPage(frame, delegate(SeleniumWrapper w)
    {
        return w.IsTextPresent(pattern);
    });
}
Soon you will get to the point where you need Selenium RC integrated into your development environment; for that, I recommend reading:
How can I make my Selenium tests less brittle?
It is about waiting, but for the specific elements that should be (or appear) on the page.
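As a side note that is not in the original answer: if you later move from Selenium IDE/RC to the WebDriver API, the same advice (wait for the specific element rather than for a fixed timeout) is usually written as an explicit wait. This Java sketch assumes Selenium 4 and that the sort link can be located by its link text:

import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class SortLinkClicker {

    // Waits until the "Name" sort link is clickable after the UpdatePanel refresh, then clicks it.
    public static void clickSortLink(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(30));
        wait.until(ExpectedConditions.elementToBeClickable(By.linkText("Name"))).click();
    }
}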