I am using SoapUI with a Groovy script and running into an issue when calling multiple APIs. In the system I am testing, one WSDL/API handles account registration and returns an authenticator. I then use that returned authenticator to call a different WSDL/API and verify some information. I can call each of these WSDLs/APIs separately, but when I put them together in a Groovy script it doesn't work.
testRunner.runTestStepByName("RegisterUser");
testRunner.runTestStepByName("Property Transfer");
if(props.getPropertyValue("userCreated") == "success"){
testRunner.runTestStepByName("AuthenticateStoreUser");
To explain: the first line runs the TestStep "RegisterUser". I then run a "Property Transfer" step which takes a few response values from "RegisterUser" - the first is "Status", to see if it succeeded or failed, and the second is the "Authenticator". I then use an if statement to check whether "RegisterUser" succeeded before attempting to call "AuthenticateStoreUser". Up to that point everything looks fine. But when it calls "AuthenticateStoreUser" it shows the progress bar, then fails as if it timed out, and if I check the "Raw" tab for the request it says
<missing xml data>.
Note that if I try "AuthenticateStoreUser" by itself, the call works fine. It is only after calling "RegisterUser" in the Groovy script that it behaves strangely. I have tried this with a few different calls and believe it is an issue with calling two different APIs.
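In case it helps, the extraction the "Property Transfer" step performs can also be done directly in Groovy; a minimal sketch, where the namespace and node names are assumptions:

import com.eviware.soapui.support.XmlHolder

// grab the raw RegisterUser response and wrap it for XPath access
def response = context.expand('${RegisterUser#Response}')
def holder = new XmlHolder(response)
holder.namespaces["ns"] = "http://example.com/registration"   // hypothetical namespace

def status        = holder.getNodeValue("//ns:Status")
def authenticator = holder.getNodeValue("//ns:Authenticator")

// make the authenticator available to the AuthenticateStoreUser request
testRunner.testCase.setPropertyValue("authenticator", authenticator)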
Has anyone dealt with this scenario, or can anyone provide further direction as to what may be happening?
(I would have preferred to simply comment on the question, but I don't have enough rep yet)
Have you checked the Error log tab at the bottom when this occurs? If so, what does it say and is there a stacktrace you could share?
I have a scenario which is a series of REST API calls, but in the middle is a section that executes a few steps within a Chrome browser. The browser steps are common to another scenario, so I tried to extract them into a separate feature that could then be called from multiple scenarios.
When the main scenario executes, it runs the browser feature but fails to auto-close the browser after execution. I read in the documentation that "Karate will close the browser automatically after a Scenario unless the driver instance was created before entering the Scenario". The configure driver code is in the callable scenario.
I also tried calling quit(), but this resulted in the error: "The forked VM terminated without properly saying goodbye. VM crash or System.exit called?"
Does anyone know how I can ensure the browser closes in this circumstance?
UPDATE: As suggested by @PeterThomas, I started to craft a full example to replicate this, when I discovered that replicating it is actually quite simple.
If the UI feature is called like this then the browser is closed after execution:
* call read('classpath:/ui/callable/GoogleSearch.feature')
If called like this then the browser remains open:
* def result = call read('classpath:/ui/callable/GoogleSearch.feature')
My UI scenario scrapes a value from a web page, which I then store with a '* def ticket' within the called feature. I was hoping to access it via result.ticket. As I am unable to do this, I am successfully using the following workaround:
* def extractedTicket = { value: '' }
* call read('classpath:/ui/callable/GoogleSearch.feature')
* def ticket = extractedTicket.value
And within the called feature:
* set extractedTicket.value = karate.extract(val, '.ticket=(.*?)&', 1)
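For context, a called feature in this pattern looks roughly like the sketch below; the URL and the scraped value are placeholders, and only the configure driver and set lines mirror the real code:

Feature: callable UI steps

Scenario:
* configure driver = { type: 'chrome' }
* driver 'https://example.com/search'
* def val = driver.url
* set extractedTicket.value = karate.extract(val, '.ticket=(.*?)&', 1)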
First, I think you should provide a way to replicate this, so that we can investigate and fix this for everyone. Please follow this process: https://github.com/karatelabs/karate/wiki/How-to-Submit-an-Issue
That said, maybe for just getting Chrome to do a few steps you should use the Java API instead - and you can call it from wherever you want, even within a feature file using Java interop: https://github.com/karatelabs/karate#java-api
Also see if this answer gives you any pointers: https://stackoverflow.com/a/60387907/143475
I'm trying to send xAPI statements from an "Activity Provider" to the ADL LRS live demo. The goal is to implement this from my C# .NET application, but I was having trouble, so I first tried running a simple POST request from JMeter.
I do get a 200 response, but when I try to check whether the statement was successfully stored at https://lrs.adlnet.gov/me/statements, it's empty.
Am I completely misunderstanding how this structure is supposed to work? I'm going to install the ADL LRS eventually for testing purposes, but I wanted to get the actual request structure worked out first.
The path looks incorrect: the POST should go to {endpoint}/statements, so in your case it should be https://lrs.adlnet.gov/xAPI/statements. Additionally, make sure you are setting the X-Experience-API-Version header. If this doesn't solve the issue, look at more than just the response status code and see what the body contains (and add it to your question). The body for the type of request you are sending should be JSON: an array containing a single statement identifier. Also, when you retrieve the statements, the URL you use should match the one you specified when you sent them, so /me/ is not correct.
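Putting that together, a minimal C# sketch of the corrected request; the credentials and the statement content are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class XapiPostSketch
{
    static async Task Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("https://lrs.adlnet.gov/xAPI/") };
        // both of these are required by the ADL demo LRS
        client.DefaultRequestHeaders.Add("X-Experience-API-Version", "1.0.3");
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password"))); // placeholder credentials

        // hypothetical minimal statement: actor, verb and object are the required parts
        var statement = @"{
            ""actor"": { ""mbox"": ""mailto:learner@example.com"" },
            ""verb"": { ""id"": ""http://adlnet.gov/expapi/verbs/completed"" },
            ""object"": { ""id"": ""http://example.com/activities/lesson-1"" }
        }";

        var response = await client.PostAsync("statements",
            new StringContent(statement, Encoding.UTF8, "application/json"));

        // a successful POST returns 200 with a JSON array holding the new statement id
        Console.WriteLine((int)response.StatusCode);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}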
If it is a basic C# .NET project you may be interested in https://github.com/RusticiSoftware/TinCan.NET. It is showing its age, but in general for a number of projects it will still work or would at least be a reasonable place to start.
I have two test cases.
One test case tests the authentication and gets token info in the header. I want to store that token and use it in the other test case.
I tried the store processor in the first test case and then retrieving the value from the second test case, but it says "could not find the key". Is this approach wrong?
I also tried the queue/dequeue option. In that case I got:
Time out waiting for queue to return a result.
How can I achieve this simple case?
Authentication should be mocked in all the other tests, since you'll be testing the functionality and not the actual response from the server.
Take a look at this link:
https://docs.mulesoft.com/munit/2.2/mock-event-processor
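A rough MUnit 2.x sketch of such a mock, where the processor and the doc:name value are assumptions about your flow:

<munit-tools:mock-when doc:name="Mock auth call" processor="http:request">
    <munit-tools:with-attributes>
        <munit-tools:with-attribute attributeName="doc:name" whereValue="Authenticate" />
    </munit-tools:with-attributes>
    <munit-tools:then-return>
        <munit-tools:payload value='#[{"token": "fake-token"}]' mediaType="application/json" />
    </munit-tools:then-return>
</munit-tools:mock-when>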
The PHPUnit Selenium base class has an option to take a screenshot on failure, which is a huge help in finding out why a test failed. The Selenium server, however, returns an error instead of a failure on any error condition other than explicit assert* calls (such as trying to do something with a non-existent element). If I try to take a screenshot after the server reports the error, I get another error saying that the server has already discarded the session. Is there any way to change that behavior?
Update: this happens because PHPUnit breaks the connection when it receives an error. I was able to change the behavior by some (rather ugly) manipulation of the PHPUnit code.
Make those interactions test cases.
For example, in Perl:
If it is written as below and fails due to a non-existent element, the script will error out:
$sel->type("email-id","trial\#trial.com");
Whereas if the above step is made a test case by writing it as follows:
$sel->type_ok("email-id", "trial\@trial.com");
If there is a non-existent element, the test case will only fail, and the script will continue.
So, using TAP (the Test Anything Protocol) via the Test::More module: if _ok is added after a function name, the function's return value determines the fate of the test case, i.e. a return of 0 means the test failed and a return of 1 means the test passed.
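A minimal sketch of that pattern, assuming the Test::WWW::Selenium wrapper (which provides the _ok variants and starts/stops the session for you):

use strict;
use warnings;
use Test::More tests => 2;
use Test::WWW::Selenium;

my $sel = Test::WWW::Selenium->new(
    host        => "localhost",
    port        => 4444,
    browser     => "*firefox",
    browser_url => "http://example.com/",   # placeholder URL
);
$sel->open_ok("/signup");                       # recorded as a pass/fail, the script continues
$sel->type_ok("email-id", "trial\@trial.com");  # a missing element fails the test, not the run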
It is not the Selenium server but the SeleniumTestCase class for PHPUnit 3.4 which automatically sends a stop command when it detects an error (Driver.php line 921). PHPUnit 3.6 seems to handle errors better.
I think you can override the onNotSuccessfulTest() method and do something like this:
public function onNotSuccessfulTest(Exception $e)
{
    // currentScreenshot() is assumed to return raw image data
    file_put_contents('/xxx/xxx.jpg', $this->currentScreenshot());
    parent::onNotSuccessfulTest($e); // re-throw so the failure is still reported
}
We're using WatiN to test our web portals. During the course of an E2E test, we'll occasionally see client-side script errors on the IE status bar. I'd like to chain a handler onto the script error event and record the error for later analysis and bug filing.
Problem is, I don't know that there's a global script error event or how to chain into it. And if there's not a browser-agnostic way to accomplish this, I can create MyIE and MyFF subclasses but then this becomes two browser-specific questions.
In essence, I'm thinking of something like this entirely made-up call:
browser.ScriptEngine.SetCustomErrorHandler(LogScriptErrors);
... where LogScriptErrors is my code that does the obvious.
Many of our client-side scripting errors don't necessarily prevent the test from continuing (a pretty UI element didn't animate, for example, but the underlying form is still submittable), so I'd like to log the error and forge ahead in most cases.
You're probably looking for this:
window.onerror = function (message, url, line) { logError(message, url, line); };
You can add this code to your pages to handle errors in logError(), but it may not work in all browsers (it works in IE); check this page for browser compatibility:
http://www.quirksmode.org/dom/events/error.html
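For example, a sketch that reports errors back to a logging endpoint (logError above, and the /log-client-error URL here, are made-up names):

window.onerror = function (message, url, line) {
    // an image beacon works even in old IE and needs no XHR setup
    var img = new Image();
    img.src = "/log-client-error?msg=" + encodeURIComponent(message) +
              "&url=" + encodeURIComponent(url) + "&line=" + line;
    return false; // keep the browser's default error handling as well
};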
Or you may try this commercial product:
exceptionhub.com/
You could maybe co-opt the ability to inject eval code (described under "Added Eval functionality") to add a script that catches all errors, not just errors from the eval'ed script. I'm not sure if this would work, but it's an area to explore. Another resource might be this blog post, which discusses how to evaluate JavaScript in WatiN.
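As a sketch of that idea, the Eval support could install a window.onerror hook and read the collected errors back later; Browser.Eval is WatiN 2.x, the rest is a guess at how you would wire it in:

using WatiN.Core;

static class ScriptErrorLog
{
    // inject a handler that collects script errors in a global array
    public static void Install(Browser browser)
    {
        browser.Eval(
            "window.__scriptErrors = [];" +
            "window.onerror = function (m, u, l) {" +
            "  window.__scriptErrors.push(m + ' at ' + u + ':' + l);" +
            "  return true; };"); // true suppresses IE's script error dialog
    }

    // call this at a checkpoint or in teardown and write the result to your log
    public static string Read(Browser browser)
    {
        return browser.Eval("window.__scriptErrors.join('\\n')");
    }
}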