I have two test cases.
One test case tests authentication and gets token info in the response header. I want to store that token and use it in the other test case.
I tried a store processor in the first test case and tried retrieving the value from the second test case, but it says "couldnot find the key". Is this approach wrong?
I also tried the queue/dequeue option. In that case I got:
Time out waiting for queue to return a result.
How can I achieve this simple case?
Authentication should be mocked in all the other methods, since you'll be testing the functionality and not the actual response from the server.
Take a look at this link:
https://docs.mulesoft.com/munit/2.2/mock-event-processor
We have a concurrency issue in our system. It occurs mainly during burst load through our API from an external system and is not reproducible manually.
So I would like to create a Gatling test to 1) reproduce it whenever I want and 2) check that we have solved the issue.
1) I am done with the first point: I have created two requests that check for status 201, and I run them with many users.
2) The issue allows the creation of two resources with the same unique value. The expected behaviour is that one resource is created and the others fail with status 409. But I have no idea how to check that exactly one of the requests completes with 201 while all the others fail with 409.
Can we do a kind of post-check on all requests with Gatling?
Thanks
Store the keys already seen in a global ConcurrentHashMap and compute the expected value in the is check with a function, based on presence in the CHM (201 if the key is missing, 409 if it already exists).
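For illustration, here is a minimal sketch of that idea in Gatling's Scala DSL. The endpoint, the request body, and the uniqueValue session attribute are hypothetical placeholders, and it assumes a Gatling 3 version where status.is accepts a function of the session, as the docs describe:

import java.util.concurrent.ConcurrentHashMap
import io.gatling.core.Predef._
import io.gatling.http.Predef._

object SeenKeys {
  // keys for which a creation has already been observed, shared across all virtual users
  val seen = ConcurrentHashMap.newKeySet[String]()
}

// hypothetical request; "uniqueValue" is assumed to be put in the session by a feeder
val createResource = http("create resource")
  .post("/resources")
  .body(StringBody("""{ "key": "${uniqueValue}" }""")).asJson
  .check(status.is { session =>
    // the first response checked for a given key expects 201, every later one 409
    if (SeenKeys.seen.add(session("uniqueValue").as[String])) 201 else 409
  })

One caveat with this sketch: checks run in the order responses happen to be processed, which is not guaranteed to match the order the server committed them, so the 201 expectation can land on the wrong request; the next answer makes the same point.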
I don't think you can achieve what you're after with a check on the call itself, as Gatling users have no visibility of results returned to other users, so you have no way of knowing whether the successful (201) request has already been made (short of some very messy hacking using a check transformer).
But you could use simulation-level assertions to do this.
So you have your request, where you assert that you expect a 201 response:
http("my request")
.get("myUrl")
.check(status.is(201))
This should result in all but one of these requests failing in a simulation, which you can then verify using an assertion...
setUp(
  myScenario.inject(
    ...
  )
).assertions(
  details("my request").successfulRequests.count.is(1)
)
Let's say I have one API and there are different scenarios to check in that one API. So, can we add different scenarios in one feature file without calling the API again and again?
You can certainly have multiple scenarios in one feature file.
And if you get a response back and you can do all the assertions you have in mind against that single response, you don't need to call again. Maybe you only need a single scenario.
If you are expecting all your boundary conditions and non-happy paths to be covered without making multiple HTTP calls, I'm sorry - I don't think any framework will do that magic for you.
Does Karate provide any listener support where I can intercept specific things like REST calls?
This is more about added customization we want to perform beyond what Karate provides. There will always be something or other we need to customize based on the need.
Say I have 10,000 test cases running in parallel, and using the Karate parallel runner I get a nice report with the time each step and test case takes. One of my services gets called multiple times, and I want to know the average time the service takes across all the calls, as well as the maximum and minimum time it takes.
I think Karate Hooks will get you what you need - if you write a function to aggregate the responseTime.
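As a rough sketch of the aggregation half of that idea (the hook wiring itself is Karate-specific and left out here), a thread-safe accumulator along these lines holds up under a heavily parallel run; the class and method names are hypothetical:

import java.util.concurrent.atomic.{LongAccumulator, LongAdder}

// collects response times recorded concurrently from many parallel scenarios
class ResponseTimeStats {
  private val total = new LongAdder
  private val count = new LongAdder
  private val maxTime = new LongAccumulator((a: Long, b: Long) => Math.max(a, b), Long.MinValue)
  private val minTime = new LongAccumulator((a: Long, b: Long) => Math.min(a, b), Long.MaxValue)

  // call this with the responseTime of each call to the service of interest
  def record(millis: Long): Unit = {
    total.add(millis)
    count.increment()
    maxTime.accumulate(millis)
    minTime.accumulate(millis)
  }

  def average: Double = if (count.sum == 0) 0.0 else total.sum.toDouble / count.sum
  def max: Long = maxTime.get
  def min: Long = minTime.get
}

LongAdder and LongAccumulator are used instead of plain vars so that concurrent scenarios can record times without locking.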
I'm willing to look at introducing this feature if needed, but you'll have to make a proposal for what the syntax should look like. Feel free to open a feature request. Today we do have configure headers, which is like a "before" for all requests. Maybe something along those lines.
I am using SoapUI with a Groovy script and running into an issue when calling multiple APIs. In the system I am testing, one WSDL/API handles the account registration and returns an authenticator. I then use that returned authenticator to call a different WSDL/API and verify some information. I am able to call each of these WSDLs/APIs separately, but when I put them together in a Groovy script it doesn't work.
testRunner.runTestStepByName("RegisterUser");
testRunner.runTestStepByName("Property Transfer");
if(props.getPropertyValue("userCreated") == "success"){
testRunner.runTestStepByName("AuthenticateStoreUser");
To explain: the first line runs the test step "RegisterUser". I then do a "Property Transfer" step, which takes a few response values from "RegisterUser" - the first is "Status", to see if it succeeded or failed, and the second is the "Authenticator". I then use an if statement to check whether "RegisterUser" succeeded and, if so, call "AuthenticateStoreUser". Up to this point everything looks fine. But when it calls "AuthenticateStoreUser" it shows the progress bar, then fails as if it timed out, and if I check the "raw" tab for the request it says
<missing xml data>.
Note that if I run "AuthenticateStoreUser" by itself, the call works fine. It is only after calling "RegisterUser" in the Groovy script that it behaves strangely. I have tried this with a few different calls and believe it is an issue with calling two different APIs.
Has anyone dealt with this scenario, or can provide further direction to what may be happening?
(I would have preferred to simply comment on the question, but I don't have enough rep yet)
Have you checked the Error log tab at the bottom when this occurs? If so, what does it say and is there a stacktrace you could share?
What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and POST variables are going to change on each run?
A quick, common example:
1. You go to a web page
2. Enter some information into a form
3. Click Save
4. Behind the scenes, a new record is entered in the database
5. You want to edit the record you just entered, so you go to another web page. Behind the scenes, it's passing the page a parameter with the database ID of the row you just created
When you're running step 5 of the above test, the page parameter/Database ID is going to change each time.
The workflow/strategy I'm currently using is:
1. Record a test using the above actions
2. Make a note of each place where a query-string variable may change from run to run
3. Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable
4. Replace all appropriate instances of the hard-coded parameter with the above variable
This works and can be automated to an extent. However, it can get tedious, error-prone, and fragile. Is there a better, commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
Sounds to me like you're on the right track. The best that can be achieved with JMeter is to extract page variables with a regular expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution and it becomes increasingly tricky to maintain or grow.
If you've reached this point, you may want to consider a tool that is more specialised for this sort of problem. Have a look at a web testing tool such as Watir: it will automatically handle changing POST parameters. You would still need to extract parameters if you have to do a database update, but using Watir allows for better code reuse, making the problem less painful.
We have had great success testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP GET/POST and use a BSF Assertion with JavaScript to do complex validation of the response. Hope it helps.