Gatling, testing concurrency issues

We have a concurrency issue in our system. This one occurs mainly during burst load through our API from an external system and is not reproducible manually.
So I would like to create a Gatling test to 1) reproduce it whenever I want and 2) check that we have solved the issue.
1) The first point is done: I have created two requests checking for status 201 and I run them with many users.
2) The issue allows the creation of two resources with the same unique value. The expected behaviour is that one is created and the others fail with status 409. But I have no idea how to check that exactly one of the requests completes with 201 while all the others fail with 409.
Can we do a kind of post-check on all requests with Gatling?
Thanks

Store the results already seen in a global ConcurrentHashMap and compute the expected value for the is check in a function, based on presence in the CHM (201 if the value is not yet present, 409 if it already is).
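A minimal sketch of that idea in the Gatling Scala DSL; the endpoint, request body and the single hard-coded unique value are assumptions for illustration only:
import java.util.concurrent.ConcurrentHashMap

import io.gatling.core.Predef._
import io.gatling.core.session.Session
import io.gatling.commons.validation._
import io.gatling.http.Predef._

// Shared across all virtual users: remembers which unique values have already been seen
val seen = ConcurrentHashMap.newKeySet[String]()

val createResource = http("create resource")
  .post("/resources") // hypothetical endpoint
  .body(StringBody("""{"uniqueValue":"abc"}"""))
  .check(status.is { _: Session =>
    // the first response checked for this value is expected to be 201, every later one 409
    val expected = if (seen.add("abc")) 201 else 409
    expected.success
  })
One caveat: the check runs in the order responses arrive, which may not match the order in which the server processed the requests, so it can occasionally flag the "wrong" request; the simulation-level assertion in the answer below does not have that problem.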

I don't think you can achieve what you're after with a check on the call itself, as Gatling users have no visibility of the results returned to other users, so you have no way of knowing whether the successful (201) request has already been made (short of some very messy hacking using a check transformer).
But you could use simulation-level assertions to do this.
So you have your request where you assert that you expect a 201 response:
http("my request")
.get("myUrl")
.check(status.is(201))
This should result in all but one of these requests failing in the simulation, which you can specify using an assertion:
setUp(
  myScenario.inject(
    ...
  )
).assertions(
  details("my request").successfulRequests.count.is(1)
)

How to pass values from one test to another in Munit4?

I have two test cases.
One test case tests the authentication and gets the token info in the header. I want to store it and use it in the other test case.
I tried the store processor in the first test case and tried retrieving it from the second test case, but it says "could not find the key". Is this approach wrong?
Also, I tried the queue/dequeue option. In this case I got:
Time out waiting for queue to return a result.
How to achieve this simple case?
Authentication should be mocked in all the other tests, since you'll be testing the functionality and not the actual response from the server.
Take a look at this link:
https://docs.mulesoft.com/munit/2.2/mock-event-processor
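A rough sketch of what such a mock could look like in MUnit 2 XML, assuming the authentication call is made by an HTTP Request processor named "Get token"; the processor, attribute and returned payload are assumptions about your flow:
<munit-tools:mock-when doc:name="Mock auth call" processor="http:request">
    <munit-tools:with-attributes>
        <munit-tools:with-attribute attributeName="doc:name" whereValue="#['Get token']"/>
    </munit-tools:with-attributes>
    <munit-tools:then-return>
        <munit-tools:payload value="#[{'access_token': 'dummy-token'}]"/>
    </munit-tools:then-return>
</munit-tools:mock-when>
With this in place, the second test case never needs the real token from the first one; it simply runs against the mocked response.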

Automated Debugging of failed API test cases

Our application has 35 web servers and around 100 different APIs running on them.
These APIs call each other internally and also execute independently.
We have automated test cases for around 30 APIs, but some of our tests fail because other APIs, on which the API under test depends, fail.
So how can we know, through our automated test cases, the reason for each test failure?
Example Scenario:
We have a test case to validate the API to fetch the User's Bank account balance.
Now we hit this API through Rest Assured and try to assert the expected output. The request goes first to the ledger server, which internally hits the auth server to validate the request's authenticity, then hits a counter server to log the fetchBalance request, then hits several other servers to get the correct balance of the user, and finally responds to our request.
But the problem is that this may break at any point, and when it breaks the ledger server always returns the same error string: "Something failed underhood". Debugging then becomes a challenge: we have to go to each server and search the logs to find the actual cause.
I want to write a solution which can trace the complete lifecycle of this request and report where it actually failed.
For this problem, you should be aware of the most common failure reasons. Then you can implement a strategy based on those failure reasons.
Example: when you send a request to the server, that API may involve security validations, several processing steps, and integrations with different components.
If you can identify such failure points, you can implement checkpoints against each of them:
If the request failed at security validation, there are known error codes for that; write your logic around them.
If there is a failure at a processing step, there is a known set of possible reasons.
If there is a failure at an integration point, there must also be some error codes; you can implement logic around them.
Validate the state of the data before each interaction with the server. For example:
assert expression1 : expression2
Where expression2 is evaluated and included in the assertion error message if expression1 fails. (This is a Groovy example, but you can adapt it as needed.)
An example expression2 message could be something like: "Failure occurred when trying to send 'so-and-so' request!".
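For instance, a minimal runnable Groovy sketch of that pattern (the status value is made up purely for illustration):
// Groovy's assert evaluates the expression after the colon and includes it in the
// AssertionError message when the condition fails.
def authStatus = 200 // pretend result of a hypothetical auth pre-check; set it to 401 to see the message
assert authStatus == 200 :
        "Failure occurred at security validation, got HTTP ${authStatus}"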

Karate API Testing - Reusing variables in different scenarios in the same feature file

Does Karate support a feature where you can define a variable in one Scenario and reuse it in other Scenarios in the same feature file? I tried doing that but get an error. What's the best way to reuse variables within the same feature file?
Scenario: Get the request Id
* url baseUrl
Given path 'eam'
When method get
Then status 200
And def reqId = response.teams[0].resourceRequestId
Scenario: Use the above generated Id
* url baseUrl
* print 'From the previous Scenario: ' + reqId
Error:
Caused by: javax.script.ScriptException: ReferenceError: "reqId" is not defined in <eval> at line number 1
Use a Background: section. Here is an example.
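For instance, a minimal sketch of such a Background: for the feature above (endpoint and field names taken from the question); the Background steps run before every Scenario, so reqId is defined in each of them:
Feature: reuse a value across scenarios via a Background

Background:
  # runs before every Scenario; use callonce if this should happen only once
  * url baseUrl
  Given path 'eam'
  When method get
  Then status 200
  * def reqId = response.teams[0].resourceRequestId

Scenario: use the id fetched in the Background
  * print 'from the Background:', reqId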
EDIT: the variable, if defined in the Background:, will be re-initialized for every Scenario, which is standard testing-framework "set up" behavior. You can use hooks such as callonce if you want the initialization to happen only once.
If you are trying to modify a variable in one Scenario and expect it to still have that modified value when the next Scenario starts, you have misunderstood the concept of a Scenario. Just combine your steps into one Scenario, because think about it: that is the "flow" you are trying to test.
Each Scenario should be able to run stand-alone. In the future the execution order of Scenario-s could even be random or run in parallel.
Another way to explain this is - if you comment out one Scenario other ones should continue to work.
Please don't think of the Scenario as a way to "document" the important parts of your test. You can always use comments (e.g. # foo bar). Some teams assume that each HTTP "end point" should live in a separate Scenario - but this is absolutely not recommended. Look at the Hello World example itself: it deliberately shows 2 calls, a POST and a GET!
You can easily re-use code using call so you should not be worrying about whether code-duplication will be an issue.
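For example, calling a shared feature file (the file name is made up) and reading a variable it defined:
Scenario: reuse common steps via call
  # every variable defined in the called feature is available on the result
  * def result = call read('classpath:common/get-request-id.feature')
  * print 'id from the called feature:', result.reqId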
Also - it is fine to have some code duplication, if it makes the flow easier to read. See this answer for details - and also read this article by Google.
EDIT: if you would like to read another answer that answers a similar question: https://stackoverflow.com/a/59433600/143475

Worklight JSON Store, can we get race conditions?

Worklight 6.1 on both Windows (colleague) and Mac (me), building a Hybrid app destined for an Android device, but to speed up development we do initial testing as a Mobile Web App in the Chrome browser on the desktop.
We get a weird symptom that I'm trying to narrow down to a reproducible test case. I think I see different behaviours when stepping in the debugger versus just letting it run. I want to check whether a certain coding pattern could be the cause of the symptom before I go any further.
Fundamental question: should we wait for the resolution of the promise returned by a JSONStore request for an action on a collection before issuing another request? More explanation below.
The overall intent is to load some data into the JSONStore, with some intelligent replace/merge action if a record is already present. Pseudo code:
for each record retrieved from back-end
    if ( record already present in Store )
        do some data merging
        replace record
    else
        add record
The application code actually works like this, considering just the add() case; the problem manifests when the store is empty and all records need to be added:
for each record to add
    addPromise = store.get().add(record);
    listOfPromises.insert(addPromise);
examine the list of promises, recording any errors
That is, there is no "wait" for an add to finish before issuing the next add request. Hence, in effect, we've initiated a set of adds "in parallel", whatever that might mean in JavaScript in Chrome.
The code appears to run just fine, with no errors reported. On the Android device it works reliably. In Chrome under normal running (no stepping in the debugger) we end up with no reported errors but only one record inserted - as though a snapshot of the initial "empty" store had been taken and each add worked on that "empty" copy.
After writing this I'm now pretty convinced that the coding pattern described above is vulnerable to a kind of race, and that the better approach is to build a list of documents to be added and insert them in a single operation.
A more detailed answer will be coming later, but I can now confirm that the statement above - that the coding pattern described is vulnerable to a kind of race, and that the better approach is to build a list of documents to be added and insert them in a single operation - is true. In the browser, the JSONStore does require that we wait for the result of one request before issuing another one. The recommended approach is:
// Chain the requests so the second JSONStore call starts only after the first one resolves
var dataToAdd = buildArrayOfDataToAdd(responseFromServer);
var dataToReplace = buildArrayOfDataToReplace(responseFromServer);
jsonstore.add(dataToAdd).then(function () {
    return jsonstore.replace(dataToReplace);
});