Our application runs on 35 web servers and exposes around 100 different APIs.
These APIs call each other internally and are also executed independently.
We have automated test cases for around 30 APIs, but some of our tests fail because other APIs on which the API under test depends have failed.
So how can our automated test cases tell us the reason for each test failure?
Example Scenario:
We have a test case to validate the API that fetches a user's bank account balance.
We hit this API through REST Assured and try to assert the expected output. The request goes first to the ledger server, which internally hits the auth server to validate the request's authenticity, then hits a counter server to log the fetchBalance request, then hits several other servers to get the user's correct balance, and finally responds to our request.
The problem is that this chain may break at any point, and whenever it does, the ledger server returns the same error string: "Something failed underhood". Debugging then becomes a challenge: we have to go to each server and search its logs to find the actual cause.
I want to write a solution that can trace the complete lifecycle of such a request and report where it actually failed.
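For context, the current test looks roughly like the sketch below (a minimal REST Assured example; the base URI, endpoint, parameters, and response fields are hypothetical). When the chain breaks, the only thing the assertion can surface is the opaque error body.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import org.junit.Test;

public class FetchBalanceTest {

    @Test
    public void fetchBalanceReturnsExpectedAmount() {
        given()
            .baseUri("https://ledger.example.com")   // hypothetical ledger endpoint
            .queryParam("userId", "12345")
        .when()
            .get("/fetchBalance")
        .then()
            .statusCode(200)
            // If any downstream server fails, all we see here is the generic
            // body "Something failed underhood" - no hint of the root cause.
            .body("balance", equalTo(1000.50f));
    }
}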
For this problem, you should first be aware of the most common failure reasons; then you can implement a strategy based on those reasons.
Example: when you send a request to the server, that API may perform security validations, several processing steps, and integrations with different components.
If you can identify these failure points, you can implement checkpoints against each of them:
If the request failed at security validation, there are specific error codes for that, so write your logic around them.
If the request failed at a processing step, there is likewise a set of possible reasons to check for.
If the request failed at an integration point, there will also be specific error codes, and you can implement logic around them.
Validate the state of the data before each interaction with a server. For example:
assert expression1 : expression2
where expression2 is evaluated and used as the failure message if expression1 fails. (This is Groovy syntax, but you can adapt it as needed; Java's assert statement has the same form.)
An example expression2 message could be something like: "Failure occurred when trying to send 'so-and-so' request!".
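Putting the checkpoint idea and the assert-with-message idea together, here is a minimal Java sketch using REST Assured (the auth, counter, and ledger endpoints, the user id, and the messages are all hypothetical placeholders for whatever checkpoints your servers actually expose):

import static io.restassured.RestAssured.given;

import io.restassured.response.Response;

public class FetchBalanceCheckpoints {

    // Run with "java -ea ..." so the assert statements are enabled.
    public static void main(String[] args) {
        // Checkpoint 1: security validation (hypothetical auth endpoint).
        Response auth = given()
                .baseUri("https://auth.example.com")
                .queryParam("userId", "12345")
                .get("/validate");
        assert auth.statusCode() == 200
                : "Failure occurred at security validation, status=" + auth.statusCode();

        // Checkpoint 2: processing step - the counter server that logs the
        // fetchBalance request (hypothetical health endpoint).
        Response counter = given()
                .baseUri("https://counter.example.com")
                .get("/health");
        assert counter.statusCode() == 200
                : "Failure occurred at the counter/processing step, status=" + counter.statusCode();

        // Final integration call: only reached if every checkpoint above
        // passed, so a failure here points at the ledger integration itself.
        Response balance = given()
                .baseUri("https://ledger.example.com")
                .queryParam("userId", "12345")
                .get("/fetchBalance");
        assert balance.statusCode() == 200
                : "Failure occurred when trying to send the fetchBalance request, body=" + balance.asString();
    }
}

Each assert message names the stage that failed, so the test report itself points at the broken link in the chain instead of only showing the generic ledger error.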
Here is a situation that I want to test end to end, but I'm not sure of the best way. During a specific workflow, an action requires the backend to make a REST request. This request should never fail, but in the exceptional case that it does (network connectivity or unexpected downtime), I want to at least handle it gracefully, and I want to use Selenium to check that it would be handled gracefully in the UI. However, the design dictates that there should be no way to get into this error state from the UI through normal use.
The question is: should I code into the application some way of triggering this exception via frontend actions just so Selenium can check that it's handled gracefully? Would that make the test too synthetic to be useful? Or should I just not create an automated test for this requirement and pray that it never occurs?
We have a concurrency issue in our system. This one occurs mainly during burst load through our API from an external system and is not reproducible manually.
So I would like to create a Gatling test to 1) reproduce it whenever I want and 2) check that we have solved the issue.
1) The first point is done: I have created two requests that check for status 201, and I run them with many users.
2) The issue allows the creation of two resources with the same unique value. The expected behaviour is that one resource is created and the others fail with status 409. But I have no idea how to check that exactly one of the requests completes with 201 while all the others fail with 409.
Can we do a kind of post-check on all requests with Gatling?
Thanks
Store the results already seen in a global ConcurrentHashMap and compute the expected value in the is check with a function, based on presence in the map (201 if the key is missing, 409 if it already exists).
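A rough sketch of that idea using Gatling's Java DSL; the URL, payload, and injection profile are placeholders, and note the ordering assumption called out in the comment:

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;
import java.util.concurrent.ConcurrentHashMap;

public class DuplicateCreationSimulation extends Simulation {

    // Shared across all virtual users: remembers which unique values have
    // already been claimed by a request.
    private static final ConcurrentHashMap<String, Boolean> seen = new ConcurrentHashMap<>();

    HttpProtocolBuilder httpProtocol = http.baseUrl("https://target.example.com"); // placeholder

    ScenarioBuilder scn = scenario("duplicate creation").exec(
        http("create resource")
            .post("/resources")
            .body(StringBody("{\"uniqueValue\":\"abc\"}")).asJson()
            // Expected status is computed per request: 201 for the first one
            // that claims the key, 409 for every other one. This assumes the
            // request whose check runs first is the one that actually won.
            .check(status().is(session ->
                seen.putIfAbsent("abc", Boolean.TRUE) == null ? 201 : 409))
    );

    {
        setUp(scn.injectOpen(atOnceUsers(50)).protocols(httpProtocol));
    }
}

Because the map is static, it is shared by every virtual user in the same JVM; it would need to be cleared between runs if the simulation is executed repeatedly in one process.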
I don't think you can achieve what you're after with a check on the call itself, as Gatling users have no visibility of the results returned to other users, so you have no way of knowing whether the successful (201) request has already been made (short of some very messy hacking using a check transformer).
But you could use simulation-level assertions to do this.
So you have your request, where you assert that you expect a 201 response:
http("my request")
.get("myUrl")
.check(status.is(201))
This should result in all but one of these requests failing in the simulation, which you can then verify using an assertion...
setUp(
  myScenario.inject(
    ...
  )
).assertions(
  details("my request").successfulRequests.count.is(1)
)
I have a Web API 2 service that will be deployed across 4 production servers. When a request doesn't pass validation, a custom response object is generated and returned to the client.
A rudimentary example:
if (!ModelState.IsValid)
{
    var responseObject = responseGenerator.GetResponseForInvalidModelState(ModelState);
    return Ok(responseObject);
}
Currently the responseGenerator is aware of what environment it is in and generates the response accordingly. For example, in development it'll return a lot of detail, but in production it'll only return a simple failure status.
How can I implement a "switch" that turns details on without requiring a round trip to the database each time?
Due to the nature of our environment, using a config file isn't realistic. I've considered using a flag in the database and then caching it at the application layer, but environmental constraints make refreshing the cache on all 4 servers very painful.
I ended up going with the parameter suggestion and implementing a token system on the back end. If a debug token is present in the request, the service validates it against the database. If the token is valid and active, the service returns the additional detail.
This lets us control things from our end while keeping things simple for the vendors, and it only adds the extra database round trip while someone is actually debugging.
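The service in question is ASP.NET Web API 2, but the token-gating idea is framework-agnostic; purely as an illustration, here is a hypothetical Java sketch (the header name, repository interface, and class names are all made up):

import javax.servlet.http.HttpServletRequest;

// Hypothetical repository; in practice this would query the token table.
interface DebugTokenRepository {
    boolean isActive(String token);
}

public class DebugDetailSwitch {

    private final DebugTokenRepository tokens;

    public DebugDetailSwitch(DebugTokenRepository tokens) {
        this.tokens = tokens;
    }

    // Detailed errors are returned only when the caller supplied a debug
    // token that the database confirms is valid and active. Normal traffic
    // carries no token, so it never pays the extra database round trip.
    public boolean detailedErrorsEnabled(HttpServletRequest request) {
        String token = request.getHeader("X-Debug-Token"); // hypothetical header
        if (token == null || token.isEmpty()) {
            return false;
        }
        return tokens.isActive(token);
    }
}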
I'm trying to learn JMeter.
When I run a sample script that logs into a site using both valid and invalid credentials, it doesn't stop thread execution when an invalid credential is used, and the login is not recorded in the database.
Does it actually log in to the website, or does it only create a virtual login to simulate a similar environment? Is there any way to achieve this using samplers?
JMeter acts as a headless browser.
Whatever your browser with a UI does, JMeter can also do, except executing JavaScript. So, if you have recorded your script correctly, JMeter can log in to the actual application as well.
JMeter is not like QTP/Selenium. It does not know whether a credential is valid or invalid; it passes or fails the request based on HTTP status codes. If the response from the server comes back with a 200 status code, the request passes for JMeter; if the server responds with a 500, JMeter fails the request. But JMeter also provides a way to validate the response you get: assertions. You can use a Response Assertion to check whether the home page is shown, to confirm that the user has logged in successfully.
To stop the test on error, select the appropriate option under "Action to be taken after a Sampler error" in the Thread Group properties.
JMeter is a very nice tool; I have been using it for 2 years with no issues.
Good luck!
Does your script have a Config Element -> HTTP Cookie Manager? It needs cookies for the login to work.
If your script has many transactions at the same level as the login transaction and the option you select in your Thread Group is Continue, all transactions will be executed whether the login transaction passes or fails.
If you want the other transactions not to execute when the login fails, add a Regular Expression Extractor as a child of the login transaction to retrieve the text Dashboard, and put the other transactions into a Logic Controller -> If Controller. Suppose the Regular Expression Extractor's reference name is Dashboard and its default value is NotFound; the condition of the If Controller will then be "${Dashboard}"!="NotFound".
JMeter automatically treats 2xx and 3xx HTTP response codes as successful, so it won't be able to detect a failed login unless you explicitly tell it to check for the presence or absence of some specific content in the response data.
So if you add a Response Assertion, you will be able to conditionally fail the sampler and choose what to do in case of failure via "Action to be taken after a Sampler error" at the Thread Group level.
See the How to Use JMeter Assertions in Three Easy Steps guide for more details on assertions.
If you're unsure what a JMeter sampler is doing, you can check the request and response details via the View Results Tree listener. If you cannot simulate the login event, in the majority of cases it is due to a missing HTTP Cookie Manager and/or failed correlation of dynamic mandatory parameter(s) like ViewState, CSRF token, etc.
I'm inserting several ApexTestQueueItem records into an Org via the Partner API to queue the corresponding Apex Classes for asynchronous testing. The only field I'm populating is the ApexClassId. (Steps as per Running Tests Using the API)
After the tests have run and I retrieve the corresponding ApexTestResult record(s) the ApexLogId field is always null.
For the ApexLogId field, the help documentation gives this description:
Points to the ApexLog for this test method execution if debug logging is enabled; otherwise, null
How do I enable debug logging for asynchronous test cases?
I've used the DebuggingHeader in the past with the runTests() method but it doesn't seem to be applicable in this case.
Update:
I've found that if I add the user who owns the Salesforce session as a Monitored User under Administration Setup > Monitoring > Debug Logs, the ApexLogId will be populated. I'm not sure how to do this via the Partner API, or whether it is the correct way to enable logging for asynchronous test cases.
You've got it right. That's the intended way to get a log.