How to do soft assertion with karate? [duplicate]

This question already has an answer here:
Is it possible to do soft assertion in the karate
(1 answer)
Closed 1 year ago.
I have a feature that calls two other features, something like this:
When call read(ser.getCarList) headers
When call read(ser.getTaxes) headers
So the first feature, getCarList, has two validations:
When method Get
* configure continueOnStepFailure = true
Then status 200
And match response == read('this:getCarAssertion')
* configure continueOnStepFailure = true
I have tried with the new keyword, but when I get a status code 200 with a bad response body, the next feature (getTaxes) is not executed.

continueOnStepFailure is a new keyword meant for validating results without failing immediately on the first failure. Its purpose is assertions and validations, so that as much information as possible can be validated when asserting the results of tests.
To avoid its usage as a pure if-condition across several steps (with unexpected consequences), the default behavior of * configure continueOnStepFailure = true will only continue the execution if the failure happens in a match step, and once you disable the mechanism with * configure continueOnStepFailure = false the test will fail (but still provide details for each step within the continueOnStepFailure block). This is because match is the recommended keyword for any sort of validation and is how you can leverage the powerful JSON assertions library etc.
It is also recommended to explicitly set * configure continueOnStepFailure = false after the set of match keywords, so there are no unexpected behaviors after that conscious decision to continue evaluating keywords after a failure.
That being said, there are ways to extend and configure the behavior of continueOnStepFailure beyond the default. The keyword also takes a JSON input instead of a boolean, which allows more extensibility. E.g. the default behavior can be represented as follows:
* configure continueOnStepFailure = { enabled: true, continueAfter: false, keywords: ['match'] }
This means the continueOnStepFailure mechanism will be enabled, the scenario execution will not continue after the mechanism is disabled, and it will only accept failures that happen in the match keyword. Note that if you set continueAfter to true, the scenario will continue to execute the remaining steps, but the scenario itself will still be marked as failed (with appropriate output in the report and the typical failed behavior for any caller of that scenario). I highly discourage setting continueAfter to true.
For your specific use case, the status keyword is definitely within the boundaries of the assertions I've described. status 200 is just a shortcut for match responseStatus == 200. Very likely we should add status to the default behavior, given that it is a match assertion. With the extended JSON configuration you can do the following for your use case:
When method Get
And configure continueOnStepFailure = { enabled: true, continueAfter: false, keywords: ['match', 'status'] }
Then status 200
And match response == read('this:getCarAssertion')
And configure continueOnStepFailure = false
Some additional examples can be found in the unit tests in this pull request. For quick reference, this is how your Karate test report will look:

Related

reuse karate tests from other feature file by passing in params [duplicate]

This question already has an answer here:
Pass Json to karate-config.js file
(1 answer)
Closed 1 year ago.
We have a set of services that all expose certain common endpoints, such as a health check, version information etc. I am trying to use Karate to write smoke tests for these multiple services in a reusable way, so that I can just pass in the service name and endpoint and have the tests executed for each service.
basic.feature
Feature: Smoke Test. verify health check and version and index are ok
Scenario: Verify that test server health check is up and running
Given url '#(baseUrl)'
Given path '/health'
When method get
Then status 200
And match response == "'#(name)' ok"
Given path '/version'
When method get
Then status 200
And match response contains {'#(name)'}
testServices.feature
Feature: Smoke Test for services.
Scenario: Verify that test server health check is up and running
* call read('basic.feature') { name: 'service1' , baseUrl : service1Url }
* call read('basic.feature') { name: 'service2' , baseUrl : service2Url }
karate-config.js
function fn() {
var env = karate.env; // get java system property 'karate.env'
karate.log('karate.env system property was:', env);
if (!env) {
env = 'local'; // a custom 'intelligent' default
}
var config = { // base config JSON
appId: 'my.app.id',
appSecret: 'my.secret',
service1Url: 'https://myserver/service1',
service2Url: 'https://myserver/service2'
};
// don't waste time waiting for a connection or if servers don't respond within 5 seconds
karate.configure('connectTimeout', 5000);
karate.configure('readTimeout', 5000);
return config;
}
When I run this I get an error suggesting that the baseUrl is not being picked up when passed in:
20:27:22.277 karate.org.apache.http.ProtocolException: Target host is not specified,
http call failed after 442 milliseconds for url: /health#(baseUrl)
20:27:22.278 cas/src/test/java/karate/smoke/basic.feature:7 When method get
http call failed after 442 milliseconds for url: /health#(baseUrl)
cas/src/test/java/karate/smoke/basic.feature:7
I looked at https://intuit.github.io/karate/#code-reuse--common-routines but could not figure out how to use the same tests while passing in different endpoints.
Or maybe, since I am totally new to Karate, there is a much better way of doing this than what I have outlined?
Thank you for your time.
Edit - I am trying to test different microservices in the same environment, not trying to switch between different environments.
This is not the recommended approach. When you have different URLs for different environments, you should switch environments using the approach in the documentation (setting karate.env) and NOT depend on re-use via "call" etc.
Example: https://stackoverflow.com/a/49693808/143475
And if you really want you can run suites one after the other switching the karate.env, although that is rare.
Or if you're just trying to do "data-driven" testing, there are plenty of ways; just read the docs and search Stack Overflow for Scenario Outline: https://stackoverflow.com/search?tab=newest&q=%5bkarate%5d%20Scenario%20Outline
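If you go the Scenario Outline route, a minimal sketch could look like the following. The URLs are taken from the question's karate-config.js; the feature name and paths are assumptions, not a verified solution:

```cucumber
Feature: data-driven smoke test (sketch)

Scenario Outline: health check for <name>
  # each Examples row becomes a set of variables for one execution
  Given url baseUrl
  And path 'health'
  When method get
  Then status 200

  Examples:
    | name     | baseUrl                   |
    | service1 | https://myserver/service1 |
    | service2 | https://myserver/service2 |
```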
If you are trying to do this "clever" re-use using "call", I strongly recommend that you don't; please read this for why: https://stackoverflow.com/a/54126724/143475
EDIT - I think you ran into this problem, read the docs please: https://github.com/intuit/karate#rules-for-embedded-expressions
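To make the point concrete: per the embedded-expression rules linked above, #(baseUrl) is only evaluated inside JSON or XML, so on a Given url line it is passed through literally - which matches the /health#(baseUrl) in the error log. A hedged sketch of the called feature, using the values passed via call as plain variables (variable names taken from the question):

```cucumber
Scenario: Verify that test server health check is up and running
  # baseUrl and name arrive as plain variables from the caller
  Given url baseUrl
  And path 'health'
  When method get
  Then status 200
  And match response == name + ' ok'
```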

Can I use the status shortcut in Karate to check a response class instead of just one code

I am using Karate to test other people's implementations of an API. When checking response status codes I often need to accept more than one response. For example in response to a PUT I may see 200, 201, 205 - they are equally valid. I know I can use
Then assert responseStatus >= 200 && responseStatus < 300
to check for success but the shortcut really helps readability of the tests.
Would you consider an enhancement to the language to support response classes such as:
success (meaning 200-299)
redirect (meaning 300-399)
fail (meaning 400-499)
2xx
3xx
4xx
If I were to look at submitting a PR for this would you agree it is useful and would you have a preferred mechanism? Would these classes be best as parsed symbols or strings that force a different match to be implemented when it detects the status is not a number?
Yes, my first reaction is to not add a new keyword. Also, to be honest, this seems to be a rare requirement - I've never had this ask before; I guess API testing generally means predictable responses.
My proposal is that you can write a custom function:
* def statusSuccess = function(){ var status = karate.get('responseStatus'); return status >= 200 && status < 300 }
* url 'https://httpbin.org'
* path 'status', 200
* method get
* assert statusSuccess()
EDIT - also see this: https://twitter.com/KarateDSL/status/1364433453412851714

Passing a variable from one feature file into another as a part of request URL(not query parameter) in Karate

I have a feature that generates a vehicle id, which is stored as a variable in the feature. I want to pass this id as part of the request URL in another feature, as a sort of tear-down activity.
This is how I called it from a feature called activateVehicle.feature
Scenario : Activate a vehicle
* header X-API-Key = apiKey
* def result = callonce read('createVehicle.feature')
* def vehicleId = result.vId
# some workflow steps
........
........
........
# tear down - delete the vehicle created
* call read('deleteVehicle.feature'){ vehcileId: '#(vehicleId)' }
In the called feature - deleteVehicle.feature
Scenario: Delete a vehicle
* header X-API-Key = apiKey
* def myurl = 'https://xxx/vehicle'+ vehicleId +'?permanent=yes'
Given myurl
And request ''
When method delete
Then status 200
Am I right in this approach? I want to reuse deleteVehicle.feature in other workflows as well, hence not doing this operation in the same activateVehicle.feature (which would have been very easy). I referred to the documentation too, but it shows how variables can be used in the request body, not as a variable that can be used anywhere in the called feature - and I don't want to use it in the request body, but as part of the request URL. The documentation's example:
Scenario:
Given url loginUrlBase
And request { userId: '#(username)', userPass: '#(password)' }
I also referred to How can I call a variable from one feature file to another feature file using Karate API Testing. I followed suit for a solution but am getting a javascript error:
feature.deleteVehicle: -unknown-:11 - javascript evaluation failed:
'https://xxx/vehicle'+ vehicleId +'?permanent=yes', ReferenceError: "vehicleId"
is not defined in <eval> at line number 1
feature.SVT: SVT.feature:80 - javascript evaluation failed: vehicleId: '#(vehicleId)' }, <eval>:1:14 Expected eof
but found }
vehicleId: '#(vehicleId)' }
^ in <eval> at line number 1 at column number 14
Can someone kindly help and advise please?
Can you simplify your example? The only thing I can make out is that you need a space after the called feature and before the call argument (also note vehcileId is misspelled; it must match the vehicleId the called feature uses):
* call read('deleteVehicle.feature') { vehicleId: '#(vehicleId)' }
The pattern we generally recommend is set-up, not tear-down, as tear-down has a risk of not executing if you hit an error. That said, please see hooks: https://github.com/intuit/karate#hooks
Sometimes you should just keep it simple and call a feature (with args) only where you need it.
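For completeness, a hedged sketch of how the call plus URL concatenation could look. The xxx host is kept from the question as-is; I have assumed a / separator before the id and moved the query string to the param keyword, which is equivalent:

```cucumber
# caller - note the space before the argument and the vehicleId spelling
* call read('deleteVehicle.feature') { vehicleId: '#(vehicleId)' }

# deleteVehicle.feature - vehicleId arrives as a plain variable
Scenario: Delete a vehicle
  * header X-API-Key = apiKey
  Given url 'https://xxx/vehicle/' + vehicleId
  And param permanent = 'yes'
  And request ''
  When method delete
  Then status 200
```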

Counting the number of response codes in JMeter 4.0

I run some load tests (all endpoints) and we have a known issue in our code: if multiple POST requests are sent at the same time, we get a duplicate error based on a timestamp field in our database.
All I want to do is count the timeouts (based on the message received, "Service is not available. Request timeout") in a variable and accept this as normal behavior (i.e. not fail the tests).
For now I've added a Response Assertion for this (in order to keep the tests running), but I cannot tell if or how many timeouts actually happen.
How can I count this?
Thank you
I would recommend doing this as follows:
Add JSR223 Listener to your Test Plan
Put the following code into "Script" area:
if (prev.getResponseDataAsString().contains('Service is not available. Request timeout')) {
prev.setSampleLabel('False negative')
}
That's it - if a sampler contains Service is not available. Request timeout in the response body, JMeter will change its label to False negative.
You can even mark it as failed by adding a prev.setSuccessful(false) line to your script. See the Apache Groovy - Why and How You Should Use It article for more information on what else you can do with Groovy in JMeter tests.
If you just need the count based on the response message, you can save the performance results in a CSV file using the Simple Data Writer (configured for CSV only) and then filter the CSV on the response message to get the required count. Or you can use the Display only "Errors" option to get all the errors and then filter on the expected error message.
If you need it at runtime, you can use the Aggregate Report listener and its "Errors" checkbox to get the failure count, but this will include other failures as well.
But if you need the count at run time in order to use it later, that is a different case. I am assuming that it is not.
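To illustrate the CSV-filtering approach with a quick sketch - the file name and columns below are fabricated for the demonstration; substitute your real Simple Data Writer output:

```shell
# fabricated sample of a results CSV, only to demonstrate the filter
cat > results.csv <<'EOF'
timeStamp,label,responseCode,responseMessage
1618000000000,POST /order,200,OK
1618000000100,POST /order,503,Service is not available. Request timeout
1618000000200,POST /order,503,Service is not available. Request timeout
EOF

# count the known timeouts (prints 2 for the sample above)
grep -c 'Service is not available. Request timeout' results.csv
```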
Thanks,

How to test if my application does not send a message with SoapUI?

I have a Java EE application that processes a bunch of messages from different interfaces. (Some of the) functional tests are performed with SoapUI.
In one of the cases I have created a SOAP VirtResponse step that receives the output of my application and checks the values in the received message. The test has the following steps:
Datasource step to load input and expected output (multiple scenarios)
JMS step to send input to application on a specific interface.
SOAP step to receive application output on another interface (no validation).
Groovy script to check the results (e.g. no message received, or a message received with specific values). See below for the script; I couldn't get it to format correctly in between the list items.
Datasource loop to step 2.
There is a scenario (well, there are more scenarios) in which the input should not generate an output. I want to check if my application did not send an output in such a scenario.
Strategy 1:
I have added a fourth Groovy step in which I validate that the result of step 3 is an empty string. To make the test pass, I had to disable the "Fail TestCase on Error" checkbox in the TestCase Options. This works for happy-path executions. However, if an error does occur (e.g. the application sent a response when it was not supposed to, or sent a wrong response), the entire TestCase is still marked as passed (because of the checkbox) and only the specific step deep down in the logs is failed. This makes it hard to see the results of the entire test suite.
Attempted strategy 2:
Started out by adding a conditional test step that skips step 3 based on the input. However, that way I no longer validate that my application does not send a message when it is not supposed to.
What is the best way to check these kinds of scenarios?
EDITS:
The entire testcase should fail if one of the scenarios from the datasource fails. (It is not a problem if this means that some scenarios were not evaluated yet)
Groovy script:
// Get flag from datasource that indicates if a message should be received
def soapBerichtOntvangen = context.expand('${DataSourceUISBerichten#SoapBerichtOntvangen}' );
// Get the message from the previous step.
def receivedSoapRequest = context.expand( '${SOAPVirtResponse#Request#declare namespace out=\'http://application/messageprocessing/outbound\'; //out:SendOutboundMessage[1]/Message[1]}' )
// If we don't expect a message the flag is set to "N"
if(soapBerichtOntvangen=="N"){
assert(receivedSoapRequest=="")
} else if(receivedSoapRequest!=null && receivedSoapRequest!=""){
def slurpedMessage = new XmlSlurper().parseText(receivedSoapRequest)
def messageType=slurpedMessage.MessageHeader.MessageReference.MessageType
// Get expected values from context
def verwachtMessageType = context.expand('${DataSourceOutboundCIBerichten#messageType}' )
assert(String.valueOf(messageType)==verwachtMessageType)
} else {
// Should have received a message, but none came up.
assert(false)
}