As the title describes: is there an equivalent of GitHub Actions' strategy matrix in CircleCI?
Maybe I'm just blind and couldn't find it. As far as I've seen, there's only this, which just talks about test parallelization.
If not, is there a way to avoid writing this over and over, just with a different parameter each time, in my workflow?
- upgrade:
    param: param1
- upgrade:
    param: param2
- upgrade:
    param: param3
Really feels like there should be a way to avoid calling the same job with just a different param over and over...
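EDIT: it looks like CircleCI's 2.1 config does have a matrix key for workflow jobs, which expands one job entry into one run per parameter value. A minimal sketch, reusing the upgrade job and param parameter from above (the workflow name is made up):

workflows:
  upgrade-all:
    jobs:
      - upgrade:
          matrix:
            parameters:
              param: [param1, param2, param3]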
Does Karate support a feature where you can define a variable in one scenario and reuse it in other scenarios in the same feature file? I tried doing that but get an error. What's the best way to reuse variables within the same feature file?
Scenario: Get the request Id
* url baseUrl
Given path 'eam'
When method get
Then status 200
And def reqId = response.teams[0].resourceRequestId
Scenario: Use the above generated Id
* url baseUrl
* print 'From the previous Scenario: ' + reqId
Error:
Caused by: javax.script.ScriptException: ReferenceError: "reqId" is not defined in <eval> at line number 1
Use a Background: section.
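A minimal sketch of that pattern, combined with callonce so the set-up runs only once (get-request-id.feature is a hypothetical called feature that defines reqId):

Background:
  * url baseUrl
  * def result = callonce read('get-request-id.feature')
  * def reqId = result.reqId

Scenario: Use the shared Id
  * print 'From the Background: ' + reqId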
EDIT: a variable defined in the Background: will be re-initialized for every Scenario, which is standard testing-framework "set up" behavior. You can use hooks such as callonce if you want the initialization to happen only once.
If you are trying to modify a variable in one Scenario and expect it to still have that modified value when the next Scenario starts, you have misunderstood the concept of a Scenario. Just combine your steps into one Scenario, because think about it: that is the "flow" you are trying to test.
Each Scenario should be able to run stand-alone. In the future, Scenario-s could even run in random order or in parallel.
Another way to explain this is - if you comment out one Scenario other ones should continue to work.
Please don't think of the Scenario as a way to "document" the important parts of your test. You can always use comments (e.g. # foo bar). Some teams assume that each HTTP "endpoint" should live in a separate Scenario - but this is absolutely not recommended. Look at the Hello World example itself: it deliberately shows 2 calls, a POST and a GET!
You can easily re-use code using call so you should not be worrying about whether code-duplication will be an issue.
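A minimal sketch of call-based re-use (the feature path and variable names are illustrative): the called feature's variables come back as a JSON result.

* def result = call read('classpath:common/create-user.feature')
* def userId = result.userId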
Also - it is fine to have some code duplication, if it makes the flow easier to read. See this answer for details - and also read this article by Google.
EDIT: here is another answer to a similar question: https://stackoverflow.com/a/59433600/143475
We're successfully using Karate to automate tests for REST and SOAP webservices. In addition, we have some legacy webservices, which are based on the Hessian web service protocol (http://hessian.caucho.com/).
Hessian calls are HTTP requests as well, so we would like to add them to our Karate test suites.
My first attempt was to use the Java interop feature, so the tests are implemented as Java code and the Java classes are called from within the feature files.
Example:
Scenario: Test offer purchase order
* def OfferPurchaseClient = Java.type('com.xyz.OfferPurchaseClient')
* def orderId = OfferPurchaseClient.createOrder('12345', 'xyz', 'max.mustermann@test.de')
* match orderId == '#number'
This approach is working, but I'm wondering if there is a more elegant way which would also use some more features of the Karate DSL.
I'm thinking about something like this (Dummy Code):
Scenario: Test offer purchase order
Given url orderManagementEndpoint
And path offerPurchase
And request serializeHessian(offerPurchase.json)
When method post
Then status 200
And match deserializeHessian(response).orderId == '#number'
Any recommendations/tips about how to implement such an approach?
I actually think you are already on the right track with the Java interop approach. The HTTP client is, in my honest opinion, a very small sub-set of the capabilities of Karate; there are so many other aspects, such as assertions, reports, parallel-execution, even load-testing etc. So I wouldn't worry about not using request, method, status etc.
Maybe OfferPurchaseClient.createOrder() can return a JSON.
Or what I would do is consider an approach like OfferPurchaseClient.request(Map<String, Object> payload) - where payload can include all the parameters including the path and expected response code etc.
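For example, a sketch of how that could look from the feature-file side, assuming OfferPurchaseClient gains a static request(Map) method that returns the parsed response as a Map (the method name and payload keys are hypothetical):

* def Client = Java.type('com.xyz.OfferPurchaseClient')
* def payload = { path: 'offerPurchase', customerId: '12345', expectedStatus: 200 }
* def response = Client.request(payload)
* match response.orderId == '#number'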
To put this another way, I'm suggesting that you write your own mini-framework and custom DSL around some Java code. If you want inspiration and ideas, please look at this discussion, it is an example of building a CLI testing tool around Karate: https://stackoverflow.com/a/62911366/143475
I'm trying to set up an integration/API test suite with Karate and am considering using Karate Netty for mocking required services. For the test setup, the system under test A (a Spring Boot app) is started up completely. The Karate tests are then executed by a Maven test run against this instance.
Service A depends on multiple other services; these need to be mocked away for the tests. To do so, my idea was to configure a running Karate Netty standalone instance as an HTTP proxy (done via JVM args of service A).
Now my idea was to create one test feature file: xyz-test.feature
And the required mocks for this file are defined in an associated mock feature file: xyz-mock.feature
(The test scenarios are rather complex and the responses of the external services could vary)
This means for a full test run I need to load up a couple of mock feature files. So:
What is the matching strategy for multiple mock feature files? Which scenario wins, so to speak?
Is there any way to ensure, that the right mock file is used for the associated test file?
(Clearly I could reconfigure the running standalone instance and advise it to use xyz-mock.feature next.
But this would stop me from using parallel execution for my API tests, right?)
I already thought about reusing the Correlation-Id, which I can send in for each test and then match against in the mock file (it is also sent to all called services). But:
Is there a way to define a global matcher per mock file?
It sounds like you need only one mock file. You could boot 2 on different ports if you wanted, but there is no way to "merge" them into one port - if that is what you were looking for.
In my experience, you will be able to have a single mock take care of all your edge cases. This is because Karate's approach is un-conventional: you pretty much write a stateful server. But by keeping variables in memory and some clever JSON-path, you can simulate CRUD with very few lines of code: https://github.com/intuit/karate/tree/master/karate-netty#background
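For illustration, a minimal stateful mock in that style (a sketch along the lines of the linked example, not a copy of it):

Feature: stateful cats mock

Background:
  * def cats = {}
  * def uuid = function(){ return java.util.UUID.randomUUID() + '' }

Scenario: pathMatches('/cats') && methodIs('post')
  * def cat = request
  * def id = uuid()
  * cat.id = id
  * cats[id] = cat
  * def response = cat

Scenario: pathMatches('/cats/{id}') && methodIs('get')
  * def response = cats[pathParams.id]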
You can use only one at a time, by design.
Given the above limitation, here's an interesting idea: add something like an extra pathMatches('/__test/reset') scenario that cleans up your state and sets the Background variables back to things like * def cats = [] (sketched below). Now in each feature, just call the special "reset" URL at the start. The good thing is that Karate is thread-safe. Another idea, as you said, is to maintain two or three different variables and use some logic to "route" based on a header - again very easy IMO. Use a map of maps, e.g.:
* def data = { cats1: {}, cats2: {}, cats3: {} }
And you can get the header, e.g. if it is mode: cats1
* def mode = karate.get('requestHeaders.mode[0]')
* def cats = data[mode]
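And the "reset" Scenario mentioned above could look something like this (the /__test/reset path is just a convention):

Scenario: pathMatches('/__test/reset')
  * def data = { cats1: {}, cats2: {}, cats3: {} }
  * def response = { reset: true }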
Not sure if this answers your question, but if the last Scenario has an "empty" description, it is a "catch all" and can in theory delegate to another server (or mock): https://github.com/intuit/karate/tree/develop/karate-netty#proxy-mode
Your question is a little confusing, so you may have to edit and re-word it if I haven't understood.
EDIT: using multiple mock files should be possible in 1.1.0 onwards: https://github.com/intuit/karate/issues/1566
Is there a way to get a list of all the scenarios executed, with their status (passed or failed), in a Java object?
I know that we have JSON and XML reports, but I just need a simple list of scenarios along with their status without having to parse any other file.
From what I found in the documentation, we can use the following code:
KarateStats stats = CucumberRunner.parallel(getClass(), 5, "target/surefire-reports");
But stats only has the number of failed scenarios, along with the execution time.
I think your best bet is to use the karate.info API, look at the last row of this table: https://github.com/intuit/karate#the-karate-object
And you have the afterScenario and afterFeature hooks. I think if you instantiate an empty JSON array [] as a variable in the Background using callonce, you should be able to * eval myArray.push(info) into it after each Scenario. Or try creating a Java singleton and collecting everything into it; that may be easier. Personally I think attempting to parse the Cucumber JSON may be the 'right' thing to do, but hey.
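A sketch of the singleton idea, assuming a hypothetical ResultCollector class with a static add method:

Background:
  * def Collector = Java.type('com.example.ResultCollector')
  * configure afterScenario = function(){ var info = karate.info; Collector.add(info.scenarioName, info.errorMessage == null) }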
You may need to upgrade to 0.7.0.RC7 which is available. Please refer to the upgrade guide: https://github.com/intuit/karate/wiki/Upgrading-To-0.7.0