Error: validating multiple API call responses in one scenario in Karate

I am trying to use two API calls (a POST and a PUT) in one scenario in the Karate framework, but I am not able to validate the second API call's response. When I try something like response.id, it returns the first API call's response.
Is there a solution for this?

In your feature file, you print the response in the When step instead of after Then, which means it shows the previous response: as per the documentation, response is overwritten whenever a new HTTP request is made.
This:
...
When method put
And print 'jokerresult-->',response.joker_result
Then status 201
Should be:
...
When method put
Then status 201
And print 'jokerresult-->',response.joker_result
Let me know if this works for you.
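A related pattern (variable names and payloads below are illustrative, not from the original question) is to snapshot each response into its own variable right after the call, since response is overwritten by the next HTTP request; both snapshots then stay available for assertions at the end:

```gherkin
Scenario: validate both responses in one scenario
  # baseUrl is assumed to be defined in karate-config.js
  Given url baseUrl
  And request { name: 'first' }
  When method post
  Then status 201
  # snapshot before the next call overwrites 'response'
  * def postResponse = response

  Given path 'update'
  And request { name: 'second' }
  When method put
  Then status 201
  * def putResponse = response

  # both responses can now be validated
  * match postResponse.id == '#notnull'
  * match putResponse.joker_result == '#notnull'
```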

Related

Karate DSL: How to initiate a long-running REST call, perform other REST calls, then assert on the first request's response

I am learning Karate DSL in order to determine if it is a viable automation solution for our new API.
We have a unique environment in which we have a REST API to test, but also use REST services to perform other required actions while the original request is awaiting a response. The REST calls perform robotic actions to manipulate hardware, query our servers, etc.
We need the ability to send a REST request, perform various other REST requests (with assertions) while awaiting a response to the first request. Then, finally assert that the original request gets the correct response payload based on the actions performed between the first request and its response.
Rough example:
Feature: Async test
Background:
* def defaultAssertion = { success: true }
Given url 'http://foo/'
Scenario: Foo test
Given path 'start' <- start long running call
When method get
And request { externalId: 'id1'}
Given path 'robot-action' <- perform another call that resolves immediately
When method get
Then status 200
* match response contains deep defaultAssertion
Then status 200 <- somehow assert on first requests' response
* match response contains deep defaultAssertion
Obviously the example above does not work, but I am hoping we can structure our tests similarly.
I know tests can run in parallel, but I am not sure how to encapsulate them as "one test" vs "multiple tests" and control order (which is required for this to work properly).
There is documentation on Async behavior, but I found it difficult to follow. If anyone can provide more context on how to implement the example I would greatly appreciate it.
Any suggestions would be warmly welcomed and examples would be fantastic. Thanks all!
Actually, being able to wait for an HTTP call to complete while doing other things in the meantime is something Karate cannot do by default. This question has me stumped, and it has never been asked before.
The only way I think we can achieve it in Karate is to create a thread and then run a feature from there. There is a Java API to call a feature, but when you are doing all that, maybe you are better off using some hand-crafted Java code to make an HTTP request. Then your case aligns well with the example mentioned here: https://twitter.com/getkarate/status/1417023536082812935
So to summarize, here is my opinion of the best plan of action:
For this "special" REST call, be prepared to write Java code that makes the HTTP call on a separate thread.
Call that Java code at the start of your Karate test (using Java interop). You can pass the karate JS object instance so that you can call karate.listen() when the long-running job is done.
Actually, instead of the above, just use a CompletableFuture and a Java method call (just like the example linked above). That step won't block, and you can do anything you want in the meantime.
After you have done all the other work, use the listen keyword or call a Java method on the helper you used at the start of your test (just like the linked example).
That should be it! If you make a good case for it, we can build some of this into Karate, and I will think it over as well.
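The CompletableFuture approach can be sketched in plain Java. This is a minimal, self-contained illustration, not the actual test: the long-running HTTP call is simulated with a sleep, and in a real test it would use an HTTP client and be invoked from the feature file via Java interop.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCallSketch {

    // Simulates the long-running "start" request; in a real test this
    // method would make the actual HTTP call with an HTTP client.
    static String longRunningCall() {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return "{ \"success\": true }";
    }

    public static void main(String[] args) {
        // Fire the long-running call on a separate thread; this does not block.
        CompletableFuture<String> first =
                CompletableFuture.supplyAsync(AsyncCallSketch::longRunningCall);

        // ... meanwhile, perform the other "robot-action" requests here ...

        // Finally, block until the first request completes and assert on it.
        String response = first.join();
        System.out.println(response.contains("true")); // prints "true"
    }
}
```

From the feature file, you would call such a Java method at the start of the scenario, run the other requests with normal Karate steps, and only then join (or listen) for the first response.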

How to add a payload and use the POST method to get a response from an API in TwinCAT 3?

I am not getting an appropriate response like the one I get in the Postman application.
I tried a few online APIs which do give me a response, but I need a way to add a payload in my program. I am using the Tc3_IotBase library from the TwinCAT library repository.
The HTTP POST request payload is built by the FB_IotHttpRequest function block when you call one of its send methods.
You can use two options here:
FB_IotHttpRequest.SendRequest(...): in this case you need to create the JSON payload manually.
FB_IotHttpRequest.SendJsonDomRequest(...): in this case you just pass a reference to the FB_JsonDomParser that contains the JSON document.

How do I verify that the API returns 200 OK as the status code in JMeter

So, I recently started learning JMeter and want to create an API validation script for a website called dummyapi.io, where I got an app-id. But I don't understand how to use this in JMeter. What does it mean to use the app-id in the API request header?
JMeter automatically considers HTTP status codes below 400 as successful, so if your app returns status 200 it will be treated as passed.
If you want to add an extra layer of checking and verify that this particular request returns a 200 response code:
Add Response Assertion as a child of the request you want to validate
Configure it as follows:
Field to Test: Response Code
Pattern Matching Rules: Equals
Patterns to Test: 200
Similarly you can add additional pass/fail criteria like:
presence/absence of text in the response body/message/headers/URL
compare a JMeter Variable with anticipated value
check response duration
check response size
etc. See the How to Use JMeter Assertions in Three Easy Steps article to learn more about the JMeter assertions concept.
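The Response Assertion configured above boils down to an equality check on the response code string. As a plain-Java sketch of that logic (a standalone illustration, not the JMeter API; inside a Beanshell or JSR223 assertion you would read the code via prev.getResponseCode() instead):

```java
public class StatusCheck {

    // Mirrors: Field to Test = Response Code, Rule = Equals, Pattern = 200
    static boolean passes(String responseCode) {
        return "200".equals(responseCode);
    }

    public static void main(String[] args) {
        System.out.println(passes("200")); // prints "true"
        System.out.println(passes("404")); // prints "false"
    }
}
```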

How to use a Postman Mock Server

I have followed the guide here to create a Postman mock for a Postman collection. The mock seems to have been created successfully, but I have no idea how to use the mock service.
I've been given a url for the mock, but how do I specify one of my requests? If I issue a GET request to https://{{mockid}}.mock.pstmn.io I get the following response:
{
"error": {
"name": "mockRequestNotFoundError",
"message": "We were unable to find any matching requests for the mock path (i.e. undefined) in your collection."
}
}
According to the same guide mentioned above, the URL to "run the mock" is https://{{mockId}}.mock.pstmn.io/{{mockPath}}, but what exactly is mockPath?
Within my collection I have plenty of folders, and inside one of these folders I have a request with an example response. How do I access this example response through the mock? Thanks for all help in advance!
Here's the Postman Pro API, which doesn't mention much more than creating and reading mocks.
I had the same issue, seeing an irrelevant error, but I finally found the solution. Unfortunately I cannot find a reference on the Postman website, but here is my solution:
When you create a mock server, you define your first request (like GET api/v1/about). The mock server will be created, but even when you obtain your API key and put it in the request header (as x-api-key), it still returns an error. It doesn't make sense, but it turned out that defining the request is not enough. For me it only started returning a response when I added an Example for the request.
So I suggest that for each request you create, you also create at least one example. The request you send will be matched against the examples you have created, and the matched response will be returned. You can define the body, headers and HTTP status code of the example response.
I have no Pro Postman subscription and it worked for me using my free subscription.
(Screenshots in the original answer showed: the menu for adding an example or selecting one to edit; the UI for defining the example's body, headers and status; how to go back to the request page; and the correct reply returned based on the example.)
If the request in the example is a GET on api.domain.com/api/foo, then the mockPath is /api/foo and your mock endpoint is a GET call to https://{{mockid}}.mock.pstmn.io/api/foo.
The HTTP request method and the pathname, as shown in the image below, constitute a mock.
For ease of use, the mock server is designed to be used on top of collections. The request in an example is used as-is, along with the response attached to it. The name of the folder or collection is not part of the pathname and is not factored in anywhere when using a mock. Mocking a collection means mocking all the examples within your collection. An example is a tuple of request and response.
An optional response status code if specified lets you fetch the appropriate response for the same path. This can be specified with the x-mock-response-code header. So passing x-mock-response-code as 404 will return the example that matches the pathname and has a response with status code of 404.
Currently, if there are examples with the same path but different domains and the mock is unable to distinguish between them, it will deterministically return the first one.
Also if you have several examples for the same query :
Mock requests accept another optional header, x-mock-response-code, which specifies which integer response code your returned response should match. For example, 500 will return only a 500 response. If this header is not provided, the closest match of any response code will be returned.
Optional headers like x-mock-response-name or x-mock-response-id allow you to further specify the exact response you want by the name or by the uid of the saved example respectively.
Here's the documentation for more details.
{{mockPath}} is simply the path of your request. You should start by adding an example to any of your requests.
Example:
Request: https://www.google.com/path/to/my/api
After adding your mock server, you can access your examples at:
https://{{mockId}}.mock.pstmn.io/path/to/my/api
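To make the mapping concrete (the domain and path below are illustrative placeholders, not values from the original question): if the saved example's request is a GET on api.example.com/api/foo, the mock calls would look like this:

```
# saved example request in the collection
GET https://api.example.com/api/foo

# mock call: same path, mock domain
curl https://{{mockId}}.mock.pstmn.io/api/foo

# select the saved example whose response has status 404
curl -H "x-mock-response-code: 404" https://{{mockId}}.mock.pstmn.io/api/foo
```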

Is it possible to determine the http request method (POST/GET) using a variable?

I am using a csv file as the basis for my requests. The thing is, I have some GET requests and some POST requests. Is there a way to use the same http request element for both request types where the method will be determined by the variable from the csv file?
This is really simple using a Beanshell PreProcessor.
Add a Beanshell PreProcessor to your existing HTTP request. Let's assume the default HTTP method is GET.
Now let's change it to POST whenever the CSV variable 'method' is 'POST':
if (vars.get("method").equalsIgnoreCase("POST")) {
    sampler.setMethod("POST"); // changes the current sampler's HTTP method from GET to POST
}
The most direct solution would be to have two requests in the test plan: one a GET and one a POST. This does not quite satisfy your requirement to use the SAME request element, but it is probably the best solution.
Nest each of those inside its own IF Controller that reads a value from the CSV.
For example, let's say the CSV is the following:
http_method,host,path,params...
The first IF could be:
"${http_method}" == "GET"
Then the next:
"${http_method}" == "POST"
Each line from the CSV would only evaluate true to one of the statements, and then make the correct POST or GET call.
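As a concrete sketch (the file name and column values are illustrative), the CSV and the two IF Controller conditions could look like this:

```
# methods.csv, read by a CSV Data Set Config
http_method,host,path
GET,example.com,/api/items
POST,example.com,/api/items

# condition of the IF Controller wrapping the GET sampler
"${http_method}" == "GET"

# condition of the IF Controller wrapping the POST sampler
"${http_method}" == "POST"
```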
There are two options:
Use HTTP Raw Request available via JMeter Plugins
Write your custom logic in Java. See the "How to Write a Custom AJAX Request Sampler" chapter of How to Load Test AJAX/XHR Enabled Sites With JMeter for an idea of how this could be done.