Best practices to test Hessian RPC requests with Karate

We're successfully using Karate to automate tests for REST and SOAP web services. In addition, we have some legacy web services which are based on the Hessian web service protocol (http://hessian.caucho.com/).
Hessian calls are HTTP requests as well, so we would like to add them to our Karate test suites.
My first attempt was to use the Java interop feature, so the tests are implemented in Java code and the Java classes are called from within the feature files.
Example:
Scenario: Test offer purchase order
* def OfferPurchaseClient = Java.type('com.xyz.OfferPurchaseClient')
* def orderId = OfferPurchaseClient.createOrder('12345', 'xyz', 'max.mustermann@test.de')
* match orderId == '#number'
This approach is working, but I'm wondering if there is a more elegant way which would also use some more features of the Karate DSL.
I'm thinking about something like this (Dummy Code):
Scenario: Test offer purchase order
Given url orderManagementEndpoint
And path offerPurchase
And request serializeHessian(offerPurchase.json)
When method post
Then status 200
And match deserializeHessian(response).orderId == '#number'
Any recommendations/tips about how to implement such an approach?

I actually think you are already on the right track with the Java interop approach. The HTTP client is, in my honest opinion, a very small subset of Karate's capabilities; there are so many other aspects such as assertions, reports, parallel execution, even load testing etc. So I wouldn't worry about not using request, method, status etc.
Maybe OfferPurchaseClient.createOrder() can return a JSON.
Or what I would do is consider an approach like OfferPurchaseClient.request(Map<String, Object> payload) - where payload can include all the parameters including the path and expected response code etc.
To put this another way, I'm suggesting that you write your own mini-framework and custom DSL around some Java code. If you want inspiration and ideas, please look at this discussion, it is an example of building a CLI testing tool around Karate: https://stackoverflow.com/a/62911366/143475
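For illustration, here is a minimal sketch of what such a mini-framework could look like, using the standard com.caucho HessianProxyFactory client. The HessianRpc and OfferPurchaseService names and the payload keys are assumptions for this sketch, not from the question:

import com.caucho.hessian.client.HessianProxyFactory;
import java.util.HashMap;
import java.util.Map;

// the (assumed) Hessian service interface exposed by the legacy service
interface OfferPurchaseService {
    long createOrder(String offerId, String user, String email);
}

public class HessianRpc {

    // generic entry point that a Karate feature can call via Java interop;
    // Karate passes JSON arguments in as a Map and treats the returned Map as JSON
    public static Map<String, Object> request(Map<String, Object> payload) throws Exception {
        HessianProxyFactory factory = new HessianProxyFactory();
        OfferPurchaseService service = (OfferPurchaseService)
                factory.create(OfferPurchaseService.class, (String) payload.get("url"));
        long orderId = service.createOrder((String) payload.get("offerId"),
                (String) payload.get("user"), (String) payload.get("email"));
        Map<String, Object> result = new HashMap<>();
        result.put("orderId", orderId);
        return result;
    }
}

A feature could then keep all assertions in Karate, e.g. * def response = HessianRpc.request({ url: orderManagementEndpoint, offerId: '12345', user: 'xyz', email: 'max.mustermann@test.de' }) followed by match response.orderId == '#number'.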

Related

Karate DSL: How to initiate a long running rest call, perform other rest calls, then assert on the first request's response

I am learning Karate DSL in order to determine if it is a viable automation solution for our new API.
We have a unique environment in which we have a REST API to test, but also use REST services to perform other required actions while the original request is awaiting a response. The REST calls perform robotic actions to manipulate hardware, query our servers, etc.
We need the ability to send a REST request, perform various other REST requests (with assertions) while awaiting a response to the first request. Then, finally assert that the original request gets the correct response payload based on the actions performed between the first request and its response.
Rough example:
Feature: Async test

Background:
* def defaultAssertion = { success: true }
Given url 'http://foo/'

Scenario: Foo test
# start the long-running call
Given path 'start'
And request { externalId: 'id1' }
When method get
# perform another call that resolves immediately
Given path 'robot-action'
When method get
Then status 200
* match response contains deep defaultAssertion
# somehow assert on the first request's response
Then status 200
* match response contains deep defaultAssertion
Obviously the example above does not work, but I am hoping we can structure our tests similarly.
I know tests can run in parallel, but I am not sure how to encapsulate them as "one test" vs "multiple tests" and control order (which is required for this to work properly).
There is documentation on Async behavior, but I found it difficult to follow. If anyone can provide more context on how to implement the example I would greatly appreciate it.
Any suggestions would be warmly welcomed and examples would be fantastic. Thanks all!
Actually, being able to wait for an HTTP call to complete and do other things in the meantime is something Karate cannot do by default. This question has me stumped and it has never been asked before.
The only way I think we can achieve it in Karate is to create a thread and then run a feature from there. There is a Java API to call a feature, but when you are doing all that, maybe you are better off using some hand-crafted Java code to make an HTTP request. Then your case aligns well with the example mentioned here: https://twitter.com/getkarate/status/1417023536082812935
So to summarize, here is my opinion of the best plan of action:
- for this "special" REST call, be prepared to write Java code that makes the HTTP call on a separate thread
- call that Java code at the start of your Karate test (using Java interop)
- you can pass in the karate JS object instance, so that you can call karate.listen() when the long-running job is done
- actually, instead of the above, just use a CompletableFuture and a Java method call (just like the example linked above); that step won't block, and you can do anything you want in the meantime
- after you have done all the other work, use the listen keyword or call a Java method on the helper you used at the start of your test (just like the linked example)
That should be it! If you make a good case for it, we can build some of this into Karate, and I will think it over as well.
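To make the CompletableFuture approach concrete, here is a minimal sketch using the JDK 11+ HttpClient; the class name AsyncHttp and the URLs are illustrative assumptions, not part of the answer above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AsyncHttp {

    private CompletableFuture<String> future;

    // fire the long-running GET on a background thread and return immediately
    public void start(String url) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        future = client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
    }

    // block until the first call completes and hand its body back to Karate
    public String await() {
        return future.join();
    }
}

In the feature, the scenario would then do something like * def AsyncHttp = Java.type('AsyncHttp') and * def http = new AsyncHttp(), call http.start('http://foo/start'), run the intermediate robot-action steps as normal Karate HTTP calls, and finally do * def firstResponse = http.await() followed by a match on firstResponse.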

Is it possible to call a test feature from a MockServer feature (when matched) in Karate DSL?

I'm trying to do something, and I don't know if it is even remotely possible.
I have a mock server, and I'd like it to "start another test" when it receives a given request, by calling a test feature. I tried a few things, including the one below, but it turns out that this mock server scenario does not respond.
Scenario: pathMatches('/ideas')
* def xx = call read('SimpleStart.feature')
* def response = $ideas.*
Is there an elegant way to make this work? Any workaround or suggestion you can give me?
The use case is:
Run some tests that make external services invoke the mock server, and when the mock server receives a request, it triggers other tests.
Thanks in advance.
Yeah, Karate certainly isn't designed to do that. The recommended pattern is to set up your mocks and tests from a Java "runner" for maximum control, and that's what most teams do.
In short, "orchestrate" things from Java code.
That said, see if this gives you some other creative ideas: https://twitter.com/getkarate/status/1417023536082812935
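As a hedged sketch of what "orchestrating from Java" can look like with the Karate 1.x Java API (the feature paths and port below are placeholders, not from the question):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import com.intuit.karate.core.MockServer;

public class Orchestrator {

    public static void main(String[] args) {
        // start the mock first so the external services can reach it
        MockServer server = MockServer.feature("classpath:mocks/ideas-mock.feature")
                .http(8080).build();

        // run the "normal" test features; this call blocks until they finish
        Results results = Runner.path("classpath:tests").parallel(1);

        // instead of triggering features from inside the mock, decide here,
        // in plain Java, whether to run the follow-up features next
        Results followUp = Runner.path("classpath:follow-up").parallel(1);

        server.stop();
        if (results.getFailCount() + followUp.getFailCount() > 0) {
            throw new RuntimeException("there are test failures");
        }
    }
}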

Is there any way to store API responses to a file while performing a load test with Gatling using Karate

I am performing a load test with Karate Gatling. As per my requirement, I need to create a booking, take the bookingId from the response, and pass it to the update/cancel booking request.
I have tried the below process:
In the test.feature file:
* def createBooking = call read('createBooking.feature')
* def updateBooking = call read('updateBooking.feature') { bookingid: '#(createBooking.response.bookingId)' }
I am trying to ramp up 1000 users at a time.
In the Gatling simulation file:
val testReq = scenario("testing").exec(karateFeature("classpath:test.feature"))
setUp(
  testReq.inject(rampUsers(1000).during(1 seconds))
)
This process is unable to give me the required throughput, and I am unable to find the bottleneck, i.e. whether the problem is with Karate or with the API server. In each scenario we have both create and update bookings, so I am trying to capture all 1000 booking ids from the responses during the load test and pass them to the update/cancel bookings. I would save them to a file and use the booking responses for updating the bookings. As I am new to Karate, can anyone suggest a way to store all the load test API responses to a file?
The 1.0 RC version has better support for passing data across feature files; refer to this: https://github.com/intuit/karate/issues/1368
So in the Scala code you should be able to do something like this:
session("myVarName").as[String]
And to get the RC version, see: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
That said - please be aware that getting complex data-driven tests to work as a performance test is not easy, so yes - you will need to do some research. My suggestion is read and understand the info in the first link in this answer.
Writing to file is absolutely NOT recommended during a performance test. If you really want to go down that route, please read this: https://stackoverflow.com/a/54593057/143475
Finally if you are still stuck, please follow the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

Karate Standalone as Mock Server with multiple Feature Files

I am trying to set up an integration/API test suite with Karate, and I am considering using Karate Netty for mocking required services. For the test setup, the system under test A (a Spring Boot app) is started up completely, and the Karate tests are then executed by a Maven test run against this instance.
Service A depends on multiple other services, which need to be mocked away for the tests. To do so, my idea was to configure a running Karate Netty standalone instance as an HTTP proxy (done via JVM args of service A).
Now my idea was to create one test feature file: xyz-test.feature
And the required mocks for this file are defined in an associated mock feature file: xyz-mock.feature
(The test scenarios are rather complex and the responses of the external services could vary)
This means for a full test run I need to load up a couple of mock feature files. So:
What is the matching strategy for multiple mock feature files? Which scenario wins, so to say.
Is there any way to ensure, that the right mock file is used for the associated test file?
(Clearly I can reconfigure the running standalone instance and advise it to use xyz-mock.feature next.
But this would stop me from using parallel execution for my API tests, right?)
I already thought about reusing the Correlation-Id which I can send in for each test and then match against this in the mock file (it is also sent to all called services). But:
Is there a way to define a global matcher per mock file?
It sounds like you need only one mock file. You could boot 2 on different ports if you wanted, but there is no way to "merge" them into one port - if that is what you were looking for.
In my experience, you will be able to have a single mock take care of all your edge cases. This is because Karate's approach is unconventional: you pretty much write a stateful server. But by keeping variables in memory and using some clever JsonPath, you can simulate CRUD with very few lines of code: https://github.com/intuit/karate/tree/master/karate-netty#background
You can use only one at a time, by design
Given the above limitation, here's an interesting idea: add something like an extra pathMatches('/__test/reset') scenario that cleans up your state and resets the Background variables to things like * def cats = []. Now in each feature, just call the special "reset" URL at the start. The good thing is that Karate is thread-safe. Another idea, as you said, is to maintain two or three different variables and use some logic to "route" based on a header, which is again very easy IMO. Use a map of maps, e.g.:
* def data = { cats1: {}, cats2: {}, cats3: {} }
And you can get the header, e.g. if it is mode: cats1
* def mode = karate.get('requestHeaders.mode[0]')
* def cats = data[mode]
Not sure if this answers your question, but if the last Scenario has an "empty" description, it is a "catch all" and can in theory delegate to another server (or mock): https://github.com/intuit/karate/tree/develop/karate-netty#proxy-mode
Your question is a little confusing, so you may have to edit and re-word it if I haven't understood.
EDIT: using multiple mock files should be possible in 1.1.0 onwards: https://github.com/intuit/karate/issues/1566

Properly testing an SDK that calls an API

I have an API that I've written and now I'm in the middle of writing an SDK for 3rd parties to more easily interact with my API.
When writing tests for my SDK, it's my understanding that it's best not to simply call all of the API endpoints because:
The tests in the API will be responsible for making sure that the API works.
If the SDK tests did directly call the API, my tests would be really slow.
As an example, let's say my API has this endpoint:
/account
In my API test suite I actually call this endpoint to verify that it returns the proper data.
What approach do I take to test this in my SDK? Should I be mocking a request to /account? What else would I need to do to give my SDK good coverage?
I've looked to other SDKs to see how they're handling this (Stripe, Algolia, AWS), but in some cases it does look like they're calling a sandbox version of the actual API.
(I'm currently working with PHPUnit, but I'll be writing SDKs in other languages as well.)
I ended up taking this approach:
I have both unit tests AND integration tests.
My integration tests call the actual API. I usually run this much less frequently — like before I push code to a remote. (Anyone who consumes my code will have to supply their own API credentials)
My unit tests — which I run very frequently — just make sure that the responses from my code are what I expect them to look like. I trust the 3rd party API is going to give me good data (and I still have the integration tests to back that up).
I've accomplished this by mocking Guzzle, using Reflection to replace the client instance in my SDK code, and then using Mock Handlers to mock the actual response I expect.
Here's an example:
// requires: use GuzzleHttp\Client; use GuzzleHttp\Handler\MockHandler;
// use GuzzleHttp\HandlerStack; use GuzzleHttp\Psr7; use GuzzleHttp\Psr7\Response;

/** @test */
public function it_retrieves_an_account()
{
    $account = $this->mockClient()->retrieve();

    $this->assertEquals(json_decode('{"id": "9876543210"}'), $account);
}

protected function mockClient()
{
    // queue a single canned response for the mocked Guzzle client to return
    $mock = new MockHandler([new Response(
        200,
        ['Content-Type' => 'application/json'],
        Psr7\stream_for('{"id": "9876543210"}')
    )]);
    $handler = HandlerStack::create($mock);
    $mockClient = new Client(['handler' => $handler]);

    $account = new SparklyAppsAccount(new SparklyApps('0123456789'));

    // use reflection to swap the SDK's internal Guzzle client for the mock
    $reflection = new \ReflectionClass($account);
    $reflection_property = $reflection->getProperty('client');
    $reflection_property->setAccessible(true);
    $reflection_property->setValue($account, $mockClient);

    return $account;
}
When writing tests for the SDK, you assume that your API DOES work exactly as it should (and you write tests for your API to ensure that).
So using some kind of sandbox, or even a complete mock of your API, is sufficient.
I would recommend mocking your API using something like WireMock and then writing your unit tests against that mock API to make sure everything works as desired.
This way, when your production app breaks, you can at least make sure (by running the unit tests) that nothing is broken on your application side, and that the problem is likely with the actual API (e.g. the response format being changed).
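To illustrate the WireMock suggestion, here is a minimal Java sketch; the port and the test wiring are assumptions for illustration, while the endpoint and payload mirror the /account example above:

import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.okJson;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class AccountSdkSmokeTest {

    public static void main(String[] args) {
        // boot an in-process HTTP server that stands in for the real API
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // stub the /account endpoint with the canned JSON the SDK should parse
        server.stubFor(get(urlEqualTo("/account"))
                .willReturn(okJson("{\"id\": \"9876543210\"}")));

        // point the SDK at http://localhost:8089 here and assert on what it returns

        server.stop();
    }
}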