Trigger Qlik Sense Task with Python

I want to trigger a Qlik Sense task using Python.
So, I want to write Python code that can trigger a Qlik Sense task in the QMC.
Can someone guide me step by step and also share example code?
Thanks
I am looking for a solution that can trigger the task.

I may be able to provide a step-by-step guide later, but you essentially have to choose between using raw Python and using the Qlik Python SDK (see its PyPI page here), which is a Python wrapper around various Qlik Sense APIs. Either way, you'll want to follow the steps outlined in this Qlik Help page.
If using "raw" Python, you can do something like this:
import requests

# Task ID and Xrfkey are placeholders; the Xrfkey is a CSRF-protection token that
# must be a 16-character value sent both as a query parameter and as a header.
url = "https://qlik.example.com/qrs/task/00000000-0000-0000-0000-000000000000/start/synchronous"
querystring = {"Xrfkey": "12345678qwertyui"}

# The start endpoint needs no real payload; this is just an empty multipart body.
payload = "-----011000010111000001101001--\r\n\r\n"
headers = {
    "content-type": "multipart/form-data; boundary=---011000010111000001101001",
    "X-Qlik-Xrfkey": "12345678qwertyui"
}

response = requests.request("POST", url, data=payload, headers=headers, params=querystring)
print(response.text)
...where the task ID is specified in the URL, shown in the example above as 00000000-0000-0000-0000-000000000000.
That code kicks off a reload, but it assumes you're already authenticated. It also doesn't handle polling to check if/when the task finishes, if that's even necessary for you.
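If you do need those pieces, here is a rough, untested sketch (not an official Qlik example) of one way to combine certificate-based authentication against the QRS on port 4242 with a simple status poll. The host name, certificate file names, the /qrs/reloadtask/{id} path, and the status-code values are all placeholders or assumptions to verify against your own environment and the Qlik Help page mentioned above.

import time
import requests

# Assumptions: client certificates exported from the QMC, the QRS reachable on
# port 4242, and a reload task whose last execution status is exposed by
# /qrs/reloadtask/{id} under operational.lastExecutionResult.status.
HOST = "qlik.example.com"   # placeholder host
TASK_ID = "00000000-0000-0000-0000-000000000000"
XRF = "12345678qwertyui"    # any 16-character value, sent as both param and header

session = requests.Session()
session.cert = ("client.pem", "client_key.pem")   # exported client certificate + key
session.verify = "root.pem"                       # CA cert, or False in a lab setup
session.headers.update({
    "X-Qlik-Xrfkey": XRF,
    # Identity to run the call as; adjust directory/user to your setup.
    "X-Qlik-User": "UserDirectory=INTERNAL; UserId=sa_repository",
})

# Kick off the task (asynchronous start).
start = session.post(
    f"https://{HOST}:4242/qrs/task/{TASK_ID}/start",
    params={"Xrfkey": XRF},
)
start.raise_for_status()

# Poll until the task reaches a terminal state.
# Assumption: 7 = FinishedSuccess and 8 = FinishedFail in the QRS status enum.
while True:
    task = session.get(
        f"https://{HOST}:4242/qrs/reloadtask/{TASK_ID}",
        params={"Xrfkey": XRF},
    ).json()
    status = task["operational"]["lastExecutionResult"]["status"]
    if status in (7, 8):
        print("Task finished with status", status)
        break
    time.sleep(5)

Certificates are only one of the authentication options described on that Help page; a virtual proxy with header or NTLM authentication also works, and the polling part stays the same.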

Related

Karate DSL: How to initiate a long-running REST call, perform other REST calls, then assert on the first request's response

I am learning Karate DSL in order to determine if it is a viable automation solution for our new API.
We have a unique environment in which we have a REST API to test, but also use REST services to perform other required actions while the original request is awaiting a response. The REST calls perform robotic actions to manipulate hardware, query our servers, etc.
We need the ability to send a REST request, perform various other REST requests (with assertions) while awaiting a response to the first request. Then, finally assert that the original request gets the correct response payload based on the actions performed between the first request and its response.
Rough example:
Feature: Async test
Background:
    * def defaultAssertion = { success: true }
    Given url 'http://foo/'
Scenario: Foo test
    Given path 'start' <- start long running call
    When method get
    And request { externalId: 'id1'}
    Given path 'robot-action' <- perform another call that resolves immediately
    When method get
    Then status 200
    * match response contains deep defaultAssertion
    Then status 200 <- somehow assert on first request's response
    * match response contains deep defaultAssertion
Obviously the example above does not work, but I am hoping we can structure our tests similarly.
I know tests can run in parallel, but I am not sure how to encapsulate them as "one test" vs "multiple tests" and control order (which is required for this to work properly).
There is documentation on Async behavior, but I found it difficult to follow. If anyone can provide more context on how to implement the example I would greatly appreciate it.
Any suggestions would be warmly welcomed and examples would be fantastic. Thanks all!
Actually, being able to wait for an HTTP call to complete and do other things in the meantime is something Karate cannot do by default. This question has me stumped, and it has never been asked before.
The only way I think we can achieve it in Karate is to create a thread and then run a feature from there. There is a Java API to call a feature, but when you are doing all that, maybe you are better off using some hand-crafted Java code to make an HTTP request. Then your case aligns well with the example mentioned here: https://twitter.com/getkarate/status/1417023536082812935
So to summarize, my opinion of the best plan of action:
- For this "special" REST call, be prepared to write Java code that makes the HTTP call on a separate thread.
- Call that Java code at the start of your Karate test (using Java interop). You can pass the karate JS object instance so that you can call karate.listen() when the long-running job is done.
- Actually, instead of the above, just use a CompletableFuture and a Java method call (just like the example linked above). That step won't block, and you can do anything you want in the meantime.
- After you have done all the other work, use the listen keyword or call a Java method on the helper you used at the start of your test (just like the linked example).
That should be it! If you make a good case for it, we can build some of this into Karate, and I will think it over as well.

API call inside a JavaScript function present in a feature file

I tried my best but could not find information on calling an API inside a JavaScript function when doing automation in Karate. Now, I might get suggestions to call the API outside the function and then do operations inside the function. However, my use case is such that I have to call the API inside the function only. Is there a way to do this?
One approach is to create a Java file and then write the code in Java. However, I specifically want to know if there is any way to call an API inside a JS function in a FEATURE FILE itself.
First, these kinds of "clever" tests are not recommended; please read this to understand why: https://stackoverflow.com/a/54126724/143475
If you still want to do this, read on.
First - most of the time, this kind of need can be achieved by doing a call to a second feature file:
* if (condition) karate.call('first.feature')
Finally, this is an experimental and undocumented feature in Karate, but there is a JS API to perform HTTP requests:
* eval
  """
  var http = karate.http('https://httpbin.org');
  http.path('anything');
  var response = http.get().body;
  karate.log('response:', response);
  """
It is a "fluent API", so you can do everything in one line:
var body = karate.http('https://httpbin.org/get').get().body;
If you need details, read the source code of the HttpRequestBuilder and Response classes in the Karate project.

Is there any way to store the API response to a file while performing a load test with Gatling using Karate

I am performing a load test with Karate Gatling. As per my requirement, I need to create a booking, take the bookingId from the response, and pass it to the update/cancel booking request.
I have tried the below process:
In the test.feature file:
def createBooking = call read('createBooking')
def updateBooking = call read('updateBooking') { bookingid: createBooking.response.bookingId }
I am trying to apply 1000 ramp users at a time.
In the Gatling simulation file:
val testReq = scenario("testing").exec(karateFeature("classpath:test.feature"))
setUp(
    testReq.inject(rampUsers(1000).during(1 seconds))
)
This process does not give me the required throughput, and I am unable to tell whether the bottleneck is Karate or the API server. In each scenario we have both create and update bookings, so I am trying to capture all 1000 booking ids from the responses during the load test and pass them to the update/cancel booking requests. I plan to save them to a file and use the booking responses for updating bookings. As I am new to Karate, can anyone suggest a way to store all the load-test API responses to a file?
The 1.0 RC version has better support for passing data across feature files; refer to this: https://github.com/intuit/karate/issues/1368
So in the Scala code you should be able to do something like this:
session("myVarName").as[String]
And to get the RC version, see: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
That said - please be aware that getting complex data-driven tests to work as a performance test is not easy, so yes - you will need to do some research. My suggestion is to read and understand the info in the first link in this answer.
Writing to file is absolutely NOT recommended during a performance test. If you really want to go down that route, please read this: https://stackoverflow.com/a/54593057/143475
Finally if you are still stuck, please follow the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

Making PUT and GET Requests in Katalon Studio

I'm new to working with RESTful APIs. In my case I want to make a PUT request and then a GET request.
I made a PUT request and it worked. [1]: https://imgur.com/a/zlUTzYB
But now I want to make a GET request. Can I somehow make it so that the GET request automatically takes the PUT request's statementId and binds it into the URL? [2]: https://imgur.com/a/qqBd5nR
I have watched a lot of videos and documentation about APIs but still don't get how to do it. I'm really new to this kind of thing, so sorry if I asked a dumb question.
These steps might help to solve your case:
- Execute the PUT request.
- Get the response, parse it using JsonSlurper, and take the value out.
- Save the value into a variable, i.e. a GlobalVariable.
- Create the GET request with a parameter taken from the GlobalVariable.
- Execute the GET request.
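In case a concrete illustration of that flow helps outside Katalon, here is a plain Python sketch of the same chaining idea; the base URL, the /statements path, and the statementId field are hypothetical stand-ins for the API shown in your screenshots.

import requests

BASE = "https://api.example.com"   # hypothetical base URL

# Step 1: execute the PUT request.
put_resp = requests.put(f"{BASE}/statements", json={"name": "example"})
put_resp.raise_for_status()

# Steps 2/3: parse the response and save the value you need (hypothetical statementId field).
statement_id = put_resp.json()["statementId"]

# Steps 4/5: build the GET request with that value bound into the URL and execute it.
get_resp = requests.get(f"{BASE}/statements/{statement_id}")
print(get_resp.status_code, get_resp.json())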
Found an answer:
// Load the GET request object, substitute the id captured from the PUT response
// into its URL, and send it.
RequestObject ro = findTestObject("GET_request")
ro.setRestUrl(String.format(ro.getRestUrl(), idValue2))
ResponseObject resp = WS.sendRequest(ro)

Apache Nutch REST api

I'm trying to launch a crawl via the REST API. A crawl starts with injecting URLs. Using the Chrome developer tool "Advanced Rest Client", I'm trying to build up this POST payload, but the response I get is a 400 Bad Request.
POST - http://localhost:8081/job/create
Payload:
{
    "crawl-id": "crawl-01",
    "type": "INJECT",
    "config-id": "default",
    "args": { "path/to/seedlist/directory" }
}
My problem is in the args; I think more is needed, but I'm not sure. In the NutchRESTAPI page, this is the sample it gives for creating a job:
POST /job/create
{
    "crawlId": "crawl-01",
    "type": "FETCH",
    "confId": "default",
    "args": { "someParam": "someValue" }
}
POST /job/create
{
    "crawlId": "crawl-01",
    "jobClassName": "org.apache.nutch.fetcher.FetcherJob",
    "confId": "default",
    "args": { "someParam": "someValue" }
}
I'm not sure what params or values to give each of the commands (e.g. Inject, Generate, Fetch, Parse, and UpdateDb) to complete a job. Can someone clear this up? How do I tell the API where to look for the seedlist?
UPDATE
When trying to run the Generate command, I ran into a ClassException error where the value for the topN key is supposed to be of type long, but the API reads it as either a string or an int. I found a fix that is supposed to be included in the 2.3.1 release (release date: TBA), applied it, and recompiled my code. It now works.
At the time of this posting, the REST API is not yet complete. A much more detailed document exists, though it's still not comprehensive. It is linked to in the following email from the user mailing list (which you might want to consider joining):
http://www.mail-archive.com/user%40nutch.apache.org/msg13652.html
But to answer your question about the seedlist: you can create the seedlist through REST, or you can use the argument "seedDir":
{
    "args": {
        "seedDir": "/path/to/seed/directory"
    },
    "confId": "default",
    "crawlId": "sample-crawl-01",
    "type": "INJECT"
}
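If it helps, here is a minimal Python sketch of sending that inject job to the /job/create endpoint from your question; the host/port and the seed directory are just the placeholder values from the examples above.

import requests

# Endpoint and payload taken from the examples above; adjust the host, port,
# and seed directory to your own Nutch server setup.
url = "http://localhost:8081/job/create"
payload = {
    "args": {
        "seedDir": "/path/to/seed/directory"
    },
    "confId": "default",
    "crawlId": "sample-crawl-01",
    "type": "INJECT",
}

response = requests.post(url, json=payload)  # sends the body as application/json
print(response.status_code)
print(response.text)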