Testing assertion with value and response time - Karate

I am trying to compare the response time against a certain threshold, but I don't know how to do it. I don't even know whether the number I give is interpreted as seconds or milliseconds.
This is my code:
Scenario: Case
Given url 'https://reqres.in/api/users?page=2'
When method GET
Then print responseTime
* def time = response.data.responseTime
And assert response.data.responseTime < 10
The response shows the assertion failing. I've also tried values in the milliseconds range, but get the same result :(

This worked for me, try it. responseTime is in milliseconds:
* url 'https://httpbin.org/get'
* method get
* assert responseTime < 2000
Refer to the docs: https://github.com/karatelabs/karate#responsetime
That said, I personally don't recommend this kind of assertion in your tests. That's what performance testing is for: https://github.com/karatelabs/karate/tree/master/karate-gatling


Karate - TestNG: stop execution when any one of the steps fails

Karate step execution stops when any one of the steps fails.
Example:
Scenario: verify user details
Given url "this is my webservice"
When method post
Then status 200
* assert 1 == 2
Then response
Then match XXXXXXX
Then match XXXX
The assert step fails, so the remaining steps do not execute. Is there any way for the remaining steps to continue even when my assert fails?
This is the expected behavior.
But you can use the karate.match() function to perform the match manually. Then you can use conditional logic to decide whether or not to continue with the next steps. That said, I totally don't recommend this.
For example:
# karate.match() returns a result object instead of failing the scenario
* def temp = karate.match(actual, expected)
# steps here still run even if the match failed
* print 'some step'
# fail the scenario explicitly, at the point you choose
* assert temp.pass

How to get the historical data from bitfinex.com without a limit?

I am drawing a chart using data pulled from bitfinex.com via a simple API query. As a result, I need to render a chart showing the historical data of BTCUSD for the past two years.
Docs are available right here: https://bitfinex.readme.io/v2/reference#rest-public-candles
Everything works fine except the limit of the retrieved data.
This is my request:
https://api.bitfinex.com/v2/candles/trade:1h:tBTCUSD/hist?start=1514764800000&sort=1
The result can be seen here, or you can paste the request into a browser: https://docs.google.com/document/d/1sG11Ro0X21_UFgUtdqrlitcCchoSh30NzGCgAe6M0u0/edit?usp=sharing
The problem is that I receive candles for only 5 days, no matter what dates or parameters I use. I can get more candles if I add the limit parameter to the query string, but I still cannot get more than 1000-1100 candles, and I even get a 500 error from the server:
Server error: GET https://api.bitfinex.com/v2/candles/trade:1h:tBTCUSD/hist?limit=1100&start=1512086400000&end=1516233600000&sort=1 resulted in a 500 Internal Server Error response: ["error",10020,"limit: invalid"]. What is the valid limit? There is no such information in the docs.
The author of this topic has the same question, but no solution is given, and the last answer does not change much: Bitfinex data api
How can I get the desired amount of data for the two-year period? I do not want to break my query into smaller pieces and fetch it step by step; that would look ugly.
From the looks of it, the limit is capped at 1000. If you need more than 1000 historical entries, you can parse the last timestamp of the response and issue another request from there, until you reach the desired end time.
Keep in mind that you can only make on the order of 10-90 requests per minute, so it is smart to sleep for about 6 seconds between requests, as in the loop below.
import json
import time

import requests

start = 1512086400000
end = 1516233600000
timestamp = start
last_timestamp = None
url = 'https://api.bitfinex.com/v2/trades/tBTCUSD/hist/'
historical_data = []

while timestamp <= end and timestamp != last_timestamp:
    print("Requesting " + str(timestamp))
    params = {'start': timestamp, 'limit': 1000, 'sort': 1}
    response = requests.get(url, params=params)
    trades = json.loads(response.content)
    historical_data.extend(trades)
    last_timestamp = timestamp
    # each trade entry is [ID, MTS, AMOUNT, PRICE]; resume from the last timestamp
    trade_id, timestamp, amount, price = trades[-1]
    # stay well under the per-minute rate limit
    time.sleep(6)
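
Since the question is actually about the candles endpoint rather than trades, here is a minimal sketch of the same pagination loop adapted for candles. It reuses the start/end/limit/sort parameters from the question's own request, and assumes (per the linked reference) that each candle entry is [MTS, OPEN, CLOSE, HIGH, LOW, VOLUME]:
import json
import time

import requests

url = 'https://api.bitfinex.com/v2/candles/trade:1h:tBTCUSD/hist'
start = 1514764800000  # 2018-01-01, as in the original request
end = 1546300800000    # 2019-01-01, example end time
timestamp = start
candles = []

while timestamp <= end:
    params = {'start': timestamp, 'end': end, 'limit': 1000, 'sort': 1}
    response = requests.get(url, params=params)
    chunk = json.loads(response.content)
    if not chunk:
        break
    candles.extend(chunk)
    # candle entries are [MTS, OPEN, CLOSE, HIGH, LOW, VOLUME]
    timestamp = chunk[-1][0] + 1  # advance past the last candle to avoid duplicates
    time.sleep(6)  # stay under the per-minute rate limit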

JMeter Pass, Fail, Warning?

I am running a few tests in JMeter and have assertions set up for Pass/Fail. The issue is, I need to set up a "Warning" or "Caution" result as well.
For example -
Latency < 500 ms = Pass
Latency > 1000 ms = Fail
Latency > 501 ms AND Latency < 999 ms = Caution
The above is just an example; the actual gap between the thresholds would be much smaller.
Does anyone know how to set something like this up in Jmeter?
For the moment, JMeter does not support a "caution" result; a sampler can either be successful or not. You can set a custom response status code or message, print something to jmeter.log, send an email, etc., but you cannot get anything other than Success: true|false without changing JMeter's core.
You could try using a JSR223 Assertion to implement your pass/fail criteria logic. The relevant code, which marks the sampler as failed above 1000 ms and sets the response code to 599 with a CAUTION message for the 501-999 range, would be something like:
def latency = prev.getLatency() as int
def range = new IntRange(501, 999)

if (latency >= 1000) {
    // hard failure
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Latency exceeds 1000 (was ' + latency + ')')
}

if (range.contains(latency)) {
    // "caution": the sampler stays successful but is flagged via code and message
    prev.setResponseCode('599')
    prev.setResponseMessage('CAUTION! High latency: ' + latency)
}
If the latency is between 501 and 999 inclusive, you will get the 599 response code with the CAUTION message, while a latency of 1000 ms or more will be reported as a normal assertion failure.
More information:
prev is an instance of the SampleResult class; see the JavaDoc for available methods and fields
the same goes for AssertionResult
also check out Scripting JMeter Assertions in Groovy - A Tutorial for comprehensive information on using Groovy to set custom failure conditions for JMeter samplers

BigQuery API Java client intermittently returning bad results

I am executing some long-running queries using the BigQuery Java client.
I construct a BigQuery job and execute it like this:
val queryRequest = new QueryRequest().setQuery(query)
val queryJob = client.jobs().query(ProjectId, queryRequest)
queryJob.execute()
The problem I am facing is that, for the same query, the client returns before the job is complete, i.e. the number of rows in the result is zero.
I tried printing the response and it shows:
{"jobComplete":false,"jobReference":{"jobId":"job_bTLRGrw5_xR26i9Li3a9EQvuA6c","projectId":"analytics-production"},"kind":"bigquery#queryResponse"}
From that I can see that the job is not complete. But why did the client return before the job was complete?
While building the client, I use an HttpRequestInitializer, and in the initialize method I set the timeout parameters:
override def initialize(request: HttpRequest): Unit = {
  request.setConnectTimeout(...)
  request.setReadTimeout(...)
}
I tried giving high values for the timeout, like 240 seconds, but no luck. The behavior is still the same: it fails intermittently.
Make sure you set the timeout on the BigQuery request body, and not on the HTTP object.
val queryRequest = new QueryRequest().setQuery(query).setTimeoutMs(10000) //10 seconds
The param is timeoutMs. This is documented here: https://cloud.google.com/bigquery/docs/reference/v2/jobs/query
Please also read the docs regarding this field: "How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with the 'jobComplete' flag set to false. You can call GetQueryResults() to wait for the query to complete and read the results. The default value is 10000 milliseconds (10 seconds)."
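In other words, when jobComplete comes back false you are expected to poll. A minimal sketch of that loop, written in Python for brevity (the same jobs().getQueryResults() call appears in the next question; service, project_id, and job_id are assumed to come from an authorized client and the initial query response):
def wait_for_query(service, project_id, job_id, timeout_ms=10000):
    # poll until the server reports the job as complete
    while True:
        res = service.jobs().getQueryResults(
            projectId=project_id, jobId=job_id, timeoutMs=timeout_ms).execute()
        if res.get('jobComplete'):
            return res
        # jobComplete was false: the request timed out before the query finished,
        # so simply ask again; the server holds each call open for up to timeout_ms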
More about Synchronous queries here
https://cloud.google.com/bigquery/querying-data#syncqueries

How to avoid hitting the 10 sec limit per user

We run multiple short queries in parallel and hit the 10 sec limit.
According to the docs, throttling might occur if we hit a limit of 10 API requests per user per project.
We send a "start query job" request and then call getQueryResults() with a timeoutMs of 60,000. However, we get a response after about 1 second; we look for jobComplete in the JSON response, and since it is not there, we have to send getQueryResults() again many times and hit the threshold, which causes an error, not a slowdown. Sample code is below.
Our questions are:
1. What is a "user"? Is it an App Engine user, or a user id that we can put in the connection string or in the query itself?
2. Is it really per BigQuery API project?
3. What is the actual behavior? We got an error, "Exceeded rate limits: too many user/method api request limit for this user_method", not the throttling behavior the docs describe, and all of our processing fails.
4. As seen below in the code, why do we get the response after about 1 second and not according to our timeout? Are we doing something wrong?
Thanks a lot.
Here is a sample of the code:
while res is None or 'jobComplete' not in res or not res['jobComplete']:
    try:
        res = self.service.jobs().getQueryResults(projectId=self.project_id,
                                                  jobId=jobId, timeoutMs=60000,
                                                  maxResults=maxResults).execute()
    except HTTPException:
        if independent:
            raise
Are you saying that even though you specify timeoutMs=60000, it is returning within 1 second but the job is not yet complete? If so, this is a bug.
The quota limits for getQueryResults are actually currently much higher than 10 requests per second. The reason the docs say only 10 is because we want to have the ability to throttle it down to that amount if someone is hitting us too hard. If you're currently seeing an error on this API, it is likely that you're calling it at a very high rate.
I'll try to reproduce the problem where we don't wait for the timeout ... if that is really what is happening it may be the root of your problems.
def query_results_long(self, jobId, maxResults, res=None):
    start_time = query_time = None
    while res is None or 'jobComplete' not in res or not res['jobComplete']:
        if start_time:
            logging.info('requested for query results ended after %s', query_time)
            time.sleep(2)
        start_time = datetime.now()
        res = self.service.jobs().getQueryResults(projectId=self.project_id,
                                                  jobId=jobId, timeoutMs=60000,
                                                  maxResults=maxResults).execute()
        query_time = datetime.now() - start_time
    return res
Then in the App Engine log I had this:
requested for query results ended after 0:00:04.959110
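
To reduce the chance of tripping the "Exceeded rate limits" error while polling, one option (my suggestion, not from the thread) is to back off exponentially between getQueryResults calls instead of retrying every 2 seconds. A sketch, assuming the same authorized service client and ids as above:
import time

def get_query_results_with_backoff(service, project_id, job_id,
                                   max_results, max_retries=8):
    # poll getQueryResults, doubling the sleep after each incomplete response
    delay = 1  # seconds; grows 1, 2, 4, 8, ...
    for attempt in range(max_retries):
        res = service.jobs().getQueryResults(projectId=project_id,
                                             jobId=job_id, timeoutMs=60000,
                                             maxResults=max_results).execute()
        if res.get('jobComplete'):
            return res
        time.sleep(delay)
        delay = min(delay * 2, 30)  # cap the wait so polling stays responsive
    raise RuntimeError('query did not complete after %d polls' % max_retries)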