Postman: running the same request multiple times

I want to run the same request multiple times, each with a different pre-request script. Any idea how I can do that without using a data-driven (CSV) test?
E.g., I have to run the GET URL below multiple times (every 2 minutes), but each run needs a different pre-request script:
{{url}}/legacy/COL

One-time operation:
If you want to send the request 10 more times (11 including the first request), create two environment variables that hold the count. You can create them by pasting the two lines below into the pre-request or test script (remove all other code):
pm.environment.set("repeat", 10);
pm.environment.set("repeatTemp", 10);
Once the variables are created, remove those two lines from the script.
Now, in the test script:
We can send a request multiple times using pm.sendRequest or postman.setNextRequest. The example below calls the same request 10 more times using postman.setNextRequest.
The 2- or 3-minute delay can be set with JavaScript's setTimeout function, which waits the given time (3 seconds here) before running the code inside it, so setNextRequest executes only after 3 seconds; change the value to 2 minutes as needed.
let repeatTemp = pm.environment.get("repeatTemp");
// Note: if your Postman version returns the variable as a string, compare with Number(repeatTemp) instead.
if (repeatTemp === 0) {
    // Countdown finished: reset the counter for the next manual run.
    pm.environment.set("repeatTemp", pm.environment.get("repeat"));
} else {
    let increment = pm.environment.get("increment") === 0 ? 15 : pm.environment.get("increment") + 5;
    pm.environment.set("increment", increment);
    pm.environment.set("repeatTemp", repeatTemp - 1);
    // 3000 ms = 3 seconds; use 2 * 60 * 1000 for a 2-minute gap.
    setTimeout(function () { postman.setNextRequest("yourrequestname"); }, 3000);
}
So if your request is named "yourrequestname", it will be sent 1 + 10 times.
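The other option mentioned above, pm.sendRequest, fires extra calls from within a single script execution instead of re-running the request. A minimal sketch, assuming the base URL lives in the url environment variable; note these calls run asynchronously, so they will not honor the 2-minute spacing the way setNextRequest with setTimeout does:
// Fire the same GET 10 extra times from one script run.
for (let i = 0; i < 10; i++) {
    pm.sendRequest(pm.environment.get("url") + "/legacy/COL", function (err, res) {
        console.log("call " + (i + 1) + ":", err ? err : res.code);
    });
}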
Pre-request script:
In your format you used yyyy-mm, which is wrong: mm stands for minutes, not month. For year-month you have to use capital YYYY-MM.
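A quick way to see the difference between the two tokens (an illustration only, not part of the script):
let moment = require('moment');
console.log(moment("2021-04-05T00:07:00").format("YYYY-mm")); // "2021-07" (minutes, not month)
console.log(moment("2021-04-05T00:07:00").format("YYYY-MM")); // "2021-04" (year-month, as intended)
With that fixed, the full pre-request script: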
let repeatTemp = pm.environment.get("repeatTemp");
let repeat = pm.environment.get("repeat");
if (repeatTemp === repeat) {
    // First request of the batch: reset the increment.
    pm.environment.set("increment", 0);
}
let moment = require('moment');
// Note: hh is 12-hour format; use HH if you need 24-hour timestamps.
pm.environment.set('estimatedTimeArrival', moment().add(30 + pm.environment.get("increment"), 'minutes').format("YYYY-MM-DDThh:mm:ss"));
pm.environment.set('estimatedTimeDeparture', moment().add(2, 'hours').format("YYYY-MM-DDThh:mm:ss"));
pm.environment.set('scheduledTimeArrival', moment().add(10, 'minutes').format("YYYY-MM-DDThh:mm:ss"));
console.log(pm.environment.get('increment'));
console.log(pm.environment.get('estimatedTimeArrival'));

How can I get a list of more than 250 tests from a run using the TestRail API

Returns a list of tests for a test run:
https://www.gurock.com/testrail/docs/api/reference/tests#gettests
TestRail has an API that returns the list of tests for one test run (by ID).
The limitation is that it returns at most 250 entities at once.
How can I get more than 400 or 500 cases from a run?
The maximum (and default) value of the limit parameter is 250, so you can't get more than 250 with one request. But there is an offset parameter, so you can set the start position of the next chunk.
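For example, a minimal offset-based loop might look like this (a sketch; send_get stands in for the same TestRail client helper used in the code further down, and the URL shape follows the "next" link shown below):
def get_all_tests(client, run_id):
    """Collect every test in a run by paging with limit/offset."""
    tests, offset = [], 0
    while True:
        resp = client.send_get(f"get_tests/{run_id}&limit=250&offset={offset}")
        tests.extend(resp["tests"])
        if resp["_links"]["next"] is None:  # last page reached
            return tests
        offset += 250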
You can also use the "next" link from the response; here is an example:
"_links": { "next": "/api/v2/get_tests/1&limit=250&offset=250", "prev": null }
Here is some Python code I wrote that will page through and collect all of the results:
def _execute_get_until_finished(self, resp, extract):
    results = []
    finished = False
    while not finished:
        results.extend(resp[extract])
        finished = resp["_links"]["next"] is None
        if not finished:
            # Follow the "next" link, stripping the API prefix and the limit
            # parameter (see the note below on why limit is removed).
            resp = self.send_get(
                resp["_links"]["next"].replace("/api/v2/", "").replace("&limit=250", ""))
    return results
Example:
cases = self._execute_get_until_finished(
    self.send_get(f"get_cases/{self.project_id}/{self.suite_id}"),
    "cases")
Note: the reason for removing limit is a bug I found where next would be null even though there were still results to retrieve. Removing the limit fixed that issue.

How to get the last 1 hour of data, every 5 minutes, without grouping?

How do I trigger every 5 minutes and get data for the last 1 hour? I came up with the code below, but it does not seem to give me all the rows from the last hour. My reasoning is:
Read the stream,
filter data for the last 1 hour based on the timestamp column,
write/print using foreachBatch, and
watermark it so that it does not hold on to all the past data.
spark
  .readStream.format("delta").table("xxx")
  .withWatermark("ts", "60 minutes")
  .filter($"ts" > current_timestamp - expr("INTERVAL 60 minutes"))
  .writeStream
  .format("console")
  .trigger(Trigger.ProcessingTime("5 minutes"))
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    batchDF.collect().foreach(println)
  }
  .start()
Or do I have to use a window? I can't seem to get rid of groupBy if I use window, and I don't want to group.
spark
  .readStream.format("delta").table("xxx")
  .withWatermark("ts", "1 hour")
  .groupBy(window($"ts", "1 hour"))
  .count()
  .writeStream
  .format("console")
  .trigger(Trigger.ProcessingTime("5 minutes"))
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    print("...entering foreachBatch...\n")
    batchDF.collect().foreach(println)
  }
  .start()
Instead of using Spark Streaming to execute your Spark code every 5 minutes, you should use either an external scheduler (cron, etc...) or the java.util.Timer API if you want to schedule the processing in your code.
Why you shouldn't use Spark Streaming to schedule Spark code execution
If you use Spark Streaming to schedule code, you will have two issues.
First issue: Spark Streaming processes data only once, so every 5 minutes only the new records are loaded. You could think of bypassing this by using a window function and retrieving an aggregated list of rows with collect_list or a user-defined aggregate function, but then you would hit the second issue.
Second issue: although your treatment is triggered every 5 minutes, the function inside foreachBatch is executed only if there are new records to process. Without new records during the 5-minute interval between two executions, nothing happens.
In conclusion, Spark Streaming is not designed to schedule Spark code to be executed at specific time intervals.
Solution with java.util.Timer
So instead of using Spark Streaming, you should use a scheduler, either an external one such as cron, oozie, airflow, etc., or in your code.
If you need to do it in your code, you can use java.util.Timer as below:
import org.apache.spark.sql.functions.{current_timestamp, expr}
import spark.implicits._

val t = new java.util.Timer()
val task = new java.util.TimerTask {
  def run(): Unit = {
    spark.read.format("delta").table("xxx")
      .filter($"ts" > (current_timestamp() - expr("INTERVAL 60 minutes")))
      .collect()
      .foreach(println)
  }
}
t.schedule(task, 5 * 60 * 1000L, 5 * 60 * 1000L) // first run after 5 minutes, then every 5 minutes
task.run() // run once immediately as well
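One design note: java.util.Timer runs everything on a single thread, and an uncaught exception in the task kills that thread and cancels all scheduled tasks. If that matters, a ScheduledExecutorService is the usual alternative; a rough equivalent of the job above (my adaptation, not part of the original answer):
import java.util.concurrent.{Executors, TimeUnit}
import org.apache.spark.sql.functions.{current_timestamp, expr}
import spark.implicits._

val scheduler = Executors.newSingleThreadScheduledExecutor()
val queryTask: Runnable = () => {
  try {
    spark.read.format("delta").table("xxx")
      .filter($"ts" > (current_timestamp() - expr("INTERVAL 60 minutes")))
      .collect()
      .foreach(println)
  } catch {
    case e: Exception => e.printStackTrace() // keep the periodic task alive on failure
  }
}
scheduler.scheduleAtFixedRate(queryTask, 0, 5, TimeUnit.MINUTES)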

how to send random local notification messages in react-native?

I have an app that needs to send out notifications every day with random messages, depending on how many notifications the user wants (up to 5 per day) and between what times they want them (for example, notifications fire only between 6:00am and 9:00am every day).
To elaborate, I'm building a feature that sends out random inspirational messages pulled from a hardcoded array variable or JSON file.
Currently I'm using this package to create local notifications: https://github.com/zo0r/react-native-push-notification
I tried setting a function that returns a string as the message parameter of localNotificationSchedule, but when I do this instead of using a regular string, the notification is not shown.
PushNotification.localNotificationSchedule({
  id: '1',
  userInfo: { id: userId },
  message: () => {
    // Trying to return a random string every time the notification fires.
    return Math.random().toString(36).replace(/[^a-z]+/g, '').substr(0, 5);
  },
  date: moment(Date.now()).add(2, 'seconds').toDate(),
  repeatType: 'day',
});
I considered other approaches such as react-native headless JS, but it's Android-only.
I also considered https://www.npmjs.com/package/react-native-background-fetch, but I have a complex interval for notifications. For example, the user might set notifications to run from 6:00am to 6:30am every day and fire 5 notifications; in that interval, notifications would run every 6 minutes.
But react-native-background-fetch's minimum interval is 15 minutes.
I know this could be done with push notifications instead, but then the user needs a connection to receive anything, which is not ideal for this case.
I've seen this in an iOS app, so I know it is possible to achieve.
As per the dev, you can try calling PushNotification.localNotificationSchedule multiple times.
What I've done is this:
const messages = [{ text: '', time: 0 } /* ... */];
messages.map(message => {
  PushNotification.localNotificationSchedule({
    // ...you can use all the options from localNotifications
    channelId: "my-channel",
    message: message.text, // (required)
    date: new Date(Date.now() + (60 + message.time) * 1000), // fires in 60s plus the message's offset
  });
});
This shows the messages from the messages array, with each message's time offset controlling the spacing, e.g. 5 seconds apart.
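For the asker's "window + count" requirement, one approach is to compute the whole schedule up front: divide the window by the number of notifications and schedule one entry per slot with a randomly picked message. A sketch (the quotes array, channel id, and window values are placeholders):
import PushNotification from 'react-native-push-notification';

// Placeholder list; replace with your hardcoded array or JSON file.
const quotes = ['Quote one', 'Quote two', 'Quote three'];

// e.g. scheduleDailyQuotes(6, 0, 6, 30, 5) fires 5 notifications
// between 06:00 and 06:30, i.e. one every 6 minutes.
function scheduleDailyQuotes(startHour, startMinute, endHour, endMinute, count) {
  const start = new Date();
  start.setHours(startHour, startMinute, 0, 0);
  const windowMs = ((endHour - startHour) * 60 + (endMinute - startMinute)) * 60 * 1000;
  const step = windowMs / count;
  for (let i = 0; i < count; i++) {
    PushNotification.localNotificationSchedule({
      channelId: 'my-channel', // assumed to be created already
      message: quotes[Math.floor(Math.random() * quotes.length)],
      date: new Date(start.getTime() + i * step),
      repeatType: 'day',
    });
  }
}

scheduleDailyQuotes(6, 0, 6, 30, 5);
Note that the random pick happens when the notifications are scheduled, not when they fire, since message must be a plain string; re-running the scheduling (e.g. on app launch) is what refreshes the messages.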

How to get the Retries tab populated with retries, instead of several tests with the same name, in the Allure report?

I generate the Allure report after launching a test annotated with @RepeatedIfExceptionsTest via the gradlew test command, but I get several separate tests with the same name, and the Retries tab is empty. How do I get the Retries tab populated with retries instead of several tests with the same name in the report?
@Issue("123")
@Flaky
@Link(value = "Link1")
@TmsLink(value = "TmsLink1")
@Issue(value = "Issue11")
@Tag(value = "tmp")
@RepeatedIfExceptionsTest(name = "Find even number", repeats = 3)
public void findEvenNumberTest() {
    int randomNum = ThreadLocalRandom.current().nextInt(1, 3);
    assertEquals(randomNum % 2, 0);
}
The Retries tab is responsible for the history of your test runs. So when you run a test 2 times and generate your report, you'll see 2 runs on the Retries tab.
The Retries tab is built from the JSON files that are created after each run. But I guess that when you execute a test 3 times within one run, its result is stored in 1 JSON file, so the Retries tab won't pick it up, since it needs 2 or more JSON report files.
So you just misunderstood the functionality of the Retries tab.
Another tricky tab is History. It's almost the same as Retries, but the widget on the main page is generated from this tab's info. To make the History tab non-empty, you need to copy the "/report/history" folder to "/allure-results/history" and then regenerate your report from "/allure-results".
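If you want to script that copy step, a minimal Java sketch (the paths are the ones mentioned above; adjust them to your layout):
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class CopyAllureHistory {
    public static void main(String[] args) throws IOException {
        Path source = Paths.get("report/history");          // history from the previous report
        Path target = Paths.get("allure-results/history");  // read on the next generation
        try (Stream<Path> files = Files.walk(source)) {
            files.forEach(p -> {
                try {
                    Path dest = target.resolve(source.relativize(p));
                    if (Files.isDirectory(p)) {
                        Files.createDirectories(dest);
                    } else {
                        Files.copy(p, dest, StandardCopyOption.REPLACE_EXISTING);
                    }
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}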

BigQuery API Java client intermittently returning bad results

I am executing some long-running queries using the BigQuery Java client.
I construct a BigQuery job and execute it like this:
val queryRequest = new QueryRequest().setQuery(query)
val queryJob = client.jobs().query(ProjectId, queryRequest)
queryJob.execute()
The problem I am facing is that for the same query, the client sometimes returns before the job is complete, i.e. the number of rows in the result is zero.
I tried printing the response and it shows:
{"jobComplete":false,"jobReference":{"jobId":"job_bTLRGrw5_xR26i9Li3a9EQvuA6c","projectId":"analytics-production"},"kind":"bigquery#queryResponse"}
From that I can see that the job is not complete. So why did the client return before the job was complete?
While building the client I use an HttpRequestInitializer, and in the initialize method I provide the timeout parameters:
override def initialize(request: HttpRequest): Unit = {
  request.setConnectTimeout(...)
  request.setReadTimeout(...)
}
I tried giving high values for the timeouts, like 240 seconds, but no luck; the behavior is still the same. It fails intermittently.
Make sure you set the timeout on the BigQuery request body, not on the HTTP object:
val queryRequest = new QueryRequest().setQuery(query).setTimeoutMs(10000) // 10 seconds
The param is timeoutMs. This is documented here: https://cloud.google.com/bigquery/docs/reference/v2/jobs/query
Please also read the docs regarding this field: "How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with the 'jobComplete' flag set to false. You can call GetQueryResults() to wait for the query to complete and read the results. The default value is 10000 milliseconds (10 seconds)."
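Following that, when jobComplete is false you can poll jobs().getQueryResults() with the returned job id until the query finishes. A minimal sketch reusing the question's client and ProjectId (error handling and backoff kept deliberately simple):
// Poll until the job finishes, then read the rows.
val response = client.jobs().query(ProjectId, queryRequest).execute()
val jobId = response.getJobReference.getJobId
var result = client.jobs().getQueryResults(ProjectId, jobId).execute()
while (!result.getJobComplete) {
  Thread.sleep(1000) // fixed one-second pause between polls
  result = client.jobs().getQueryResults(ProjectId, jobId).execute()
}
val rows = result.getRows // now safe to consume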
More about Synchronous queries here
https://cloud.google.com/bigquery/querying-data#syncqueries