How to skip a specified time in a recurring task? - Hangfire

Hi, I have a task that is executed hourly, but I do not want it to run at night (such as 00:00-07:00). What should I do?
Sample code:
RecurringJob.AddOrUpdate(nameof(SyncArticles), () => SyncArticles(), Cron.HourInterval(1), TimeZoneInfo.Local);
"SyncArticles" will be executed every hour; however, the remote data is not updated at night, so I would like this task not to run at night, or during a specified time period. How can I do this? Thanks.

You can simply use a text cron expression to create whatever custom interval you need.
E.g.:
This runs every hour from 7 AM to 11 PM, inclusive:
"0 7-23 * * *"
Or:
If you wanted to include midnight as well:
"0 0,7-23 * * *"
I haven't tested these in Hangfire specifically, but they are valid cron expressions and should work.
See https://en.wikipedia.org/wiki/Cron for more information.
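Wired into the AddOrUpdate call from the question, the first expression would look something like this (a minimal sketch, untested as noted above):

// Runs at minute 0 of every hour from 07:00 through 23:00, skipping 00:00-06:59.
RecurringJob.AddOrUpdate(
    nameof(SyncArticles),
    () => SyncArticles(),
    "0 7-23 * * *",
    TimeZoneInfo.Local);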

Related

Splunk showing wrong index time

I have indexed data on Splunk, but I can see that _time (which I took to be the index time) is wrong:
I indexed this data on 19th Oct, but it shows as if it had been indexed on 18th Oct.
Please suggest what the solution would be, or whether I need to manually overwrite the _time field with the current date/time.
Thanks
_time is not the time the event was indexed - that's _indextime. _time is the time the event happened, which is usually different from when it was indexed (because of transport/processing delays).
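To see both values side by side, a quick search like this works (a sketch with placeholder index/sourcetype names, not from the original answer):

index=your_index sourcetype=your_sourcetype
| eval indexed_at=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time indexed_at date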
From your screenshot I see that what I presume is the event time (the 'date' field) differs from _time. That often happens when the time zone is incorrect or not interpreted correctly. Were that the case here, however, I would expect the difference between date and _time to be a multiple of 30 minutes.
From what I see in the question, it's possible the props.conf settings are causing Splunk to interpret the wrong field as _time. Closer inspection shows the sourcetype ends with "too_small", which is an indication that Splunk does not have specific settings for the sourcetype, so it's trying to guess where the timestamp is (and getting it wrong, obviously).
The solution is to create a props.conf stanza for the sourcetype. It should be something like this:
[json]
# Anchor timestamp extraction to the "date" field
TIME_PREFIX = date:
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%Z
# Limit how far past the prefix Splunk scans for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 26
# One event per line
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
Put these settings on your indexer and restart it. Events that arrive after that should have the right time on them.
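If the stanza doesn't seem to take effect, you can confirm what Splunk actually loaded with btool (standard Splunk CLI; adjust the sourcetype name to yours):

$SPLUNK_HOME/bin/splunk btool props list json --debug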

BigQuery synchronous query is not returning any results

According to the BigQuery documentation listed at https://cloud.google.com/bigquery/querying-data#asynchronous-queries:
There are two types of querying via the BigQuery API: synchronous and asynchronous. Async works perfectly for me using the sample code provided; however, synchronous does not.
The sample code I am referring to is shown if you click on the link above. What I noticed is that it does not actually wait until the results are available. If I insert a time.sleep(15) before the while True, my results return as expected. If not, it returns an empty result set.
The official documentation example uses the query:
"""SELECT word, word_count
FROM `bigquery-public-data.samples.shakespeare`
WHERE corpus = @corpus
AND word_count >= @min_word_count
ORDER BY word_count DESC;
"""
This query returns very quickly; however, my query takes several seconds to return a result.
My question is, why does the documentation state that the run_sync_query command waits until the query completes, if the results are not actually accessible and no results are returned?
I cannot provide the query I used because it is a private data source. To reproduce, you just need a query that takes several seconds to run.
It looks like the request/call is timing out, not the query itself. The default timeout is 10s. Try setting timeout_ms in your code:
For example (I'm going to assume you are using Python):
# [auth/client setup stuff]

query = client.run_sync_query('<your_query>')
query.timeout_ms = 60000  # raise the HTTP request timeout from the 10s default
query.use_legacy_sql = False
query.use_query_cache = True
query.run()

# [do something with the results]
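Note that run_sync_query is from the older google-cloud-bigquery Python client (it was removed around version 0.28 in favor of client.query()), so if you are on a recent release the method and attribute names will differ.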

How to get the time elapsed in seconds between two timestamps in DQL?

I'm using Symfony and Doctrine. I'd like to get the time elapsed between two timestamps. Here is a portion of my query (both a.date and q.date are type: timestamp):
$qb->select('a.date - q.date AS elapsed_time');
This gives a numerical result, but I can't tell what the units are. 9 seconds gave me 49, and 60 seconds gave me 99; I can't make sense of that.
I tried this too:
$qb->select('DATE_DIFF(a.date, q.date) AS elapsed_time');
This works, but gives the result in days. I really need minutes or seconds.
Use UNIX_TIMESTAMP instead. Try this:
$qb->select('(UNIX_TIMESTAMP(a.date) - UNIX_TIMESTAMP(q.date)) AS elapsed_time');
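Note that UNIX_TIMESTAMP is not a built-in DQL function, so you may have to register it as a custom function first. A minimal sketch for Symfony, assuming the beberlei/doctrineextensions package is installed:

# config/packages/doctrine.yaml
doctrine:
    orm:
        dql:
            numeric_functions:
                unix_timestamp: DoctrineExtensions\Query\Mysql\UnixTimestamp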
You need to use the DATE_FORMAT function:
$qb->select("DATE_FORMAT(DATE_DIFF(a.date, q.date), '%s') AS elapsed_time");
Try this; I don't have a MySQL console right now, but I think it would work.

Query Access from Excel: Variable Parameters

I've been building in-depth, functional-area dashboards in Excel that are refreshed on end-user command via the 'Refresh All' button. The only problem I am running into is refreshing daily production data past midnight, which turns Access's back-end query of Date() into, well, the current day at midnight.
This is the condition I want to work properly: I want everything >= 5 AM for either today or the previous day, depending on the current time (Now).
WHERE start_time >=
(iif(timevalue(now()) between #00:00# and #4:59#,date()-1,date()))
AND timevalue(start_time) >= #5:00#;
The thing is, it returns at an extremely slow rate; I don't think I've ever waited long enough for it to complete. I'm not sure whether it is evaluating this logic on every record in the back-end table, which would explain the lock-up.
I really want to avoid building any logic dynamically as I am simply using Excel to call upon this Access query through the query wizard.
I would hate to have to resort to an access button triggering a module to build the query dynamically and then focus the excel window and refresh.
It would be nice to reference an object on, say, a [Form]!, but that is only useful while the form is active; even then, the SQL rejects any sub-calculations within the form object.
Any thoughts?
I believe reducing this to the mathematical equivalent of the boolean Hour(Now) < 5 should speed things up considerably.
WHERE start_time >= (Date + (Hour(Now)<5) + TimeSerial(5, 0, 0))
A boolean True is considered -1.
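Worked through with illustrative clock times:
At 02:30, Hour(Now) < 5 is True (-1), so the expression evaluates to Date - 1 + TimeSerial(5, 0, 0), i.e. yesterday at 5 AM.
At 14:00, Hour(Now) < 5 is False (0), so it evaluates to Date + TimeSerial(5, 0, 0), i.e. today at 5 AM.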
This seems to be working; I needed a ' ' to concatenate the dates and times correctly. Gustav brought up the 'Between' and 'Or'; this is working fine on my offline test DB, so I will mark this down as a possible solution. I also added seconds in order to capture last-minute data from 23:59:00 to 23:59:59.
WHERE
IIf(TimeValue(Now()) Between #00:00# And #4:59#,
    (start_time Between Date()-1 & ' ' & #05:00# And Date()-1 & ' ' & #23:59:59#)
    OR
    (start_time Between Date() & ' ' & #00:00# And Date() & ' ' & #23:59:59#),
    (start_time Between Date() & ' ' & #00:00# And Date() & ' ' & #23:59:59#));
I just now need to build the Now() condition into an IIf statement to decide which condition to execute!
You can use:
WHERE
(start_time Between Date() - 1 + #05:00:00# And Date() - 1 + #23:59:59#)
Or
(start_time Between Date() + #05:00:00# And Date() + #23:59:59#)
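This works because Access stores Date/Time values as numbers (whole days, with the time of day as the fractional part), so Date() + #05:00:00# is simply today at 5 AM and no string concatenation is needed.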

Rails show different object every day

I want to match my user to a different user in his/her community every day. Currently, I use code like this:
@matched_user = User.near(@user).order("RANDOM()").first
But I want a different @matched_user on a daily basis. I haven't been able to find anything on Stack Overflow or in the APIs that has given me insight into how to do it. I feel it should be simpler than resorting to a rake task with cron. (I'm on Postgres.)
Whenever I find myself hankering for shared 'memory' or transient state, I think to myself "this is what (distributed) caches were invented for".
@matched_user = Rails.cache.fetch(@user.cache_key + '/daily_match', expires_in: 1.day) {
  User.near(@user).order("RANDOM()").first
}
NOTE: While specifying a TTL for a cache entry tells Rails/the cache system to try to keep that value for the given timeframe, there's NO guarantee that it will. In particular, a cache that aggressively reclaims memory may expire an entry well before its desired expires_in time.
For this particular use case it shouldn't be a big deal, but where the business/domain logic demands periodically generated values that are durable, you really have to persist them in your database, as sketched below.
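For example, a sketch only, assuming a hypothetical DailyMatch model with user, matched_user and date columns:

# Persist the day's pairing; the block runs only when no row exists yet.
@matched_user = DailyMatch.find_or_create_by(user: @user, date: Date.current) { |m|
  m.matched_user = User.near(@user).order("RANDOM()").first
}.matched_user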
How about using PostgreSQL's SETSEED function? I used the date as the seed so that the seed changes every day but stays consistent within a day:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i/1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
You may want to seed a random value after using this so that any future calls to random aren't biased:
random = User.connection.execute("SELECT RANDOM()").to_a.first["random"]
# Same code as above:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i/1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
# Use random value before seed to make new seed:
User.connection.execute "SELECT SETSEED(#{random})"
I have split these steps into different sections just for readability. You can optimise the query later.
1) Find all user records created before this morning, so that the count stays frozen for the day.
users_till_today_morning = User.where("created_at < ?", DateTime.now.in_time_zone(Time.zone).beginning_of_day)
2) Pluck all IDs
user_ids = users_till_today_morning.pluck(:id)
3) Today's day of the month: it will be in the range (1..31) but will remain constant throughout the day.
day_today = Time.now.day
4) Select the same ID for the day
todays_user_id = user_ids[day_today % user_ids.count]
@matched_user = User.find(todays_user_id)
So it will give you a random user record while keeping the same record throughout the day!
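Put together, a sketch under the same assumptions (daily_match is a hypothetical helper name):

# Returns the same pseudo-random user for the whole day, per the steps above.
def daily_match
  user_ids = User.where("created_at < ?", Time.zone.now.beginning_of_day).pluck(:id)
  return nil if user_ids.empty?
  User.find(user_ids[Time.now.day % user_ids.count])
end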