I would like to use libgit2 to create commits with specific timestamps other than the current system time. How is this done?
Use git_signature_new() to create the author and/or committer signatures (i.e. name + email + date + offset to UTC).
Pass those signatures to git_commit_create().
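If you happen to be driving libgit2 from Python, a minimal sketch of the same idea via the pygit2 bindings might look like the following; the repository path, identity, and timestamp are hypothetical, and pygit2.Signature takes a Unix timestamp plus a UTC offset in minutes, mirroring git_signature_new().
import pygit2

repo = pygit2.Repository('/path/to/repo')   # hypothetical repository path

# Unix timestamp and UTC offset (in minutes) for the backdated commit.
when = 1262304000    # 2010-01-01 00:00:00 UTC
offset = 120         # UTC+02:00

author = pygit2.Signature('Alice Example', 'alice@example.com', when, offset)
committer = pygit2.Signature('Alice Example', 'alice@example.com', when, offset)

tree = repo.index.write_tree()
parents = [] if repo.head_is_unborn else [repo.head.target]
repo.create_commit('HEAD', author, committer, 'Backdated commit', tree, parents)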
References:
git_signature_new() documentation
git_commit_create() documentation
Sample usage from the tests
I want to set a default value in a Many2many field. For example, this field in models.py:
# alarms
alarm_ids = fields.Many2many(
    'calendar.alarm', 'calendar_alarm_calendar_event_rel',
    string='Reminders', ondelete="restrict",
    help="Notifications sent to all attendees to remind of the meeting.")
There are also default values created by the system, and I want the first variant to be the default.
I know that I can set it by ID, but I don't know how.
You can use Command to set X2many field values.
From the documentation:
Via Python, we encourage developers craft new commands via the various functions of this namespace. We also encourage developers to use the command identifier constant names when comparing the 1st element of existing commands.
Example:
from odoo import Command, models, fields

class CalendarEvent(models.Model):
    _inherit = 'calendar.event'

    alarm_ids = fields.Many2many(
        default=lambda self: [Command.link(self.env.ref('calendar.alarm_notif_1').id)])
I just looked in Postgres; there are no IDs like in the XML screenshots. I just took the ID of a record and found some documentation.
The next example works:
There are transactional input CSV files arriving daily at an FTP location. I need to read these input files and process them in a daily batch execution. The names of the files remain the same every day, but the date gets appended at the end of the filenames each day. For example:
Day1
General_Ledger1_2020-07-01,
General_Ledger2_2020-07-01,
General_Ledger3_2020-07-01,
General_Ledger4_2020-07-01,
General_Ledger5_2020-07-01
Day2
General_Ledger1_2020-07-02,
General_Ledger2_2020-07-02,
General_Ledger3_2020-07-02,
General_Ledger4_2020-07-02,
General_Ledger5_2020-07-02
How can I append this Date information to the input file name every time the job runs?
I have faced a similar problem before, and this can be solved using a calculated parameter in the file path. There, you can create expressions that will retrieve the file dynamically.
Example:
CONCAT(UPPER(lit('$(Prefix)')), ADD_DAYS(TODATE(lit('$(currentTime)'), 'yyyy-mm-dd'), 'yyyy-mm-dd', -1), '.csv')
Breaking down the expression:
$(currentTime): this system parameter returns the current date (it also includes the timestamp).
TODATE(lit('$(currentTime)'), 'yyyy-mm-dd'): TODATE extracts just the date from the full timestamp, with the format 'yyyy-mm-dd'.
ADD_DAYS(TODATE(lit('$(currentTime)'), 'yyyy-mm-dd'), 'yyyy-mm-dd', -1): ADD_DAYS adds -1 to the date returned by TODATE(), so 2020-04-24 + (-1) gives us 2020-04-23.
$(Prefix): a user-defined input parameter of type String that the user provides at runtime, since the prefix is always dynamic.
CONCAT(): finally, CONCAT() combines all the results to form the exact file path. Some static text (the '.csv' extension here) is also added, since it is fixed for every file to be read.
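For comparison only, here is the same filename arithmetic sketched in plain Python, with a hypothetical prefix value; it mirrors the UPPER/TODATE/ADD_DAYS/CONCAT steps above by taking yesterday's date, formatting it as yyyy-mm-dd, and concatenating the pieces.
from datetime import date, timedelta

prefix = 'General_Ledger1_'                   # hypothetical $(Prefix) value
yesterday = date.today() - timedelta(days=1)  # ADD_DAYS(TODATE('$(currentTime)'), -1)
file_name = prefix.upper() + yesterday.strftime('%Y-%m-%d') + '.csv'
print(file_name)                              # e.g. GENERAL_LEDGER1_2020-07-01.csv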
Suppose you have a Keen IO collection called "survey-completed" that contains events matching the following pattern:
keen.id: <unique autogenerated id>
keen.timestamp: <autogenerated overridable timestamp>
userId: <hex string for user>
surveyScore: <integer from 1 to 10>
...
How would you create a report of only the most up-to-date satisfaction score for each user that responded to one or more surveys within a given amount of time (such as one week)?
There isn't a really elegant way to make it happen, but for a given userId you could return the most up-to-date event by creating a count query with a group_by on [surveyScore, keen.timestamp] and an order_by on the keen.timestamp property. You will want to set limit=1 to select only the most recent surveyScore.
If you'd like to use an extraction, the most straightforward way would be to run an extraction with property_names set to ["userId","keen.timestamp","surveyScore"]. Once you receive the results you can do some client-side post-processing. This is probably the best way if you want to look at all of your userIds.
If you're interested in a given userId and want to use an extraction, you can run an extraction with a filter on userId eq X and set the optional parameter latest to latest=1. The latest property is an integer containing the number of most recent events to extract. Note: the use of latest will call upon the keen.created_at timestamp instead of keen.timestamp (https://keen.io/docs/api/#the-keen-object).
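For the extraction route, the client-side post-processing mentioned above can be sketched roughly as follows; the events list and its values are hypothetical stand-ins for the extraction results, and the idea is simply to keep, per userId, the event with the latest keen.timestamp.
# 'events' stands in for the extraction results requested with
# property_names=["userId", "keen.timestamp", "surveyScore"].
events = [
    {'userId': 'a1f3', 'keen': {'timestamp': '2020-07-01T10:00:00.000Z'}, 'surveyScore': 7},
    {'userId': 'a1f3', 'keen': {'timestamp': '2020-07-03T09:30:00.000Z'}, 'surveyScore': 9},
    {'userId': 'b274', 'keen': {'timestamp': '2020-07-02T12:15:00.000Z'}, 'surveyScore': 4},
]

latest_by_user = {}
for event in events:
    user_id = event['userId']
    current = latest_by_user.get(user_id)
    # ISO-8601 timestamps compare correctly as plain strings.
    if current is None or event['keen']['timestamp'] > current['keen']['timestamp']:
        latest_by_user[user_id] = event

for user_id, event in latest_by_user.items():
    print(user_id, event['surveyScore'])   # most recent score per user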
I am trying to extract ticket details from ServiceNow. Is there a way to extract the details without ODBC? I have also tried the solution mentioned in [1]: https://community.servicenow.com/docs/DOC-3844, but I am receiving error 9 - subscript out of range.
Is there a better way to extract details efficiently? I tried asking this in the ServiceNow forum, but I thought I might get other opinions from here.
It's been a while since this question was asked. Hopefully the following is still useful.
I am extracting change data (not incident data), but the process should still be the same. You will need to gather the incident table and column information. Then there are a couple of ways to approach the problem.
1) If the data you are extracting has fixed parameters, such as a fixed period, fixed columns, a fixed group, etc., then you can create a report within ServiceNow and then use the REST/SOAP API to get the data in text/CSV format. You can use different Python modules to convert from CSV to xls or xlsx depending on your need. I used openpyxl, csv, xlsreader, xlswriter, etc.
See here for an example:
ServiceNow - How to use SOAP to download reports
2) If the data has dynamic parameters where you need to change columns, dates, filters, etc., you can still use the SOAP/REST API, but form the query within Python scripts instead of having a static report. This way you can change it on the fly based on your requirements.
Here is an example query against the database table. You can use the example linked above; just switch its URL with the following.
table_name = 'u_change_table_name'  # SN DB table holding change/incident info
table_limit = 800
table_query = 'active=true&sysparm_display_value=true&planned_start_date=today'
date_query = 'chg_start_date>=javascript:gs.daysAgoStart(1)^active=true^chg_type=normal'
table_fields = 'chg_number,chg_start_date,chg_duration,chg_end_date'  # Actual column names from the DB, not from the SN report.
url = (
    'https://yourcompany.service-now.com/api/now/table/' + table_name
    + '?sysparm_query=' + date_query
    + '&sysparm_fields=' + table_fields
    + '&sysparm_limit=' + str(table_limit)
)
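A minimal sketch of actually issuing that request with the requests library might look like the following; the credentials are hypothetical placeholders, and basic auth against the Table API is assumed.
import requests

user, password = 'api_user', 'api_password'   # hypothetical credentials

response = requests.get(
    url,                                       # the URL built above
    auth=(user, password),
    headers={'Accept': 'application/json'},
)
response.raise_for_status()

records = response.json().get('result', [])   # one dict per change record
for record in records:
    print(record.get('chg_number'), record.get('chg_start_date'))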
I want to match my user to a different user in his/her community every day. Currently, I use code like this:
@matched_user = User.near(@user).order("RANDOM()").first
But I want to have a different @matched_user on a daily basis. I haven't been able to find anything on Stack Overflow or in the APIs that has given me insight into how to do it. I feel it should be simpler than having to resort to a rake task with cron. (I'm on Postgres.)
Whenever I find myself hankering for shared 'memory' or transient state, I think to myself "this is what (distributed) caches were invented for".
@matched_user = Rails.cache.fetch(@user.cache_key + '/daily_match', expires_in: 1.day) {
  User.near(@user).order("RANDOM()").first
}
NOTE: While specifying a TTL for a cache entry tells Rails/the cache system to try and keep that value for the given timeframe, there's NO guarantee that it will. In particular, a cache that aggressively tries to reclaim memory may expire an entry well before its desired expires_in time.
For this particular use case it shouldn't be a big deal, but in cases where the business/domain logic demands durable, periodically generated values, you really have to factor that into your database.
How about using PostgreSQL's SETSEED function? I used the date as the seed so that the seed changes every day, but within a day the seed stays consistent:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i/1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
You may want to re-seed with a random value after using this so that any future calls to RANDOM() aren't biased:
random = User.connection.execute("SELECT RANDOM()").to_a.first["random"]
# Same code as above:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i/1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
# Use random value before seed to make new seed:
User.connection.execute "SELECT SETSEED(#{random})"
I have split these steps into different sections just for readability; you can optimise the query later.
1) Find all user records created before the beginning of today, so that the count stays frozen for the day.
usrs_till_today_morning = User.where("created_at <?", DateTime.now.in_time_zone(Time.zone).beginning_of_day)
2) Pluck all IDs
user_ids = usrs_till_today_morning.pluck(:id)
3) Today's day of the month will be in the range (1..31) but will remain constant throughout the day.
day_today = Time.now.day
4) Select the same ID for the day
todays_user_id = user_ids[day_today % user_ids.count]
#matched_user = User.find(todays_user_id)
So it will give you a user record that changes from day to day while staying the same throughout the day!