The problem: pg_stat_statements_calls in Grafana does not display a count for the selected period. I have tried various rate and irate functions, but when I choose a time range like "the last 5, 10, or 15 minutes," the values don't change; they stay the same huge numbers. I also added an interval, but it didn't help.
My query looks like this:
topk(30, (pg_stat_statements_calls{datname!~"template.*", datname!~"postgres", instance=~"$server.+", datname=~"$database", short_query!~"(BEGIN|COMMIT|SET.*|ROLLBACK|BEGIN ISOLATION LEVEL READ COMMITTED)"}))
I tried:
rate
irate
delta
interval
But my count does not adjust to the time range.
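For example, one of the rate-style variants I tried looked roughly like this (a sketch, with the label filters shortened; $__range is Grafana's built-in range variable):
topk(30, rate(pg_stat_statements_calls{datname=~"$database", instance=~"$server.+"}[$__range]))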
I am a little confused, as I have events ingested into Splunk from a .csv file, originating from several different timezones: China, Pacific, Eastern, Europe, etc.
I have fields like start time, end time, TimeZone, TimeZoneID, sitename, conferenceID, hostname, etc.
For reference, conferenceID values look like 131146947830496273, 130855971227450408, and so on.
I was wondering: if I run a "...| stats count(conferenceID)" for a particular time interval (e.g., 12:00 pm to 3:00 pm today) while sitting in the Pacific timezone, the search should collect all events whose start time and end time fall within that interval in their originating timezones, not within the Splunk server's timezone.
Below are some samples of the logs I have:
testincsso,130878564690050083, Shona,"GMT-07:00, Pacific (San Francisco)",4,06/17/2019 09:33:17,06/17/2019 09:42:23,10,0,0,0,0,0,0,9,0,0,1,0,0,1,1
host = usloghost1.example.com sourcetype =webex_cdr
6/17/19
12:29:03.060 AM
testincsso,129392485072911500,Meng,"GMT+08:00, China (Beijing)",45,06/17/2019 07:29:03,06/17/2019 07:59:22,31,0,0,0,0,0,0,0,0,30,1,1,0,0,1
host = usloghost1.corp.example.com sourcetype = webex_cdr
6/17/19
12:19:11.060 AM
testincsso,131121310031953680,sarah ward,"GMT-07:00, Pacific (San Francisco)",4,06/17/2019 07:19:11,06/17/2019 07:52:54,34,0,0,0,0,0,0,0,0,34,3,3,0,0,2
host = usloghost1.corp.example.com sourcetype = webex_cdr
6/17/19
12:00:53.060 AM
testincsso,130878909569842780,Patrick Janesch,"GMT+02:00, Europe (Amsterdam)",22,06/17/2019 07:00:53,06/17/2019 07:04:50,4,0,0,0,0,0,0,4,0,2,3,2,0,1,2
host = usloghost1.corp.example.com sourcetype = webex_cdr
Update:
There are two fields in the events, start time and end time, for every conference held, expressed in its local timezone (the event's originating TZ).
Also, _time refers to the Splunk time, which I don't need in this case. What I do have are date_hour, date_minute, date_second, etc., which show the event's local timezone time (China, Europe, Asia, etc.).
So when I sit here in the Pacific TZ and search
index=test "testincsso" | stats count(conferenceID) by _time
over a time interval of the last 4 hours, the output should display the count of conferences by comparing against each event's local TZ time for the last 4 hours.
So do I need to use "| eval hour = strftime(_time, "%H")" or "| eval mytime=_time | convert timeformat="%H" ctime(mytime)" before the stats?
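In other words, something like this (a sketch of what I'm considering; 'start time' is the extracted field name, with the format from the samples above):
index=test "testincsso"
| eval start_epoch = strptime('start time', "%m/%d/%Y %H:%M:%S")
| where start_epoch >= relative_time(now(), "-4h")
| stats count(conferenceID)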
Thanks.
Also, changing the time picker's default behavior may give correct results.
I have events with the fields "start time" and "end time" from different TZs. So when I search events for, e.g., the date range "06-16-2019" using the time picker, I should get all events by looking at the "start time" field in the events, not Splunk's _time.
I want to change my Splunk time picker's default behavior so it produces output based on the events' own fields (e.g., "start time" and "end time"). Below is the query I changed in the source XML:
index=test sourcetype=webex "testinc" | eval earliest = $toearliest$ | eval latest = if($tolatest$ < 0, now(), $tolatest$) | eval datefield = strptime($Time$, "%m/%d/%Y %H:%M:%S") | stats count(Conference)
If you have any control over how the logs are generated, it's best to include the time zone as part of the timestamp. For example, "06/17/2019 07:00:53+0200". Then Splunk can easily convert the time.
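With an offset embedded in the timestamp, a props.conf sketch like this (assuming the webex_cdr sourcetype from the samples above) lets Splunk parse it directly:
[webex_cdr]
TIME_FORMAT = %m/%d/%Y %H:%M:%S%z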
If that's not an option, perhaps you can specify the time zone when the logs are read. Assuming each log is stored on a system in the originating time zone, the props.conf stanza for the Universal Forwarder should include a TZ attribute telling Splunk where in the world the log is from.
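For example, on the forwarder reading the Beijing site's logs, the stanza might look like this (a sketch; the zone value is illustrative):
[webex_cdr]
TZ = Asia/Shanghai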
If this doesn't help, please edit your question to say what problem you are trying to solve.
I would like some clarification on the tzs key for the Jawbone Moves endpoint: https://jawbone.com/up/developer/endpoints/moves. Is this key going to be present on all response items? If not, what types of records will have it vs. those that won't? Additionally, the docs indicate it will be an array of arrays in the following format:
"tzs": [
  [1384963500, "America/Phoenix"],
  [1385055720, "America/Los_Angeles"]
]
However, I am getting responses that look like the following:
"tzs": [[1468410383, -14400]]
Is the second value an offset, presumably in seconds?
The tzs key will appear in responses from the moves endpoint that provide data for a given day's move. It will always be present, but it will only contain more than one entry if the user changes timezones during the given time period (e.g., the user is travelling).
Here's the explanation from the documentation:
Each entry in the list contains a unix timestamp and a timezone. In most instances the timezone entry is a string containing the Olson timezone.
When the timezone entry is just a number, you are correct: it's the GMT offset in seconds, so -14400 is UTC-4, which corresponds to US/Eastern (daylight time).
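A consumer therefore needs to handle both shapes. A minimal Ruby sketch (the sample payload is illustrative, covering both forms of the second element):
require "json"
response = JSON.parse('{"tzs": [[1468410383, -14400], [1384963500, "America/Phoenix"]]}')
response["tzs"].each do |timestamp, zone|
  if zone.is_a?(String)
    # Olson timezone name, e.g. "America/Phoenix"
    puts "#{Time.at(timestamp).utc} -> #{zone}"
  else
    # Numeric GMT offset in seconds, e.g. -14400 == UTC-4
    puts "#{Time.at(timestamp).utc} -> UTC offset #{zone / 3600.0} hours"
  end
end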
I want to match my user to a different user in his/her community every day. Currently, I use code like this:
@matched_user = User.near(@user).order("RANDOM()").first
But I want a different @matched_user on a daily basis. I haven't been able to find anything on Stack Overflow or in the APIs that gives me insight into how to do it. I feel it should be simpler than resorting to a rake task with cron. (I'm on Postgres.)
Whenever I find myself hankering for shared 'memory' or transient state, I think to myself "this is what (distributed) caches were invented for".
@matched_user = Rails.cache.fetch(@user.cache_key + '/daily_match', expires_in: 1.day) {
  User.near(@user).order("RANDOM()").first
}
NOTE: While specifying a TTL for a cache entry tells Rails/the cache system to try to keep that value for the given timeframe, there's NO guarantee that it will. In particular, a cache that aggressively reclaims memory may expire an entry well before its desired expires_in time.
For this particular use case it shouldn't be a big deal, but where the business/domain logic demands periodically generated values that are durable, you really have to persist them in your database.
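If you do need that durability, one option is to persist each day's match in its own table. A sketch, where DailyMatch is a hypothetical model with user_id, matched_user_id, and matched_on columns (and a belongs_to :matched_user association):
match = DailyMatch.find_or_create_by(user_id: @user.id, matched_on: Date.current) do |m|
  # The block runs only when no match exists for today yet
  m.matched_user = User.near(@user).order("RANDOM()").first
end
@matched_user = match.matched_user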
How about using PostgreSQL's SETSEED function? I used the date as the seed so that the seed changes every day but stays consistent within a day:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i / 1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
You may want to re-seed with a random value afterwards so that any future calls to RANDOM() aren't biased:
random = User.connection.execute("SELECT RANDOM()").to_a.first["random"]
# Same code as above:
User.connection.execute "SELECT SETSEED(#{Date.today.strftime("%y%d%m").to_i / 1000000.0})"
@matched_user = User.near(@user).order("RANDOM()").first
# Restore randomness using the value saved before seeding:
User.connection.execute "SELECT SETSEED(#{random})"
I have split these steps into separate sections just for readability; you can optimise the query later.
1) Find all user records created before this morning, so that the count is frozen for the day:
usrs_till_today_morning = User.where("created_at < ?", DateTime.now.in_time_zone(Time.zone).beginning_of_day)
2) Pluck all the IDs:
user_ids = usrs_till_today_morning.pluck(:id)
3) Today's day of the month is in the range 1..31, but it remains constant throughout the day:
day_today = Time.now.day
4) Select the same ID for the whole day:
todays_user_id = user_ids[day_today % user_ids.count]
@matched_user = User.find(todays_user_id)
So it gives you a pseudo-random user record while keeping the same record throughout the day!
Trying to get records that were created this year, I stumbled upon this great question. The second answer says you can get all records from a model that were created today with:
Model.where("created_at >= ?", Time.now.beginning_of_day)
So, naturally, I tried the same thing with Time.now.beginning_of_year, and it works just fine.
However, what struck me as interesting is that the generated query (I tried it in the console) is
SELECT COUNT(*) FROM `invoices` WHERE (created_at >= '2012-12-31 23:00:00')
I wasn't aware that 2013 had already begun at 2012-12-31 23:00:00. How's that?
If you haven't set it yet, you should set your timezone in the config/application.rb file. Look for the line that begins with config.time_zone. (If you aren't sure what value to give, you can run rake time:zones:all to get a list of all available timezones.)
Once you've set your timezone, you should use Time.zone.now, as opposed to Time.now. This will properly "scope" your times to your timezone.
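For example (a sketch; "Berlin" stands in for whatever zone applies, UTC+1 in winter, which is exactly what would produce the 23:00 boundary above):
# config/application.rb ("Berlin" is an assumed zone)
config.time_zone = "Berlin"
# In the console, the local year boundary now maps to 23:00 UTC of the previous day:
Invoice.where("created_at >= ?", Time.zone.now.beginning_of_year).count
# => SELECT COUNT(*) FROM `invoices` WHERE (created_at >= '2012-12-31 23:00:00')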
Check the API for more details on TimeWithZone.