Splunk - Merging Associated Events

I have a script which sends individual events into Splunk; each event is essentially a report on an HTTP request, either GET or POST. The event contains a number of fields, but two key ones are StepName and Timing:
StepName will be a title for the HTTP request, e.g. PostLogin
Timing will be an integer value of the milliseconds taken by the HTTP request
I'm writing a report which shows the average time taken for each step over the last 15 minutes. However, from an end user's point of view, some steps are part of one process, e.g.
Step1 - GetLoginPage
Step2 - PostLoginPage
Step3 - ProcessUserDetails
Step4 - GetHomePage
In this case Step2 and Step3 would be one process for an end user, so I'd like to be able to report on them as if they were one step, so that the following:
GetLoginPage 50
PostLoginPage 100
ProcessUserDetails 250
GetHomePage 80
would become
GetLoginPage 50
PostLoginPage 350
GetHomePage 80
I can use a replace on the StepName so I have
GetLoginPage 50
PostLoginPage 100
PostLoginPage 250
GetHomePage 80
How can I then merge these results so that it sums the two PostLoginPage steps and then gives me an average over the time period for the three individual steps?
Note each step has a field called TransactionGUID which associates a group of steps for the same execution.
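One way to express this in SPL (a sketch; index=web_steps is a hypothetical placeholder for wherever these events are indexed): normalise the step name, sum the timings per TransactionGUID so the two merged steps collapse into one value per execution, then average across executions.
index=web_steps earliest=-15m
| eval StepName=replace(StepName, "ProcessUserDetails", "PostLoginPage")
| stats sum(Timing) as Timing by TransactionGUID StepName
| stats avg(Timing) as AvgTiming by StepName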

If you post your question over at http://splunk-base.splunk.com/answers/, you'll have access to a greater audience of Splunk expertise, and I will attempt to answer your question there.

Related

pg_stat_statements_calls in Grafana

The problem with pg_stat_statements_calls in Grafana is that the count for a given period is not displayed. I have tried various rate and irate functions, but when I choose a time range such as "Last 5, 10, or 15 minutes", the values don't change; they remain the same huge totals. I also added an interval, but it didn't help.
My request looks like this:
topk(30, (pg_stat_statements_calls{datname!~"template.*", datname!~"postgres", instance=~"$server.+", datname=~"$database", short_query!~"(BEGIN|COMMIT|SET.*|ROLLBACK|BEGIN ISOLATION LEVEL READ COMMITTED)"}))
I tried:
rate
irate
delta
interval
But my count still does not adjust to the time range.
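For what it's worth, pg_stat_statements_calls is a cumulative counter, so its raw value will always look like the same huge number regardless of the dashboard range, and rate/irate give per-second rates rather than counts. One approach to get a count for the selected period (a sketch, assuming Grafana's built-in $__range variable) is to wrap the counter in increase():
topk(30, increase(pg_stat_statements_calls{datname!~"template.*", datname!~"postgres", instance=~"$server.+", datname=~"$database", short_query!~"(BEGIN|COMMIT|SET.*|ROLLBACK|BEGIN ISOLATION LEVEL READ COMMITTED)"}[$__range]))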

Splunk Concurrency Calculation

I have some data from logs in Splunk where I need to determine what other requests were running concurrently at the time of any single event.
Using the following query, I was able to have it return a column for the number of requests that ran at the same time within my start time and duration.
index="sfdc" source="sfdc_event_log://EventLog_SFDC_Production_eventlog_hourly" EVENT_TYPE IN (API, RestAPI) RUN_TIME>20000
| eval endTime=_time
| eval permitTimeInSecs=(RUN_TIME-20000)/1000
| eval permitAcquiredTime=endTime-permitTimeInSecs
| eval dbTotalTime=DB_TOTAL_TIME/1000000
| concurrency start=permitAcquiredTime duration=permitTimeInSecs
| table _time API_TYPE EVENT_TYPE ENTITY_NAME apimethod concurrency permitAcquiredTime permitTimeInSecs RUN_TIME CPU_TIME dbTotalTime REQUEST_ID USER_ID
| fieldformat dbTotalTime=round(dbTotalTime,0)
| rename permitAcquiredTime as "Start Time", permitTimeInSecs as "Concurrency Duration", concurrency as "Concurrent Running Events", API_TYPE as "API Type", EVENT_TYPE as "Event Type", ENTITY_NAME as "Entity Name", apimethod as "API Method", RUN_TIME as "Run Time", CPU_TIME as "CPU Time", dbTotalTime as "DB Total Time", REQUEST_ID as "Request ID", USER_ID as "User ID"
| sort "Concurrent Running Events" desc
I am now trying to investigate a single event in these results. For example, the top event says that at the time it ran, there were 108 concurrent requests running in the 20 second window of time.
How can I identify those 108 events using this data?
I imagine it would mean querying the events that fall within a specific time range, but I am not sure if I need to check something like _time ± 10 seconds to see what was running within the 20 second window.
I just need to understand the data behind these 108 events a little more for this top example. My end goal here is to be able to add a drill-down to the dashboard so that when I click on the 108, I can see those events that were running.
Essentially, you are on the right lines. What you want to do is create a search (presumably on the original data) using earliest=<beginning of 20 second window> latest=<end of 20 second window> with your calculated values.
You have the start time and can calculate the end time, then pipe these as variables into a new search:
| search earliest=start_time latest=end_time index="sfdc" etc.
I can't check this here right now, but it's probably something along those lines; there are quite likely more elegant ways to do the same. Hope I'm not wildly off the mark and this at least helps a little.
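A more concrete option (a sketch, untested: it reuses the calculated fields above, drops the source filter for brevity, and narrows to a single row with head 1) is to feed the window boundaries into map, which runs a follow-up search per row with $field$ token substitution:
index="sfdc" source="sfdc_event_log://EventLog_SFDC_Production_eventlog_hourly" EVENT_TYPE IN (API, RestAPI) RUN_TIME>20000
| eval endTime=_time
| eval permitAcquiredTime=endTime-(RUN_TIME-20000)/1000
| head 1
| map search="search index=sfdc earliest=$permitAcquiredTime$ latest=$endTime$ EVENT_TYPE IN (API, RestAPI)"
For the dashboard drill-down itself, the same idea applies: pass the row's permitAcquiredTime and endTime values as tokens into the drill-down panel's earliest/latest.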

Trigger splunk alert when received values do not change

I receive exchange rates from an external web service and I log the response received as below (note both lines contain data from a single response):
com.test.Currency#366c1a1e[Id=<Null>,Code=<Null>,Feedcode=Gbparslite,Rate=<Null>,Percentaqechangetrigger=<Null>,Bid=93.4269,Offer=93.43987,Mustinvertprice=False],
com.test.Currency#54acb93a[Id=<Null>,Code=<Null>,Feedcode=Gbphkdlite,Rate=<Null>,Percentaqechangetrigger=<Null>,Bid=10.04629,Offer=10.04763,Mustinvertprice=False],
I want to set up an alert which triggers when the last x (x=5) values received have not changed.
Assuming you're looking to alert when a particular field doesn't change after 5 events, you can try the following.
index=data | head 5 | stats dc(Bid) as dv
Then alert if dv equals 1. dc(Bid) calculates the number of unique values of Bid, in this case over the last 5 events. If there is no difference, it will be 1. If there are multiple values, dc(Bid) will be greater than 1.
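Since the log carries several feed codes, in practice you'd likely want this check per currency pair. A sketch of the same idea per Feedcode (search results arrive newest-first, so streamstats numbers the most recent events first):
index=data
| streamstats count as n by Feedcode
| where n<=5
| stats dc(Bid) as dv by Feedcode
| where dv=1
Then alert when this returns any rows: each row is a Feedcode whose last 5 Bid values were identical.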

How to find results being between two values of different keys with Redis?

I'm creating a game matchmaking system using Redis based on MMR, a number that pretty much sums up the skill of a player, so that the system can match players with others of roughly the same skill.
For example, if a player with an MMR of 1000 joins the queue, the system will try to find other people with an MMR in the range 950 to 1050 to match with this player. But if after one minute it cannot find any player with the given stats, it will widen the range to 900 to 1100 (by a constant threshold).
What I want to do is really easy with relational database design but I can't figure out how to do it with Redis.
The queue table implementation would be like this:
+----+---------+------+-------+
| ID | USER_ID | MMR | TRIES |
+----+---------+------+-------+
| 1 | 50 | 1000 | 1 |
| 2 | 70 | 1500 | 1 |
| 3 | 350 | 1200 | 1 |
+----+---------+------+-------+
So when a new player queues up, their MMR is checked against the other players in the queue; if another player is found within the 5% threshold, the two players are matched. If not, the new player is added to the table to wait for new players to queue up and be compared, or for one minute to pass, after which a cron job increments the tries and retries the matching.
The only way I can imagine is to use two separate keys for the low and high of each player in the queue like this
MatchMakingQueue:User:1:Low => 900
MatchMakingQueue:User:1:High => 1100
but the keys will be different, and I can't, for example, get all users whose range falls between a low of 900 and a high of 1100!
I hope I've been clear enough any help would be much appreciated.
As @Guy Korland suggested, a Sorted Set can be used to track and match players based on their MMR, and I do not agree with the OP's "won't scale" comment.
Basically, when a new player joins, the ID is added to a zset with the MMR as its score.
ZADD players:mmr 1000 id:50
The matchmaking is done for each user, e.g. id:50, with the following query:
ZREVRANGEBYSCORE players:mmr 1050 950 LIMIT 0 2
A match is found if two IDs are returned and at least one of them is different from that of the new player. To make the match, both IDs (the new player's and the matched one's) need to be removed from the set - I'd use a Lua script to implement this piece of logic (matching and removing) for atomicity and to reduce communication, but it can be done in the client as well.
There are different ways to keep track of the retries, but perhaps the simplest one is to use another Sorted Set, where the score is that metric.
The following pseudo Redis Lua code is a minimal example of the approach:
local kmmrs, kretries = KEYS[1], KEYS[2]
local id = ARGV[1]
local mmr = tonumber(redis.call('ZSCORE', kmmrs, id))
-- ZSCORE returns nil for a member not yet in the retries zset,
-- so default to 1 retry (an initial window of +/-5%).
local retries = tonumber(redis.call('ZSCORE', kretries, id)) or 1
local min, max = mmr*(1-0.05*retries), mmr*(1+0.05*retries)
local candidates = redis.call('ZREVRANGEBYSCORE', kmmrs, max, min, 'LIMIT', 0, 2)
if #candidates < 2 then
    -- No match yet: widen the window for the next attempt.
    redis.call('ZINCRBY', kretries, 1, id)
    return nil
end
-- Pick whichever candidate is not the requesting player.
local reply
if candidates[1] ~= id then
    reply = candidates[1]
else
    reply = candidates[2]
end
-- Remove both matched players from the queue atomically.
redis.call('ZREM', kmmrs, id, reply)
redis.call('ZREM', kretries, id, reply)
return reply
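For reference, a script along these lines can be invoked from redis-cli (the matchmake.lua file name and the players:retries key name are hypothetical; note the lone comma that separates key names from arguments):
redis-cli --eval matchmake.lua players:mmr players:retries , id:50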
Let me get the problem right: you want to find all the users within a given range of MMR values. What if you make the other users say "I fall in this range" themselves?
Read about Redis Pub/Sub.
When a user joins in, publish its MMR to the rest of the players.
Write the code on the user side to check if his/her MMR falls in the range.
If it does, the user will publish back to a channel saying that it falls in that range. Else, the user will silently discard the message.
Repeat these steps if you get no response back in 1 minute.
You can make one channel (let's say MatchMMR) for all users to publish MMR match requests, which should be subscribed to by all the users, and make a user-specific channel in case somebody has an MMR in the calculated range.
Format your published messages so that you can send all the information like "retry count", "match range percentage", "MMR value" etc., so that your code on the user side can calculate whether it is the right fit for the MMR.
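In raw Redis commands, the flow might look like this (the MatchMMR channel is the one suggested above; the per-user reply channel name and message format are assumptions):
SUBSCRIBE MatchMMR
PUBLISH MatchMMR "id:50|mmr:1000|retries:1"
SUBSCRIBE MatchMMR:user:50
PUBLISH MatchMMR:user:50 "id:70|in-range"
Every waiting client subscribes to MatchMMR; a new player publishes its MMR there and listens on its own channel, and any client whose MMR falls in the advertised range publishes back on that per-user channel.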
Read more about Redis Pub/Sub at: https://redis.io/topics/pubsub

Is MCOIMAPSearchOperation with searchSinceReceivedDate time specific or granular?

Below is the code I have to do a search using searchSinceReceivedDate.
It will return all the messages for a given date.
I want to know if the method also uses the hour, minute, and second portion of the NSDate,
so that I can say I want to retrieve all new messages from the last 5 minutes.
MCOIMAPSearchExpression *dateFilter = [MCOIMAPSearchExpression searchSinceReceivedDate:[NSDate dateWithTimeIntervalSinceNow:seconds]];
MCOIMAPSearchOperation *searchOperation = [session searchExpressionOperationWithFolder:folder expression:dateFilter];
Below is the debug output of the interaction:
2014-08-01 08:36:52.753 myApp[3154:360f] - 1 - 8 UID SEARCH SINCE 1-Aug-2014
The answer I received from the mailcore2 forum was that I should use the last UID as the basis for my next search.
This worked for me.
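For anyone who hits the same limit: the IMAP SEARCH SINCE criterion is date-granular only, as the UID SEARCH SINCE 1-Aug-2014 line above shows, which is why tracking UIDs works. A sketch of fetching everything newer than a stored UID (lastUID is a hypothetical variable holding the highest UID already processed):
// Fetch headers for all messages with a UID above the last one seen.
MCOIndexSet *uids = [MCOIndexSet indexSetWithRange:MCORangeMake(lastUID + 1, UINT64_MAX)];
MCOIMAPFetchMessagesOperation *op = [session fetchMessagesOperationWithFolder:folder
                                                                  requestKind:MCOIMAPMessagesRequestKindHeaders
                                                                         uids:uids];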