Group data by date in Splunk

My Splunk query currently displays the data as below (the values for the 3 tier columns appear in 3 separate rows):
|Date |Tier 1|Tier 2|Tier 3
|1/1/2022|33|BLANK|BLANK
|1/1/2022|BLANK|56|BLANK
|1/1/2022|BLANK|BLANK|121
|1/2/2022|21|BLANK|BLANK
|1/2/2022|BLANK|78|BLANK
|1/2/2022|BLANK|BLANK|543
I need to display the data in the table as follows:
|Date |Tier 1|Tier 2|Tier 3
|1/1/2022|33|56|121
|1/2/2022|21|78|543
Here's a small snippet of my query
|eval Tier1=(StatusCode>400)
|eval Tier2=(StatusCode>499)
|eval Tier3=(StatusCode>500)
| fields Date Tier1 Tier2 Tier3
| sort Date

To regroup the results, use the stats command; values(*) as * gathers the non-blank values of every field into a single row per Date.
| eval Tier1=(StatusCode>400)
| eval Tier2=(StatusCode>499)
| eval Tier3=(StatusCode>500)
| fields Date Tier1 Tier2 Tier3
| stats values(*) as * by Date
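Outside Splunk, the effect of stats values(*) as * by Date can be pictured as folding the sparse rows into one record per date. A minimal Python sketch with the sample data (illustrative only, not how Splunk runs internally):

```python
# Sparse rows: each carries only one tier value for its date.
rows = [
    {"Date": "1/1/2022", "Tier1": 33},
    {"Date": "1/1/2022", "Tier2": 56},
    {"Date": "1/1/2022", "Tier3": 121},
    {"Date": "1/2/2022", "Tier1": 21},
    {"Date": "1/2/2022", "Tier2": 78},
    {"Date": "1/2/2022", "Tier3": 543},
]

merged = {}
for row in rows:
    date = row["Date"]
    record = merged.setdefault(date, {"Date": date})
    # Like values(*): keep every non-blank field seen for this date.
    for field, value in row.items():
        record[field] = value

result = list(merged.values())
```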

Related

Splunk query group by multiple fields

I have following splunk fields
Date,Group,State
State can have following values InProgress|Declined|Submitted
I like to get following result
Date       Group  TotalInProgress  TotalDeclined  TotalSubmitted  Total
-----------------------------------------------------------------------
12-12-2021 A      13               10             15              38
I couldn't figure it out. Any help would be appreciated.
Perhaps this example query will help.
| makeresults | eval _raw="Date,Group,State
12-12-2021,A,InProgress
12-12-2021,B,InProgress
12-12-2021,A,Declined
12-12-2021,A,InProgress
12-12-2021,A,Submitted
12-12-2021,B,Submitted
12-12-2021,A,InProgress
12-12-2021,A,InProgress
12-12-2021,B,Declined
12-12-2021,A,InProgress
12-12-2021,A,Submitted
12-12-2021,A,Submitted"
| multikv forceheader=1
```Above lines just set up test data```
```Set variables based on the State field```
| eval InProgress=if(State="InProgress", 1, 0), Declined=if(State="Declined", 1, 0), Submitted=if(State="Submitted", 1, 0)
```Count events```
| stats count as Total, sum(InProgress) as TotalInProgress, sum(Declined) as TotalDeclined, sum(Submitted) as TotalSubmitted by Date,Group
| table Date Group TotalInProgress TotalDeclined TotalSubmitted Total
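The eval/stats pattern above is just conditional counting: one 0/1 flag per state, then sums grouped by Date and Group. The same tally, sketched in Python with a handful of made-up events (field names assumed from the question):

```python
from collections import Counter

events = [
    ("12-12-2021", "A", "InProgress"), ("12-12-2021", "B", "InProgress"),
    ("12-12-2021", "A", "Declined"),   ("12-12-2021", "A", "Submitted"),
    ("12-12-2021", "A", "InProgress"),
]

per_state = Counter()   # mirrors sum(InProgress) etc. by Date,Group
totals = Counter()      # mirrors count as Total by Date,Group
for date, group, state in events:
    per_state[(date, group, state)] += 1
    totals[(date, group)] += 1
```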

Display result count of multiple search query in Splunk table

I want to display a table in my dashboard with 3 columns called Search_Text, Count, and Count_Percentage.
How do I formulate the Splunk query so that I can display 2 search queries with their result counts and percentages in table format?
Example,
Heading Count Count_Percentage
SearchText1 4 40
SearchText2 6 60
The below query will create a column named SearchText1 which is not what I want:
index=something "SearchText1" | stats count AS SearchText1
Put each query after the first in an append and set the Heading field as desired. Then use the stats command to count the results and group them by Heading. Finally, get the total and compute percentages.
index=foo "SearchText1" | eval Heading="SearchText1"
| append [ | search index=bar "SearchText2" | eval Heading="SearchText2" ]
| stats count as Count by Heading
| eventstats sum(Count) as Total
| eval Count_Percentage=(Count*100/Total)
| table Heading Count Count_Percentage
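The eventstats/eval arithmetic is straightforward: the total is the sum of the per-heading counts, and each percentage is Count*100/Total. A quick Python check with the counts from the example:

```python
counts = {"SearchText1": 4, "SearchText2": 6}

total = sum(counts.values())  # eventstats sum(Count) as Total
table = [
    {"Heading": h, "Count": c, "Count_Percentage": c * 100 / total}
    for h, c in counts.items()
]
```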
Showing the absence of search results is a little tricky and changes the above query a bit. Each search will need its own stats command and an appendpipe command to detect the lack of results and create some. Try this:
index=main "SearchText1"
| eval Heading="SearchText1"
| stats count as Count by Heading
| appendpipe
[ stats count
| eval Heading="SearchText1", Count=0
| where count=0
| fields - count]
| append
[| search index=main "SearchText2"
| eval Heading="SearchText2"
| stats count as Count by Heading
| appendpipe
[ stats count
| eval Heading="SearchText2", Count=0
| where count=0
| fields - count] ]
| eventstats sum(Count) as Total
| eval Count_Percentage=(Count*100/Total)
| table Heading Count Count_Percentage
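The appendpipe subsearches exist only to inject a zero row when a search matches nothing, so the heading still appears in the final table. The fix-up amounts to this (Python sketch, headings assumed from the example):

```python
expected_headings = ["SearchText1", "SearchText2"]
counts = {"SearchText2": 6}  # suppose SearchText1 matched no events

# Like the appendpipe trick: add a zero row for any heading with no results.
for heading in expected_headings:
    counts.setdefault(heading, 0)

total = sum(counts.values())
```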

Splunk dbxquery merge with splunk search

I am trying to merge a Splunk search query with a database query result set. Basically I have a Splunk dbxquery 1 which returns userid and email from the database as follows for a particular user id:
| dbxquery connection="CMDB009" query="SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid AND dra.value in ('xy67383') "
Above query outputs
VALUE EMAIL
xv67383 xyz@test.com
Another query is a Splunk query 2 that provides the user ids as follows:
index=index1 (host=xyz OR host=ABC) earliest=-20m@m
| rex field=_raw "samlToken\=(?<user>.+?):"
| join type=outer usetime=true earlier=true username,host,user
[search index=index1 source="/logs/occurences.log" SERVER_SERVER_CONNECT NOT AMP earliest=@w0
| rex field=_raw "Origusername\((?<username>.+?)\)"
| rex field=username "^(?<user>,+?)\:"
| rename _time as epoch1]
| stats count by user | sort -count | table user
This above query 2 returns a column called user but not email.
What I want to do is add a column called email from splunk dbxquery 1 for all matching rows by userid in output of query 1. Basically want to add email as additional field for each user returned in query 2.
What I tried so far is this but it does not give me any results. Any help would be appreciated.
index=index1 (host=xyz OR host=ABC) earliest=-20m@m
| rex field=_raw "samlToken\=(?<user>.+?):"
| join type=outer usetime=true earlier=true username,host,user
[search index=index1 source="/logs/occurences.log" SERVER_SERVER_CONNECT NOT AMP earliest=@w0
| rex field=_raw "Origusername\((?<username>.+?)\)"
| rex field=username "^(?<user>,+?)\:"
| rename _time as epoch1]
| stats count by user | sort -count
| table user
| map search="| dbxquery connection=\"CMDB009\" query=\"SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid AND dra.value in ('$user'):\""
Replace $user with $user$ in the map command. Splunk uses a $ on each end of a token.
The username field is not available at the end of the query because the stats command stripped it out. The only fields available after stats are the ones mentioned in the command (user and count in this case). To make the username field available, add it to the stats command. That may, however, change your results.
| rex field=_raw "samlToken\=(?<user>.+?):"
| join type=outer usetime=true earlier=true username,host,user
[search index=index1 source="/logs/occurences.log" SERVER_SERVER_CONNECT NOT AMP earliest=@w0
| rex field=_raw "Origusername\((?<username>.+?)\)"
| rex field=username "^(?<user>,+?)\:"
| rename _time as epoch1]
| stats count by user, username | sort -count
| table user, username
| map search="| dbxquery connection=\"CMDB009\" query=\"SELECT dra.value, z.email FROM DRES_PRINTABLE z, DRES.CREDENTIAL bc, DRES.CRATTR dra WHERE z.userid = bc.drid AND z.drid = dra.dredid AND dra.value in ('$user$')\""
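Conceptually, the map/dbxquery step is a keyed lookup: each user produced by the search is enriched with the matching email from the database. A hypothetical Python sketch of that enrichment (the second user id and the counts are made up):

```python
search_rows = [
    {"user": "xy67383", "count": 12},
    {"user": "zz00000", "count": 3},   # hypothetical user with no DB match
]
# Stand-in for the dbxquery result set: VALUE -> EMAIL
db_emails = {"xy67383": "xyz@test.com"}

for row in search_rows:
    row["email"] = db_emails.get(row["user"])  # None when no match
```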

Get the row with latest start date from multiple tables using sub select

I have data from 3 tables as copied below. I am not using joins to get the data; I don't know how to use joins in a multiple-table scenario. I need to update the rows with the older eff_start_ts dates to sysdate in one of the tables when more than 2 rows are returned for a particular user.
subscription_id |Client_id
----------------------------
20685413 |37455837
reward_account_id|subscription_id |CURRENCY_BAL_AMT |CREATE_TS |
----------------------------------------------------------------------
439111697 | 20685413 | -40 |1-09-10 |
REWARD_ACCT_DETAIL_ID|REWARD_ACCOUNT_ID |EFF_START_TS |EFF_STOP_TS |
----------------------------------------------------------------------
230900968 | 439111697 | 14-06-11 | 15-01-19
47193932 | 439111697 | 19-02-14 | 19-12-21
243642632 | 439111697 | 18-03-23 | 99-12-31
247192972 | 439111697 | 17-11-01 | 17-11-01
The SQL should update EFF_STOP_TS in the last table for every row except 47193932, because that row has the latest EFF_START_TS.
The expected result is to update the EFF_STOP_TS column of rows 230900968, 243642632, and 247192972 to sysdate.
As I understand it, you need to update per REWARD_ACCOUNT_ID. Try the code below:
UPDATE REWARD_ACCT_DETAIL RAD
SET EFF_STOP_TS = SYSDATE
WHERE EFF_START_TS NOT IN (SELECT MAX(EFF_START_TS)
                             FROM REWARD_ACCT_DETAIL RAD1
                            WHERE RAD.REWARD_ACCOUNT_ID = RAD1.REWARD_ACCOUNT_ID);
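The logic of that UPDATE can be checked outside Oracle. Here it is replayed in SQLite via Python, with a literal 'today' standing in for SYSDATE and the four rows from the question (a sketch to verify the logic, not the production statement):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE reward_acct_detail (
    reward_acct_detail_id INTEGER, reward_account_id INTEGER,
    eff_start_ts TEXT, eff_stop_ts TEXT)""")
con.executemany("INSERT INTO reward_acct_detail VALUES (?, ?, ?, ?)", [
    (230900968, 439111697, "14-06-11", "15-01-19"),
    (47193932,  439111697, "19-02-14", "19-12-21"),
    (243642632, 439111697, "18-03-23", "99-12-31"),
    (247192972, 439111697, "17-11-01", "17-11-01"),
])

# Stop every row except the one with the latest eff_start_ts per account.
con.execute("""
    UPDATE reward_acct_detail AS rad
    SET eff_stop_ts = 'today'
    WHERE eff_start_ts <> (SELECT MAX(eff_start_ts)
                             FROM reward_acct_detail rad1
                            WHERE rad1.reward_account_id = rad.reward_account_id)
""")

kept = [i for (i,) in con.execute(
    "SELECT reward_acct_detail_id FROM reward_acct_detail "
    "WHERE eff_stop_ts <> 'today'")]
updated = con.execute(
    "SELECT COUNT(*) FROM reward_acct_detail "
    "WHERE eff_stop_ts = 'today'").fetchone()[0]
```

Only the row with the latest start date survives untouched; the other three get the new stop date.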

Selecting records in groups by date - possible?

I don't think there is an elegant way to do this, but here goes. The database contains 5,000 records with timestamps spanning 2 years. I need to pull the records grouped under each day.
So it looks like..
09/09/2009 - record938, record2, record493
09/10/2009 - record260, record485, record610
...etc
I cannot use GROUP BY. There are duplicates and that's OK. I need to show them.
Is this possible? PHP/MySQL?
One way of doing it is looping through every day of the year and running a query with "WHERE DAY(created_at)...", but obviously this isn't elegant.
How can I do this? I posted this question before without a satisfactory answer (the answer was what I just stated above).
MySQL has the group_concat() aggregate function:
SELECT date(rec_time), group_concat(rec_id)
FROM records GROUP BY date(rec_time);
This returns, for each date, all rec_id values from the table joined by commas. If you want a separator other than a comma, use group_concat(some_column SEPARATOR '-').
Example
For example if your table looks like:
+--------+---------------------+
| rec_id | rec_time |
+--------+---------------------+
| 1 | 2009-11-28 10:00:00 |
| 2 | 2009-11-28 20:00:00 |
| 3 | 2009-11-27 15:00:00 |
| 4 | 2009-11-27 07:00:00 |
| 5 | 2009-11-28 08:00:00 |
+--------+---------------------+
Then this query gives:
mysql> SELECT date(rec_time), group_concat(rec_id)
-> FROM records GROUP BY date(rec_time);
+----------------+----------------------+
| date(rec_time) | group_concat(rec_id) |
+----------------+----------------------+
| 2009-11-27 | 3,4 |
| 2009-11-28 | 1,2,5 |
+----------------+----------------------+
Caveat
Beware that the result is limited by the group_concat_max_len system variable, which defaults to only 1024 bytes! To avoid hitting this wall, you should execute this before running the query:
SET SESSION group_concat_max_len = 65536;
Or more, depending on how many results you expect. Note that this value cannot be larger than max_allowed_packet.
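SQLite ships with the same group_concat() aggregate, so the technique is easy to try from Python (note that SQLite, unlike MySQL, does not guarantee the order of the concatenated ids):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (rec_id INTEGER, rec_time TEXT)")
con.executemany("INSERT INTO records VALUES (?, ?)", [
    (1, "2009-11-28 10:00:00"), (2, "2009-11-28 20:00:00"),
    (3, "2009-11-27 15:00:00"), (4, "2009-11-27 07:00:00"),
    (5, "2009-11-28 08:00:00"),
])

rows = con.execute(
    "SELECT date(rec_time), group_concat(rec_id) "
    "FROM records GROUP BY date(rec_time) ORDER BY 1").fetchall()
```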