Splunk query to get non-matching IDs from two queries

Index=* sourcetype="publisher" namespace="app_1" | table ID message | where message="published"
Index=* sourcetype="consumer" namespace="app_1" | table ID message | where message="consumed"
I want to display the non-matching IDs by comparing the results of both queries. How can I achieve this?
If query 1 returns 100 records, query 2 returns 90 records, and all 90 records are present in query 1, then I want to see the 10 records which are not present in query 2.

There are several ways to achieve this outcome.
The following counts the number of publisher and consumer events for each ID, then shows the IDs that occur only once.
index=* sourcetype="publisher" OR sourcetype="consumer" namespace="app_1" ID="*" | stats count by ID | where count<2
In this next method, we use a subsearch and a join. This has the benefit of giving you the full event, not just the ID.
index=* sourcetype="publisher" namespace="app_1" ID="*" | join type=outer ID [ search index=* sourcetype="consumer" namespace="app_1" ID="*" | eval does_match=1 ] | where isnull(does_match)

Related

Splunk - I want to add a value from stats count() to a value from a lookup table and show that value in a table

The objective of the query I'm trying to write is to take a count of raw data from the previous month and add it to a count from a lookup table (.csv).
What I have attempted to do is…
index=*** source=***
| stats count(_raw) as monthCount
| join
[ | inputlookup Log_Count_YTD.csv]
| eval countYTD = toNumber(monthCount) + toNumber(TOTAL_COUNT_YTD)
| table countYTD
This query doesn’t return any value on a table. The TOTAL_COUNT_YTD is the only field from the inputlookup file. Let me know if there is any other information you need to help me out with this one. Thanks!
The stats command transforms the data so it has only 1 field: monthCount. The inputlookup returns only the TOTAL_COUNT_YTD field. The join command works by comparing values of common fields between the main search and the subsearch. Since there are no common fields no events are joined.
There is no need for join in this case. The appendcols command will do, assuming the CSV contains a single field in a single row.
index=*** source=***
| stats count as monthCount
| appendcols
[ | inputlookup Log_Count_YTD.csv]
| eval countYTD = tonumber(monthCount) + tonumber(TOTAL_COUNT_YTD)
| table countYTD
FWIW, the tonumber function is unnecessary, but doesn't hurt.
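If you would rather avoid appendcols too, a rough alternative (still assuming the CSV holds a single TOTAL_COUNT_YTD value) is to append the lookup row to the result and sum everything:
index=*** source=***
| stats count as monthCount
| append [| inputlookup Log_Count_YTD.csv | rename TOTAL_COUNT_YTD as monthCount]
| stats sum(monthCount) as countYTD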

Splunk left join is not giving results as expected

Requirement: for the payment card information used on a particular day, I want to find out whether any telesales orders were placed with the same payment card information.
I tried the query below; it is supposed to give me all the payment card information from online orders plus the matching payment info from telesales. But it is not giving correct results: the results show there are no telesales for the payment information, yet when I search Splunk directly I do find telesales as well. So the query is wrong.
index="orders" "Online order received" earliest=-9d latest=-8d
| rex field=message "paymentHashed=(?<payHash>.([a-z0-9_\.-]+))"
| rename timestamp as onlineOrderTime
| table payHash, onlineOrderTime
| join type=left payHash [search index="orders" "Telesale order received" earliest=-20d latest=-5m | rex field=message "paymentHashed=(?<payHash>.([a-z0-9_\.-]+))" | rename timestamp as TeleSaleTime | table payHash, TeleSaleTime]
| table payHash, onlineOrderTime, TeleSaleTime
Please help me fix the query, or suggest a query that meets my requirement.
If you do want to do this with a join, what you had, slightly changed, should be correct:
index="orders" "Online order received" earliest=-9d latest=-8d
| rex field=message "paymentHashed=(?<payHash>.([a-z0-9_\.-]+))"
| stats values(_time) as onlineOrderTime by payHash
| join type=left payHash
[search index="orders" "Telesale order received" earliest=-20d latest=-5m
| rex field=message "paymentHashed=(?<payHash>.([a-z0-9_\.-]+))"
| rename timestamp as TeleSaleTime
| stats values(TeleSaleTime) as TeleSaleTime by payHash ]
Note the added | stats values(...) by in the subsearch: you need to ensure you've removed any duplicates from the list, which this will do. By using values(), you'll also ensure if there're repeated entries for the payHash field, they get grouped together. (Similarly, added a | stats values... before the subsearch to speed the whole operation.)
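For reference, a roughly equivalent way to remove the duplicates in the subsearch (same field assumptions as above) is dedup instead of stats:
[search index="orders" "Telesale order received" earliest=-20d latest=-5m
| rex field=message "paymentHashed=(?<payHash>.([a-z0-9_\.-]+))"
| rename timestamp as TeleSaleTime
| dedup payHash
| table payHash, TeleSaleTime]
The difference is that dedup keeps a single TeleSaleTime per payHash (the first event returned), while values() keeps every distinct time.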
You should be able to do this without a join, too:
index="orders" (("Online order received" earliest=-9d latest=-8d) OR "Telesale order received" earliest=-20d))
| rex field=_raw "(?<order_type>\w+) order received"
| rex field=message "paymentHashed=(?<payHash>.([a-z0-9_\.-]+))"
| stats values(order_type) as order_type values(_time) as orderTimes by payHash
| where mvcount(order_type)>1
After you've ensured your times are correct, you can format them - here's one I use frequently:
| eval onlineOrderTime=strftime(onlineOrderTime,"%c"), TeleSaleTime=strftime(TeleSaleTime,"%c")
You may also need to do further reformatting, but these should get you close
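In the no-join version, if you want the online and telesale times in separate columns rather than a single multivalue orderTimes field, one possible sketch (output field names are just illustrative) replaces the stats line with:
| stats values(eval(if(order_type="Online", _time, null()))) as onlineOrderTime values(eval(if(order_type="Telesale", _time, null()))) as TeleSaleTime by payHash
| where isnotnull(onlineOrderTime) AND isnotnull(TeleSaleTime)
The order_type values come from the rex above ("Online" or "Telesale"), so the same strftime formatting can then be applied to both columns.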
fwiw - I'd wonder why you were trying to look at Online orders from only 9 days ago, but Telesale orders from 20 days ago to now: but that's just me.
The join command expects a list of field names on which events from each search will be matched. If no fields are listed then all fields are used. In the example, the fields 'onlineOrderTime' and 'TeleSaleTime' exist only on one side of the join so no matches can be made. The fix is simple: specify the common field name. ... | join type=left payHash ....
First of all, you can delete the last row | table payHash, onlineOrderTime, TeleSaleTime because it doesn't do anything (the join command already joins both tables you created).
Secondly, when running both queries separately, do both queries return the same payHash values? Does each query return a table with the expected results?
Because by the looks of it, you used the join command correctly...

Query for calculating duration between two different logs in Splunk

As part of my requirements, I have to calculate the duration between two different logs using a Splunk query.
For example:
Log 2:
2020-04-22 13:12 ADD request received ID : 123
Log 1 :
2020-04-22 12:12 REMOVE request received ID : 122
The common string between the two logs is " request received ID :" and the unique strings are "ADD" and "REMOVE". The expected output duration is 1 hour.
Any help would be appreciated. Thanks
You can use the transaction command, https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Transaction
Assuming you have the field ID extracted, you can do
index=* | transaction ID
This will automatically produce a field called duration, which is the time between the first and last event with the same ID
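As a minimal sketch of what that looks like end to end (assuming the ID field is extracted, as above):
index=* "request received ID"
| transaction ID
| table ID, duration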
While transaction will work, it's very inefficient
This stats should show you what you're looking for (presuming the fields are already extracted):
(index=ndxA OR index=ndxB) ID=* ("ADD" OR "REMOVE")
| stats min(_time) as when_added max(_time) as when_removed by ID
| eval when_added=strftime(when_added,"%c"), when_removed=strftime(when_removed,"%c")
If you don't already have fields extracted, you'll need to modify thusly (remove the trailing "$" from the regex if the ID value isn't at the end of the line):
(index=ndxA OR index=ndxB) ("ADD" OR "REMOVE")
| rex field=_raw "ID \s+:\s+(?<ID>\d+)\D^"
| stats min(_time) as when_added max(_time) as when_removed by ID
| eval when_added=strftime(when_added,"%c"), when_removed=strftime(when_removed,"%c")
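If you also want the duration itself, one possible tweak is to compute the difference before the times are formatted:
(index=ndxA OR index=ndxB) ("ADD" OR "REMOVE")
| rex field=_raw "ID\s*:\s*(?<ID>\d+)"
| stats min(_time) as when_added max(_time) as when_removed by ID
| eval duration=tostring(when_removed-when_added, "duration")
| eval when_added=strftime(when_added,"%c"), when_removed=strftime(when_removed,"%c")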

How do I create a Splunk query for unused event types?

I have found that I can create a Splunk query to show how many times each event type appears in the results.
severity=error | stats count by eventtype
This creates a table like so:
eventtype | count
------------------------
myEventType1 | 5
myEventType2 | 12
myEventType3 | 30
So far so good. However, I would like to find event types with zero results. Unfortunately, those with a count of 0 do not appear in the query above, so I can't just filter by that.
How do I create a Splunk query for unused event types?
There are lots of different ways to do that, depending on what you mean by "event types". Somewhere, you have to get a list of whatever you are interested in, and roll it into the query.
Here's one version, assuming you had a csv that contained a list of eventtypes you wanted to see...
severity=error
| stats count as mycount by eventtype
| inputcsv append=t mylist.csv
| eval mycount=coalesce(mycount,0)
| stats sum(mycount) as mycount by eventtype
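If you only want the unused event types in the output, you could, for example, filter that result by appending:
| where mycount=0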
Here's another version, assuming that you wanted a list of all eventtypes that had occurred in the last 90 days, along with the count of how many had occurred yesterday:
earliest=-90d@d latest=@d severity=error
| addinfo
| stats count as totalcount count(eval(_time>=info_max_time-86400)) as yesterdaycount by eventtype
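If your event types are defined in Splunk itself rather than listed in a CSV, a rough sketch of the same idea (this assumes you can read the event type definitions via the saved/eventtypes REST endpoint) would be:
| rest /services/saved/eventtypes
| fields title
| rename title as eventtype
| eval mycount=0
| append [search severity=error | stats count as mycount by eventtype]
| stats sum(mycount) as mycount by eventtype
| where mycount=0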

The record represented by the ID with the Highest aggregate value

I already have the code to display the highest aggregate value for an ID.
select max(fk3_job_role_id),max(sum(no_of_placements))
from fact_accounts
group by fk3_job_role_id
the result looks like:
[max(fk3_job_role_id)] | [max(sum(no_of_placements))]
-----------------------|-----------------------------
5 | 25
However, I want to display the job_role_desc for that same ID instead of the fk3_job_role_id.
The table for it looks like:
[job_role_id] | [job_role_desc]
--------------------------------
1 | job1
2 | job2
3 | job3
4 | job4
5 | job5
-- job_role is an assumed table name
select jr.job_role_desc, T.total_sum
from job_role jr
join (select fk3_job_role_id, sum(no_of_placements) as total_sum
      from fact_accounts group by fk3_job_role_id) T
  on jr.job_role_id = T.fk3_job_role_id
where T.total_sum = (select max(sum(no_of_placements))
                     from fact_accounts group by fk3_job_role_id)
You need to query for the job description by using a subquery. The derived table in parentheses (aliased T, also known popularly as a subquery) computes the total placements per role ID; the outer query joins it to the description table (assumed here to be named job_role, since the question doesn't give its name) and keeps the row whose total matches the overall maximum, reusing the max(sum(...)) construct from the original query.
Edit
If you also need the sum of placements, you can get it by referencing the derived table (T) created during the execution of the subquery, as shown above with T.total_sum.