Splunk query to get a comma-separated value as a single value

In the logs we have a value "device=xyz,1". Here we need to treat "xyz,1" as a single value and display it in a table format. But currently, when we run a query it just displays the device value as "xyz" and misses out the ",1". How can we treat it as a single value?
Query example: ....|eval device = if(isnull(device), "notFound", device) | table device
From the above query:
Expectation:
Table should have column name as device and value should be "xyz,1"
What is actually happening:
Table has column name as device but value is "xyz"
I have tried mvjoin but it's not helping.
Please suggest a solution.

You may need to custom-extract the value (until you can get the sourcetype's props.conf and transforms.conf updated).
Something like this should work:
<search>
| rex field=_raw "device=(?<device>\S+)"
<rest of search>
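For example, here's a minimal sketch tying this into the query from the question (the index and sourcetype names are placeholders, not from the original post):
index=your_index sourcetype=your_sourcetype
| rex field=_raw "device=(?<device>\S+)"
| eval device = if(isnull(device), "notFound", device)
| table device
The rex captures everything up to the next whitespace, so the whole "xyz,1" token should land in the device field before the eval and table run.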

Related

Splunk join two queries based on result of first query

In Splunk I have two queries like the below:
Query 1 - index=mysearchstring1
Result - employid=123
Query 2 - index=mysearchstring2
Here I want to use employid=123 in query 2 as a lookup and return the final result.
Is it possible in Splunk?
It sounds like you're looking for a subsearch.
index=mysearchstring2 [ search index=mysearchstring1 | fields employid | format ]
Splunk will run the subsearch first and extract only the employid field. The results will be formatted into something like (employid=123 OR employid=456 OR ...) and that string will be appended to the main search before it runs.
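For illustration, if the subsearch returned events with employid values 123 and 456, the main search Splunk actually runs would be roughly equivalent to:
index=mysearchstring2 ( employid=123 OR employid=456 )
(The values here are just examples; the real list comes from whatever the subsearch returns.)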

How to get only distinct values from list

What I have: A datasource with a string column, let's call it "name".
There are more, but those are not relevant to the question.
The "name" column in the context of a concrete query contains only 2 distinct values:
""
"SomeName"
Each of the two appears a varying number of times, but there will only ever be those two.
Now, what I need is: in the context of a summarize statement, I need a column containing the distinct values concatenated together, so I end up with just "SomeName".
What I have does not meet this requirement, and I cannot find a solution for it:
datatable(name:string)["","SomeName","SomeName"] // just to give a minimal reproducible example
| summarize Name = strcat_array(make_list(name), "")
which gives me
| Name
> SomeNameSomeName
but I need just
| Name
> SomeName
I am aware that I need to do some sort of "distinct" somehow and somewhere, or maybe there is a completely different solution that gets to the same result?
So, my question is: what do I need to change in the shown query to fulfill my requirement?
take_any()
When the function is provided with a single column reference, it will attempt to return a non-null/non-empty value, if such value is present.
datatable(name:string)["","SomeName","SomeName", ""]
| summarize take_any(name)
name
SomeName
Wow, just as I posted the question, I found an answer:
datatable(name:string)["","SomeName","SomeName", ""]
| summarize Name = max(name)
I have no idea, why this works for a string column, but here I am.
This results in my desired outcome:
| Name
> SomeName
...which I suppose is probably less efficient than David's answer, so I'll prefer his.

Splunk: search the key in JSON

Could anyone help me with the below Splunk query?
I want to get the count of records by message.type. The message.type can take the value 'typeA' or 'typeB'.
I tried the query below, but it just lists the events and doesn't give the counts in the result, i.e. a separate count for typeA and typeB.
The messages are below.
message: name=app1,version=1, type=typeA,task=queryapp
message: name=app2,version=1, type=typeB,task=testapp
message: name=app1,version=1, type=typeB,task=issuefix
index=myapp message="name=app1"
| stats count by message.type
Ideally, you would modify the logs so that type is its own JSON field.
However, if you are stuck with
{"message" : "name=app1,version=1, type=typeA,task=queryapp"}
Then I suggest the following solution:
index=myapp message=*
| rex field=message "type=(?<myType>[a-zA-Z]+)"
| stats count by myType
The rex command here is extracting a new Splunk field named myType from the existing message field based on the supplied regular expression.
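With the three sample messages from the question (and message=* matching all of them), the output should look something like this:
myType  count
typeA   1
typeB   2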

Only get numeric values in QlikView

I have this kind of data:
B-3-I11
B-3-I12
BI1-I190
BI1-I191
BI1-I192L
BI1-I194A
BI1-I195L
BI1-I198R
BI1-I199L
BI1-I200Ac
BI1-I201L
conasde
Installation
Madqw
Medsfg
Woasd
This is the data I have. Now I want only those values which start with B and have some numeric character in them. How do I get this in a QlikView script?
How do I extract only that data?
To filter to those that start with B you'd do
where left(Field,1)='B'
Then to filter on those with numbers, you could add
and len(keepchar(Field,'1234567890'))>0
So that would give something like this:
LOAD Field
From Table
Where left(Field,1)='B'
AND len(keepchar(Field,'1234567890'))>0
(where Field is the name of the field your data is in and Table is the name of the table your data is in)
Or, if you want to keep all the data but create a new field you would do:
LOAD
Field,
if(left(Field,1)='B' AND len(keepchar(Field,'1234567890'))>0, Field) as FieldFiltered
From Table
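With the sample data from the question, that condition would keep the values that begin with B and contain digits (B-3-I11, B-3-I12, BI1-I190 and so on) and drop conasde, Installation, Madqw, Medsfg and Woasd.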

GCP BigQuery - query empty values from a record type column

I'm trying to query all resources that have empty records in a specific column, but I'm unable to make it work. Here's the query that I'm using:
SELECT
service.description,
project.labels,
cost AS cost
FROM
`xxxxxx.xxxxx.xxxx.xxxx`
WHERE
service.description = 'BigQuery' ;
Here are the results:
As you can see, I'm getting everything with that query, but as mentioned, I'm looking to get only the resources with empty records, for example records 229, 230 and so on.
Worth mentioning that the schema for the column I'm trying to query is:
project.labels RECORD REPEATED
I mention this because I tried several combinations of WHERE clauses, but everything ends up in an error.
To identify an empty repeated record, you can use ARRAY_LENGTH in the WHERE clause, as in the example below:
WHERE ARRAY_LENGTH(project.labels) = 0
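Putting that together with the query from the question (the table path is the same placeholder used above):
SELECT
service.description,
project.labels,
cost AS cost
FROM
`xxxxxx.xxxxx.xxxx.xxxx`
WHERE
service.description = 'BigQuery'
AND ARRAY_LENGTH(project.labels) = 0;
This should return only the rows whose project.labels record is empty.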