CloudWatch Logs Insights: using "in" to match any message that has any item in an array

I have an array with a list of unique literal strings (ids), and I want to use the "in" keyword to test for set membership. I've used the following query; the ephemeral field "id" extracts the id from the message.
fields #timestamp,#message, #logStream
| filter #message like /mutation CreateOrder/
| parse #message 'Parameters: *}], "id"=>"*"}}, "graphql"*' as rest_of_message, id
| parse #message '"variables"=>{"createOrderInput"=>*}, "graphql"' as variables
| filter id in ["182841661","182126710"]
| sort #timestamp desc
| limit 10000
| display id, variables
It was my assumption that it would match any message whose ephemeral field "id" matches any of the literal ids in the array. However, it only matches messages that contain the first literal id in the array.
I've searched for both ids using the "like" keyword, and they both come up in the selected period.
Is it possible to do what I want to do? Is there a better way of doing it?
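One possible explanation (an assumption, since only the query is shown): "in" is an exact equality test against each array element, while "like" is a substring match, so a stray character left on the ephemeral field by the parse pattern would make "in" miss while "like" still hits. A quick Python sketch of the difference (illustrative only, not Logs Insights itself):

```python
# The ids we want to match, as in the query's array.
ids = ["182841661", "182126710"]

parsed_clean = "182126710"    # parse captured exactly the id
parsed_dirty = '182126710"'   # parse captured a trailing quote by mistake

# Exact membership, as "in" behaves:
print(parsed_clean in ids)    # True
print(parsed_dirty in ids)    # False - the stray quote breaks equality

# Substring match, as "like" behaves:
print(any(i in parsed_dirty for i in ids))  # True
```

If that is the cause, tightening the parse pattern so "id" captures only the digits should make the "in" filter match both ids.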

Related

Splunk: How to extract a field containing spaces at end

I'm trying to extract the value of a field that contains spaces. Apparently it is hard to find a regular expression for this case (the question is even whether it is possible at all).
Example: 03 Container ID - ALL_ELIGIBLE_STG_RTAIN Offer Set ID
From the above example, we have to get the count of Container ID - ALL_ELIGIBLE_STG_RTAIN.
I am expecting something like this:
Container ID              Count
ALL_ELIGIBLE_STG_RTAIN    xxxx
Assuming all container IDs are preceded by "Container ID - ", this command will extract them.
| rex "Container ID - (?<ContainerID>\S+)"
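The same extraction can be sketched with Python's re module (a rough model of what rex does here; the event string is the example above):

```python
import re

# Capture the first run of non-space characters after "Container ID - ",
# mirroring the rex pattern above.
event = "03 Container ID - ALL_ELIGIBLE_STG_RTAIN Offer Set ID"
m = re.search(r"Container ID - (?P<ContainerID>\S+)", event)
print(m.group("ContainerID"))  # ALL_ELIGIBLE_STG_RTAIN
```

Note that \S+ stops at the first space, which is why a value that itself contains spaces is hard to capture without a more specific delimiter.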

How to only extract match strings from a multi-value field and display in new column in SPLUNK Query

I am trying to extract matched strings from a multi-value field and display them in another column. I have tried various options: split the field by delimiter, then mvexpand, then use where/search to pull the data. I was trying to find out if there is an easier way to do this without all the hassle in a Splunk query.
Example: let's say I have the below multi-value field column1, with data separated by a comma delimiter
column1 = abc1,test1,test2,abctest1,mail,send,mail2,sendtest2,new,code,results
I was splitting this column using the delimiter, |eval column2=split(column1,","), and using regex/where/search to search for data with *test* in this column. I was able to pull results, but column1 still shows all the values: abc1,test1,test2,abctest1,mail,send,mail2,sendtest2,new,code,results. What I want is either to trim column1 to show only the words matching test, or to show those entries in a new column2, which should contain only the words test1,test2,abctest1,sendtest2, as they were the only ones matching *test*.
I would appreciate your help, thanks.
Found the answer after posting this question; it's just using the existing mvfilter function to pull the matched results.
column2=mvfilter(match(column1,"test"))
| eval column2=split(column1,",") | search column2="*test*"
doesn't work, because split creates a multi-value field: a single event containing a single field with many values. The search for *test* will still find that event, even though it also contains abc1, etc., because at least one value matches *test*.
What you can use is the mvfilter eval function to narrow the multi-value field down to just the values you are after.
| eval column2=split(column1,",") | eval column2=mvfilter(match(column2,".*test.*"))
Alternatively to this approach, you can use a regular expression to extract what you need.
| rex field=column1 max_match=0 "(?<column2>[^,]*test[^,]*)"
Regardless, at the end you would need to use mvjoin to join your multiple values back into a single string:
| eval column2=mvjoin(column2, ",")
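For illustration, the whole split → mvfilter → mvjoin pipeline can be modeled in plain Python (a sketch of the logic, not SPL itself):

```python
import re

# Mirror the example data from the question.
column1 = "abc1,test1,test2,abctest1,mail,send,mail2,sendtest2,new,code,results"

values = column1.split(",")                            # eval column2=split(column1,",")
matched = [v for v in values if re.search("test", v)]  # mvfilter(match(column2,"test"))
column2 = ",".join(matched)                            # mvjoin(column2, ",")

print(column2)  # test1,test2,abctest1,sendtest2
```

As in SPL, the match is a substring test, so abctest1 and sendtest2 are kept along with test1 and test2.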

Splunk - Extract multiple values not equaling a value from a string

In Splunk I'm trying to extract multiple parameters and values that do not equal a specific word from a string. For example:
Anything in this field that does not equal "negative", extract the parameter and value:
Field:
field={New A=POSITIVE, New B=NEGATIVE, New C=POSITIVE, New D=BAD}
Result:
New A=POSITIVE
New C=POSITIVE
New D=BAD
Try this search. It uses a regular expression to extract parameters and values where the value is not "NEGATIVE".
index=foo
| rex field=field max_match=0 "(?<result>New \w=(?!NEGATIVE)\w+)"
| mvexpand result
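The negative lookahead in that rex can be illustrated in plain Python (a sketch of the regex behavior, not Splunk itself):

```python
import re

# Mirror the example field from the question. The (?!NEGATIVE) lookahead
# rejects a match whenever the value after "=" is NEGATIVE.
field = "field={New A=POSITIVE, New B=NEGATIVE, New C=POSITIVE, New D=BAD}"
results = re.findall(r"New \w=(?!NEGATIVE)\w+", field)
print(results)  # ['New A=POSITIVE', 'New C=POSITIVE', 'New D=BAD']
```

max_match=0 in rex corresponds to finding all matches, as findall does here.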
To extract the actual field/value pairs, add this to the end of Rich's solution
| rename _raw as _orig_raw, result AS _raw
| extract pairdelim="," kvdelim="=" clean_keys=true
| rename _orig_raw as _raw

Extract numeric value from string in Cloudwatch to use in metrics (e.g. "64MB")

Is it possible to create a metric that extracts a numeric value from a string in Cloudwatch logs so I can graph / alarm it?
For example, the log may be:
20190827 1234 class: File size: 64MB
I realize I can capture the space delimited fields by using a filter pattern like: [date, time, class, word1, word2, file_size]
And file_size will be "64MB", but how do I convert that to a numeric 64 to be graphed?
Bonus question, is there any way of matching "File size:" as one field instead of creating a field for each space delimited word?
Use abs() to cast the string to a number, or any other numeric function.
Using Glob Expressions
fields #message
| parse #message "File size: *MB" as size
| filter abs(size)<64
| limit 20
Using Regular Expressions
fields #message
| parse #message /File size:\s+(?<size>\d+)MB/
| filter abs(size)<64
| limit 20
To learn how glob patterns or regular expressions can be used, see the CloudWatch Logs Insights query syntax documentation.
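The parse-then-cast step can be modeled in Python (a rough sketch; the message string is the example log line above):

```python
import re

# Pull the digits before "MB" and convert them to an integer,
# the way parse + a numeric function turns "64MB" into 64.
message = "20190827 1234 class: File size: 64MB"
m = re.search(r"File size:\s+(?P<size>\d+)MB", message)
size = int(m.group("size"))
print(size)  # 64
```

Matching the literal text "File size:" in the pattern also answers the bonus question: the phrase is consumed as part of the pattern rather than split into one field per word.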

Get a Count of a Field Including Similar Entries MS Access

Hey all, I'm trying to parse out duplicates in an Access database. I want the database to be usable for the Access-illiterate, so I am trying to set up queries that can be run without any understanding of the program.
My database is set up so that there are occasionally special characters attached to the entries in the Name field. I am interested in checking for duplicate entries based on the fields field1 and Name. How can I include the counts for entries with special characters with their non-special-character counterparts? Is this possible in a single step, or do I need to add a step where I clean the data first?
Currently my code (shown below) only returns counts for entries not including special characters.
SELECT
table.[field1],
table.[Name],
Count(table.[Name]) AS [CountOfName]
FROM
table
GROUP BY
table.[field1],
table.[Name]
HAVING
(((table.[Name]) Like "*") AND ((Count(table.[Name]))>1));
I have tried adding a leading space to the Like statement (Like " *"), but that returns zero results.
P.S. I have also tried the Replace statement to replace the special characters, but that did not work.
Sample data:
field1     name
1234567    brian
1234567    brian
4567890    ted
4567890    ted†
Results:
field1     name     countofname
1234567    brian    2
GROUP BY works by placing rows into groups where values are the same. So, when you run your query on your data and it groups by field1 and name, you are saying "Put these records into groups where they share a common field1 and name value". If you want 4567890, ted and 4567890, ted† to show in the same group, and thus have a count of 2, both the field1 and name have to be the same.
If you only have one or two possible special characters on the end of the names, you could potentially use Replace() or Mid() to remove the special chars from the end of the names, but remember you must also GROUP BY the new expression you create; you can't GROUP BY the original Name field or you won't get your desired count. You could also create another column that contains a sanitized name, one without any special characters on the end.
I don't have Access installed, but something like this should do it:
SELECT
table.[field1],
Replace(table.[Name], "†", "") AS Name,
Count(table.[Name]) AS [CountOfName]
FROM
table
GROUP BY
table.[field1],
Replace(table.[Name], "†", "");
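The sanitize-then-group idea can also be sketched outside Access. This Python model (the rows mirror the sample data; the clean() helper is a hypothetical stand-in for Replace()) strips trailing special characters before counting, so "ted" and "ted†" fall into the same group:

```python
from collections import Counter

# Rows of (field1, name), mirroring the sample data above.
rows = [("1234567", "brian"), ("1234567", "brian"),
        ("4567890", "ted"), ("4567890", "ted†")]

def clean(name):
    # Remove known special characters from the end of the name.
    return name.rstrip("†‡")

# Group by (field1, cleaned name) and count, like the GROUP BY query.
counts = Counter((field1, clean(name)) for field1, name in rows)
for (field1, name), n in counts.items():
    if n > 1:
        print(field1, name, n)
```

With the names cleaned before grouping, both brian and ted now show a count of 2.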