Parse/Ignore specific string in CloudWatch Logs Insights

I have the following AWS CloudWatch Logs Insights query:
fields @timestamp, @message
| filter @message like /(?i)(error|except)/
| filter !ispresent(level) and !ispresent(eventType)
| stats count(*) as ErrorCount by @message
| sort ErrorCount desc
Results end up looking something like this, with the message and a count:
The first 4 results are actually the same error. However, since each message starts with a different (node:*) value, they are grouped as different errors.
Is there a way for the query to parse out/ignore the (node:*) part so that the first 4 results in the image would be treated as a single result with a total count of 2,997?
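One approach (a sketch, untested against the exact message format: the cleanMessage field name and the \(node:\d+\) pattern are assumptions about what the prefix looks like) is to use parse with a regex and a named capture group to strip the prefix before aggregating:
fields @timestamp, @message
| filter @message like /(?i)(error|except)/
| filter !ispresent(level) and !ispresent(eventType)
| parse @message /\(node:\d+\)\s*(?<cleanMessage>.*)/
| stats count(*) as ErrorCount by cleanMessage
| sort ErrorCount desc
Messages without the prefix would land in an empty cleanMessage group, so a fallback such as coalesce(cleanMessage, @message) may be needed.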

Related

CloudWatch Stats Count if greater than zero

In CloudWatch Logs Insights, we have a query which totals some transactions based on the logs. We'd like to add one more count: the number of transactions that have a value above zero, or not null, for a given field.
fields @timestamp, @message
| filter @message like /ingest success/
| fields concat(data.transaction.source.BusinessName, '-', toupper(data.transaction.orderType)) as clientOrderMode
| stats count(), sum(data.transaction.order.paymentAmount), sum(data.transaction.order.serviceCharge), sum(data.transaction.order.gratuity),
count(if(data.transaction.order.gratuity>0)), sum(data.transaction.guest.emailMarketingOptIn) by clientOrderMode
| sort data.transaction.source.OBBusinessName asc
The above clearly doesn't work, but hopefully you can see what I'm trying to achieve - the number of orders where gratuity is greater than zero.
Any advice, gratefully received.
Thanks
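One pattern worth trying (a sketch, not verified against these logs: it assumes a comparison expression in a fields clause coerces to 1/0, the way the documented strcontains function does) is to compute a 0/1 flag per event and sum it inside the same stats:
fields @timestamp, @message
| filter @message like /ingest success/
| fields concat(data.transaction.source.BusinessName, '-', toupper(data.transaction.orderType)) as clientOrderMode
| fields (data.transaction.order.gratuity > 0) as hasGratuity
| stats count(), sum(data.transaction.order.paymentAmount), sum(hasGratuity) as gratuityOrderCount by clientOrderMode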

Display empty bin as a zero value in AWS Log Insights graph

With this count query by bin:
filter @message like / error /
| stats count() as exceptionCount by bin(30m)
I get a discontinuous graph, which is hard to read:
Is it possible for AWS CloudWatch Logs Insights to treat an empty bin as a zero count, so the graph is continuous?
Found your question looking for my own answer to this.
The best that I came up with is to calculate a 'presence' field and then use sum to get 0s in the empty time bins.
I used strcontains, which returns 1 when it matches and 0 when it doesn't: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html#CWL_QuerySyntax-operations-functions
Mine looks like this:
fields @timestamp, @message
| fields strcontains(@message, 'Exit status 1') as is_exit_message
| stats sum(is_exit_message) as is_exit_message_count by bin(15m) as time_of_crash
| sort time_of_crash desc
So, yours would be:
fields strcontains(@message, 'error') as is_error
| stats sum(is_error) as exceptionCount by bin(30m)
Use strcontains + sum, or parse + count().
The point is to not use filter: query all of the logs so that every time bin still has events in it, and let the 0/1 field do the filtering inside the aggregation.

Combine two CloudWatch Insights queries

I have two CloudWatch Insights queries that I would love to run side by side and compare the results of.
stats count(*) as requestIdCount by @requestId
| filter @message like /START RequestId/
| filter requestIdCount > 1
stats count(*) as requestIdCount by @requestId
| filter @message like /END RequestId/
| filter requestIdCount > 1
It would be great to be able to do
fields (
stats count(*) as requestIdCount by @requestId
| filter @message like /END RequestId/
| filter requestIdCount > 1) as EndRequestCount,
(
stats count(*) as requestIdCount by @requestId
| filter @message like /START RequestId/
| filter requestIdCount > 1) as StartRequestCount
But I don't see any way to do subqueries in Insights right now. Is there a method to combine queries like this?
Try this:
parse @message 'START RequestId: *' as startRequestId
| parse @message 'END RequestId: *' as endRequestId
| stats count(startRequestId) as startRequestIdCount, count(endRequestId) as endRequestIdCount by bin(5m)
| filter startRequestIdCount > 1
| filter endRequestIdCount > 1
(parse needs the * wildcard to actually capture a value, and count(field) only counts events where the field is non-null, so each 5-minute bin gets separate START and END counts.)
CloudWatch Logs Insights Query Syntax
You can build this logic via the API or the CLI, using the output of one query as the input to another:
Amazon CloudWatch Logs API Reference
AWS CLI Command Reference - logs
It works as a script where you make a request, interpret the results, and then issue another request based on the results of the first one. It's a bit more work, but I'm not aware of another way to do it.
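For illustration, here is a minimal Python sketch of that script-style approach using boto3's start_query and get_query_results (which are the real CloudWatch Logs API calls); the log group name, time window, and query are hypothetical placeholders:
import time
import boto3

logs = boto3.client("logs")

def run_query(query, log_group, start, end):
    # Start a Logs Insights query and poll until it finishes.
    query_id = logs.start_query(
        logGroupName=log_group,
        startTime=start,  # epoch seconds
        endTime=end,
        queryString=query,
    )["queryId"]
    while True:
        response = logs.get_query_results(queryId=query_id)
        if response["status"] in ("Complete", "Failed", "Cancelled", "Timeout"):
            return response["results"]
        time.sleep(1)

# First query: request IDs seen in START lines (hypothetical log group).
starts = run_query(
    "filter @message like /START RequestId/ | stats count(*) as c by @requestId",
    "/aws/lambda/my-function",
    start=1700000000,
    end=1700003600,
)
# A second run_query call can then embed values from `starts` in its
# queryString, and the two result sets can be compared in code.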

Splunk match partial result value of field and compare results

I have 3 fields in my Splunk results: message, id, and docId.
I need to group the results by id and docId for events that have specific messages:
message="successfully added" id=1234 docId=1345
message="removed someUniqueId" id=1234 docId=1345
I have to group the results by both ids where these specific messages occur.
search query | rex "message=(?<message[\S\s]*>)" | where message="successfully added"
This gives results for the first search, but when I tried the second search query it gives no results because of the unique someUniqueId suffix:
search query | rex "message=(?<message[\S\s]*>)" | where match(message, "removed *")
Could you please help me filter the results that have the two messages and group them by id and docId?
The match function expects a regular expression, not a wildcard pattern, as its second argument. Try search query | rex "message=(?<message>[\S\s]*)" | where match(message, "removed .*").
BTW, the regex strings in the rex commands are invalid, but that may be a typing error in the question.
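To then keep only the id/docId pairs that have both messages, something like this might work (a sketch: it assumes the message value is quoted in the raw event, and it flags each message type before grouping):
search query
| rex "message=\"(?<message>[^\"]*)\""
| eval has_added=if(message=="successfully added", 1, 0)
| eval has_removed=if(match(message, "^removed "), 1, 0)
| stats max(has_added) as has_added, max(has_removed) as has_removed by id docId
| where has_added=1 AND has_removed=1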

How do I create a Splunk query for unused event types?

I have found that I can create a Splunk query to show how many times each event type appears in the results.
severity=error | stats count by eventtype
This creates a table like so:
eventtype | count
------------------------
myEventType1 | 5
myEventType2 | 12
myEventType3 | 30
So far so good. However, I would like to find event types with zero results. Unfortunately, those with a count of 0 do not appear in the query above, so I can't just filter on that.
How do I create a Splunk query for unused event types?
There are lots of different ways to do that, depending on what you mean by "event types". Somewhere, you have to get a list of whatever you are interested in and roll it into the query.
Here's one version, assuming you have a csv containing the list of eventtypes you want to see...
severity=error
| stats count as mycount by eventtype
| inputcsv append=t mylist.csv
| eval mycount=coalesce(mycount,0)
| stats sum(mycount) as mycount by eventtype
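Here, inputcsv append=t adds one row per known eventtype with no mycount set, the eval turns those missing counts into 0, and the second stats folds the real counts and the zero rows together into one row per eventtype.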
Here's another version, assuming you want a list of all eventtypes that have occurred in the last 90 days, along with a count of how many occurred yesterday:
earliest=-90d@d latest=@d severity=error
| addinfo
| stats count as totalcount count(eval(_time>=info_max_time-86400)) as yesterdaycount by eventtype
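For reference: addinfo attaches the search window boundaries to each event (info_max_time is the epoch end of the window), and count(eval(condition)) counts only the events where the condition is true, which is what restricts yesterdaycount to the final 86400 seconds of the window.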