With this count query by bin:
filter @message like / error /
| stats count() as exceptionCount by bin(30m)
I get a discontinuous graph, which is hard to read.
Is it possible for AWS CloudWatch Logs Insights to treat an empty bin as a zero count, so that the graph is continuous?
Found your question looking for my own answer to this.
The best that I came up with is to calculate a 'presence' field and then use sum to get 0's in the time bins.
I used strcontains, which returns a 1 when matched or 0 when not. https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html#CWL_QuerySyntax-operations-functions
Mine looks like this:
fields @timestamp, @message
| fields strcontains(@message, 'Exit status 1') as is_exit_message
| stats sum(is_exit_message) as is_exit_message_count by bin(15m) as time_of_crash
| sort time_of_crash desc
So, yours would be:
fields strcontains(@message, 'error') as is_error
| stats sum(is_error) as exceptionCount by bin(30m)
Use strcontains + sum, or parse + count.
The point is to avoid filter: you need to query all of the logs so that every time bin still contains rows to aggregate.
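The parse + count variant isn't spelled out above; a minimal sketch of it, assuming the same 'error' pattern as in the question (the capture name is_error_msg is just an illustration):

parse @message /(?<is_error_msg>error)/
| stats count(is_error_msg) as exceptionCount by bin(30m)

count(is_error_msg) only counts rows where the capture matched, while the bins come from all rows, so quiet bins show up with 0 instead of disappearing.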
Related
I have the following AWS CloudWatch query:
fields @timestamp, @message
| filter @message like /(?i)(error|except)/
| filter !ispresent(level) and !ispresent(eventType)
| stats count(*) as ErrorCount by @message
| sort ErrorCount desc
Results end up looking something like this with the message and a count:
The first 4 results are actually the same error. However, since they have different (node:*) values at the beginning of the message, they end up grouped as different errors.
Is there a way for the query to parse/ignore the (node:*) part so that the first 4 results in the image would be considered just one result with a total count of 2,997?
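No answer was recorded for this one, but one possible sketch, assuming the prefix always has the form (node:<digits>) (the normalizedMessage and groupKey names are just illustrations):

fields @timestamp, @message
| filter @message like /(?i)(error|except)/
| filter !ispresent(level) and !ispresent(eventType)
| parse @message /\(node:\d+\) (?<normalizedMessage>.*)/
| fields coalesce(normalizedMessage, @message) as groupKey
| stats count(*) as ErrorCount by groupKey
| sort ErrorCount desc

coalesce falls back to the raw @message for lines without the (node:*) prefix, so those are still counted.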
In CloudWatch Logs Insights, we have a query which totals some transactions based on the logs. We'd like to add one more count: the number of transactions where a given value is above zero (i.e. present and not null).
fields @timestamp, @message
| filter @message like /ingest success/
| fields concat(data.transaction.source.BusinessName, '-', toupper(data.transaction.orderType)) as clientOrderMode
| stats count(), sum(data.transaction.order.paymentAmount),sum(data.transaction.order.serviceCharge),sum(data.transaction.order.gratuity),
count(if(data.transaction.order.gratuity>0)),sum(data.transaction.guest.emailMarketingOptIn) by clientOrderMode
| sort data.transaction.source.OBBusinessName asc
The above clearly doesn't work, but hopefully you can see what I'm trying to achieve - the number of orders where gratuity is greater than zero.
Any advice, gratefully received.
Thanks
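No answer was posted, but one sketch that may work reuses the presence-field trick from the first answer above: comparisons evaluate to 1 or 0, so they can be summed (hasGratuity is an illustrative name):

fields @timestamp, @message
| filter @message like /ingest success/
| fields concat(data.transaction.source.BusinessName, '-', toupper(data.transaction.orderType)) as clientOrderMode,
    data.transaction.order.gratuity > 0 as hasGratuity
| stats count(), sum(data.transaction.order.paymentAmount), sum(hasGratuity) as ordersWithGratuity by clientOrderMode

sum(hasGratuity) counts the orders whose gratuity is greater than zero, which is what count(if(...)) was reaching for.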
I am trying to filter out all negative values in my metrics. Is it possible to do the filtering within the mstats call itself, by adding something like AND metrics_name:data.value > 0 to the query below?
| mstats avg(_value) WHERE metric_name="data.value" AND index="my_metrics" BY data.team
Currently, I am using msearch and then filtering out the events, so my query looks something like the one below, but it's too slow because I am pulling all the events:
| msearch index=my_metrics
| fields "metrics_name:data.value"
| where mvcount(mvfilter(tonumber('metrics_name:data.value') > 0)) >= 1 OR isnull('metrics_name:data.value')
Unfortunately, you cannot filter or group by the _value field with Metrics.
You may be able to speed up your search with msearch by including the metric_name in the filter.
| msearch index=my_metrics filter="metric_name=data.value"
Note that msearch returns a sample of the metric values, not all of them, unless you specify target_per_timeseries=0.
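For example, a sketch combining both suggestions (same index and metric name as in the question):

| msearch index=my_metrics filter="metric_name=data.value" target_per_timeseries=0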
Refer to https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Mstats
I'm trying to do a query that will aggregate first by a field and then by bin(1h). For example, I would like to get a result like:
#   Date                        Field   Count
1   2019-01-01T10:00:00.000Z    A       123
2   2019-01-01T11:00:00.000Z    A       456
3   2019-01-01T10:00:00.000Z    B       567
4   2019-01-01T11:00:00.000Z    B       789
Not sure if it's possible, though. The query should be something like:
fields Field
| stats count() by Field by bin(1h)
Any ideas how to achieve this?
Is this what you need?
fields Field | stats count() by Field, bin(1h)
If you want to create a line chart, you can do it by separately counting each value that your field could take.
fields
    Field = 'A' as is_A,
    Field = 'B' as is_B
| stats sum(is_A) as A, sum(is_B) as B by bin(1h)
This solution requires your query to include a string literal of each value ('A' and 'B' in OP's example). It works as long as you know what those possible values are.
This might be what Hugo Mallet was looking for, except that avg() won't work here, so he'd have to calculate the average by dividing by a total.
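A minimal sketch of that division, assuming the goal is the per-bin share of 'A' events (names are illustrative):

fields Field = 'A' as is_A
| stats sum(is_A) as A_count, count() as total by bin(1h)

That returns both the numerator and the denominator per bin, and the ratio A_count / total can then be derived from those two columns.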
Not able to group by a certain field and create visualizations.
fields Field
| stats count() by Field, bin(1h)
Keep getting this message:
No visualization available. Try this to get started:
stats count() by bin(30s)
I'm running a fairly simple search to identify index usage on my Splunk install by source, as we're running through the Enterprise 30-day trial with the intention of using Splunk Free after it expires:
index=_internal source=*license_usage.log | eval MB=b/1024/1024 | timechart span=1d sum(MB) by s where count in top50
The results for all of my data sources are returned as expected but there's an additional column titled "NULL" at the end of the results:
[screenshot: Splunk search results with a NULL column at the end]
All of my data has an input source, and when I click on the NULL column and choose to view the data, it brings back no results.
Can anyone help me understand what this NULL column is please? If it's correct it suggests I'm using over the 500MB/day limit for Splunk Free, which I need to address before the trial period ends.
The NULL column appears because some events in license_usage.log do not have an s (source) field; the daily type=RolloverSummary entries, for example, carry totals without a per-source breakdown. You only want to sum the events that have an s field, so modify your query to:
index=_internal source=*license_usage.log type=Usage
| eval MB=b/1024/1024
| timechart span=1d sum(MB) by s where count in top50
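If you want to confirm what those source-less events are before trusting the new numbers, a quick check (just a sketch) is:

index=_internal source=*license_usage.log
| stats count by type

Only the type=Usage events carry the per-source s field, so everything else was landing in the NULL column.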