Current query:
fields @message
| filter @message like /ABCD/
| stats count(@message)
result:
count(@message)
55
Now I need to add more filters to this query, such as /BCDE/, /EFGH/, /IJKL/, .....
The expected result should look like:
@ABCD  @BCDE  @EFGH  @IJKL ...
55     66     77     88
Can I get results like this? All the search keywords must be searched across the entire CloudWatch log.
This should work for you:
fields @message
| filter @message like /ABCD|BCDE|EFGH|IJKL/
| fields strcontains(@message, "ABCD") as @CONTAINS_ABCD,
  strcontains(@message, "BCDE") as @CONTAINS_BCDE,
  strcontains(@message, "EFGH") as @CONTAINS_EFGH,
  strcontains(@message, "IJKL") as @CONTAINS_IJKL
| stats sum(@CONTAINS_ABCD) as @ABCD,
  sum(@CONTAINS_BCDE) as @BCDE,
  sum(@CONTAINS_EFGH) as @EFGH,
  sum(@CONTAINS_IJKL) as @IJKL
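This works because strcontains returns 1 when the message contains the given string and 0 otherwise, so summing each column gives a per-keyword match count in a single pass over the log group.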
Related
I have the following AWS CloudWatch query:
fields @timestamp, @message
| filter @message like /(?i)(error|except)/
| filter !ispresent(level) and !ispresent(eventType)
| stats count(*) as ErrorCount by @message
| sort ErrorCount desc
Results end up looking something like this with the message and a count:
The first 4 results are actually the same error. However, since they have different (node:*) values at the beginning of the message, it ends up grouping them as different errors.
Is there a way for the query to parse/ignore the (node:*) part so that the first 4 results in the image would be considered just one result with a total count of 2,997?
I am working on a dashboard query where I want a count of all the transactions that took more than a certain amount of time to complete.
My query is something like:
fields message
| filter kubernetes.namespace_name = 'feature-7355'
| filter message like "INFO"
| filter message like "Metric"
| parse '[*] * [*] * - *' as logLevel, timeStp, threadName, classInfo, logMessage
| parse logMessage 'Header: [*]. Metric: [*]. TimeSpent: [*]. correlationId: [*]' as headers, metric, timeSpent, correlationId
| filter ispresent(correlationId)
| stats sum(timeSpent) as TotalTimeSpentByTransaction by correlationId
| filter TotalTimeSpentByTransaction > 2000
| stats count(timeCorrelationId) as correlationIdCount
When I try to execute this I am getting an error:
mismatched input 'stats' expecting {K_PARSE, K_SEARCH, K_FIELDS, K_DISPLAY, K_FILTER, K_SORT, K_ORDER, K_HEAD, K_LIMIT, K_TAIL}
Is there a way to work this out? Can someone help me resolve this?
I have two CloudWatch Insights queries that I would love to be able to run side by side and compare the results of both.
stats count(*) as requestIdCount by @requestId
| filter @message like /START RequestId/
| filter requestIdCount > 1
stats count(*) as requestIdCount by @requestId
| filter @message like /END RequestId/
| filter requestIdCount > 1
It would be great to be able to do
fields (
stats count(*) as requestIdCount by @requestId
| filter @message like /END RequestId/
| filter requestIdCount > 1) as EndRequestCount,
(
stats count(*) as requestIdCount by @requestId
| filter @message like /START RequestId/
| filter requestIdCount > 1) as StartRequestCount
But I don't see any way to do subqueries in Insights right now. Is there a method to combine queries like this?
Try this:
parse @message 'START RequestId' as @startRequestId
| parse @message 'END RequestId' as @endRequestId
| stats count(@startRequestId) as startRequestIdCount, count(@endRequestId) as endRequestIdCount by bin(5m)
| filter startRequestIdCount > 1
| filter endRequestIdCount > 1
CloudWatch Logs Insights Query Syntax
You can build that logic via the API or the CLI, using the output of one query as the input to another query.
Amazon CloudWatch Logs API Reference
AWS CLI Command Reference - logs
It works as a script: you make a request, interpret the results, and then issue another request with the results of the first one. It's a bit more work, but I'm not aware of another way to do it.
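For illustration, here is a minimal Python sketch of that request/interpret/re-request loop using boto3's start_query and get_query_results, applied to the START/END RequestId example above. The log group name, the time range, and the in [...] filter used to feed the first query's request IDs into the second query are assumptions made for the example, not part of the original answer.

import time
import boto3

logs = boto3.client("logs")

def run_query(log_group, query, start_time, end_time):
    # Start a Logs Insights query and poll until it finishes.
    query_id = logs.start_query(
        logGroupName=log_group,
        startTime=start_time,
        endTime=end_time,
        queryString=query,
    )["queryId"]
    while True:
        resp = logs.get_query_results(queryId=query_id)
        if resp["status"] not in ("Scheduled", "Running"):
            return resp["results"]
        time.sleep(1)

log_group = "/aws/lambda/my-function"  # placeholder log group
start, end = 1700000000, 1700003600    # placeholder epoch-second time range

# Query 1: request IDs that logged more than one START line.
first = run_query(
    log_group,
    "filter @message like /START RequestId/"
    " | stats count(*) as requestIdCount by @requestId"
    " | filter requestIdCount > 1",
    start, end,
)
request_ids = [f["value"] for row in first for f in row if f["field"] == "@requestId"]

# Query 2: feed those request IDs into a second query over the END lines.
# (A real script should handle an empty request_ids list and paginate results.)
id_list = ", ".join('"{}"'.format(rid) for rid in request_ids)
second = run_query(
    log_group,
    "filter @message like /END RequestId/"
    " | filter @requestId in [" + id_list + "]"
    " | stats count(*) as requestIdCount by @requestId",
    start, end,
)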
I have results like below:
1. DateTime=2019-07-02T16:17:20,913 Thread=[], Message=[Message(userId=124, timestamp=2019-07-02T16:17:10.859Z, notificationType=CREATE, userAccount=UserAccount(firstName=S, lastName=K, emailAddress=abc@xyz.com, status=ACTIVE), originalValues=OriginalValue(emailAddress=null)) Toggle : true]
2. DateTime=2019-07-02T16:18:20,913 Thread=[], Message=[Message(userId=124, timestamp=2019-07-02T16:17:10.859Z, notificationType=CREATE, userAccount=UserAccount(firstName=S, lastName=K, emailAddress=abc@xyz.com, status=ACTIVE), originalValues=OriginalValue(emailAddress=new@xyz.com)) Toggle : true]
3. DateTime=2019-07-02T16:19:20,913 Thread=[], Message=[Message(userId=124, timestamp=2019-07-02T16:17:10.859Z, notificationType=CREATE, userAccount=UserAccount(firstName=S, lastName=K, emailAddress=abc@xyz.com, status=ACTIVE), originalValues=OriginalValue(emailAddress=new@xyz.com)) Toggle : true]
And I am trying to group results where the contents of the entire "Message" field are the same and "emailAddress=null" is not contained in the Message.
So in the results above, 2 and 3 should be the output.
The following query works fine for me but I need to optimize it further according to the following conditions:
Working Query: index=app sourcetype=appname host=appname* splunk_server_group=us-east-2 | fields Message | search Message= "[Message*" | regex _raw!="emailAddress=null" | stats count(Message) as count by Message | where count > 1
Conditions to optimize
Cannot rex against _raw
Message key/value pair needs to be in the main search, not a sub-search
You don't have any subsearches in your current query. A subsearch is a query surrounded by square brackets.
What's wrong with rex against _raw?
Try this:
index=app sourcetype=appname host=appname* splunk_server_group=us-east-2 Message="[Message*"
| fields Message
| regex Message!="emailAddress=null"
| stats count(Message) as count by Message | where count > 1
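This keeps the Message="[Message*" condition in the base search, so events are narrowed before the pipeline runs, and it runs regex against the extracted Message field rather than _raw, which addresses both of the conditions above.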
I have a log statement like 2017-06-21 12:53:48,426 INFO transaction.TransactionManager.Info:181 -{"message":{"TransactionStatus":true,"TransactioName":"removeLockedUser-1498029828160"}}.
How can I extract TransactionName and TransactionStatus and print, in table form, TransactionName and its count?
I tried the query below but didn't have any success; it always gives me 0.
sourcetype=10.240.204.69 "TransactionStatus" | rex field=_raw ".*TransactionStatus (?<status>.*)" | stats count((status=true)) as success_count
Solved it with this:
| makeresults
| eval _raw="2017-06-21 12:53:48,426 INFO transaction.TransactionManager.Info:181 -{\"message\":{\"TransactionStatus\":true,\"TransactioName\":\"removeLockedUser-1498029828160\"}}"
| rename COMMENT AS "Everything above generates sample event data; everything below is your solution"
| rex "{\"TransactionStatus\":(?[^,]),\"TransactioName\":\"(?[^\"])\""
| chart count OVER TransactioName BY TransactionStatus
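The chart command then yields one row per TransactioName with a count column for each TransactionStatus value, which is the table of transaction names and counts the question asks for.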