CloudWatch Logs Insights: display a field from the JSON in the log message - amazon-cloudwatch

This is my log entry from AWS API Gateway:
(8d036972-0445) Method request body before transformations: {"TransactionAmount":225.00,"OrderID":"1545623982","PayInfo":{"Method":"ec","TransactionAmount":225.00},"CFeeProcess":0}
I want to write a CloudWatch Logs Insights query which can display the AWS request ID, present within the first parentheses, and the order ID present in the JSON.
I'm able to get the AWS request ID by parsing the message. How can I get the OrderID JSON field?
Any help is greatly appreciated.
| parse @message "(*) Method request body before transformations: *" as awsReqId, JsonBody
# | filter OrderID = "1545623982"   (this did not work)
| display awsReqId, OrderID
| limit 20

You can do it with two parse steps, like this:
fields @message
| parse @message "(*) Method request body before transformations: *" as awsReqId, JsonBody
| parse JsonBody "\"OrderID\":\"*\"" as OrderID
| filter OrderID = "1545623982"
| display awsReqId, OrderID
| limit 20
Edit:
Actually, the way you're doing it should also work. I think it doesn't work because you have 2 space characters between the closing parenthesis and the word Method here: (*) Method. Try removing 1 space.
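If the spacing or quoting in the body ever varies, a regex-based parse is a bit more forgiving. A minimal sketch of the same idea, assuming the body keeps the "OrderID":"..." form shown in the sample log entry:
fields @message
| parse @message "(*) Method request body before transformations: *" as awsReqId, JsonBody
| parse JsonBody /"OrderID":"(?<OrderID>[^"]+)"/
| filter OrderID = "1545623982"
| display awsReqId, OrderID
| limit 20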

Related

Splunk query to get field from JSON cell

The Splunk query outputs a table where one of the columns has this kind of JSON.
The part of the query that gives this output is details.ANALYSIS:
{"stepSuccess":false,"SR":false,"propertyMap":{"Url":"https://example.com","ErrCode":"401","transactionId":"7caf34342524-3d232133da","status":"API failing with error code 401"}}
I want to edit my Splunk query so that instead of this JSON, I get only the Url in this same column.
Here is the Splunk query I was using:
|dbxquery connection="AT" query="select service.req_id, service.out,details.ANALYSIS from servicerequest service,SERVICEREQUEST_D details where service.out like 'XYZ is%' and service.row_created > sysdate-1 and service.SERVICEREQUEST_ID = details.SERVICEREQUEST_ID and details.ANALYSIS_CLASS_NAME = 'GetProduction' " shortnames=0 maxrows=100000001
I tried using details.ANALYSIS.propertyMap.Url but it throws an error.
You can probably use spath to extract the fields from details.ANALYSIS.
Try the following to extract all fields
| spath input="details.ANALYSIS"
Or this just for the url field you are after
| spath input="details.ANALYSIS" path="propertyMap.Url"
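If you want the extracted value in a column of its own rather than one literally named propertyMap.Url, spath's output argument can rename it. A small sketch, assuming Url is the column name you want:
| spath input="details.ANALYSIS" path="propertyMap.Url" output="Url"
Without output=, the new field takes the name of the path, i.e. propertyMap.Url.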

How can we write a Splunk query to find whether subField2 is present or not and, if present, get the count of all subField2 values

{
index:"myIndex",
field1: "myfield1",
field2: {"subField1":"mySubField1","subField2":145,"subField3":500},
...
..
.
}
SPL : index:"myIndex" eval result = if(field.subField2) .....
Does the dot operator work in SPL?
I am assuming your data is in JSON format. If so, you can use spath to extract fields from your structured data. Then just check if the field is present or not with isnotnull
index="myIndex" | spath | where isnotnull(field2.subField2)
Presuming your data is in JSON format, this should do it:
index=myIndex sourcetype=srctp field2{}.subField2=*
If those are multivalue fields, you'll need to do an mvexpand first
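The question title also asks for a count once subField2 is present. A minimal sketch building on the spath approach above (the result field name subField2_count is just an assumption):
index="myIndex"
| spath
| where isnotnull('field2.subField2')
| stats count AS subField2_count
The single quotes around field2.subField2 matter in where/eval expressions, since an unquoted dot is the concatenation operator there.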

How to deserialize a JSON payload passed to another feature file which accepts multiple arguments

I am sending multiple arguments to a .feature file; one of the arguments is a request JSON payload generated using a Karate table. How can I iterate through the request payload so that the POST request gets one payload at a time?
Scenario: post booking
* table payload
| firstname | lastname | totalprice | depositpaid |
| 'foo' | 'IN' | 10 | true |
| 'bar' | 'out' | 20 | true |
# date will be calculated using a js function in the Background, and baseURL is configured in the karate.config.js file
* set payload[*].bookingdates = { checkin: '#(date())', checkout: '#(date())' }
* def result = call read('createrecord.feature') {PayLoad: #(payload) , URL: #(baseURL)}
######################################
createrecord.feature file will have
@ignore
Feature: To create data
Background:
* header Accept = 'application/json'
Scenario:
Given url __arg.URL
And path 'booking'
And request __arg.PayLoad
When method post
Then status 200
Here in the createrecord.feature file, how can I iterate through the passed payload so that a single payload is passed to the POST request?
The simple rule you are missing is that if the argument to call is a JSON array (of JSON objects) it will iterate automatically.
Read the docs carefully please: https://github.com/intuit/karate#data-driven-features
So make this change:
* def result = call read('createrecord.feature') payload
And baseURL will be available in createrecord.feature so you don't need to worry about passing it.
Note that this may not work: * set payload[*].bookingdates; refer to this answer: https://stackoverflow.com/a/54928848/143475
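For completeness, a sketch of what createrecord.feature could look like after that change. When call is given a JSON array, it runs once per row and __arg holds the current row (the row keys such as firstname are also available directly as variables); baseURL is picked up from the Karate config, so it does not need to be passed in:
@ignore
Feature: To create data
Background:
* header Accept = 'application/json'
Scenario:
Given url baseURL
And path 'booking'
And request __arg
When method post
Then status 200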

Accessing values in JSON array

I am following the instructions in the documentation for how to access JSON values in CloudWatch Insights, where the recommendation is as follows:
JSON arrays are flattened into a list of field names and values. For example, to specify the value of instanceId for the first item in requestParameters.instancesSet, use requestParameters.instancesSet.items.0.instanceId.
ref
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html
I am trying the following and getting nothing in return. The intellisense autofills up to processList.0 but no further
fields processList.0.vss
| sort @timestamp desc
| limit 1
The JSON I am working with is:
"processList": [
{
"vss": xxxxx,
"name": "aurora",
"tgid": xxxx,
"vmlimit": "unlimited",
"parentID": 1,
"memoryUsedPc": 16.01,
"cpuUsedPc": 0.01,
"id": xxxxx,
"rss": xxxxx
},
{
"vss": xxxx,
"name": "aurora",
"tgid": xxxxxx,
"vmlimit": "unlimited",
"parentID": 1,
"memoryUsedPc": 16.01,
"cpuUsedPc": 0.06,
"id": xxxxx,
"rss": xxxxx
}]
Have you tried the following?
fields @timestamp, processList.0.vss
| sort @timestamp desc
| limit 5
It may be a syntax error. If not, please post a couple of records' worth of the overall structure, with @timestamp included.
The reference link that you have posted also states the following.
CloudWatch Logs Insights can extract a maximum of 100 log event fields
from a JSON log. For extra fields that are not extracted, you can use
the parse command to parse these fields from the raw unparsed log
event in the message field.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html
For very large JSON messages, Insights intellisense may not be parsing all the fields into named fields. So, the solution is to use parse on the complete JSON string in the field where you expect your data field to be present. In your example and mine it is processList.
I was able to extract the value of specific cpuUsedPc under processList by using a query like the following.
fields @timestamp, cpuUtilization.total, processList
| parse processList /"name":"RDS processes","tgid":.*?,"parentID":.*?,"memoryUsedPc":.*?,"cpuUsedPc":(?<RDSProcessesCPUUsedPc>.*?),/
| sort @timestamp asc
| display @timestamp, cpuUtilization.total, RDSProcessesCPUUsedPc
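The same technique applies to the vss value the original query was after. A rough sketch, assuming the raw processList string is compact JSON with the key order shown in the sample ("vss" immediately followed by "name"), and noting that parse only captures the first match, which corresponds to processList.0:
fields @timestamp, processList
| parse processList /"vss":(?<auroraVss>.*?),"name":"aurora"/
| sort @timestamp desc
| display @timestamp, auroraVss
| limit 1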

Extracting a particular value using regex in Splunk

In the below event, the "status" key has the value either "1" or "0".
I am looking to extract those "status" values that are "0" and put them in a field.
Please help me out in getting a regular expression for this.
- 2017-02-14 18:47:28.572 INFO SomePlaceHolder-5 [.abc.def.nothingishere] - string response: <200 OK,{"clips":[{"myid":"123456","historyid":"777-888-999","provider":"somecompany","status":1,"userType":1}]},{X-Backside-Transport=[OK OK], Connection=[Keep-Alive], Transfer-Encoding=[chunked], Content-Type=[application/json], X-Powered-By=[ARR/3.0,ASP.NET], Date=[Tue, 14 Feb 2017 18:47:28 GMT], X-Client-IP=[10.0.0.0.], X-Global-Transaction-ID=[9876543]}>
Presuming Splunk hasn't already extracted these automatically (it looks close to JSON, perhaps), this will do it:
index=ndx sourcetype=srctp
| rex field=_raw "status\":(?<status>\d+)"
| search status=0
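One caveat: rex keeps only the first match per event by default, and the clips element here is an array. If a single event can contain several clips and you want it whenever any of them has status 0, a hedged variant using rex's max_match option (0 means unlimited, producing a multivalue status field):
index=ndx sourcetype=srctp
| rex field=_raw max_match=0 "status\":(?<status>\d+)"
| search status=0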