Multiple grok patterns in the same conf file not working - indexing

I am unable to create an index using multiple grok patterns in the same conf file. Please help.
#request "%{TIMESTAMP_ISO8601:date_time},%{LOGLEVEL:log_level}\s*\[\s*%{DATA:servicedetail}\s*\]\s*%{NUMBER:refid:int}\s*---\s*\[\s*%{DATA:thread}\s*\]\s*,\s*\s\"Title\"\:\"%{DATA:Title}\",\s*\"RequestId\"\:\"%{DATA:RequestId}\",\s*\"API\"\:\"%{DATA:API}\",\s*\"ApiURL\"\:\"%{DATA:ApiURL}\",\s*\"detailedMessage\":\{"%{DATA:detailedMessage}\"\{\s*\"username\"\:\"%{DATA:username}\",\s*\"password\"\:\"%{DATA:password}\",\s*\"Bank_Mnemonic\"\:\"%{DATA:Bank_Mnemonic}\",\s*\"Consumer_number\"\:\"%{DATA:Consumer_number}\",\s*\"Tran_Auth_Id\"\:\"%{NUMBER:Tran_Auth_Id}\",\s*\"Transaction_Amount\"\:\"%{NUMBER:Transaction_Amount}\",\s*\"Tran_Date\"\:\"%{NUMBER:Tran_Date}\",\s*\"Tran_Time\"\:\"%{NUMBER:Tran_Time}\",\s*\"Reserved\"\:\"%{DATA:Reserved}\"",
#Response "%{TIMESTAMP_ISO8601:date_time},%{LOGLEVEL:log_level}\s*\[\s*%{DATA:servicedetail}\s*\]\s*%{NUMBER:refid:int}\s*---\s*\[\s*%{DATA:thread}\s*\],\s*\s\"Title\"\:\"%{DATA:Title}\",\s*\"RequestId\"\:\"%{DATA:RequestId}\",\s*\"API\"\:\"%{DATA:API}\",\s*\"ApiURL\"\:\"%{DATA:ApiURL}\",\s*\"detailedMessage\":\{"%{DATA:detailedMessage}\:\{\"Response_code\":\"%{NUMBER:Response_code:INT}\",\s*\"Identification_Parameter\":\"%{DATA:Identification_Parameter}\",\s*\"Reserved\":\"%{DATA:Reserved}\""
Below are the logs:
REQUEST :>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021-06-25 14:53:50.078,INFO [CASHIN,ff742d4fd5ea5ed0,ff742d4fd5ea5ed0,true] 3175 --- [http-nio-8383-exec-9] , "Title":"Request", "RequestId":"86e29454-7799-4e32-8640-22d4b48b7fd9", "API":"BillPayment", "ApiURL":"CashIn/BillPayment", "detailedMessage":{"BillPayment"{"username":"2RucXgyG8KO3nKx5d2mnfgSzO9rPMei6LM4Pipi/TVZgjn/sxs4ru1q6uyc9ECNoWlo","password":"BDaFalMF5C8o1LgQEsPmpBSgOqc8bpwCs7YmPdq4ZW2YZs=","Bank_Mnemonic":"PARKRT01","Consumer_number":"923240045677","Tran_Auth_Id":"456460","Transaction_Amount":"0000020000000","Tran_Date":"20200211","Tran_Time":"091546","Reserved":""}}
RESPONSE :>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021-06-25 14:53:50.078,INFO [CASHIN,ff742d4fd5ea5ed0,ff742d4fd5ea5ed0,true] 3175 --- [http-nio-8383-exec-9] , "Title":"Response","RequestId":"d9dd1f2a-b87e-4928-ab43-b8bfb293f2d0","API":"BillPayment","ApiURL":"CashIn/BillPayment","detailedMessage":{"GetBillPaymentRes":{"Response_code":"03","Identification_Parameter":"","Reserved":""}}
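In Logstash, both patterns can live in one grok filter: `match` accepts an array of patterns for the same field, and they are tried in order until one matches. A minimal sketch of the structure (the two placeholder strings stand for the request and response patterns quoted above, which are not repeated here):

```
filter {
  grok {
    match => {
      "message" => [
        "<request pattern from above>",
        "<response pattern from above>"
      ]
    }
  }
}
```

If the two log shapes need different downstream handling, an alternative is two separate grok filters wrapped in conditionals, but the pattern-array form is the usual way to index both line shapes from one conf file.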

Related

CloudWatch Logs Insights: display a field from the JSON in the log message

This is my log entry from AWS API Gateway:
(8d036972-0445) Method request body before transformations: {"TransactionAmount":225.00,"OrderID":"1545623982","PayInfo":{"Method":"ec","TransactionAmount":225.00},"CFeeProcess":0}
I want to write a CloudWatch Logs Insights query which can display the AWS request id, present in the first parentheses, and the order id present in the JSON.
I'm able to get the AWS request id by parsing the message. How can I get the OrderID JSON field?
Any help is greatly appreciated.
| parse #message "(*) Method request body before transformations: *" as awsReqId,JsonBody
#| filter OrderID = "1545623982"    (this filter did not work, so it is commented out)
| display awsReqId,OrderID
| limit 20
You can do it with two parse steps, like this:
fields #message
| parse #message "(*) Method request body before transformations: *" as awsReqId, JsonBody
| parse JsonBody "\"OrderID\":\"*\"" as OrderID
| filter OrderID = "1545623982"
| display awsReqId,OrderID
| limit 20
Edit:
Actually, the way you're doing it should also work. I think it doesn't work because you have 2 space characters between the brackets and the word Method in (*) Method. Try removing one space.
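The two-step extraction can be sanity-checked outside CloudWatch with an equivalent regex pass in Python (the sample line is the one from the question):

```python
import re

# Sample log line from the question above.
line = ('(8d036972-0445) Method request body before transformations: '
        '{"TransactionAmount":225.00,"OrderID":"1545623982",'
        '"PayInfo":{"Method":"ec","TransactionAmount":225.00},"CFeeProcess":0}')

# Step 1: split the request id from the JSON body (mirrors the first parse).
m = re.match(r'\((?P<awsReqId>[^)]+)\) Method request body before transformations: '
             r'(?P<JsonBody>.*)', line)
aws_req_id = m.group('awsReqId')

# Step 2: pull OrderID out of the body (mirrors the second parse).
order_id = re.search(r'"OrderID":"([^"]+)"', m.group('JsonBody')).group(1)

print(aws_req_id, order_id)
```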

Unable to convert multiple API response into xml in single call

I am making multiple API calls and getting their respective responses back. I am trying to convert each response into XML, but it fails with an error.
The setup:
* table requestTable
| nameTags| ageGroups| status |
| nameTag1| ageGroup1| 200 |
| nameTag2| ageGroup2| 200 |
| nameTag3| ageGroup3| 200 |
* def getRequest = call read('target.feature#getRequest') requestTable
* xml transformResponse = $getRequest[*].response
Getting the following error:
class <calling class> (in unnamed module @0x1414800e)
cannot access class com.sun.org.apache.xerces.internal.dom.DeferredDocumentImpl (in module java.xml) because
module java.xml does not export com.sun.org.apache.xerces.internal.dom to unnamed module @0x1414800e
If I do a single conversion such as: * xml transformResponse = $getRequest[0].response it works as expected.
Please advise.
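For what it's worth, the error itself is not specific to Karate: on JDK 9+ the Java module system blocks access to JDK-internal Xerces classes such as DeferredDocumentImpl. One possible workaround (an assumption about the setup, not a verified fix for this exact failure) is to export that package to unnamed modules via a JVM argument, e.g. in the test runner's JVM options:

```
--add-exports java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED
```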

HIVE-SQL_SERVER: HadoopExecutionException: Not enough columns in this line

I have a hive table with the following structure and data:
Table structure:
CREATE EXTERNAL TABLE IF NOT EXISTS db_crprcdtl.shcar_dtls (
ID string,
CSK string,
BRND string,
MKTCP string,
AMTCMP string,
AMTSP string,
RLBRND string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/on/hadoop/dir/'
-------------------------------------------------------------------------------
ID | CSK | BRND | MKTCP | AMTCMP
-------------------------------------------------------------------------------
782 flatn,grpl,mrtn hnd,mrc,nsn 34555,56566,66455 38900,59484,71450
1231 jikl,bngr su,mrc,frd 56566,32333,45000 59872,35673,48933
123 unsrvl tyt,frd,vlv 25000,34789,33443 29892,38922,36781
I'm trying to push this data into SQL Server, but while doing so I get the following error message:
SQL Error [107090] [S0001]: HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopExecutionException: Not enough columns in this line.
What I tried:
There's an online article where the author documented similar issues. I tried one of the suggestions, "Looked in Excel and found two columns that had carriage returns", but that didn't help either.
Any suggestion/help would be really appreciated. Thanks
If I understand your issue correctly, it seems that your comma-separated data is getting divided into several columns, rather than one column, on SQL Server, something like:
------------------------------
ID |CSK |BRND |MKTCP |AMTCMP
------------------------------
782 flatn grpl mrtn hnd mrc nsn 345 56566 66455 38900 59484 71450
1231 jikl bngr su mrc frd 56566 32333 45000 59872 35673 48933
123 unsrvl tyt frd vlv 25000 34789 33443 29892 38922 36781
So if you look at Hive, there are only 5 columns, and presumably the same on SQL Server (presumably, since you haven't shared that schema). If that's the case, then more than 5 values are being passed per line while the schema defines only 5 columns, and that is where the error comes from.
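The column-count mismatch is easy to reproduce locally by splitting one of the rows both ways (assuming a literal tab between the five Hive fields, as the DDL declares):

```python
# Hypothetical raw line: the first data row above, with tabs between the 5 fields.
row = "782\tflatn,grpl,mrtn\thnd,mrc,nsn\t34555,56566,66455\t38900,59484,71450"

tab_fields = row.split("\t")   # 5 fields: matches the 5-column schema
comma_fields = row.split(",")  # 9 pieces: commas inside fields inflate the count

print(len(tab_fields), len(comma_fields))
```

If the loader splits on ',' instead of '\t', each line yields a different number of values than the schema defines, which is exactly the kind of shape mismatch the error reports.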
Refer to this document by MS and try to create a FILE_FORMAT with FIELD_TERMINATOR = '\t',
like:
CREATE EXTERNAL FILE FORMAT <name>
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (
        FIELD_TERMINATOR = '\t'
        [ , STRING_DELIMITER = string_delimiter ]
        [ , FIRST_ROW = integer ]            -- only available in SQL DW
        [ , DATE_FORMAT = datetime_format ]
        [ , USE_TYPE_DEFAULT = { TRUE | FALSE } ]
        [ , ENCODING = {'UTF8' | 'UTF16'} ]
    )
);
Hope that helps resolve your issue :)

Exporting SQL Query to a local text file

This is for an approach that WRITES to a local file.
I am using SQL Workbench and I'm connected to an AWS Redshift instance (which is based on PostgreSQL). I would like to run the query and have the data exported from AWS Redshift to a local CSV or text file. I have tried:
SELECT transaction_date ,
Variable 1 ,
Variable 2 ,
Variable 3 ,
Variable 4 ,
Variable 5
From xyz
into OUTFILE 'C:/filename.csv'
But I get the following error:
ERROR: syntax error at or near "'C:/filename.csv'"
Position: 148
into OUTFILE 'C:/filename.csv'
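For context on the error: INTO OUTFILE is MySQL syntax, so Redshift rejects it, and in any case the server could not write to a client-side C: drive. Since the export has to happen on the client, one option in SQL Workbench/J is its WbExport command, which redirects the next statement's result set to a local file (a sketch; the path and delimiter are placeholders to adjust):

```sql
-- Export the result of the next SELECT to a local delimited file
WbExport -type=text
         -file='C:/filename.csv'
         -delimiter=','
         -header=true;
SELECT transaction_date  -- plus the remaining columns from the query above
FROM xyz;
```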

Read part of column value

I have the following SQL Server 2012 table:
create table dbo.Packs (
    Id int identity not null,
    Info nvarchar (200) not null
)
How can I get all rows where Info starts with "File"? For example, I might have:
"File #200 View", "File #100 Delete", "Stat F#20"
So I would get the first two records.
Thank You,
Miguel
You can use the LIKE operator with the % wildcard:
SELECT [Id], [Info]
FROM [dbo].[Packs]
WHERE [Info] LIKE 'File%'
Example
CREATE TABLE packs ([Info] varchar(16));
INSERT INTO packs ([Info])
VALUES ('File #200 View'), ('File #100 Delete'), ('Stat F#20');
SELECT [Info] FROM [packs] WHERE [Info] LIKE 'File%';
Returns
| INFO |
|------------------|
| File #200 View |
| File #100 Delete |
SELECT Id,
Info
FROM Packs
WHERE Info LIKE 'File%'
Use LIKE with a wildcard:
SELECT * FROM dbo.Packs WHERE Info LIKE 'File%'
The link above has a reference of the available wildcards; you want to match any string, including zero characters, after the File part so you want the % wildcard.
SELECT * FROM [dbo].[Packs] WHERE [Info] LIKE 'File%'
Should do the trick :)
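All of the answers above rely on the same LIKE 'File%' predicate; it can be verified quickly with an in-memory SQLite database (SQLite's LIKE is case-insensitive by default, which doesn't matter for this data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Packs (Id INTEGER PRIMARY KEY, Info TEXT NOT NULL)")
conn.executemany("INSERT INTO Packs (Info) VALUES (?)",
                 [("File #200 View",), ("File #100 Delete",), ("Stat F#20",)])

# Only rows whose Info starts with 'File' survive the predicate.
rows = [r[0] for r in
        conn.execute("SELECT Info FROM Packs WHERE Info LIKE 'File%' ORDER BY Id")]
print(rows)
```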