Rewriting log data - syslog-ng

I am sending syslog data to my LogZilla server and am unable to rewrite the data using the
Event message:
{"event_type":"Threat_Event","ipv4":"172.31.100.13","hostname":"server1.something.net","source_uuid":"df4df304c3-93f2a-41f89-8dfefd-7f54bdsf5e429f","occured":"06-Aug-2019 02:38:44","severity":"Warning","threat_type":"test file","threat_name":"Eicar","scanner_id":"Real-time file system protection","engine_version":"1498036 (20190805)","object_type":"file","object_uri":"file:///home/admin/g4.txt","action_taken":"cleaned by deleting","threat_handled":true,"need_restart":false,"username":"root","processname":"/usr/bin/vi","circumstances":"Event occurred on a newly created file.","firstseen":"06-Aug-2019 02:38:44","hash":"CF8BD9DFDDFF007F75ADF4C2BE48005CEA317C62"}
Code for automatic key-value detection to rewrite the message above:
{
    "rewrite_rules": [
        {
            "match": {
                "field": "program",
                "value": "ESServer"
            },
            "update": {
                "message": "${event_type}, ${ipv4}"
            },
            "kv": {
                "separator": ":",
                "delimiter": ","
            }
        }
    ]
}
I am expecting the message to be parsed so that I can set up dashboards based on various fields from the message.

LogZilla doesn't parse kv pairs within quotes, so first you'll need to strip those out. Here's a syslog-ng rule that will do that:
filter f_program { program("ESServer") };
rewrite r_quotes { subst("\"", "", value("MESSAGE") flags("global") condition(filter(f_program))); };
log {
    source(s_logzilla);
    rewrite(r_quotes);
    #filter(f_fwdrops);
    destination(d_logzilla_network);
    # Uncomment line below for debug/testing of incoming events
    #destination(df_debug);
    #destination(d_unix_stream);
    flags(flow-control,final);
};
You should create a 'rules' directory to store any custom configurations in. Save the above in that directory as syslog.conf (or any name you prefer), then copy it to the container and restart syslog-ng:
docker cp syslog.conf lz_syslog:/etc/logzilla/syslog-ng
docker restart lz_syslog
Now those events should have the quotes removed when they come in. Next, create a LogZilla parser rule with the following:
first_match_only: true
rewrite_rules:
  - comment:
      - 'Name: ESET Security Manager KV'
      - 'Sample: "event_type":"Threat_Event","ipv4":"172.31.100.13","hostname":"server1.something.net","source_uuid":"df4df304c3-93f2a-41f89-8dfefd-7f54bdsf5e429f","occured":"06-Aug-2019 02:38:44","severity":"Warning","threat_type":"test file","threat_name":"Eicar","scanner_id":"Real-time file system protection","engine_version":"1498036 (20190805)","object_type":"file","object_uri":"file:///home/admin/g4.txt","action_taken":"cleaned by deleting","threat_handled":true,"need_restart":false,"username":"root","processname":"/usr/bin/vi","circumstances":"Event occurred on a newly created file.","firstseen":"06-Aug-2019 02:38:44","hash":"CF8BD9DFDDFF007F75ADF4C2BE48005CEA317C62"'
      - 'Description: ESET K/V Detection and User Tag creation'
    match:
      field: program
      op: =~
      value: 'lzadmin'
    kv:
      delimiter: ""
      separator: ":"
      pair_separator: ","
    tag:
      ut_event_type: ${event_type}
      ut_ipv4: ${ipv4}
      ut_hostname: ${hostname}
Then add the rule:
logzilla rules add kv.json

Related

Update BigQuery scheduled query with notificationPubsubTopic fails

I am using the DataServiceTransferClient API/SDK for Node to create scheduled queries in BigQuery with a notificationPubsubTopic. Creating them works fine, no issues. Updating them results in an error:
INVALID_ARGUMENT: notificationPubsubTopic cannot be updated.
How I'm calling it:
const config = {
  transferConfig: {
    /* other config options */
    notificationPubsubTopic: "projects/engineering/topics/test"
  },
  updateMask: {
    paths: [
      "params.query",
      "params.write_disposition",
      "params.destination_table_name_template",
      "schedule",
      "notificationPubsubTopic"
    ],
  },
}
dataTransferClient.updateTransferConfig(config)
Some other info:
The topics I've tested with do exist. I can update the scheduled query in the UI to these other topics with no issue.
Fails even when re-using the already associated topic.
Updates without notificationPubsubTopic succeed. By this I specifically mean I am not passing the notificationPubsubTopic property and have removed it from the updateMask.
The updateMask paths needed to be converted to snake_case:
updateMask: {
  paths: [
    "params.query",
    "params.write_disposition",
    "params.destination_table_name_template",
    "schedule",
    "notification_pubsub_topic" // <--- here
  ],
},
The documentation even shows an example using camelCase:
https://cloud.google.com/bigquery-transfer/docs/reference/datatransfer/rest/v1/projects.locations.transferConfigs/patch#body.QUERY_PARAMETERS.update_mask
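Putting it together, here's a minimal sketch of the corrected call, assuming the @google-cloud/bigquery-data-transfer Node client; the transfer config resource name below is a placeholder, while the topic is the one from the question:
import { DataTransferServiceClient } from "@google-cloud/bigquery-data-transfer";

const dataTransferClient = new DataTransferServiceClient();

async function updateScheduledQuery() {
  const [updated] = await dataTransferClient.updateTransferConfig({
    transferConfig: {
      // Placeholder resource name of the existing scheduled query.
      name: "projects/engineering/locations/us/transferConfigs/1234567890",
      /* other config options */
      notificationPubsubTopic: "projects/engineering/topics/test",
    },
    updateMask: {
      // Paths are snake_case, matching the underlying proto field names.
      paths: [
        "params.query",
        "params.write_disposition",
        "params.destination_table_name_template",
        "schedule",
        "notification_pubsub_topic",
      ],
    },
  });
  console.log(updated.notificationPubsubTopic);
}

updateScheduledQuery().catch(console.error);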

How to use Atlassian Document Format in the create issue REST API

I am trying to create an issue via the Jira API:
{
  // other fields are here
  description: {
    type: "doc",
    version: 1,
    content: [
      {
        type: "text",
        text: summary
      }
    ]
  }
}
but I get an error: "Operation value must be a string".
So how can I create an issue correctly?
Most likely you're using API version 2, which accepts plain text for this field.
However, you're providing the value as JSON (Atlassian Document Format), which is for API version 3.
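For reference, here's a rough sketch of a version 3 request where the description is valid ADF (text nodes wrapped in a paragraph node); the site URL, project key, issue type, and credentials are placeholders. If you stay on version 2, the description should simply be the plain summary string instead.
// Hypothetical sketch: creating the issue against the v3 endpoint with an ADF description.
// Site URL, project key, issue type, and credentials are placeholders.
const summary = "Example issue";

const payload = {
  fields: {
    project: { key: "PROJ" },        // placeholder project key
    issuetype: { name: "Task" },     // placeholder issue type
    summary,
    description: {
      type: "doc",
      version: 1,
      content: [
        {
          type: "paragraph",         // ADF text nodes live inside a block node
          content: [{ type: "text", text: summary }],
        },
      ],
    },
  },
};

const res = await fetch("https://your-domain.atlassian.net/rest/api/3/issue", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Basic " + Buffer.from("user@example.com:API_TOKEN").toString("base64"),
  },
  body: JSON.stringify(payload),
});
console.log(res.status, await res.json());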

Hashicorp Vault API - Create User - unsupported path

Can we create a user via the Vault API using special characters? For example, the POST URL below is used to create users.
POST: http://localhost:8200/v1/auth/userpass/users/myuser-1#beta_1.0$
Payload:
{
  "password": "myPassword",
  "policies": "myuser-1#beta_1.0$",
  "ttl": "120",
  "max_ttl": "120"
}
Result:
{
  "errors": [
    "1 error occurred:\n\t* unsupported path\n\n"
  ]
}
The user myuser-1#beta_1.0$ contains the special characters # _ $. I think the # and $ chars should be encoded before being passed to Vault. However, there is no information about URL encoding in the documentation below:
https://www.vaultproject.io/api/auth/userpass/index.html
Is encoding supported here, or should these characters be replaced with other characters before sending the request to Vault?
Note: after removing the # and $ chars, the API works fine.
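As a quick illustration of the encoding idea, the sketch below percent-encodes the username before building the path (the Vault address and token are placeholders, and whether Vault ultimately accepts such usernames is an assumption, not something confirmed by the docs):
// Sketch: percent-encode the username before using it as a URL path segment,
// so the '#' no longer terminates the path as a fragment marker.
// The Vault address, token, and policy name are placeholders.
const username = "myuser-1#beta_1.0$";
const encoded = encodeURIComponent(username); // "myuser-1%23beta_1.0%24"

const res = await fetch(`http://localhost:8200/v1/auth/userpass/users/${encoded}`, {
  method: "POST",
  headers: {
    "X-Vault-Token": "VAULT_TOKEN",   // placeholder token
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    password: "myPassword",
    policies: "mypolicy",             // placeholder policy name
    ttl: "120",
    max_ttl: "120",
  }),
});
console.log(res.status, await res.text());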

Creating a titled Google Sheet results in a "Proto field" error when using the Node.js client library

I am trying to create a Google Spreadsheet using a Node.js backend and the Google Sheets v4 API.
I was following the spreadsheets.create tutorial in the documentation. However, when I create the file using some specified properties, I always get the following error:
Error: Invalid JSON payload received. Unknown name "title" at 'spreadsheet.properties': Proto field is not repeating, cannot start list.
Nothing is mentioned in the tutorial about a "Proto" field. Is this a bug, or am I missing something?
Creating the file does work if I don't specify properties. However, the properties are used to set a name for the file and the sheets, so I do need a way to set this metadata.
Here is the request I am sending with the properties included:
const request = {
  auth,
  resource: {
    properties: {
      title: name,
    },
    sheets: [
      {
        properties: {
          title: 'General',
        },
      },
    ],
  },
};

Apache Nutch to index to Solr via REST

I'm a newbie with Apache Nutch, writing a client to use it via REST.
I succeeded in all the steps (INJECT, FETCH...); in the last step, when trying to index to Solr, it fails to pass the parameter.
The request (I formatted it on some website):
{
  "args": {
    "batch": "1463743197862",
    "crawlId": "sample-crawl-01",
    "solr.server.url": "http://x.x.x.x:8081/solr/"
  },
  "confId": "default",
  "type": "INDEX",
  "crawlId": "sample-crawl-01"
}
The Nutch logs:
java.lang.Exception: java.lang.RuntimeException: Missing SOLR URL. Should be set via -D solr.server.url
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Was that implemented, the param passing to the Solr plugin?
You need to create/update a configuration using the /config/create/ endpoint, with a POST request and a payload similar to:
{
  "configId": "solr-config",
  "force": "true",
  "params": {
    "solr.server.url": "http://127.0.0.1:8983/solr/"
  }
}
In this case I'm creating a new configuration and specifying the solr.server.url parameter. You can verify this is working with a GET request to /config/solr-config (solr-config is the previously specified configId); the output should contain all the default parameters (see https://gist.github.com/jorgelbg/689b1d66d116fa55a1ee14d7193d71b4 for an example of the default output). If everything worked fine, in the returned JSON you should see the solr.server.url option with the desired value: https://gist.github.com/jorgelbg/689b1d66d116fa55a1ee14d7193d71b4#file-nutch-solr-config-json-L464.
After this just hit the /job/create endpoint to create a new INDEX Job, the payload should be something like:
{
  "type": "INDEX",
  "confId": "solr-config",
  "crawlId": "crawl01",
  "args": {}
}
The idea is that you need to pass the configId you created (with solr.server.url specified) along with the crawlId and other args. This should return something similar to:
{
  "id": "crawl01-solr-config-INDEX-1252914231",
  "type": "INDEX",
  "confId": "solr-config",
  "args": {},
  "result": null,
  "state": "RUNNING",
  "msg": "OK",
  "crawlId": "crawl01"
}
Bottom line: you need to create a new configuration with solr.server.url set, instead of specifying it through the args key in the JSON payload.
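For completeness, here's a minimal sketch of that two-step flow from a Node client, assuming the Nutch REST server is reachable at http://localhost:8081 (adjust the host and Solr URL as needed):
// Sketch of the two-step flow described above, using Node's built-in fetch.
// The Nutch REST address and Solr URL are assumptions taken from the answer's examples.
const nutch = "http://localhost:8081";

// Step 1: create a configuration that carries solr.server.url.
await fetch(`${nutch}/config/create`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    configId: "solr-config",
    force: "true",
    params: { "solr.server.url": "http://127.0.0.1:8983/solr/" },
  }),
});

// Optional check: the returned configuration should include solr.server.url.
const cfg = await (await fetch(`${nutch}/config/solr-config`)).json();
console.log(cfg["solr.server.url"]);

// Step 2: create the INDEX job referencing that configId (args stays empty).
const job = await (await fetch(`${nutch}/job/create`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    type: "INDEX",
    confId: "solr-config",
    crawlId: "crawl01",
    args: {},
  }),
})).json();
console.log(job.state, job.id);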