How to assign the log path to the "source" field in Splunk while using fluent-bit?

I am using fluent-bit as the log collector and forwarding the log files to Splunk HEC. In the event metadata I would like to set "source" to the log path, but it fails.
[INPUT]
    Name              tail
    Tag               jsm_on_centos8-2
    Path              /home/dujas/fluent-bit/conf/inputs/atlassian-jira-snip3.log
    DB                /home/dujas/fluent-bit/db/test.db
    Read_from_Head    True
    Buffer_Chunk_Size 20480KB
    Buffer_Max_Size   20480KB
    multiline.parser  jsm

[OUTPUT]
    Name              Splunk
    Match             *
    Host              centos8-1
    Port              8043
    tls               on
    tls.verify        on
    tls.ca_file       /home/dujas/certs/ca.pem
    tls.crt_file      /home/dujas/certs/centos8-2_all.pem
    tls.key_file      /home/dujas/certs/centos8-2.key
    tls.key_passwd    <key password>
    Splunk_Token      <token value>
    Splunk_Send_Raw   off
    event_host        ${HOSTNAME}
    event_source      $path
    event_key         $time $message
With this configuration, Splunk used the token name as the source.
Did I miss anything?
One workaround is to set event_source manually to the log path, but what if the path is a wildcard like /home/dujas/logs/*.log?

The issue was fixed by adding Path_Key in the input plugin and Reserve_Data in the filter plugin.
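A minimal sketch of the resulting setup (the parser filter block, the wildcard path, and the parser name jsm are illustrative assumptions, not the exact files from the question):
[INPUT]
    Name              tail
    Tag               jsm_on_centos8-2
    Path              /home/dujas/logs/*.log
    # Path_Key stores each record's source file under the key "path"
    Path_Key          path
    multiline.parser  jsm

[FILTER]
    Name              parser
    Match             *
    Key_Name          log
    Parser            jsm
    # Reserve_Data keeps the extra keys (including "path") when the parser rewrites the record
    Reserve_Data      On

[OUTPUT]
    Name              Splunk
    Match             *
    Host              centos8-1
    Port              8043
    Splunk_Token      <token value>
    event_host        ${HOSTNAME}
    # now resolves to the originating file, even with a wildcard Path
    event_source      $path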

Related

Fluentd grep + output logs

I have a service deployed in a Kubernetes cluster, with Fluentd running as a DaemonSet, and I need to route the logs it receives so they end up in different S3 buckets.
One bucket would hold all logs generated by Kubernetes and our debug/error-handling code, and another bucket would hold a subset of logs generated by the service, parsed by a structured logger and identified by a specific field in the JSON. Think of it as one bucket for machine state and errors, and another for "user_id created resource image_id at ts" descriptions of user actions.
The service itself is unaware of Fluentd, so I cannot manually set the tag for logs based on which S3 bucket I want them to end up in.
The fluentd.conf I use currently configures S3 like this:
<match **>
  # docs: https://docs.fluentd.org/v0.12/articles/out_s3
  # note: this configuration relies on the nodes having an IAM instance profile with access to your S3 bucket
  type copy
  <store>
    type s3
    log_level info
    s3_bucket "#{ENV['S3_BUCKET_NAME']}"
    s3_region "#{ENV['S3_BUCKET_REGION']}"
    aws_key_id "#{ENV['AWS_ACCESS_KEY_ID']}"
    aws_sec_key "#{ENV['AWS_SECRET_ACCESS_KEY']}"
    s3_object_key_format %{path}%{time_slice}/cluster-log-%{index}.%{file_extension}
    format json
    time_slice_format %Y/%m/%d
    time_slice_wait 1m
    flush_interval 10m
    utc
    include_time_key true
    include_tag_key true
    buffer_chunk_limit 128m
    buffer_path /var/log/fluentd-buffers/s3.buffer
  </store>
  <store>
    ...
  </store>
</match>
So what I would like to do is have something like a grep plugin:
<store>
  type grep
  <regexp>
    key type
    pattern client-action
  </regexp>
</store>
which would send matching logs to a separate S3 bucket from the one defined for all logs.
I am assuming that user-action logs are generated by your service, and that the system logs include Docker, Kubernetes, and systemd logs from the nodes.
I found your example YAML file in the official fluent GitHub repo.
If you check the folder in that link, you'll see two more files called kubernetes.conf and systemd.conf. These files have source sections where they tag their data.
The match section in fluent.conf matches **, i.e. all logs, and sends them to S3. You want to split your log types here.
Your container logs are tagged kubernetes.* in kubernetes.conf on this line.
So your config above turns into:
<match kubernetes.*>
  # type s3
  # user-log s3 bucket
  ...
and for system logs, match every other tag except kubernetes.*.
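A rough sketch of that split (USER_LOG_S3_BUCKET and the buffer paths are placeholders; Fluentd routes each event to the first <match> whose pattern fits the tag, so the specific kubernetes.** block must come before the catch-all **):
# user/service logs: container logs tagged kubernetes.* by kubernetes.conf
# (** also matches nested tag segments such as kubernetes.var.log.containers...)
<match kubernetes.**>
  type s3
  s3_bucket "#{ENV['USER_LOG_S3_BUCKET']}"
  s3_region "#{ENV['S3_BUCKET_REGION']}"
  s3_object_key_format %{path}%{time_slice}/user-log-%{index}.%{file_extension}
  format json
  time_slice_format %Y/%m/%d
  include_time_key true
  include_tag_key true
  buffer_path /var/log/fluentd-buffers/s3-user.buffer
</match>

# everything else: Docker, systemd and other node logs
<match **>
  type s3
  s3_bucket "#{ENV['S3_BUCKET_NAME']}"
  s3_region "#{ENV['S3_BUCKET_REGION']}"
  s3_object_key_format %{path}%{time_slice}/cluster-log-%{index}.%{file_extension}
  format json
  time_slice_format %Y/%m/%d
  include_time_key true
  include_tag_key true
  buffer_path /var/log/fluentd-buffers/s3.buffer
</match>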

Dockerized DCTM 7.3 and Dockerized DCTM REST 7.3 not able to retrieve global registry or its documents

My setup consists of
Documentum Content Server 7.3 (dctm-cs) running in a docker container (from EMC)
Documentum REST Services 7.3 (dctm-rest) running in a docker container (from EMC)
I am definitely able to get information from within dctm-cs by running queries against it with iapi, for example:
API> ?,c,select user_name from dm_user enable (return_top 5)
user_name
----------------------------------------
docu
ubuntudb
dm_superusers
dm_superusers_dynamic
dm_browse_all
(5 rows affected)
I am also able to run $ curl http://localhost:8080/dctm-rest/repositories.json from both the dctm-rest container and its host, and get the results:
{"id":"http://localhost:8080/dctm-rest/repositories","title":"Repositories","author":[{"name":"EMC Documentum"}],"updated":"2017-08-16T21:42:44.177+00:00","page":1,"items-per-page":1000,"total":1,"links":[{"rel":"self","href":"http://localhost:8080/dctm-rest/repositories.json"}],"entries":[{"id":"http://localhost:8080/dctm-rest/repositories/ubuntudb","title":"ubuntudb","summary":"ubuntudb","updated":"2017-08-16T21:42:44.178+00:00","published":"2017-08-16T21:42:44.178+00:00","links":[{"rel":"edit","href":"http://localhost:8080/dctm-rest/repositories/ubuntudb.json"}],"content":{"type":"application/json","src":"http://localhost:8080/dctm-rest/repositories/ubuntudb.json"}}]}
Attempting to $ curl http://localhost:8080/dctm-rest/repositories/ubuntudb.json, however, hangs indefinitely.
I have attempted to provide the default username and password via basic HTTP authentication, also with the same results.
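For reference, the basic-auth attempt looked roughly like this (dmadmin/password are just the values from the dfc.properties below, used here as placeholders, not necessarily valid repository credentials):
$ curl -u dmadmin:password http://localhost:8080/dctm-rest/repositories/ubuntudb.json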
The contents of the dfc.properties file in dctm-cs:
dfc.data.dir=/opt/dctm
dfc.tokenstorage.dir=/opt/dctm/apptoken
dfc.tokenstorage.enable=false
dfc.docbroker.host[0]=ubuntustateless
dfc.docbroker.port[0]=1489
dfc.crypto.repository=ubuntudb
dfc.session.secure_connect_default=try_secure_first
dfc.globalregistry.repository=ubuntudb
dfc.globalregistry.username=dm_bof_registry
dfc.globalregistry.password=AAAAEL9wp8c6k3K2UTQJwTYO5kMnE3rDrHJVDL+LijAg+zLk
The contents of the dfc.properties file in dctm-rest:
dfc.docbroker.host[0]=172.18.0.1
dfc.docbroker.port[0]=1489
#Add the global registry repository name to the following key.
dfc.globalregistry.repository=ubuntudb
#Add the username of the global registry user to the following key.
dfc.globalregistry.username=dmadmin
#Add an encrypted password value for the following key.
dfc.globalregistry.password=password
dfc.exception.include_id=false
dfc.exception.include_decoration=false
I have attempted to change the value of dfc.globalregistry.username to be the same as in dctm-cs, to no avail; the request still hangs.
I have also attempted to use both encrypted and plaintext values for dfc.globalregistry.password, in both dctm-cs and dctm-rest, also with no luck.

Is there a way to add additional configurable settings in OpsCenter 6.0.2 Lifecycle Manager config profiles?

I would really like to add the following settings to our spark-defaults.conf using OpsCenter 6.0.2 in order to avoid configuration drift. Is there a way to add these config items to the config profile template?
spark.cores.max 4
spark.driver.memory 2g
spark.executor.memory 4g
spark.python.worker.memory 2g
NOTE: As Mike Lococo has pointed out in the comments for this answer -- this answer may work to update the config profile values but will not result in those values being written to spark-defaults.conf.
The following is not a solution!
You can; you have to update the config profile via the LCM Config Profile API (https://docs.datastax.com/en/opscenter/6.0/api/docs/lcm_config_profile.html#lcm-config-profile).
First, identify the config profile that needs updating:
$ curl http://localhost:8888/api/v1/lcm/config_profiles
Get the href for the specific config profile that needs updating, request it, and save the response body to a file:
$ curl http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 > profile.json
Now, in the profile.json file you just saved, add or edit the key at json > spark-defaults-conf to include the following keys:
"spark-defaults-conf": {
"spark-cores-max": 4,
"spark-python-worker-memory": "2g",
"spark-ssl-enabled": false,
"spark-drivers-memory": "2g",
"spark-executor-memory": "4g"
}
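If you'd rather script that edit than hand-edit the file, a jq one-liner along these lines should work (assuming the json > spark-defaults-conf structure described above; the keys shown are a subset as an example):
$ jq '.json."spark-defaults-conf" += {"spark-cores-max": 4, "spark-executor-memory": "4g"}' profile.json > profile.tmp && mv profile.tmp profile.json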
Save the updated profile.json. Finally, execute an HTTP PUT to the same config profile URL, using the edited file as the request data:
$ curl -X PUT http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 -d @profile.json

JBAS010153: Node identifier property is set to the default value. Please make sure it is unique

I am getting the following WARN message when I start my host, which is one of the Host Controllers (HC) attached to the Domain Controller (DC).
[Server:server-two] 14:06:13,822 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 33) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
My host-slave.xml has the following config:
<server-identities>
    <!-- Replace this with either a base64 password of your own, or use a vault with a vault expression -->
    <secret value="c2xhdmVfdXNlcl9wYXNzd29yZA=="/>
</server-identities>
I suspect this config might be the reason, though I may be misunderstanding it; I couldn't find a node identifier property, only this default secret value, which I guess could be the cause of the WARN message.
However, I didn't explicitly tell the HC to look up host-slave.xml. The command I ran to start my HC is:
$ ./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
nnn.nn.nn.88 is my DC
Otherwise, please advise on the cause of the WARN message.
Please also let me know its implications, and the configuration required to resolve it and avoid any consequences that could follow from it.
I'm new to WildFly and noticed this warning when I started it standalone from Eclipse (I'm doing the following tutorial: https://wwu-pi.github.io/tutorials/lectures/eai/020_tutorial_jboss_project.html).
The fix was to add a node-identifier to the core-environment in the subsystem:
<subsystem xmlns="urn:jboss:domain:transactions:2.0">
<core-environment node-identifier="meindertwillemhoving">
<process-id>
<uuid/>
</process-id>
</core-environment>
<recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
</subsystem>
This is in file [wildfly]\standalone\configuration\standalone.xml.
This is the same answer as https://developer.jboss.org/message/880136#880136
According to WFLY-10541, if you are using WildFly 14.0.0 or newer, you can pass the following to the startup script to set the transaction node identifier:
-Djboss.tx.node.id=<some-unique-id>
Setting the node identifier to a unique value is only required for proper handling of XA transactions.
You can set it as follows in your XML configuration:
<subsystem xmlns="urn:jboss:domain:transactions:6.0">
<core-environment node-identifier="${jboss.tx.node.id}">
It needs to be a unique value up to 23 bytes long.
More about this here: http://www.mastertheboss.com/jboss-server/jboss-configuration/configuring-transactions-jta-using-jboss-as7-wildfly
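If you prefer not to edit the XML by hand, the same attribute can also be written with the JBoss CLI (a sketch for standalone mode; "node1" is just an example value):
$ bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=transactions:write-attribute(name=node-identifier,value=node1)
[standalone@localhost:9990 /] reload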
Building on @kaptan's answer, I added the following to the bottom of
bin/standalone.conf:
JAVA_OPTS="$JAVA_OPTS -Djboss.tx.node.id=`hostname -f`"
This way I don't have to remember to add -Djboss.tx.node.id= when starting WildFly by hand.
For this, <server-identities> is not the issue. In fact, it shouldn't be touched at all.
When JBoss is started in domain mode by domain.sh, there will by default be three servers: server-one, server-two, and server-three. When you run one more HC attached to the DC, the default servers that are in auto-start mode clash when the HC is started and attached to the DC by the following command:
$ ./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
or by having the host configuration at the HC (the default host.xml, unless you choose a different one):
<domain-controller>
    <remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
In order to solve this, we need to set auto-start to false and create a new server group. To that group we add the DC-created server and the HC-created server, and we can choose the same appropriate profile (either full-ha or full) for both servers across DC and HC.
When we start the group, after configuring the required heap size (including permgen space), both DC and HC come up, and in the DC you can see both of your created servers started in the new server group.
DC- Domain Controller
HC- Host Controller
To deploy, you need to upload the .ear or web archive in the Application Console. You cannot drop it into the deployments folder with a .dodeploy file as you would in standalone mode.
If you upload the next version of the same .ear, use the Replace option instead of Remove & Add during the upload process.
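As an alternative to the console upload, a CLI deploy also works in domain mode (a sketch; the archive path and server-group name are placeholders for your own):
$ bin/jboss-cli.sh --connect
[domain@localhost:9990 /] deploy /path/to/app.ear --server-groups=my-server-group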

Upload failed while using Jmeter ZK Plugin

I'm currently facing a problem when trying to upload a file while running the test through JMeter with the ZK plugin. The upload works fine without JMeter.
It displays a pop-up message in ZK:
Upload Aborted : (contentId is required)
Inside JMeter:
Thread Name: Thread Group 1-1
Sample Start: 2015-04-16 17:35:15 SGT
Load time: 2
Connect Time: 0
Latency: 0
Size in bytes: 2549
Headers size in bytes: 0
Body size in bytes: 2549
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: java.io.FileNotFoundException
Response message: Non HTTP response message: 13 4 2015.txt (The system cannot find the file specified)
Response headers: HTTPSampleResult fields: ContentType: DataEncoding: null
How to fix this problem?
Basically, ZK can return not-very-meaningful messages, so there can be different root causes for this issue.
Below are possible points in the deployment components' configuration; check them one by one:
First of all, check that the directory pointed to by java.io.tmpdir exists.
If you use Tomcat, java.io.tmpdir is set to $CATALINA_BASE/temp by default.
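A quick way to verify (assuming CATALINA_BASE is set in your shell; the mkdir only runs if the directory is missing):
$ ls -ld "$CATALINA_BASE/temp" || mkdir -p "$CATALINA_BASE/temp"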
Look into catalina.sh and check that the directory pointed to by $CATALINA_TMPDIR exists and has the corresponding permissions applied:
if [ -z "$CATALINA_TMPDIR" ] ; then
  # Define the java.io.tmpdir to use for Catalina
  CATALINA_TMPDIR="$CATALINA_BASE"/temp
fi
. . .
. . .
  -Dcatalina.base=\"$CATALINA_BASE\" \
  -Dcatalina.home=\"$CATALINA_HOME\" \
  -Djava.io.tmpdir=\"$CATALINA_TMPDIR\" \
  org.apache.catalina.startup.Bootstrap "$@" start
WEB-INF/zk.xml: max-upload-size value in the ZK configuration descriptor (5120 KB by default, which should be enough).
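For reference, raising the limit in WEB-INF/zk.xml looks roughly like this (the value is in KB; 10240 is just an example):
<zk>
    <system-config>
        <max-upload-size>10240</max-upload-size>
    </system-config>
</zk>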
WEB-INF/web.xml: max-file-size and max-request-size values in deployment descriptor:
<multipart-config>
    <!-- 52MB max -->
    <max-file-size>52428800</max-file-size>
    <max-request-size>52428800</max-request-size>
    <file-size-threshold>0</file-size-threshold>
</multipart-config>
conf/server.xml: maxPostSize value in the Connector section (the maximum size in bytes of the POST body that will be handled by the container's FORM URL parameter parsing):
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxPostSize="67589953" />
It seems we can only upload files that are inside jmeter/bin. I uploaded some files from jmeter/bin and the message was gone.
During recording you need to put the file you want to upload in the jmeter/bin folder. This is due to a limitation of browsers, which do not transmit the full path.
Reference: File upload fails during recording using JMeter, the first answer, by pmpm.