Can we index only one field of a JSON log file in Splunk?

When running an application in EKS, the container logs are coming in the format of:
{
    "log": "2021-08-04 12:28:24,803INFO hostname com.application.RequestListener Sent response with request_id=1",
    "stream": "stdout",
    "time": "2021-08-04T12:28:24.803533339Z"
}
As I am only interested in the value of the log field, is there a way I can extract only:
2021-08-04 12:28:24,803INFO hostname com.application.RequestListener Sent response with request_id=1
I have tried setting source to log in outputs.conf; however, this does not seem to work and I still see the full JSON in the Splunk search head.
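One approach worth trying (my suggestion, not from the original post) is an index-time SEDCMD in props.conf on the indexer or heavy forwarder, which rewrites each raw event before it is indexed. A minimal sketch, where the sourcetype stanza name and the regex are assumptions you would adapt to your actual events:

[eks_container_logs]
# Keep only the value of the "log" field and drop the JSON wrapper.
# Stanza name and regex are assumptions; adjust both to your data.
SEDCMD-keep_log_only = s/^\{"log":"(.*)","stream".*$/\1/

After restarting the indexer, newly indexed events should contain only the log text; already-indexed data is not rewritten.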

Related

Found error 'CRASH REPORT Process' on RabbitMQ every 10 minutes

I found an error on RabbitMQ every 10 minutes. Please help me investigate this problem.
Error message:
2021-09-09 13:25:30.084 [error] <0.14464.32> CRASH REPORT Process <0.14464.32> with 0 neighbours crashed with reason: no function clause matching rabbit_mgmt_wm_node:find_type(rabbit@controller1, []) line 79
2021-09-09 13:25:30.085 [error] <0.14457.32> Ranch listener rabbit_web_dispatch_sup_15672, connection process <0.14457.32>, stream 1 had its request process <0.14464.32> exit with reason function_clause and stacktrace [{rabbit_mgmt_wm_no
I had the same issue with Zabbix monitoring the RabbitMQ server every minute, which generated a crash error with the same frequency.
The URL used by Zabbix to monitor contained a domain part in the node name, i.e. rabbit@my_host.zzz.aws instead of the actual node name as displayed by the console: rabbit@my_host. This explains why rabbit_mgmt_wm_node:find_type failed and crashed.
This was verified using curl as shown below:
curl -v -u user:passwd 'http://127.0.0.1:15672/api/nodes/rabbit@my_host?memory=true'
which returned a valid response, HTTP/1.1 200 OK, when the node name matched, and the crash/error when it did not.
Please refer to this thread:
https://groups.google.com/g/rabbitmq-users/c/N0EgrLn55XQ
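To find the exact node name to put in the monitoring URL, you can first ask the management API which nodes it knows about. A quick sketch (host and credentials are placeholders):

curl -s -u user:passwd 'http://127.0.0.1:15672/api/nodes' | grep -o '"name":"[^"]*"'

The monitoring URL should then use the name exactly as reported, without appending any domain suffix.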

How to import old log files into Graylog as input?

I am able to set up graylog-server and graylog-web, and to set up inputs for the generated logs of apache2, Tomcat, and other applications with the help of graylog-collector,
e.g.
apache-access {
    type = "file"
    path = "/var/log/apache2/access.log"
    outputs = "gelf-tcp,console"
}
tomcat-debug {
    type = "file"
    path = "/home/alok/packages/apache-tomcat-7.0.59/logs/mydomain.debug.log"
    outputs = "gelf-tcp,console"
}
How do I see logs from old log files in Graylog? I tried to set up graylog-collector for an old log file; Graylog is listening to it but not showing the content of the log file. If someone knows a way to achieve this, please share.
I am able to see my old log files (.log files) in graylog-web with the help of logstash.
I just installed logstash and created a simple logstash configuration file with the following content:
input {
    file {
        path => "/home/alok/Downloads/old_apache_access.log"
        start_position => "beginning"
    }
}
#filter {
#    add filter according to need
#}
output {
    gelf {
        host => "10.149.235.66"
    }
}
path is the path of the old log file that I want to import into Graylog.
start_position tells logstash from where the log lines should be read.
gelf outputs logs in Graylog's GELF format.
host is the address of the Graylog server.
Now I can run logstash to read the log file with the following command:
$/opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-simple.conf
Now I will add an input in Graylog for receiving logs from logstash. For that, in the main menu go to System >> Inputs.
Then choose GELF UDP, launch this newly selected input, give it a title, and finally click on the Launch button.
Now one can see the newly created input and click on Show received messages to see the logs.
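Before involving logstash at all, you can verify the new input is actually receiving data by hand-crafting a single GELF message and sending it over UDP. A quick sketch, assuming the input was launched on the default GELF UDP port 12201 (use whatever port you configured):

echo -n '{"version":"1.1","host":"manual-test","short_message":"hello graylog"}' | nc -u -w1 10.149.235.66 12201

If this message shows up under Show received messages, the input side is fine and any remaining problem is in the logstash configuration.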

How to configure a custom access-log format for Glassfish-4.1 (worked in v3.1.2.2, ignored in v4.1)

I am converting some in-house systems from Glassfish-3.1.2.2 (on Java 1.7) to Glassfish-4.1 (on Java 1.8). We need a custom access log format to capture some data not present in the default log format specifier.
Glassfish-4.1 seems to be ignoring the format specifier (and for that matter, all other custom settings) in the "access-log" element in the domain.xml file. These config options worked flawlessly in Glassfish-3.1.2.2.
Specifically, consider the following from a working Glassfish-3.1.2.2 system, in the "..../configs/domain.xml" file. Certain values have been redacted, but the actual text is not relevant.
<configs>
  <config name="server-config">
    <http-service access-logging-enabled="true">
      <access-log buffer-size-bytes="128000" write-interval-seconds="1" format="'%client.name% %datetime% %request% %status% %response.length% %session.com.redacted.redacted.User% %header.user-agent% %header.X-REDACTED%'"></access-log>
This works great in Glassfish-3.1.2.2. However, in Glassfish-4.1, the settings (format and write-interval-seconds) seem to be ignored (not sure how to test 'buffer-size-bytes').
I can see the custom access-log format string in the Glassfish-4.1 admin console: (https://host:4848/) -> Configurations -> server-config -> HTTP Service -> Access Logging -> Format.
I performed a few experiments (all failed).
I placed the format string into the "default-config" as well. This is contrary to the documentation (for GF-3, the "default-config" is used as a template to create new configs for new domains, and is NOT used by any running domain). As expected, this edit had no effect on the actual access log file (post service restart).
I edited the log format string from the admin web interface. I appended the static string "ABC123TEST", saved the config and restarted the server. Sure enough, the literal text "ABC123TEST" appears in the correct location in domain.xml, but it is totally ignored when the access logfile is written out.
Example of incorrect access log file (some data edited for secrecy):
"1.2.3.4" "NULL-AUTH-USER" "09/Jun/2015:10:59:10 -0600" "GET /logoff-action.do HTTP/1.1" 200 0
Correct/desired access log sample:
"1.2.3.4" "09/Jun/2015:11:00:01 -0600" "GET /logoff-action.do;jsessionid=0000000000000000000000000000 HTTP/1.1" 200 0 "REDACTED-USER-NAME" "AwesomeUserAgentStr/1.0" "REDACTED-X-HEADER-VALUE"

Upload failed while using JMeter ZK Plugin

I'm currently facing a problem when trying to upload a file after running JMeter with the zk-plugin. It works fine when uploading without running JMeter.
It displays a pop-up message in ZK:
Upload Aborted : (contentId is required)
Inside JMeter:
Thread Name: Thread Group 1-1
Sample Start: 2015-04-16 17:35:15 SGT
Load time: 2
Connect Time: 0
Latency: 0
Size in bytes: 2549
Headers size in bytes: 0
Body size in bytes: 2549
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: java.io.FileNotFoundException
Response message: Non HTTP response message: 13 4 2015.txt (The system cannot find the file specified)
Response headers: HTTPSampleResult fields: ContentType: DataEncoding: null
How to fix this problem?
Basically ZK can return not very meaningful messages, so there can be different root causes of this issue.
Look below for possible points in the deployment components' configuration and check them one by one:
First of all, check that the directory pointed to by java.io.tmpdir exists.
In case you use Tomcat, java.io.tmpdir will be set to $CATALINA_BASE/temp by default.
Look into catalina.sh and check that the directory pointed to by $CATALINA_TMPDIR exists and has the corresponding permissions applied:
if [ -z "$CATALINA_TMPDIR" ] ; then
  # Define the java.io.tmpdir to use for Catalina
  CATALINA_TMPDIR="$CATALINA_BASE"/temp
fi
. . .
. . .
-Dcatalina.base=\"$CATALINA_BASE\" \
-Dcatalina.home=\"$CATALINA_HOME\" \
-Djava.io.tmpdir=\"$CATALINA_TMPDIR\" \
org.apache.catalina.startup.Bootstrap "$@" start
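To confirm this first point quickly (my addition, not part of the original answer), check that the directory exists and is writable by the user Tomcat runs as; the user name tomcat below is an assumption:

# verify the Catalina temp dir exists and is writable by the Tomcat user
ls -ld "$CATALINA_BASE/temp"
sudo -u tomcat test -w "$CATALINA_BASE/temp" && echo writable || echo NOT writable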
WEB-INF/zk.xml: the max-upload-size value in the ZK configuration descriptor (5120 KB by default, which should be enough).
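For reference, a minimal zk.xml sketch raising that limit (the value is in KB; based on ZK's configuration reference, but treat the exact element nesting as an assumption to verify against your ZK version):

<system-config>
    <!-- allow uploads up to 10 MB (value in KB) -->
    <max-upload-size>10240</max-upload-size>
</system-config>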
WEB-INF/web.xml: max-file-size and max-request-size values in deployment descriptor:
<multipart-config>
    <!-- 52MB max -->
    <max-file-size>52428800</max-file-size>
    <max-request-size>52428800</max-request-size>
    <file-size-threshold>0</file-size-threshold>
</multipart-config>
conf/server.xml: maxPostSize value in Connector section (the maximum size in bytes of the POST which will be handled by the container FORM URL parameter parsing):
<Connector port="80" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxPostSize="67589953" />
It seems like we can only upload files that are inside our jmeter/bin. I uploaded some files from inside jmeter/bin and the message is gone.
During recording you need to put the file you want to upload in the jmeter/bin folder. This is due to limitations of browsers, which do not transmit the full path.
Reference: File upload fails during recording using JMeter, the first answer by pmpm.

Send a request to Google's geocoding API from the terminal

I'm trying to geocode a lot of data. I have a lot of machines across which to spread the load (so that I won't go over the 2,500 requests per IP address per day). I am using a script to make the requests with either wget or cURL. However, both wget and cURL yield the same "request denied" message. That being said, when I make the request from my browser, it works perfectly. An example request is:
wget http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&sensor=true
And the resulting output is:
[1] 93930
05:00 PM ~: --2011-12-19 17:00:25-- http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA
Resolving maps.googleapis.com... 72.14.204.95
Connecting to maps.googleapis.com|72.14.204.95|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/json]
Saving to: `json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA'
[ <=> ] 54 --.-K/s in 0s
2011-12-19 17:00:25 (1.32 MB/s) - `json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA' saved [54]
The file it wrote to only contains:
{
"results" : [],
"status" : "REQUEST_DENIED"
}
Any help is much appreciated.
The '&' character that separates the address and sensor parameters isn't being passed along to the wget command; instead it is telling your shell to run wget in the background. The resulting query is missing the required sensor parameter, which should be set to true or false depending on whether the request comes from a device with a location sensor.
wget "http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&sensor=false"