Filebeat marking the log file inactive even when there is unread content in it

I am using Filebeat version 5.6.16 on a CentOS server to push logs to Logstash from the path /opt/news-bff/logs/Icis.Genesis*.log.
There are many matching log files:
-rw-r--r--. 1 root root 5049 Sep 25 10:30 Icis.Genesis.News.Bff.Api-2019092510.log
-rw-r--r--. 1 root root 1551 Sep 25 12:15 Icis.Genesis.News.Bff.Api-2019092512.log
-rw-r--r--. 1 root root 2650 Sep 25 13:55 Icis.Genesis.News.Bff.Api-2019092513.log
-rw-r--r--. 1 root root 39447 Sep 25 14:50 Icis.Genesis.News.Bff.Api-2019092514.log
-rw-r--r--. 1 root root 6191 Sep 25 15:31 Icis.Genesis.News.Bff.Api-2019092515.log
But the issue is that Filebeat opens a harvester for every file and, after five minutes, marks it as inactive without shipping any logs.
The /var/lib/filebeat/registry file is empty.
I provision multiple CentOS servers and install Filebeat on them using Puppet.
Five out of ten times, Filebeat picks up the log files without any issues; otherwise I get this problem.
If I delete the registry file and restart the service, it works fine.
The Filebeat prospector configuration looks like this:
filebeat:
  prospectors:
    - input_type: log
      paths:
        - /opt/news-bff/logs/Icis.Genesis*.log
      encoding: plain
      fields_under_root: false
      document_type: log
      scan_frequency: 10s
      harvester_buffer_size: 16384
      max_bytes: 10485760
Logs:
2019-09-25T12:16:01+01:00 INFO Harvester started for file: /opt/news-bff/logs/Icis.Genesis.News.Bff.Api-2019092512.log
2019-09-25T12:16:10+01:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.logstash.publish.read_bytes=36
2019-09-25T12:16:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:17:10+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:17:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:18:10+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:18:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:19:10+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:19:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=30
2019-09-25T12:20:10+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:20:40+01:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=36
2019-09-25T12:21:06+01:00 INFO File is inactive: /opt/news-bff/logs/Icis.Genesis.News.Bff.Api-2019092512.log. Closing because close_inactive of 5m0s reached.
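For reference, the last line above is governed by the prospector's close_inactive setting, which defaults to 5m on Filebeat 5.x: a harvester is closed when no new data has arrived within that window. A hedged sketch of raising it (the 10m value is an assumed example, and this only addresses the close message, not the empty-registry symptom):

filebeat:
  prospectors:
    - input_type: log
      paths:
        - /opt/news-bff/logs/Icis.Genesis*.log
      # close_inactive defaults to 5m; raise it if writes to a file
      # are far apart (10m is an assumed example value)
      close_inactive: 10m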

Related

Filebeat does not read all logs from directory

I am configuring Filebeat to send logs located in /var/log/myapp/batch_* to Elasticsearch.
Here is my Filebeat configuration:
# Version
filebeat version 7.11.0 (amd64), libbeat 7.11.0 [84c4d4c4034fcb49c1a318ccdc7311d70adee15b built 2021-02-08 22:42:11 +0000 UTC]
# Filebeat config
logging.metrics.period: 1h
logging.to_files: true
logging.files:
  rotateeverybytes: 16777216
  keepfiles: 7
  permissions: 0600
filebeat.inputs:
  - type: log
    enabled: true
    scan_frequency: 5m
    paths:
      - "/var/log/myapp/batch_*"
output.elasticsearch:
  hosts: ["server:9200"]
  index: "log_test_app-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "log_test_app"
setup.template.pattern: "log_test_app-*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
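A side note on scan_frequency: 5m: new files are only discovered once every five minutes, which is why the harvesters in the excerpt below start at 19:44:55, five minutes after the paths were configured at 19:39:55. If faster pickup matters, a sketch (30s is an assumed example value):

# inside the input definition
scan_frequency: 30s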
I only see two logs being sent, while the directory contains a total of eight logs:
2022-05-24T19:39:55.904Z INFO log/input.go:157 Configured paths: [/var/log/myapp/batch_*]
2022-05-24T19:39:55.904Z INFO [crawler] beater/crawler.go:141 Starting input (ID: 3328309751929357009)
2022-05-24T19:39:55.904Z INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 1
2022-05-24T19:44:55.905Z INFO log/harvester.go:302 Harvester started for file: /var/log/myapp/batch_emails.log
2022-05-24T19:44:55.905Z INFO log/harvester.go:302 Harvester started for file: /var/log/loyalty/batch_import.log
Here is a listing of the directory files:
ls -l /var/log/loyalty/batch_*
-rw-r--r-- 1 batch batch 154112 May 24 03:20 /var/log/myapp/batch_gifts.log
-rw-r--r-- 1 batch batch 112319 May 24 02:30 /var/log/myapp/batch_http.log
-rw-r--r-- 1 batch batch 7575342 May 24 02:30 /var/log/myapp/batch_vouchers.log
-rw-r--r-- 1 batch batch 4847849 May 24 19:30 /var/log/myapp/batch_ftp.log
-rw-r--r-- 1 batch batch 99413 May 24 03:40 /var/log/myapp/batch_category.log
-rw-r--r-- 1 root root 367207 May 24 19:50 /var/log/myapp/batch_emails.log
-rw-r--r-- 1 batch batch 479 Jan 1 23:00 /var/log/myapp/batch_history.log
-rw-r--r-- 1 batch batch 2420916 Jan 1 23:00 /var/log/myapp/batch_lists.php
-rw-r--r-- 1 batch batch 25779499 May 24 19:50 /var/log/myapp/batch_import.log
Is there something wrong with my setup? I tried setting ignore_older: 36h, but still only two log files are processed.
Thanks for the help.
Welcome to Stack Overflow, Emanuel :)
I believe you only want to read the log files (*.log), so you can make that explicit in the path pattern:
filebeat.inputs:
  - type: log
    enabled: true
    scan_frequency: 5m
    paths:
      - "/var/log/myapp/batch_*.log"
Keep Posted!!! Thanks!!!
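Two follow-up checks, based on the listing in the question: with ignore_older: 36h, the files last written on Jan 1 (batch_history.log, batch_lists.php) would be skipped by design; and running Filebeat in the foreground with debug logging shows exactly which paths each scan picks up. A sketch, assuming the standard config location (the selector names are an assumption; -d "*" enables all debug output):

# run in the foreground and log which files are scanned and harvested
filebeat -e -d "input,harvester" -c /etc/filebeat/filebeat.yml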

Filebeat does not complete on close_eof + --once

Using filebeat 7.5.2:
I'm using a Filebeat configuration with close_eof enabled, and I run Filebeat with the --once flag. I can see the harvester reaching EOF, but Filebeat keeps running.
Filebeat conf:
filebeat.inputs:
  - type: log
    close_eof: true
    enabled: true
    paths:
      - "${LOGS_PATH}"
    scan_frequency: 1s
    fields: {
      machine: "${HOST}"
    }
output.logstash:
  hosts: ["192.168.41.6:5044"]
  bulk_max_size: 1024
  timeout: 30s
  pipelining: 1
  workers: 1
And I run it using:
filebeat run --once -v -c "PATH TO CONF..."
And some logs from the filebeat instance:
...
2020-02-04T18:30:16.950Z INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-02-04T18:30:17.059Z INFO [publisher] pipeline/module.go:97 Beat name: logstash
2020-02-04T18:30:17.167Z WARN beater/filebeat.go:152 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.168Z INFO instance/beat.go:429 filebeat start running.
2020-02-04T18:30:17.168Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-02-04T18:30:17.168Z INFO registrar/migrate.go:104 No registry home found. Create: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat
2020-02-04T18:30:17.179Z INFO registrar/migrate.go:112 Initialize registry meta file
2020-02-04T18:30:17.192Z INFO registrar/registrar.go:108 No registry file found under: /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json. Creating a new registry file.
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:145 Loading registrar data from /tmp/tmp.BXJtfiaEzb/data/registry/filebeat/data.json
2020-02-04T18:30:17.193Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2020-02-04T18:30:17.193Z WARN beater/filebeat.go:368 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-02-04T18:30:17.193Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-04T18:30:17.194Z INFO log/input.go:152 Configured paths: [/tmp/tmp.BXJtfiaEzb/*.log]
2020-02-04T18:30:17.206Z INFO input/input.go:114 Starting input of type: log; ID: 13918413832820009056
2020-02-04T18:30:17.225Z INFO input/input.go:167 Stopping Input: 13918413832820009056
2020-02-04T18:30:17.225Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-04T18:30:17.225Z INFO log/harvester.go:251 Harvester started for file: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:384 Running filebeat once. Waiting for completion ...
2020-02-04T18:30:17.231Z INFO beater/filebeat.go:386 All data collection completed. Shutting down.
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:139 Stopping Crawler
2020-02-04T18:30:17.231Z INFO crawler/crawler.go:149 Stopping 1 inputs
2020-02-04T18:30:17.258Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:30:17.296Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... Only metrics here ...
2020-02-04T18:35:55.686Z INFO log/harvester.go:274 End of file reached: /tmp/tmp.BXJtfiaEzb/dcbgw-20200124080032_darkblue.log. Closing because close_eof is enabled.
2020-02-04T18:35:55.686Z INFO crawler/crawler.go:165 Crawler stopped
... MORE METRICS ...
2020-02-04T18:36:26.609Z ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 192.168.41.6:49662->192.168.41.6:5044: i/o timeout
2020-02-04T18:36:26.621Z ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-04T18:36:28.520Z ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-04T18:36:28.520Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://192.168.41.6:5044))
2020-02-04T18:36:28.521Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://192.168.41.6:5044)) established
... MORE METRICS ...
From here I'm outputting to Logstash 7.5.2 running in the same Ubuntu 18 VM. Running Logstash at log level trace does not show any errors.
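One setting that interacts with --once is the shutdown grace period: filebeat.shutdown_timeout is disabled by default, so Filebeat does not wait for the publisher to flush in-flight events on shutdown, and the i/o timeout errors above suggest events were still queued when the crawler stopped. A hedged sketch (the 30s value is an assumption, not a verified fix):

# top-level setting: wait for the publisher to drain before exiting
filebeat.shutdown_timeout: 30s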

Making Dockerized Flask server concurrent

I have a Flask server that I'm running on AWS Fargate. My task has 2 vCPUs and 8 GB of memory. My server is only able to respond to one request at a time. If I run 2 API requests at the same time, each taking 7 seconds, the first request takes 7 seconds to return and the second takes 14 seconds.
This is my Dockerfile (using this repo):
FROM tiangolo/uwsgi-nginx-flask:python3.7
COPY ./requirements.txt requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt
RUN python3 -m spacy download en
RUN apt-get update
RUN apt-get install wkhtmltopdf -y
RUN apt-get install poppler-utils -y
RUN apt-get install xvfb -y
COPY ./ /app
I have the following config file:
[uwsgi]
module = main
callable = app
enable-threads = true
These are my logs when I start the server:
Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:
#! /usr/bin/env bash
# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head
/usr/lib/python2.7/dist-packages/supervisor/options.py:298: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2019-10-05 06:29:53,438 CRIT Supervisor running as root (no user in config file)
2019-10-05 06:29:53,438 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2019-10-05 06:29:53,446 INFO RPC interface 'supervisor' initialized
2019-10-05 06:29:53,446 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2019-10-05 06:29:53,446 INFO supervisord started with pid 1
2019-10-05 06:29:54,448 INFO spawned: 'nginx' with pid 9
2019-10-05 06:29:54,450 INFO spawned: 'uwsgi' with pid 10
[uWSGI] getting INI configuration from /app/uwsgi.ini
[uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
;uWSGI instance configuration
[uwsgi]
cheaper = 2
processes = 16
ini = /app/uwsgi.ini
module = main
callable = app
enable-threads = true
ini = /etc/uwsgi/uwsgi.ini
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
hook-master-start = unix_signal:15 gracefully_kill_them_all
need-app = true
die-on-term = true
show-config = true
;end of configuration
*** Starting uWSGI 2.0.18 (64bit) on [Sat Oct 5 06:29:54 2019] ***
compiled with version: 6.3.0 20170516 on 09 August 2019 03:11:53
os: Linux-4.14.138-114.102.amzn2.x86_64 #1 SMP Thu Aug 15 15:29:58 UTC 2019
nodename: ip-10-0-1-217.ec2.internal
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.7.4 (default, Jul 13 2019, 14:20:24) [GCC 6.3.0 20170516]
Python main interpreter initialized at 0x55e1e2b181a0
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1239640 bytes (1210 KB) for 16 cores
*** Operational MODE: preforking ***
2019-10-05 06:29:55,483 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-10-05 06:29:55,484 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
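For context, enable-threads only allows the app to spawn its own Python threads; request concurrency comes from the worker pool. The effective config above shows cheaper = 2 with processes = 16 in preforking mode, i.e. the cheaper engine keeps just two workers alive until load ramps up, and two long, CPU-bound requests can serialize behind them. A hedged sketch of tuning this through the base image's documented environment variables (the values are assumptions sized for 2 vCPUs, not a verified fix):

# Dockerfile: keep more workers alive up front and cap the pool lower
ENV UWSGI_CHEAPER=4
ENV UWSGI_PROCESSES=8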

RabbitMQ not starting with message "init terminating in do_boot, noproc" on Ubuntu 18.04

I cannot seem to start or install my RabbitMQ server on Ubuntu 18.04 anymore. I tried to remove and install it again, but the install cannot finish because configuration fails. This is the result when I run sudo apt-get install --fix-broken:
Reading package lists... Done
Building dependency tree
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 61 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up rabbitmq-server (3.6.10-1) ...
Job for rabbitmq-server.service failed because the control process exited with error code.
See "systemctl status rabbitmq-server.service" and "journalctl -xe" for details.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
● rabbitmq-server.service - RabbitMQ Messaging Server
Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2018-08-22 09:16:51 EEST; 5ms ago
Process: 20997 ExecStartPost=/usr/lib/rabbitmq/bin/rabbitmq-server-wait (code=exited, status=70)
Process: 20996 ExecStart=/usr/sbin/rabbitmq-server (code=exited, status=0/SUCCESS)
Main PID: 20996 (code=exited, status=0/SUCCESS)
elo 22 09:16:48 ubuntu-dev systemd[1]: Starting RabbitMQ Messaging Server...
elo 22 09:16:49 ubuntu-dev rabbitmq[20997]: Waiting for 'rabbit#ubuntu-dev'
elo 22 09:16:49 ubuntu-dev rabbitmq[20997]: pid is 21001
elo 22 09:16:51 ubuntu-dev rabbitmq[20997]: Error: process_not_running
elo 22 09:16:51 ubuntu-dev systemd[1]: rabbitmq-server.service: Control process exited, code=exited status=70
elo 22 09:16:51 ubuntu-dev systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
elo 22 09:16:51 ubuntu-dev systemd[1]: Failed to start RabbitMQ Messaging Server.
dpkg: error processing package rabbitmq-server (--configure):
installed rabbitmq-server package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
Checking the log files doesn't provide much more information either. Here is the startup_err log file content:
init terminating in do_boot (noproc)
Crash dump is being written to: erl_crash.dump...done
And here is startup_log file content:
BOOT FAILED
===========
Error description:
noproc
Log files (may contain more information):
/var/log/rabbitmq/rabbit.log
/var/log/rabbitmq/rabbit-sasl.log
Stack trace:
[{gen,do_for_proc,2,[{file,"gen.erl"},{line,228}]},
{gen_event,rpc,2,[{file,"gen_event.erl"},{line,239}]},
{rabbit,ensure_working_log_handlers,0,
[{file,"src/rabbit.erl"},{line,842}]},
{rabbit,'-boot/0-fun-0-',0,[{file,"src/rabbit.erl"},{line,281}]},
{rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,417}]},
{init,start_em,1,[]},
{init,do_boot,3,[]}]
=INFO REPORT==== 22-Aug-2018::09:16:49.691453 ===
Error description:
noproc
Log files (may contain more information):
/var/log/rabbitmq/rabbit.log
/var/log/rabbitmq/rabbit-sasl.log
Stack trace:
[{gen,do_for_proc,2,[{file,"gen.erl"},{line,228}]},
{gen_event,rpc,2,[{file,"gen_event.erl"},{line,239}]},
{rabbit,ensure_working_log_handlers,0,
[{file,"src/rabbit.erl"},{line,842}]},
{rabbit,'-boot/0-fun-0-',0,[{file,"src/rabbit.erl"},{line,281}]},
{rabbit,start_it,1,[{file,"src/rabbit.erl"},{line,417}]},
{init,start_em,1,[]},
{init,do_boot,3,[]}]
{"init terminating in do_boot",noproc}
The other log files it claims to use are empty, for example rabbit#ubuntu-dev.log and rabbit#ubuntu-dev-sasl.log.
I also found this post, which says to check the hostname in the /etc/hostname file, but I checked and it's correct.
kazhu#ubuntu-dev:/var/log/rabbitmq$ cat /etc/hostname
ubuntu-dev
I also checked the RabbitMQ troubleshooting guide, which says to check the log folder permissions, and they look right to me:
kazhu#ubuntu-dev:/var/log/rabbitmq$ ll
total 48
drwxr-xr-x 2 rabbitmq rabbitmq 4096 kesä 14 06:16 ./
drwxrwxr-x 16 root syslog 4096 elo 22 00:09 ../
-rw-r--r-- 1 rabbitmq rabbitmq 0 kesä 14 06:16 'rabbit#ubuntu-dev.log'
-rw-r--r-- 1 rabbitmq rabbitmq 5247 kesä 14 06:16 'rabbit#ubuntu-dev.log.1'
-rw-r--r-- 1 rabbitmq rabbitmq 954 touko 28 08:36 'rabbit#ubuntu-dev.log.2.gz'
-rw-r--r-- 1 rabbitmq rabbitmq 768 touko 21 07:11 'rabbit#ubuntu-dev.log.3.gz'
-rw-r--r-- 1 rabbitmq rabbitmq 708 touko 16 00:12 'rabbit#ubuntu-dev.log.4.gz'
-rw-r--r-- 1 rabbitmq rabbitmq 955 touko 7 07:26 'rabbit#ubuntu-dev.log.5.gz'
-rw-r--r-- 1 rabbitmq rabbitmq 4264 huhti 22 00:07 'rabbit#ubuntu-dev.log.6.gz'
-rw-r--r-- 1 rabbitmq rabbitmq 0 huhti 17 15:58 'rabbit#ubuntu-dev-sasl.log'
-rw-r--r-- 1 rabbitmq rabbitmq 95 elo 22 09:16 startup_err
-rw-r--r-- 1 rabbitmq rabbitmq 1212 elo 22 09:16 startup_log
The guide also states that the Erlang crash dump file contains detailed information about the problem but requires Erlang expertise, which I don't have, so I decided to upload the file to my Dropbox for you to see.
Can somebody help me solve this? I've tried for some time myself but gave up because I cannot figure out what the problem is :/
I solved the problem with the help of my colleague. I had installed the newest Erlang and RabbitMQ separately, from external apt sources. Once I removed and purged everything related to rabbitmq and erlang, and removed the added apt sources too, I just ran sudo apt install rabbitmq-server; it pulled in the Erlang packages as dependencies, and everything has been working fine since.
I wanted to share this solution in case somebody else has the same problem.
UPDATE 9.12.2020:
Someone asked how I removed RabbitMQ and Erlang. I don't fully remember but I think I was following this guide: https://www.rabbitmq.com/install-debian.html.
The point is to remove the RabbitMQ and Erlang packages installed from the added repositories, together with their configuration:
sudo apt purge rabbitmq-server erlang
You might need to search for the rest of the erlang packages with
apt list | grep erlang
Then you need to remove the added apt repositories. On Ubuntu these usually live under the /etc/apt/sources.list.d/ folder; look for file names like rabbitmq and erlang. Make sure you are not deleting any other files!
After this, run sudo apt update so apt drops the removed repositories. Then just running sudo apt install rabbitmq-server should do the trick and install the Erlang packages as dependencies. Of course, installing this way you get a much older version than from the added repositories.
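Putting those steps together, a sketch of the cleanup (the repository file names are assumptions; check what actually exists in /etc/apt/sources.list.d/ before deleting anything):

# purge the externally installed packages
sudo apt purge rabbitmq-server erlang
apt list --installed | grep erlang    # purge any leftovers this shows
# remove the added repository definitions (file names are assumptions)
sudo rm /etc/apt/sources.list.d/rabbitmq*.list /etc/apt/sources.list.d/erlang*.list
# refresh the index and reinstall from the Ubuntu repositories
sudo apt update
sudo apt install rabbitmq-server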

How to correctly configure log4j.properties according to my design?

My desktop application log4j.properties file is:
## Log levels
## TRACE < DEBUG < INFO < WARN < ERROR < FATAL
log4j.rootLogger=INFO
#
## Appender Configuration
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
#
## Pattern to output the caller's file name and line number
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{${datestamp}} %-5p %c{1}:%L - %m%n
And I run this application using java -jar appName.jar > <path-to-log-dir>/logFile.log.
The output for this file is, for instance:
0 [main] INFO br.com.mentium.hrm.agent.Agent - Thread started at: Wed Nov 30 09:53:03 BRST 2016
3 [main] INFO br.com.mentium.hrm.agent.Agent - HRM Agent
3 [main] INFO br.com.mentium.hrm.agent.Agent -
3 [main] INFO br.com.mentium.hrm.agent.Agent - Polling server every 1 minute(s).
3 [main] INFO br.com.mentium.hrm.agent.Agent -
4 [main] INFO br.com.mentium.hrm.agent.Agent - ######################
4 [main] INFO br.com.mentium.hrm.agent.Agent -
5 [main] INFO br.com.mentium.hrm.agent.Agent - Execution at Wed Nov 30 09:53:03 BRST 2016
5 [main] INFO br.com.mentium.hrm.agent.Agent - Iteration number: 1
5 [main] INFO br.com.mentium.hrm.agent.Agent -
Here the first number on each line is, I guess, the time in milliseconds since the application was started.
I'd like to format the log's output as:
yyyy-MM-dd hh:mm:sss abbreviatedClassName (i.e., b.c.m.h.a.ClassName) - message
I know I need to change the ConversionPattern line, but no change I make to it seems to take effect.
What's wrong here?
You need to specify it like this. Note that this is not exactly your required format, but you can try it and adjust it to your exact requirement.
You can read more about pattern layout here.
log4j.appender.CONSOLE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p %c{1}:%L - %m%n
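Two things in the question's config may also explain why edits seem to have no effect, offered as a hedged observation: the root logger line names a level but no appender (log4j.rootLogger=INFO), so the CONSOLE appender may never be attached and the output shown could come from a default configuration elsewhere; and the pattern references ${datestamp}, which resolves to nothing unless defined as a system property. A sketch combining both fixes (%c{2} keeps the last two components of the logger name; the dotted-initials style b.c.m.h.a.ClassName is, as far as I know, only available in Log4j 2 via %c{1.}, not in log4j 1.x):

log4j.rootLogger=INFO, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
# full timestamp, level, last two components of the logger name
log4j.appender.CONSOLE.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %c{2} - %m%n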