.conf files - parsing a log message with rsyslog - config

I have a log file that is read in by a conf file like this (screenshot: log file being read by the conf file).
These logs are sent to a dedicated log viewer service that tags them with a certain severity. The problem at the moment is that, since all the different types of logs (Info, Debug, Warning, etc.) are stored in one file, they are all given the same severity. I have found this article about parsing log messages using rsyslog: https://somoit.net/linux/rsyslog-parsing-splitting-message-fields
Having little experience with these conf files, how can I parse each line from the log file after it has been read in through the input field?
Is declaring variables in conf files done as described in the article? e.g. set $!malware
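From the article, I imagine it would look something like this (the file path, tag, target host and field position below are just guesses on my part); is this the right direction?

module(load="imfile")

input(type="imfile"
      File="/var/log/myapp/app.log"
      Tag="myapp:"
      ruleset="parse_app")

ruleset(name="parse_app") {
    # guess: the severity word (Info, Debug, Warning, ...) is the second
    # space-separated field of each line; 32 is the ASCII code for a space
    set $!app_severity = field($msg, 32, 2);

    if ($!app_severity == "Warning") then {
        action(type="omfwd" Target="logviewer.example.com" Port="514" Protocol="tcp")
    }
}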

Related

Create log files with Serilog only if there are any logs

I want to write different logs to different log files with Serilog per API request.
The problem is that when the log files are configured within LoggerConfiguration(), they are created no matter whether there are any logs or not.
As a result I end up with many empty log files per request.
Does Serilog have the ability not to create a file if it has no logs to write to it? Or is there any other solution for this problem?

Apache NiFi - What happens when you run the GetFile processor without any downstream processor

I am a beginner with Apache NiFi and I want to move a file in my local filesystem from one location to another. When I used the GetFile processor to move files from the corresponding input directory and started it, the file disappeared. I haven't connected it to a PutFile processor. What exactly is happening here? Where does the file go if it disappears from the local directory I had placed it in? Also, how can I get it back?
GetFile has a property Keep Source File. If you have set it to true, the file is not deleted after it has been copied from the Input Directory to the Content Repository; the default is false, which is why your files are deleted. You must also have set the success relationship for auto-termination, otherwise GetFile won't run without any downstream connection. Your files have been discarded. Not sure whether this will work, but try the Data Provenance option and replay the content.
Have a look at this - GetFile Official Doc and Replaying a FlowFile

AWS CloudWatch Agent not uploading old files

During the initial migration to AWS CloudWatch logging I also want legacy log files to be synced. However, it seems that only the current active file (i.e. the one still being updated) will be synced. The old files are ignored even if they match the file name format.
So is there any easy way to upload legacy files?
Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
Short answer: you should be able to upload all files by merging them. Or create a new [logstream] section for each file.
Log files in /var/log are usually archived periodically, for instance by logrotate. If the current active file is named abcd.log, then after a few days files will be created automatically with names like abcd.log.1, abcd.log.2...
Depending on your exact system and configuration, they can also be compressed automatically (abcd.log.1.gz, abcd.log.2.gz, ...).
The CloudWatch Logs documentation defines the file configuration parameter as such:
file
Specifies log files that you want to push to CloudWatch Logs. File can point to a specific file or multiple files (using wildcards such as /var/log/system.log*). Only the latest file is pushed to CloudWatch Logs based on file modification time.
Note: using a glob path with a star (*) will therefore not be sufficient to upload historical files.
Assuming that you have already configured a glob path, you could use the touch command sequentially on each of the historical files to trigger their upload (a rough shell sketch follows the list of problems below). Problems:
you would need to guess when the CloudWatch agent has noticed each file before proceeding to the next
you would need to temporarily pause the current active file
zipped files are not supported, but you can decompress them manually
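For illustration, a rough shell sketch of that touch loop (the file names and the delay are only placeholders; as noted above, there is no reliable way to know when the agent has finished with each file):

# touch the oldest historical file first so uploads stay roughly in order
for f in /var/log/abcd.log.3 /var/log/abcd.log.2 /var/log/abcd.log.1; do
    # refresh the modification time so the agent treats this file as the latest one
    touch "$f"
    # crude pause to give the agent a chance to notice and push the file
    sleep 60
done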
Alternatively, you could decompress and then aggregate all historical files into a single merged file. In the context of the first example, you could run cat abcd.log.* > abcd.log.merged. This newly created file would be detected by the CloudWatch agent (it matches the glob pattern), which would consider it the active file. Problem: the previous active file could be updated simultaneously and take the lead before CloudWatch notices your merged file. If this is a concern, you could simply create a new [logstream] config section dedicated to the historical file.
Alternatively, just decompress the historical files then create a new [logstream] config section for each.
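For illustration, a rough sketch of what those extra sections could look like in the agent configuration format from the reference you linked (the group, stream and path names are placeholders for your setup):

[general]
state_file = /var/lib/awslogs/agent-state

# the current active file, unchanged
[abcd-current]
file = /var/log/abcd.log
log_group_name = abcd
log_stream_name = {instance_id}-current
initial_position = start_of_file

# one extra section per decompressed historical file
[abcd-history-1]
file = /var/log/abcd.log.1
log_group_name = abcd
log_stream_name = {instance_id}-history-1
initial_position = start_of_file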
Please correct any bad assumptions that I made about your system.

NFS server receives multiple inotify events on new file

I have 2 machines in our datacenter: a public server and an internal storage server, with part of the internal server's storage mounted on the public server over NFS.
The public server exposes part of the internal server's storage through FTP. When files are uploaded to the FTP server, they in fact end up on the internal storage. But when watching the inotify events on the internal server's storage, I notice the file gets written in chunks, probably due to buffering on the client side. The software on the internal server watches the inotify events to determine whether new files have arrived. But due to the way NFS writes the files, there is no good way of telling when a file is complete. Is there a way of telling the NFS client to write files in only one operation, or is there a workaround for this behaviour?
EDIT:
The events I get on the internal server when uploading a file of around 900 MB are:
./ CREATE big_buck_bunny_1080p_surround.avi
# after the CREATE i get around 250K MODIFY and CLOSE_WRITE,CLOSE events:
./ MODIFY big_buck_bunny_1080p_surround.avi
./ CLOSE_WRITE,CLOSE big_buck_bunny_1080p_surround.avi
# when the upload finishes i get a CLOSE_NOWRITE,CLOSE
./ CLOSE_NOWRITE,CLOSE big_buck_bunny_1080p_surround.avi
Of course, I could listen to the CLOSE_NOWRITE event, but the inotify documentation says:
close_nowrite
A watched file or a file within a watched directory was closed, after being opened in read-only mode.
Which is not exactly the same as 'the file is complete'. The only workaround I see is to use .part or .filepart files and rename them, once uploaded, to the original filename, and ignore the .part files in my storage watcher. The disadvantage is that I'll have to explain to customers how to upload with .part, and not many FTP clients support this by default.
Basically, if you want to check when the write operation is completed, monitor the event IN_CLOSE_WRITE.
IN_CLOSE_WRITE is "fired" when a file that was open for writing gets closed. Even if the file gets transferred in chunks, the FTP server will close the file only after the whole file has been transferred.
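For illustration, a minimal inotifywait sketch (from inotify-tools) that reacts only to close_write; the watched directory is a placeholder:

inotifywait -m -e close_write /data/ftp-incoming |
while read -r watched_dir event filename; do
    # inotifywait -m prints one "<dir> <event> <file>" line per event
    echo "upload finished: ${watched_dir}${filename}"
done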

Log4j - log file

Does anybody know how Apache Log4j handles streams?
Does it open and close the log file for each line, or does it simply keep the stream open and flush it?
One thing springs to mind. If log4j keeps the log file open, log rollover fails, because its file handle still points to the old log file. Opening, writing and closing means log4j would correctly grab the file handle for the new log file.
It doesn't open and close the log file for each line (this would cause too much overhead). Output can be buffered (check the documentation). You could create a custom appender that opens the file for appending for every line though, but what are you trying to accomplish?
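For illustration, a minimal sketch of those buffering settings, assuming Log4j 1.x with a properties configuration (the file path and buffer size are placeholders):

# keep the stream open and buffer writes instead of flushing on every line
log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=/var/log/myapp/app.log
log4j.appender.FILE.BufferedIO=true
log4j.appender.FILE.BufferSize=8192
log4j.appender.FILE.ImmediateFlush=false
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %-5p %c - %m%n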