Journald SystemMaxUse when machine-id changes - log-rotation

I am using journald on an embedded system, logging to a persistent drive. The maximum amount of stored logs is limited using the config parameter SystemMaxUse=, and log rotation is enabled. This works fine until the systemd machine-id changes during a software update (we flash a full system image during updates, which is very typical for embedded systems).
With the new machine-id, journald creates a new log folder and ignores all previous folders. Hence the rotation mechanism ignores logs from other machine-ids and my quota is exceeded.
Does anybody know how to include the logs of other machine-ids in journald log rotation? Thanks...
Side note:
With journalctl --merge I can read logs from other machine-ids stored in the same journald log path; however, the --merge option is ignored by cleanup commands such as journalctl --rotate and journalctl --vacuum-size=1G.
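Since journald only accounts for and rotates the directory matching the current machine-id, one workaround is to delete stale machine-id directories right after flashing. This is a minimal sketch, not a built-in journald feature; the journal path is the standard persistent location, and how you hook it into the update process (e.g. a oneshot systemd unit) is up to you:
#!/bin/sh
# Remove journal directories left over from previous machine-ids.
current="$(cat /etc/machine-id)"
for dir in /var/log/journal/*/; do
    id="$(basename "$dir")"
    # keep the active machine-id and the "remote" directory, if present
    if [ "$id" != "$current" ] && [ "$id" != "remote" ]; then
        rm -rf "$dir"
    fi
done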

Related

Apache custom module, where to put its own log file?

I have an Apache module that acts as a security filter, allowing requests to pass or not. This is a custom-made module; I don't want to use any existing module.
I have actually two questions:
The module has its own log file. I'm thinking that the best location would be /var/log/apache2/, but since the Apache process runs as the www-data user, it cannot create files in that path. I want a solution for the log file that is minimally intrusive (in terms of security) for a typical web server. Where would be the best place, and what kind of security attributes should be set?
The module communicates with another process using pipes. I would like to spawn this process from the Apache module only when I need it. Where should I locate this binary, and how should I set its privileges so that it is as unintrusive as possible?
Thanks,
Cezane.
Apache starts under the superuser first and performs the module initialization (calling the module_struct::register_hooks function). There you can create the log files and either chown them to www-data or keep the file descriptors open in order to use them later from the forked and setuid'ed worker processes.
(And if you need an alternative, I think it's also possible to log with syslog and configure it to route your log messages to your log file).
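If you prefer to prepare the file outside the module, the same effect can be achieved from the shell before starting Apache (a sketch; the file name mymodule.log is hypothetical):
# create the log as root, then hand it over to the worker user
sudo touch /var/log/apache2/mymodule.log
sudo chown www-data:www-data /var/log/apache2/mymodule.log
sudo chmod 640 /var/log/apache2/mymodule.log   # owner rw, group r, no world access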
Under the worker process you are already running as the www-data user, so there isn't much you can do to further secure the execution. For example, AFAIK, you can't setuid to yet another user or chroot to protect the filesystem.
What you can do to improve security is use a mandatory access control framework. For example, with AppArmor you can tell the operating system which binaries your Apache module may execute, stopping it from executing any unwanted binaries. And you can limit that binary's filesystem access, preventing it from touching www-data files that don't belong to it.
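As an illustration, the relevant rules in an AppArmor profile might look roughly like this (a fragment only; the helper and log paths are hypothetical, and a real profile needs the usual includes and abstractions):
/usr/sbin/apache2 {
  # the only binary the module is allowed to spawn (runs under its own profile)
  /usr/local/lib/mymodule/helper Px,
  # write access to the module's own log, nothing else under /var/log
  /var/log/apache2/mymodule.log w,
}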

Unison doesn't copy SYSTEM perm (cygwin/windows)

After using CrashPlan for a while, I noticed that several files weren't being backed up. The files are synced via Unison (through Cygwin) with another PC, and while the *nix permissions are copied correctly, the mirrored file does not have SYSTEM as a user (in Windows). Therefore, CrashPlan can't back it up. Both client and server are running Cygwin.
What's the best solution? Can I copy this permission as well with unison? Can I do it with a script (in cygwin or cmd)?
Thanks
Sander
EDIT: To fix it in the short term I ran an icacls command, but I'm still looking for a way to copy the ACLs via Unison whilst syncing.
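For reference, the short-term fix was along these lines: granting SYSTEM full control recursively so CrashPlan can read the whole tree (the path is hypothetical; run from an elevated prompt, cmd or a Cygwin shell both work):
icacls "C:\synced" /grant "SYSTEM:(OI)(CI)F" /T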
Relevant section from the Unison Manual:
Permissions
Synchronizing the permission bits of files is slightly tricky when two different filesystems are involved (e.g., when synchronizing a Windows client and a Unix server). In detail, here's how it works:
When the permission bits of an existing file or directory are changed, the values of those bits that make sense on both operating systems will be propagated to the other replica. The other bits will not be changed.
When a newly created file is propagated to a remote replica, the permission bits that make sense in both operating systems are also propagated. The values of the other bits are set to default values (they are taken from the current umask, if the receiving host is a Unix system).
For security reasons, the Unix setuid and setgid bits are not propagated.
The Unix owner and group ids are not propagated. (What would this mean, in general?) All files are created with the owner and group of the server process.
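Given the above, one possible workaround is to stop Unison from propagating permission bits at all, so that files created on the Windows side simply inherit the destination folder's ACLs (including SYSTEM). perms and dontchmod are real Unison preferences, but the roots here are hypothetical and this is only a sketch:
# sync without touching permission bits; new files inherit destination ACLs
unison /cygdrive/c/synced ssh://otherpc//cygdrive/c/synced -perms 0 -dontchmod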

oracle udump trc file size issue

My project uses an Oracle DB hosted on a Unix machine. The issue is that the trace files generated at the udump location contain loggers from my custom code as well (the custom code loggers come from Java callouts that are loaded into the DB with loadjava).
Now every time I use that module, the udump folder is flooded with 3 new .trc files that contain the default Oracle logs as well as my custom code logs.
I want to disable the logs that are generated from my code.
So far I have tried writing a custom log4j.properties, loading it with loadjava and using it from my code. In that properties file I pointed file and console handlers to a custom location on the Unix machine other than the udump location. But the custom logs still appear only in the udump location, and there are no logs at the new location I configured in the properties file.
I also tried setting logging.trace=false in the logging.properties file of the Oracle JVM.
I have checked a few SQL queries that can disable session trace; they identify around 70 sessions. I just want to disable the Java logs, so I would like to know whether it's possible to find the session that my Java logs use and disable the trace for it.
I am using Oracle 9i and Java 1.4.
I need to stop the custom logs from ending up in the udump location. The solution should also be generally applicable, as my application is deployed to multiple environments (test, stage, prod).
Any hint would be very helpful.
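A sketch of the session-trace approach mentioned above, disabling SQL trace for a single identified session (sys.dbms_system exists in 9i, but the SID and SERIAL# below are placeholders you would first look up in v$session):
# bash wrapper around sqlplus; requires SYSDBA privileges
sqlplus -s "/ as sysdba" <<'EOF'
-- 42 / 1234 are placeholder SID and SERIAL# values from v$session
EXEC sys.dbms_system.set_sql_trace_in_session(42, 1234, FALSE);
EOF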

What's the performance impact of a large Apache access.log?

If a log file like access.log or error.log gets very large, will its size impact the performance of Apache or of users accessing the site? From my understanding, Apache doesn't read the entire log into memory; it just uses a file handle to append to it. Right? If so, I shouldn't have to remove the logs manually whenever they get large, apart from filesystem concerns. Please correct me if I'm wrong. Or is there any Apache log I/O issue I should take care of when running it?
Thanks very much
Well, I totally agree with you. To my understanding, Apache accesses the log files through file handles and just appends new messages at the end of the file. That's why a huge log file makes no difference as far as writing to it is concerned. But if you want to open the file or read it with some kind of log-monitoring tool, then the huge size will slow down the process of reading it.
So I would suggest you use log rotation for an overall better end result.
This suggestion comes directly from the Apache web site:
Log Rotation
On even a moderately busy server, the quantity of information stored in the log files is very large. The access log file typically grows 1 MB or more per 10,000 requests. It will consequently be necessary to periodically rotate the log files by moving or deleting the existing logs. This cannot be done while the server is running, because Apache will continue writing to the old log file as long as it holds the file open. Instead, the server must be restarted after the log files are moved or deleted so that it will open new log files.
From the Apache Software Foundation site
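In practice, the move-then-restart step from the quote above looks like this (a sketch assuming a Debian-style layout; a graceful restart is enough to make Apache reopen its logs):
# rotate by hand: move the old log aside, then tell Apache to reopen its logs
sudo mv /var/log/apache2/access.log /var/log/apache2/access.log.1
sudo apachectl graceful   # finishes in-flight requests, then opens a new log file
Alternatively, piped logging through Apache's bundled rotatelogs program avoids the restart entirely.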

Why is the minidlna database not being refreshed?

I am setting up a MiniDLNA server to stream media over WiFi. Existing files are shown properly; however, when I add new files to the media folders, the changes are not picked up by MiniDLNA clients. I have also tried restarting the server, but that does not reflect the changes either.
I changed inotify_interval = 60 but it's still not updating files.db, which is the MiniDLNA media-list database. If I delete this database and restart the server, it does show the changes.
Does anyone know what the problem might be?
$ minidlnad -h
…
-r forces a rescan
-R forces a rebuild
In summary, the most reliable way to have MiniDLNA rescan all media files is by issuing the following set of commands:
$ sudo minidlnad -R
$ sudo service minidlna restart
Client-side script to rescan server
Often, however, MiniDLNA will be running on a remote server. Here is a client-side script to request a rescan on such a server:
#!/usr/bin/env bash
ssh -t server.on.lan 'sudo minidlnad -R && sudo service minidlna restart'
AzP already provided most of the information, but some of it is incorrect.
First of all, there is no such option as inotify_interval. The only option that exists is notify_interval, and it has nothing to do with inotify.
So to clarify: notify_interval controls how frequently the (mini)dlna server announces itself on the network. The default value of 895 seconds means it announces itself about once every 15 minutes, so clients will need at most 15 minutes to find the server. I personally use 1-5 minutes, depending on how volatile the clients in the network are.
In terms of getting minidlna to find files that have been added, there are two options:
The first is equivalent to removing the file files.db and consists in restarting minidlna with the -R argument, which forces a rebuild: the database is built again from scratch. Since version 1.2.0 there is also the -r argument, which performs a rescan: it preserves the existing database and drops and adds records as needed.
The second is to rely on inotify events by setting inotify=yes and restarting minidlna. If inotify is set to no, the only way to update the file database is a forced full rescan.
Additionally, in order to have inotify working, the file-system must support inotify events, which is not the case in most remote file-systems. If you have minidlna running over NFS it will not see any inotify events because these are generated on the server side and not on the client.
Finally, even if inotify is working and supported by the file system, the user under which minidlna runs must be able to read the file, otherwise it will not be able to retrieve the necessary metadata. In this case, the log file (usually /var/log/minidlna.log) should contain useful information.
MiniDLNA uses inotify, which is a functionality within the Linux kernel, used to discover changes in specific files and directories on the file system. To get it to work, you need inotify support enabled in your kernel.
The notify_interval option (notice the lack of a leading 'i'), as far as I can tell, is only used if you have inotify disabled. To use notify_interval (i.e. have the server poll the file system for changes instead of being notified of them automatically), you have to disable the inotify functionality.
This is how it looks in my /etc/minidlna.conf:
# set this to no to disable inotify monitoring to automatically discover new files
# note: the default is yes
inotify=yes
Make sure that inotify is enabled in your kernel.
If it's not enabled and you don't want to enable it, a forced rescan is the way to make MiniDLNA re-scan the drive.
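A quick way to check for kernel inotify support: these files exist only when inotify is compiled in.
$ ls /proc/sys/fs/inotify/
max_queued_events  max_user_instances  max_user_watches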
I have recently discovered that minidlna doesn't update the database if the media file is a hardlink. If you want these files to show up in the database, a full rescan is necessary.
For example: if you have a file /home/movies/foo.mkv and a hardlink /home/minidlna/video/foo.mkv, where /home/minidlna is your minidlna share, you will have to do a rescan before that file appears in the DB (and subsequently on your DLNA client).
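To reproduce the behaviour (a sketch using the example paths above):
# creating a hardlink inside the share does not trigger a database update
ln /home/movies/foo.mkv /home/minidlna/video/foo.mkv
# only a forced rebuild makes the file show up
sudo minidlnad -R && sudo service minidlna restart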
I'm still trying to find a way around this. If anyone has any input, it's most welcome.
There is a patch for the minidlna source code available on SourceForge that does not do a full rescan but a kind of incremental scan. That worked fine, but with some later version the patch broke. See the link on SourceForge.
Regards
Gerry
I have solved it with a small script: every 15 seconds it checks the size of the directory (/media/seriesPI) and restarts the service if the size has changed.
#!/bin/bash
# Build a string of the per-directory sizes under /media/seriesPI;
# if any directory's size changes, the resulting string changes too.
sizeFiles() {
    cad=''
    for i in $(du /media/seriesPI/ | awk '{print $1}'); do
        cad+=$i
    done
}

sizeFiles
first=$cad   # baseline snapshot

while true; do
    sizeFiles
    echo "$first != $cad"
    if [ "$first" != "$cad" ]; then
        echo "Directory size has changed!"
        echo "Restarting MiniDLNA service"
        sudo service minidlna restart
        first=$cad   # update the baseline
    else
        echo "There are no changes in the directory"
    fi
    echo "Waiting 15 seconds..."
    sleep 15
done
Resolved with a root crontab entry that forces a rescan at 10 minutes past every hour:
10 * * * * /usr/bin/minidlnad -r