I inherited a server on which a large number of applications and files are installed.
Many of these are no longer in use, but I have no documentation of what is in production and what is not.
I should try to clean up and remove unused files and applications.
Since I have the Apache logs, can I process them with a log analyzer or some script to identify the files (or rather the folders) that are no longer in use, i.e. those that have had no hits since, say, January 2020?
Any help is appreciated.
In the end I solved it by simply using the grep command.
I used the Apache logs from January to now, and for each folder on the server I launched a command like:
cat access_log_20200* | grep "GET /folder"
Then I deleted the folders that did not appear in the logs.
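For completeness, here is a small sketch of how that per-folder check could be automated; the log location and document root are assumptions, so adjust them to your layout:
#!/usr/bin/env bash
# Count hits per top-level folder in the combined Apache access logs.
LOGS="/var/log/httpd/access_log_20200*"   # assumed log location/pattern
DOCROOT="/var/www/html"                   # assumed document root
for dir in "$DOCROOT"/*/; do
    name=$(basename "$dir")
    # grep -c prints "file:count" per log file; sum the counts across files
    hits=$(grep -c "GET /$name" $LOGS 2>/dev/null | awk -F: '{s+=$NF} END {print s+0}')
    echo "$hits  $name"
done | sort -n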
Related
Is it somehow possible to see how many times a specific file was requested from the server, or set up a system to see it in the future?
In my case, it is a JavaScript file on my server that is included on many different websites.
My setup is using LAMP.
You can see the requests in the Apache server's access log.
Simply scan it for requests to that file by hand, or write a small script.
So that JavaScript file is fetched by HTTP requests going to your Apache server.
That server keeps access log files (their location is configurable).
You then need to scan the access log file to find every access. You can use something like
grep yourfile.js /var/log/www/access.log | wc -l
(to count occurrences)
where /var/log/www/access.log is your access log file.
BTW, many log files are managed by logrotate. Details are system specific. Take that into account too.
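To also cover the rotated (and usually gzip-compressed) logs that logrotate leaves behind, something along these lines should work; the path is an assumption and varies by distribution (e.g. /var/log/apache2/ on Debian/Ubuntu):
# zgrep reads both plain and .gz files; -c prints a count per file
zgrep -c 'yourfile.js' /var/log/apache2/access.log*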
I have a server running Ubuntu 16.04, and apparently Easy Hosting Control Panel keeps creating backups, around 50 times a day, which fills the 50 GB of disk space and causes the server to crash.
The backups are created as multiple directories named Apache2.backupbyehcp inside the /etc directory.
I've tried deleting the backups one by one, but after a day they are back.
I want to disable or limit the backups created.
Any help is greatly appreciated.
This is caused by:
EHCP trying to recover the web server config each time it detects that the config is broken or the web server is not responding.
This can result in such unexpected/unwanted behaviour.
What to do:
First, check for problems in the web server configs, for example with tail -f /var/log/ehcp.log,
so that you can understand what is going wrong.
This is sometimes caused by incorrect custom web server configurations added by an admin or reseller. You can disable custom web server configs via the EHCP GUI -> Options.
(I strongly suggest finding the cause of this.)
If everything regarding the web server is okay, but you just need to disable this backup,
open install_lib.php in the ehcp directory, search for backupbyehcp and disable that line.
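If you also want to limit the disk space the backups consume while you track down the root cause (this is not part of the original advice; the name pattern and the 7-day retention are assumptions, so check with ls /etc | grep -i backupbyehcp first), a system crontab entry like this could prune old copies:
# Remove EHCP web server config backups in /etc that are older than 7 days, daily at 03:00
0 3 * * * root find /etc -maxdepth 1 -type d -iname '*backupbyehcp*' -mtime +7 -exec rm -rf {} +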
Hope this helps.
Is there any way to know programmatically when Apache Web Server and/or deployed application started/restarted?
You can fetch the mod_status page for Apache (usually at /server-status - see https://httpd.apache.org/docs/2.4/mod/mod_status.html).
Then you can parse the output to see what "Restart Time" says.
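For example, a minimal check from the shell could look like this (assuming mod_status is enabled and the status page is reachable at /server-status on localhost):
# Extract the "Restart Time" line from the mod_status HTML page
curl -s http://localhost/server-status | grep -o 'Restart Time:[^<]*'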
It depends on what OS we're talking about here.
Assuming it's Linux/Unix, the start time of a process can be output by the ps command. Note that, depending on the MPM, this will return multiple records of different ages for httpd; the base process will be the oldest and the parent of the others.
On Linux, this information is also exposed in the proc filesystem. Note that many of the files describing the process change during its lifetime, but the directory (/proc/$PID) and the exe link (/proc/$PID/exe) don't.
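A quick sketch of both approaches; the process name is an assumption (usually httpd on RHEL/CentOS and apache2 on Debian/Ubuntu):
# Full start timestamp of every Apache process; the oldest one is the parent
ps -o pid,lstart,cmd -C httpd
# The parent process's /proc directory; pgrep -o picks the oldest matching PID
ls -ld /proc/$(pgrep -o httpd)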
I am developing a MiniDLNA server to stream media over WiFi. Existing files are shown properly. However, when I add new files to the media folders, the changes are not propagated to MiniDLNA clients. I have also tried restarting the server, but it does not pick up the changes.
I changed inotify_interval = 60 but it still does not update files.db, which is the MiniDLNA media list database. If I delete this database and restart the server, it does show the changes.
Does anyone know what the problem might be?
$ minidlnad -h
…
-r forces a rescan
-R forces a rebuild
In summary, the most reliable way to have MiniDLNA rescan all media files is by issuing the following set of commands:
$ sudo minidlnad -R
$ sudo service minidlna restart
Client-side script to rescan the server
Often, however, MiniDLNA will be running on a remote server. Here is a client-side script to request a rescan on such a server:
#!/usr/bin/env bash
ssh -t server.on.lan 'sudo minidlnad -R && sudo service minidlna restart'
AzP already provided most of the information, but some of it is incorrect.
First of all, there is no such option as inotify_interval. The only option that exists is notify_interval, and it has nothing to do with inotify.
So to clarify, notify_interval controls how frequently the (mini)dlna server announces itself in the network. The default value of 895 means it will announce itself about once every 15 minutes, meaning clients will need at most 15 minutes to find the server. I personally use 1-5 minutes depending on client volatility in the network.
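For example, in /etc/minidlna.conf (the value is in seconds, so 300 means the server announces itself every 5 minutes):
notify_interval=300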
In terms of getting minidlna to find files that have been added, there are two options:
The first is equivalent to removing the file files.db and consists in restarting minidlna with the -R argument, which forces a rebuild: the database is recreated from scratch. Since version 1.2.0 there is also the -r argument, which performs a rescan instead: it preserves the existing database, dropping records for files that have disappeared and adding records for new ones.
The second is to rely on inotify events by setting inotify=yes and restarting minidlna. If inotify is set to no, the only way to update the file database is a forced rescan/rebuild.
Additionally, in order to have inotify working, the file-system must support inotify events, which is not the case in most remote file-systems. If you have minidlna running over NFS it will not see any inotify events because these are generated on the server side and not on the client.
Finally, even if inotify is working and is supported by the file-system, the user under which minidlna is running must be able to read the file, otherwise it will not be able to retrieve necessary metadata. In this case, the logfile (usually /var/log/minidlna.log) should contain useful information.
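A quick way to check both points; the service account name is an assumption (it is often minidlna on Debian-based systems) and the file path is a placeholder:
# Can the minidlna user read a newly added file?
sudo -u minidlna test -r /path/to/media/newfile.mkv && echo readable || echo not readable
# Recent log entries usually explain skipped files
tail -n 50 /var/log/minidlna.log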
MiniDLNA uses inotify, which is a functionality within the Linux kernel, used to discover changes in specific files and directories on the file system. To get it to work, you need inotify support enabled in your kernel.
The notify_interval option (notice the lack of a leading 'i'), as far as I can tell, is only used if you have inotify disabled. To use notify_interval (i.e. have the server poll the file system for changes instead of being notified of them automatically), you have to disable the inotify functionality.
This is how it looks in my /etc/minidlna.conf:
# set this to no to disable inotify monitoring to automatically discover new files
# note: the default is yes
inotify=yes
Make sure that inotify is enabled in your kernel.
If it's not enabled, and you don't want to enable it, a forced rescan is the way to force MiniDLNA to re-scan the drive.
I have recently discovered that minidlna doesn't update the database if the media file is a hardlink. If you want these files to show up in the database, a full rescan is necessary.
For example: if you have a file /home/movies/foo.mkv and a hardlink to it at /home/minidlna/video/foo.mkv, where /home/minidlna is your minidlna share, you will have to do a rescan before that file appears in the db (and subsequently on your DLNA client).
I'm still trying to find a way around this. If anyone has any input, it's most welcome.
There is a patch for the minidlna source code available on SourceForge that does not do a full rescan but a kind of incremental scan. That worked fine, but with some later version the patch broke. See here: Link to SF
Regards
Gerry
I solved it with a small script:
Every 15 seconds it checks the size of the directory (/media/seriesPI) and restarts the service if there are changes.
#!/bin/bash
function sizeFiles(){
for i in $(du /media/seriesPI/ | awk '{print $1}')
do
cad+=$i
done
}
sizeFiles
#first size
first=$cad
cad=''
while [ true ]
do
sizeFiles
echo "$first != $cad"
if [ "$first" != "$cad" ] ; then
echo "Directory size has changed!"
echo "Restart service MiniDLNA"
sudo service minidlna restart
#update new size
first=$cad
else
echo "There are no changes in the directory"
fi
echo "waiting 15 seconds..."
sleep 15
cad=''
done
Resolved with a root crontab entry:
10 * * * * /usr/bin/minidlnad -r
I would like to make a complete backup of my whole Joomla 1.5 based site from time to time. How would this ideally be done? Are there any common pitfalls? Note that I only have FTP access to the hosting server. Is there a step-by-step tutorial somewhere? I am using the latest Joomgallery and Kunena 1.0.9 (Legacy mode).
Maybe there is a good way to automate this?
There are two parts of the backup to worry about: the database and the files.
The first part is the database. It can be backed up using something like phpMyAdmin. If you don't have this available on your server already, it's not too hard to upload and get it going yourself. From there, you can just Export the entire database to a gzip file.
The second part is the code and uploaded files. The code base shouldn't change too often, so you could probably just make one backup of it. There are a number of ways to do it. The simplest is to download the entire folder via FTP, though if you're on Linux, I'm sure someone will know a single command line to get all the changed files (rsync?).
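As a sketch of that idea: with FTP-only access, lftp's mirror command can pull down just the changed files (the host, credentials and paths below are placeholders); rsync itself would need SSH rather than FTP access:
# Mirror the remote site into a local backup folder, fetching only newer files
lftp -u user,password -e "mirror --only-newer --delete /public_html /local/backup/joomla-files; quit" ftp.example.com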
The database is the main thing you have to worry about though: everything else should be able to be rebuilt just by reinstalling.
I think this: http://www.joomlapack.net/ is what you need. I use it myself and it works like a charm, both for backups and for moving my Joomla installations from developer sites to the live site.
Get an FTP synchronisation tool and keep an up-to-date copy of your site locally. Then you could run the batch script
mysqldump -hhost -uuser -p%1 schema > C:\backup.sql
to create a backup of your MySQL tables at various points in time.
Edit:
You would have to have MySQL Server installed on your local machine and the path to its bin directory in your PATH in order to run the mysqldump command without much hassle. -p%1 takes the password from the command line, as you wouldn't want to store passwords in your batch script.
If you only have FTP access you have a bit of a problem, because besides all the files you also have to back up the database. Without access to the database, a full backup won't do you any good.
Whatever backup strategy you choose, be sure it handles UTF-8 correctly. Joomla 1.5 stores all content as UTF-8, even when the database charset is set to 'iso-8859-1', so if the backup solution detects the database charset, characters like € or é will come out as mojibake such as â‚¬ or Ã© - not really what you want.
I absolutely endorse using Joomlapack - it works great. The optional remote tools allow you to initiate the backup from a Windows desktop machine: it performs the backup and downloads it. The remote tool has a scheduler, and you can also set it off to back up and download a list of sites.
Joomlapack also provides a file, kickstart.php, which you copy to your empty server account along with the backup; it automates the restore procedure. You do have to create an empty database with phpMyAdmin or similar, and you are given the opportunity to supply the database parameters (host, database, username, password) during the process.
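If you prefer the command line to phpMyAdmin for that step, creating the empty UTF-8 database could look something like this on MySQL 5.x (database name, user and password are placeholders):
# Create an empty UTF-8 database and a user for the restored site
mysql -u root -p -e "CREATE DATABASE joomla_site CHARACTER SET utf8; GRANT ALL ON joomla_site.* TO 'joomla_user'@'localhost' IDENTIFIED BY 'secret';"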
One pitfall I did run into with this though is that some common components can have absolute URLs in their configuration - e.g. SOBI2, Virtuemart. It's then just a matter of finding the appropriate configuration file, editing it and re-uploading it.
Another problem: one archive file (either ZIP or their JPA format) contained a filename with a "?" character in it (from a Linux server), and this caused a problem when installing it locally on a Windows WAMP stack - the extraction of the ZIP file failed, and the process did not complete cleanly.
I suggest using the automatic backup service at http://www.everlive.net
Update:
OK, here is some more information. EverLive.net is a website where you can create a free account. Enter your website details and you are ready to take your backups with just one click. Restore is also possible in the same way.
Further, you can use the automatic backup option to take backups at defined intervals. You can also use the website health check service to be informed if your website is not available.