Is there an easy way to automate this so that a script (.sh) running hourly can check which files have already been FTPed, skip those, and only send the new ones? Thanks
Here is my current feed. I am not sure how rsync would be configured...
/usr/local/bin/ncftpput -f saxlogin.cfg /MTM/PH /mnt/zeus/scripts/xml/pph/$(date +"%Y.%m.%d")/*
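One approach, sketched below, is to keep a simple list of files that have already been sent next to the script and only pass new files to ncftpput. The sent-list location and the per-file loop are assumptions for illustration, not a drop-in replacement.

#!/bin/bash
# Sketch only: upload each file in today's directory that is not yet recorded
# in a local "sent" list, and record it after a successful ncftpput.
# SENT_LIST is a hypothetical location; adjust paths to your setup.
SRC="/mnt/zeus/scripts/xml/pph/$(date +"%Y.%m.%d")"
SENT_LIST="/var/tmp/pph-sent.list"

touch "$SENT_LIST"
for f in "$SRC"/*; do
    [ -f "$f" ] || continue
    # skip files that have already been uploaded
    grep -qxF "$f" "$SENT_LIST" && continue
    if /usr/local/bin/ncftpput -f saxlogin.cfg /MTM/PH "$f"; then
        echo "$f" >> "$SENT_LIST"
    fi
done

Alternatively, a tool such as lftp can mirror a directory over FTP and skip files that already exist on the server, which may be simpler than tracking state yourself.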
I need to ingest events for nightly yum update checks (using yum-cron) into a SIEM. Unfortunately, yum only logs events to yum.log when action is taken, for example updates or installations. No event is logged when you check for updates and there are none available. Auditors have also specified that ingesting events proving yum-cron ran is not enough, so I can't just import the events from the cron log.
I could run a script that runs yum check-update and pipes the output to a file (sketched below), then have rsyslog ingest lines from that file, but that is messy and not ideal. I also want it to be as easy to configure as possible, as it will have to be scripted so it can be set up on new instances quickly.
It is also a special distribution from a vendor and the logger command does not work with rsyslog on the distribution.
Is there an easy way to track, via a log, the fact that yum did run and that no packages were found for update, indicating that all packages are up to date?
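For reference, the check-update-to-file approach I mentioned would look roughly like the sketch below; yum check-update exits 0 when nothing is available and 100 when updates exist, so the exit code alone distinguishes the "all up to date" case. The log path and message wording here are arbitrary.

#!/bin/bash
# Rough sketch of the check-update-to-file idea. /var/log/yum-check.log and
# the message text are arbitrary choices.
LOG=/var/log/yum-check.log
yum -q check-update > /dev/null 2>&1
rc=$?
if [ "$rc" -eq 0 ]; then
    echo "$(date) yum-check: ran, no updates available, all packages up to date" >> "$LOG"
elif [ "$rc" -eq 100 ]; then
    echo "$(date) yum-check: ran, updates are available" >> "$LOG"
else
    echo "$(date) yum-check: failed with exit code $rc" >> "$LOG"
fi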
Another forum got me started down the path to a solution, and this is what I ended up doing to resolve the issue:
yum-cron supports email notifications; unfortunately, the SIEM we are using does not ingest events via email. However, looking through the yum-cron scripts, I saw that they redirect output to a temporary file which they then use to send the email notifications. I ended up editing the /etc/cron.daily/0yum.cron script to redirect output to /var/log/yum.log instead by changing:
} >> $YUMTMP 2>&1
to:
} >> /var/log/yum.log 2>&1
I then used the im_file module of rsyslog to ingest the yum.log and forward it to the SIEM.
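Since this has to be scripted for new instances, the rsyslog side can be dropped in with something like the sketch below; the facility, tag, state file name and SIEM address are placeholders that need to match your environment.

#!/bin/bash
# Sketch: write an imfile config into /etc/rsyslog.d and restart rsyslog.
# local6, the yum-cron: tag and siem.example.com:514 are placeholders.
cat > /etc/rsyslog.d/yum-siem.conf <<'EOF'
$ModLoad imfile
$InputFileName /var/log/yum.log
$InputFileTag yum-cron:
$InputFileStateFile yum-log
$InputFileSeverity info
$InputFileFacility local6
$InputRunFileMonitor
# forward everything picked up from yum.log to the SIEM over TCP
local6.* @@siem.example.com:514
EOF
service rsyslog restart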
Does anyone know if it's possible to have Rundeck (or another open source scheduler) kick off a job based on a file being detected on its filesystem?
For ProActive, an open source scheduler, we developed a small script that lets you do that: it checks a folder for changes over a selected period.
Directory Monitoring Script
Let me know if you have any issues.
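If you would rather stay with Rundeck, the same idea can be scripted by hand: a small watcher that triggers a Rundeck job through its run-job API whenever a new file appears. The sketch below assumes inotify-tools is installed; the URL, API version, job ID, token and option name are all placeholders.

#!/bin/bash
# Sketch: watch a directory and trigger a Rundeck job for each new file.
# Everything below (URL, API version, job UUID, token, option name) is a
# placeholder for illustration.
WATCH_DIR="/data/incoming"
RUNDECK_URL="https://rundeck.example.com"
JOB_ID="00000000-0000-0000-0000-000000000000"
TOKEN="changeme"

inotifywait -m -e create --format '%f' "$WATCH_DIR" | while read -r file; do
    curl -s -H "X-Rundeck-Auth-Token: $TOKEN" \
         --data-urlencode "argString=-file $file" \
         "$RUNDECK_URL/api/18/job/$JOB_ID/run"
done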
I'm trying to back up my entire collection of over 1000 work files, mainly text but also pictures and a few large (0.5-1 GB) audio recordings, to S3-compatible cloud storage (Dreamhost DreamObjects). I have tried to use boto-rsync to perform the first full 'put' with this:
$ boto-rsync --endpoint objects.dreamhost.com /media/Storage/Work/ \
> s3:/work.personalsite.net/ > output.txt
where '/media/Storage/Work/' is on a local hard disk, 's3:/work.personalsite.net/' is a bucket named after my personal web site for uniqueness, and output.txt is where I wanted a list of the files uploaded and error messages to go.
Boto-rsync grinds its way through the whole directory tree, but the constantly refreshing progress output for each file doesn't look so good when it's printed to a file. Still, as the upload is going, I tail output.txt and see that most files are uploaded, but some are only uploaded to less than 100%, and some are skipped altogether. My questions are:
Is there any way to confirm that a transfer is 100% complete and correct?
Is there a good way to log the results and errors of a transfer?
Is there a good way to transfer a large number of files in a big directory hierarchy to one or more buckets for the first time, as opposed to an incremental backup?
I am on Ubuntu 12.04 running Python 2.7.3. Thank you for your help.
You can encapsulate the command in a script and start it with nohup:
nohup script.sh
nohup automatically generates a nohup.out file where all the output of the script/command is captured.
To send the output to a specific log file instead, you can do:
nohup script.sh > /path/to/log
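If you also want error messages in the same log, redirect stderr as well and put the script in the background, for example:
nohup script.sh > /path/to/log 2>&1 &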
br
Eddi
I am writing a Bash deployment script on RHEL 5. The script runs great and sends out an email at the end of the run. However, if I detect any failure at the end of the script, I need to copy the log files back to the local server to attach them to the email.
The script can detect failure fine; how do I copy the log files back? I don't want to just cat the log files, as they can be huge.
Any suggestions?
Thanks
S
If I understand your problem correctly, you should use scp:
http://linux.die.net/man/1/scp
and here you can find how to automate the login so you can use it in a script
http://linuxproblem.org/art_9.html
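As a rough sketch, the failure branch of your script could pull the remote log back with scp using key-based authentication (set up as described in the link above) and then attach the local copy in your existing mail step. The host, user, key and paths below are placeholders.

# Sketch only: copy the remote log to the local server in the failure branch.
# remote.example.com, the deploy user, the key and both paths are placeholders.
scp -i ~/.ssh/deploy_key deploy@remote.example.com:/var/log/deploy.log /tmp/deploy.log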
I can't see any easy way of avoiding a second login with scp/sftp. If you're sure that it's only the log file that will be returned, you could do something like the following:
ssh -e none REMOTE SCRIPT | gzip -dc > LOGFILE
Inside SCRIPT you have something like gzip -c LOGFILE when it fails.
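For example, the remote SCRIPT could be a small wrapper along these lines (paths are placeholders); it only emits the gzipped log when the deployment fails, so nothing comes back over the pipe on success:

#!/bin/sh
# Hypothetical remote wrapper (the SCRIPT in the command above): run the real
# deployment, and only on failure stream the gzipped log to stdout so the
# caller's "gzip -dc > LOGFILE" receives it.
if ! /usr/local/bin/deploy.sh >> /var/log/deploy.log 2>&1; then
    gzip -c /var/log/deploy.log
fi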
I have a batch script that unzips some files from a folder and this script may be called several times.
For unzipping I use unzip.exe, and I log the result to a log file. For instance, this is what goes into the log file:
ECHO %DATE% - %TIME% >> Unzipped.log
ECHO ERROR LEVEL IS: !ERRORLEVEL! >> Unzipped.log
ECHO Error with file %1 >> Unzipped.log
My question is: can I run into a file lock on the "Unzipped.log" file if my batch script is called several times in a short time period?
I've tried to Google this but with no luck. The only time I have seen a problem is when I open the "Unzipped.log" file in Word; then my batch script won't write to it. When I have it open in Notepad/Notepad++ there is no problem writing to the log file.
Yes, you most definitely can get a failure due to file locking if a batch process attempts to open the file for writing while another process already has it open for writing. The two processes could be on the same machine, or they could be on different machines if you are dealing with a file on a shared network drive. Both processes could be batch processes, but they don't have to be.
It is possible to safely write to a log file "simultaneously" from multiple batch processes with a little bit of code to manage the locking of the file. See How do you have shared log files under Windows?
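The core of the technique from that answer looks roughly like the sketch below (batch-specific; the lock file location is a placeholder). An unused handle is redirected to a lock file, and because cmd opens redirected files without write sharing, a second instance's attempt fails, the error is hidden with 2>nul, and || loops back to retry:

@echo off
setlocal EnableDelayedExpansion
set "lock=%TEMP%\Unzipped.lock"

rem ... unzip.exe runs here ...
rem Capture unzip's exit code right away; a failed lock attempt below would
rem otherwise overwrite ERRORLEVEL.
set "rc=!ERRORLEVEL!"

:retry
rem Take the lock by redirecting unused handle 9 to the lock file; if another
rem instance already holds it, the redirection fails and we loop back.
2>nul (
  9>"%lock%" (
    >> Unzipped.log echo %DATE% - %TIME%
    >> Unzipped.log echo ERROR LEVEL IS: !rc!
    >> Unzipped.log echo Error with file %1
  )
) || goto :retry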