rsync option explanation? - backup

I've seen the switches below on an rsync script and I just wondered if someone could break them down for me...
rsync --chmod=ugo=rwX
ugo?
rwX (read-write-Execute - why the capitalisation on Execute?)
--chmod=CHMOD affect file and/or directory permissions
Also what is "don't cross filesystem boundaries" for the -x option?
-x, --one-file-system don't cross filesystem boundaries
Many Thanks

It is the syntax of chmod and has nothing to do with rsync itself: give user (u), group (g) and other (o) read (r) and write (w) access, and set execute (X) only where it makes sense, i.e. on directories and on files that already have execute set for some user. That is why it is capitalised: a lowercase x would force execute on every file.
Since other file systems can be mounted below the source directory, the -x option limits rsync to the file system it started on, so it will not descend into those mount points.
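Putting both together, a command along these lines (the paths are just placeholders) copies one file system's worth of data while normalising permissions:
# stay on the file system /data lives on, give everyone rw, and add execute
# only on directories and on files that are already executable
rsync -a -x --chmod=ugo=rwX /data/ /backup/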

Related

PostgreSQL Query To Create A Directory

Files are being written to a directory using the COPY query:
Copy (SELECT * FROM animals) To '/var/lib/postgresql/data/backups/2020-01-01/animals.sql' With CSV DELIMITER ',';
However if the directory 2020-01-01 does not exist, we get the error
could not open file "/var/lib/postgresql/data/backups/2020-01-01/animals.sql" for writing: No such file or directory
The PostgreSQL server is running inside a Docker container with the volume mapping /mnt/backups:/var/lib/postgresql/data/backups.
The Copy query is being sent from a Node.js app outside of the Docker container.
The mapped host directory /mnt/backups was created by Docker Compose and is owned by root, so the Node.js app sending the COPY query is unable to create the missing directories due to insufficient permissions.
The backup file is meant to be transferred out of the Docker container to the Docker host.
Question: Is it possible to use an SQL query to ask PostgreSQL 11.2 to create a directory if it does not exist? If not, how will you recommend the directory creation be done?
Using Node.js 12.14.1 on Ubuntu 18.04 host. Using PostgreSQL 11.2 inside container, Docker 19.03.5
An easy way to solve it is to create the file directly on the client machine. Using STDOUT with COPY you can redirect the query output to the client's standard output, which you can capture and save to a file. For instance, using psql on the client machine:
$ psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > file.csv
Creating an output directory in case it does not exist:
$ mkdir -p /mnt/backups/2020-01/ && psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > /mnt/backups/2020-01/file.csv
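If psql is not installed on the host, the same idea works through docker exec; here postgres_db is only a placeholder for whatever your Compose service's container is actually called:
$ mkdir -p /mnt/backups/2020-01/ && docker exec -i postgres_db psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > /mnt/backups/2020-01/file.csv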
On a side note: try to avoid exporting files onto the database server. Although it is possible, I consider it bad practice. Doing so, you either write a file into the postgres system directories or give the postgres user permission to write somewhere else, and neither is something you should be comfortable with. Export data directly to the client, either using COPY as I mentioned or following the advice from @Schwern. Good luck!
Postgres has its own backup and restore utilities which are likely to be a better choice than rolling your own.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to backup an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the “custom” format (-Fc) and the “directory” format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.
A simple backup rotation script might look like this:
#!/bin/sh
table='animals'
url='postgres://username@host:port/database_name'
date=`date -Idate`
file="/path/to/your/backups/$date/$table.sql"
mkdir -p `dirname $file`
pg_dump $url -w -Fc --table=$table -f $file
To avoid hard-coding the database password, -w means pg_dump will not prompt for a password and will instead look for a password file. Or you can use any of the many other Postgres authentication options.
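For reference, the password file is ~/.pgpass in the home directory of the user running pg_dump; each line is host:port:database:username:password (the values below are placeholders), and the file must not be group- or world-readable:
$ echo 'host:5432:database_name:username:password' > ~/.pgpass
$ chmod 600 ~/.pgpass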

How do you set permission 777 on serverfree.com without SSH?

Can anyone please tell me how you set permission 777 on serverfree.com? I have looked around and there is no option to set permissions, and I am unable to set them via the web-based SSH.
Everything else is fine on serverfree.com, but I am unable to set up a cron job. Someone told me it is a permission issue, but I don't know how to set permissions on serverfree.com without SSH.
Usually, you can set the permissions via your FTP client.
e.g. in FileZilla there is an option "File permissions..." where you can set the permission values for each file.
You're on a *nix System, right?
If you only want to set permissions without calling chmod directly (as your question suggests), you can try the following on the console, provided you have Perl installed:
perl -e 'chmod 0777, "Filename"'
Another approach is to use the install utility, which is a glorified copying program that can set permissions in one step. (See the -m argument.)
install -m 777 "File" "/Copy/Location"
You can find it in GNU coreutils (if you have that installed); it isn't part of every *nix system by default (BSD ships its own, for example). You can also simply move the file out of the directory and call install to copy it back with the desired permissions.
But both methods require SSH, and I don't think there is a way to set permissions without it (because you can never make the chmod() system call that is needed to set them).

rsync "failed to set times on "XYZ": No such files or directory (2)

I have a Dlink NAS (dns-323) in RAID1 that I use to backup family photos, videos and some other data. I also manually rsync to a dedicated backup drive on a little Atom Linux box whenever we add a lot of new files to the NAS. I finally lost a drive on the NAS and through a misstep of my own, also lost the entire volume. No problem, that's what the backup drive is for. I used the same rsync command in reverse to restore files to the NAS after I replaced the bad drive and created a new RAID volume. This worked well, except that after the command finished, I noticed that it did not preserve timestamps. Timestamps were preserved in the NAS->backup direction, but not the backup->NAS direction.
I run the rsync command on the Atom Linux box with these options (this does preserve timestamps):
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dns-323 /mnt/dlink_backup --progress --verbose --itemize-changes
The reverse command to restore the volume from the backup (which did not preserve timestamps) is very similar:
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dlink_backup/dns-323/ /mnt/dns-323/ --progress --verbose --itemize-changes
which actually restores the files, but gives many errors like:
rsync: failed to set times on "/mnt/dns-323/Rich/Code/.emacs": No such file or directory (2)
I've been googling most of the afternoon and trying different things, but so far haven't solved my problem. I used the 'touch' command to successfully modify the times of one or two files on the NAS, just to prove that it can be done since I believe that is one thing that rsync must do. I've tried doing this as my user and as root. By this I mean that I've run sudo rsync ..... as well as rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' ..... where ..... is all of the previously mentioned parameters. My /etc/fstab has these entries for the NAS and the backup drive, respectively:
# the dns-323
//192.168.1.202/Volume_1 /mnt/dns-323 cifs guest,rw,uid=1000,gid=1000,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
# the dlink_backup drive
/dev/sdb /mnt/dlink_backup ext3 defaults 0 0
It's not absolutely critical to preserve timestamps if it just plain can't be done, but it seems like it should be possible - I'm just stumped.
Thanks in advance. Let me know if I can provide any additional information.
I've earned my "tumbleweed" badge as a result of this one. pats self on back
What I've learned, and my solution:
1) Removed the left hard drive from the dns-323, which is half of the RAID1 volume.
2) Mounted (ext3) this drive using a USB-to-SATA adapter to the machine where I run rsync.
3) Performed the rsync command for the restore outlined above. I removed the --delete option, which really shouldn't have been there, and added the --size-only option. With --size-only, timestamps were essentially the only thing that got updated, since the files themselves had already been restored properly.
4) Unmounted the left drive from the Atom machine and returned that drive to the dns-323, while also removing the right drive. The right drive needs to be removed so that the dns-323 recognizes that the RAID volume is degraded.
5) Re-added the right drive to the dns-323 and told it to rebuild the RAID volume.
6) All timestamps are now good.
A possible alternate solution:
I've read enough about rsync and NFS/Samba/cifs now to understand that this problem is likely related to permissions on the NFS server (dns-323). Internally, the user/group ids in the dns-323 are 501/501. No permutation of how I mounted the dns-323 on the Atom box would allow rsync to properly set timestamps. I do believe that changing my user account on the Atom box to have uid/gid of 501/501 would have worked, though. My user had the default 1000/1000 and root had 0/0 IIRC.
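A rough, untested sketch of that alternate idea, based on the fstab entry above: map the share's apparent ownership to the dns-323's internal 501/501 ids and run rsync as a local user with the same uid/gid, so the permission checks line up:
# /etc/fstab (sketch only - 501/501 are the dns-323's internal uid/gid)
//192.168.1.202/Volume_1 /mnt/dns-323 cifs guest,rw,uid=501,gid=501,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0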

Postfix piping email to php, permissions error

I'm attempting to pipe an email to PHP with my Postfix mail server, using the technique mentioned here and have encountered the following error...
Mar 16 22:52:52 s15438530 postfix/pipe[9259]: AD1632E84C63: to=<php@[myserver].com>, relay=plesk_virtual, delay=0.61, delays=0.59/0/0/0.02, dsn=4.3.0, status=deferred (temporary failure. Command output: /bin/sh: /var/www/vhosts/[myserver].com/httpdocs/clients/emailpipe/email2php.php: Permission denied 4.2.1 Message can not be delivered at this time )
I'd really appreciate if anyone could shed some light on this issue for me. I've tried 777'ing the emailpipe directory, to no avail. Where am I going wrong?
Many thanks.
From the postfix docs...
For security reasons, deliveries to command and file destinations are performed with the rights of the alias database owner. A default userid, default_privs, is used for deliveries to commands/files in root-owned aliases.
So you have two options: either set default_privs in main.cf to match the owner of the email2php file.
Alternatively, there should be a way to create an alias database that is owned by the user instead of postfix/nobody. I haven't tried this before though so can't advise.
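For the first option, the change would look roughly like this; vhostowner is only a placeholder for whatever user actually owns email2php.php:
# in /etc/postfix/main.cf
default_privs = vhostowner
# then apply the change
postfix reload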
I fixed this issue by disabling SELinux.
Make sure that you have
#!/usr/bin/php
<?php
(or whatever your path to php is - do "which php" on the server)
at the top of each of your php scripts and that each of the php script files is executable
chmod +x /var/.../email2php.php
Also, make sure that you can test the script from the command line:
cat some_rfc822_email.txt | /var/.../email2php.php
and get the result that you want
To fix this issue, you'll want to chown or chmod /var/www/vhosts/[myserver].com/httpdocs/clients/emailpipe/email2php.php so that it is executable by your Postfix user. Alternatively, you'll want to redefine which user Postfix uses so it can execute the file successfully.
Simply changing the permissions of your directory (unless you used -R) won't be sufficient.
To illustrate why this works, consider the following toy example:
<me>@harley:~$ touch test
<me>@harley:~$ ls -al test
-rw-r--r-- 1 <me> <me> 0 2012-03-26 23:44 test
<me>@harley:~$ sh test
<me>@harley:~$
<me>@harley:~$ ./test
bash: ./test: Permission denied
<me>@harley:~$ chmod 755 test
<me>@harley:~$ ./test
<me>@harley:~$
In order to execute a file directly through the running shell, it needs to be set as executable. Other invocations (for example, sh email2php.php or php email2php.php) only require read access, because they're chaining execution off a different file entirely.
For what's likely to be causing the issue in the first place, see here.

Copy files from remote server to local, ignoring existing files (rsync not available)

I would like to copy a directory of files from a remote server. As it is a large number of files, the option of ignoring existing files on the destination server is desirable.
Unfortunately, rsync is not available for some reason (the remote server is from a CDN service, and beyond my control).
So I think I am stuck using scp -r on the folder in question.
Is there any way of doing this while ignoring existing files?
thanks
You could also create a *.tar.gz or *.tar.bz2 archive, scp it, and then unpack it. I don't know if scp -r uses any compression. If not, compressing everything first might, potentially, make it faster.
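If you can run commands on the remote host over ssh, you don't even need an intermediate archive file; something like the following streams a compressed tarball straight into the local directory (the user, host and paths are placeholders, and --skip-old-files needs a reasonably recent GNU tar):
$ ssh user@remote.example.com 'tar czf - -C /remote/path .' | tar xzf - --skip-old-files -C /local/path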
It's easy to write a script in Perl to do that using the module Net::SFTP::Foreign:
#!/usr/bin/perl
use strict;
use warnings;
use Net::SFTP::Foreign;

# connect to the remote host (uses your SSH keys/agent by default)
my $sftp = Net::SFTP::Foreign->new('user@host');
$sftp->die_on_error;

# recursively download; resume => 'auto' avoids re-downloading data
# that is already present locally
$sftp->rget('/remote/path', '/local/path',
            resume   => 'auto',
            on_error => sub {
                my ($sftp, $e) = @_;
                warn "error processing $e->{filename}: " . $sftp->error;
            });
scp needs the destination file to be writable in order to replace it.
Using this, you can remove write permission from the files you do not want replaced, and then run your scp for all files as usual; the write-protected ones will fail to be overwritten while the rest are copied.
https://unix.stackexchange.com/a/51932/284063
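A rough sketch of that trick (the paths and remote name are placeholders; scp will print "Permission denied" for the protected files and keep going):
$ find /local/path -type f -exec chmod a-w {} +     # protect what is already there
$ scp -r user@remote.example.com:/remote/path/* /local/path/
$ find /local/path -type f -exec chmod u+w {} +     # give write permission back afterwards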