Copy files from remote server to local, ignoring existing files (rsync not available) - scp

I would like to copy a directory of files from a remote server. As there are a large number of files, the option of ignoring existing files at the destination is desirable.
Unfortunately, rsync is not available for some reason (the remote server is from a CDN service, and beyond my control).
So I think I am stuck using scp -r on the folder in question.
Is there any way of doing this while ignoring existing files?
thanks

You could also create a *.tar.gz or *.tar.bz2 archive, scp it, and then unpack it. scp can compress the stream with its -C option, but bundling a large number of small files into a single archive avoids per-file transfer overhead and can be noticeably faster.
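A minimal sketch of that approach, assuming you have shell access on the remote host (user, host, and paths are placeholders):
ssh user@remote "tar czf - -C /remote/path ." | tar xzf - -C /local/path
This streams the archive over the ssh connection, so nothing has to be staged on either side; note that, like plain scp -r, it does not skip files that already exist locally.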

It's easy to write a script in Perl to do that using the Net::SFTP::Foreign module:
#!/usr/bin/perl
use Net::SFTP::Foreign;

# Recursively download /remote/path into /local/path, resuming partial
# transfers; errors on individual files are reported via the on_error callback.
my $sftp = Net::SFTP::Foreign->new('user@host');
$sftp->die_on_error;
$sftp->rget('/remote/path', '/local/path',
            resume   => 'auto',
            on_error => sub {
                my ($sftp, $e) = @_;
                warn "error processing $e->{filename}: " . $sftp->error;
            }
           );

SCP needs the destination file to be writable in order to replace it. You can use this: remove write permission from the local files you do not want replaced, then run scp on everything; the write-protected files are left alone (scp reports an error for them) while the rest are copied.
https://unix.stackexchange.com/a/51932/284063
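A rough sketch of that idea (file names are placeholders); scp will report a permission error for each write-protected file and carry on with the rest:
chmod a-w /local/dir/keep-this.log /local/dir/keep-that.log   # protect files you want to keep
scp -r user@remote:/remote/dir/* /local/dir/                  # protected files fail, the rest copy
chmod u+w /local/dir/keep-this.log /local/dir/keep-that.log   # restore write permission if needed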

Related

PostgreSQL Query To Create A Directory

Files are being written to a directory using the COPY query:
Copy (SELECT * FROM animals) To '/var/lib/postgresql/data/backups/2020-01-01/animals.sql' With CSV DELIMITER ',';
However, if the directory 2020-01-01 does not exist, we get the error:
could not open file "/var/lib/postgresql/data/backups/2020-01-01/animals.sql" for writing: No such file or directory
The PostgreSQL server is running inside a Docker container with the volume mapping /mnt/backups:/var/lib/postgresql/data/backups.
The Copy query is being sent from a Node.js app outside of the Docker container.
The mapped host directory /mnt/backups was created by Docker Compose and is owned by root, so the Node.js app sending the COPY query is unable to create the missing directories due to insufficient permissions.
The backup file is meant to be transferred out of the Docker container to the Docker host.
Question: Is it possible to use an SQL query to ask PostgreSQL 11.2 to create a directory if it does not exist? If not, how would you recommend the directory creation be done?
Using Node.js 12.14.1 on an Ubuntu 18.04 host, and PostgreSQL 11.2 inside the container with Docker 19.03.5.
An easy way to solve this is to create the file directly on the client machine. With COPY ... TO STDOUT, the query output goes to the client's standard output, which you can capture and save to a file. For instance, using psql on the client machine:
$ psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > file.csv
Creating the output directory first in case it does not exist:
$ mkdir -p /mnt/backups/2020-01/ && psql -U your_user -d your_db -c "COPY (SELECT * FROM animals) TO STDOUT WITH CSV DELIMITER ','" > /mnt/backups/2020-01/file.csv
On a side note: try to avoid exporting files onto the database server. Although it is possible, I consider it bad practice: you end up either writing files into the postgres system directories or giving the postgres user permission to write somewhere else, neither of which you should be comfortable with. Export data directly to the client, either using COPY as shown above or following the advice from @Schwern. Good luck!
Postgres has its own backup and restore utilities which are likely to be a better choice than rolling your own.
When used with one of the archive file formats and combined with pg_restore, pg_dump provides a flexible archival and transfer mechanism. pg_dump can be used to backup an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. The most flexible output file formats are the “custom” format (-Fc) and the “directory” format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.
A simple backup rotation script might look like this:
#!/bin/sh
table='animals'
url='postgres://username@host:port/database_name'
date=`date -Idate`
file="/path/to/your/backups/$date/$table.sql"
# Create the dated backup directory if it does not exist, then dump the table
# in the compressed "custom" format.
mkdir -p `dirname $file`
pg_dump $url -w -Fc --table=$table -f $file
To avoid hard coding the database password, -w means pg_dump will not prompt for a password and will instead look for a password file. Alternatively, you can use any of Postgres's many authentication options.
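To later restore just that table from the custom-format archive, something along these lines should work (database name and path are placeholders):
pg_restore -d database_name --table=animals /path/to/your/backups/2020-01-01/animals.sql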

How to mirror directories using Bitvise sftpc.exe

The Bitvise SSH Client version history states that v8.15 supports directory mirroring:
The graphical SSH Client and sftpc now support recursive directory mirroring. A directory and all of its subdirectories and files can be synchronized either in the upload or download direction.
I can find it in the GUI, but I can't find how to do it using sftpc.exe. There is no mention of mirroring in sftpc.exe -help.
How can I do directory mirroring from the command line?
You point out a tangential design issue in sftpc: getting help for SFTP commands requires you to use sftpc interactively and connect to the server. You can then get help from the interactive prompt.
This is inconvenient, so I opened a feature request for us to make the interactive help available from the command line, as well.
The help text you are looking for is as follows - for the put command:
sftp> help put
USAGE: put local-path [remote-path] [-bg | -fg] [-s] [-o] [-r]
[-f] [-noTime] [-m=mode] [-dm=mode] [-mirror [-erase]]
[-b | -lf | -std | -tlf | -t]
DESCRIPTION: Upload file.
PARAMETERS:
-bg Start (queue) upload in background.
-fg Start upload in foreground.
-s Include subdirectories (recursive).
-r Synchronize file content. If synchronization is not available,
resume existing incomplete files using a heuristic resume.
Heuristic resume MAY result in an inconsistent destination file
if the destination file content has been modified in the middle.
-o Synchronize file content. If synchronization is not available,
force existing file to be overwritten. If -r is also specified,
heuristic resume is tried first.
-del Remove local file after successful upload.
-f Assume remote-path is a file (not a directory)
-noTime Do not synchronize file modification times.
-m=mode Set the access mode for remote files to 'mode'.
-dm=mode Set the access mode for new remote directories to 'mode'.
If directory already exists, access mode will not be changed.
-mirror Mirror local-path to remote-path. Local files that do not exist
remotely will be uploaded. Remote files that are different than
their local versions will be overwritten.
-erase With -mirror, erase remote files that are not present locally.
FILE TRANSFER MODE - if present, overrides mode selected with "type":
-b Upload files as binary; no conversions.
-lf Auto-detect text files. In text files, replace CRLF with LF.
Binary files are unaffected.
-std Auto-detect text files. Upload text files using the SFTP v4+ text
file transfer mechanism. Binary files are unaffected. Not
available when SFTP version 3 or lower is in use.
-tlf Upload all files as textual. Replace all CRLF bytes with LF.
-t Upload all files using the SFTP v4+ text file transfer mechanism.
Not available when SFTP version 3 or lower is in use.
And for the get command:
sftp> help get
USAGE: get remote-path [local-path] [-bg | -fg] [-s] [-o] [-r]
[-f] [-noTime] [-lit] [-mirror [-erase]]
[-b | -lf | -std | -tlf | -t]
DESCRIPTION: Download file.
PARAMETERS:
-bg Start (queue) download in background.
-fg Start download in foreground.
-s Include subdirectories (recursive).
-r Synchronize file content. If synchronization is not available,
resume existing incomplete files using a heuristic resume.
Heuristic resume MAY result in an inconsistent destination file
if the destination file content has been modified in the middle.
-o Synchronize file content. If synchronization is not available,
force existing file to be overwritten. If -r is also specified,
heuristic resume is tried first.
-del Remove remote file after successful download.
-f Assume remote-path is a file (not a directory).
-noTime Do not synchronize file modification times.
-lit Treat remote-path literally (not a wildcard pattern).
-mirror Mirror remote-path to local-path. Remote files that do not exist
locally will be downloaded. Local files that are different than
their remote versions will be overwritten.
-erase With -mirror, erase local files that are not present remotely.
FILE TRANSFER MODE - if present, overrides mode selected with "type":
-b Download files as binary; no conversions.
-lf Auto-detect text files. In text files, replace LF with CRLF.
Binary files are unaffected.
-std Behaves same as -lf when downloading. Not available when SFTP
version 3 or lower is in use.
-tlf Download all files as textual. Replace all LF bytes with CRLF.
-t Download all files using the SFTP v4 text file transfer mechanism.
Not available when SFTP version 3 or lower is in use.
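For example, a mirrored recursive download from the interactive prompt would look something like this (the paths are just placeholders):
sftp> get /remote/dir C:\LocalDir -s -mirror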
I hope this helps!
I don't normally monitor Stack Overflow, so please feel free to call my attention by opening a support case with Bitvise if you need me to look at something else.
I recommend also using the latest Bitvise SSH Client version. Currently, this is 8.35. It's free of charge for use in any environment, and we try to ensure that each version is a strict upgrade that does not introduce new difficulties. We want there to be no reason to stay behind. :-)

How do I copy a file into a docker-cloud container? (AKA How to copy a file over ssh without using scp)

docker-machine has an scp command, but docker-cloud doesn't seem to have any way to transfer a file from my local machine to the cloud container or vice-versa.
I'm submitting an answer below that I've finally figured out (in hopes that it will help someone), but I'd love to hear better answers if there are any!
(I realize docker-cloud is going away, but perhaps this will be helpful for other cloud platforms as well)
To transfer a file from your local machine to a docker-cloud instance that is running linux with the tee command available:
docker-cloud container exec id12345 tee filename.ext < file_to_copy.ext > /dev/null
(you'll want to redirect output to /dev/null as shown unless you want the entire contents of the file to be echoed to the terminal... twice)
Transferring a file to your local machine is somewhat easier:
docker-cloud container exec id12345 cat file_to_copy.ext > filename.ext
Note: I'm not sure this works for binary files, and it can even cause issues with linefeed characters in text files, based on terminal settings, etc. - but it's the best answer I've got short of using an external service like https://transfer.sh
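If you need to move a binary file and base64 is available both locally and inside the container, a possible (untested) workaround is to encode the data in transit; the container id and file names are placeholders:
# local -> container
base64 file_to_copy.bin | docker-cloud container exec id12345 sh -c "base64 -d > filename.bin"
# container -> local
docker-cloud container exec id12345 base64 file_to_copy.bin | base64 -d > filename.bin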

Does scp allow inline file renaming in destination?

For instance, I have tried this (notice the source is remote):
scp root@$node:/sourcepath/sourcefile.log /destinationpath/destinationfile.log
The other option is to rename the file afterwards, but it would be more convenient to do it on the fly while the data is downloaded via scp, hence my question. Thanks.
Maybe without scp:
ssh yourserver "cat >tmpfile && mv tmpfile datafile" <datafile
This command copies the local file "datafile" to the remote server under the name "tmpfile".
Only after the copy succeeds is the temporary file renamed to its final name, "datafile", on the remote host.
If the copy fails, only the temporary file is left behind on the remote host, so you are protected from ending up with an incomplete "datafile".
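For the download case in the question, the same idea without scp would be (a sketch reusing the paths from the example above):
ssh root@$node "cat /sourcepath/sourcefile.log" > /destinationpath/destinationfile.log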

ssh scripting and copying files

I am writing a BASH deployment script on RH 5. The script runs great and sends out an email at the end of the run. However, what I need to do is: at the end of the script, if I detect any failure, copy the log files back to the local server to attach to the email.
The script detects failure fine; how do I copy the log files back? I don't want to just cat the log files, as they can be huge.
Any suggestions?
Thanks
S
If I understand your problem correctly, you should use scp:
http://linux.die.net/man/1/scp
and here you can find how to automate the login so you can use it in a script
http://linuxproblem.org/art_9.html
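Once the passwordless login is set up, the failure branch of your script can simply pull the logs back with scp. A sketch (the $deploy_failed flag, host, and paths are placeholders):
if [ "$deploy_failed" = "yes" ]; then
    scp deployuser@remotehost:/var/log/deploy/*.log /tmp/deploy-logs/
fi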
I can't see any easy way of avoiding a second login with scp/sftp. If you're sure that it's only the log file that will be returned, you could do something like the following:
ssh -e none REMOTE SCRIPT | gzip -dc > LOGFILE
Inside SCRIPT, you run something like gzip -c LOGFILE when it detects a failure.
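A rough sketch of what that failure path inside SCRIPT could look like (the deployment step and log path are placeholders):
# inside SCRIPT, running on the remote host
if ! do_deployment_work; then
    gzip -c /path/to/LOGFILE   # compressed log goes to stdout, back across the ssh pipe
    exit 1
fi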