Copying the whole directory with some exceptions using scp

I have to retrieve a directory with all its subdirectories from a server. However, I want to exclude some files with a specific extension (they are heavy and useless to me).
scp -r myname@servername:foldertocpy .
does copy the whole directory, but I don't know how to exclude files with a given extension, say .abc.
I would like to use scp because it already automatically handles my passwords.

This isn't possible with scp alone, as scp doesn't have an exclude flag. I assume you want to reuse the key authentication you've already set up for ssh/scp. If so, I would use rsync over ssh - it will then use your existing key authentication.
Something like this would work:
rsync --exclude '*.abc' -avz -e ssh myname@servername:foldertocpy .
Have a look at man rsync for an explanation of the flags.
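If you ever need to skip more than one extension, --exclude can be repeated (a quick sketch; the second extension here is just an example):
rsync -avz -e ssh --exclude='*.abc' --exclude='*.xyz' myname@servername:foldertocpy .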
Hope this helps,
Will

Related

How to sync .ssh folder from Windows to WSL1 correctly?

I want to sync C:\Users\USERNAME\.ssh and ~/.ssh in WSL1 correctly, but I don't know how to achieve that. I tried to use ln -s /mnt/c/Users/USERNAME/.ssh/ .ssh, and it does create a symbolic link as I expect. But ssh doesn't like the permissions of the files in ~/.ssh (0777), and chmod doesn't work here. (Maybe because they are files under NTFS.)
Is there a way to mock the permissions so that ssh will accept them? Or is there a better way to do this than a symbolic link?
If a symlink approach is not possible, you might want to synchronize on demand, meaning manually copy your keys from your %USERPROFILE%\.ssh folder to the one representing $HOME for WSL1.
See "What is the home directory on Windows Subsystem for Linux?". For example, C:\Users\<username>\AppData\Local\lxss.
This might be easier to do with WSL2 though.
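For the on-demand copy, a minimal sketch run from inside WSL1 might look like this (the USERNAME and id_rsa file names are assumptions; adjust them to your own key files):
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cp /mnt/c/Users/USERNAME/.ssh/id_rsa ~/.ssh/
cp /mnt/c/Users/USERNAME/.ssh/id_rsa.pub ~/.ssh/
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
Because the copies live on the Linux filesystem rather than NTFS, chmod works and ssh accepts the permissions.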

sshfs: will a mount overwrite existing files? Can I tell it to exclude a certain subfolder?

I'm running Ubuntu and have a remote CentOS system which stores (and has access to) various files and network locations. I have SSH access to the CentOS machine and want to be able to work locally on Ubuntu.
I'm trying to mirror a remote directory structure. The remote directory is structured:
/my_data/user/*
And I want to replicate this structure locally (a lot of scripts rely on absolute paths).
However, for reasons of speed, I want a certain subfolder, for example:
/my_data/user/sourcelibs/
To be stored locally on disk. I know the sourcelibs subfolder doesn't change much (but the rest might). So I can comfortably rsync it:
mkdir -p /my_data/user/sourcelibs/
rsync -r remote_user@remote_host:/my_data/user/sourcelibs/ /my_data/user/sourcelibs/
My question is, if I use sshfs to mount /my_data/user:
sudo sshfs -o allow_other,default_permissions remote_user@remote_host:/my_data/user /my_data/user
Will it overwrite my existing files? Is there a way to have sshfs mount but exclude certain subfolders?
Yes - while the sshfs mount is active it will cover your existing files: you see the remote contents instead of your local ones (the local files reappear when you unmount). I have almost the same use case and just tested this myself. BTW, you'll need to add -o nonempty to your sshfs command since the destination dir /my_data/user already exists.
What I found to work is to make a copy of the remote directory excluding the large subdirectories. I don't know whether keeping two copies in sync on the remote machine is feasible for your use case, but if you'll mostly be updating on your local machine and rarely making changes remotely, that could work.
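One possible workaround with sshfs itself (not a built-in feature, and it assumes the top-level folder names contain no spaces) is to mount every remote subfolder except sourcelibs onto its matching local path, so /my_data/user/sourcelibs stays on local disk:
for d in $(ssh remote_user@remote_host 'ls /my_data/user'); do
  [ "$d" = "sourcelibs" ] && continue
  mkdir -p "/my_data/user/$d"
  sudo sshfs -o allow_other,default_permissions "remote_user@remote_host:/my_data/user/$d" "/my_data/user/$d"
done
Files sitting directly in /my_data/user (not inside a subfolder) would not be mirrored this way, and you end up with one mount per subfolder.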

gsutil rsync only files matching a pattern

I need to rsync files from a bucket to a local machine every day, and the bucket contains 20k files. I need to download only the changed files that end with *some_naming_convention.csv.
What's the best way to do that? Using a wildcard in the download source gave me an error.
I don't think you can do that with gsutil rsync. As Christopher told you, you can skip files by using the "-x" flag, but not sync just those [1]. I created a public Feature Request on your behalf [2] so you can follow updates there.
As I say in the FR, in my opinion this doesn't fit the purpose of rsync, which is to keep folders/buckets synchronised; synchronising only some of the files doesn't serve that purpose.
There is a possible "workaround" by using gsutil cp to copy files and -n to skip the ones that already exist. The whole command for your case should be:
gsutil -m cp -n <bucket>/*some_naming_convention.csv <directory>
Another option, maybe a little more far-fetched, is to copy/move those files to a folder and then use that folder to rsync.
I hope this works for you ;)
Original Answer
From here, you can do something like gsutil rsync -r -x '^(?!.*\.json$).*' gs://mybucket mydir to rsync all json files. The key is the ?! prefix to the pattern you actually want.
Edit
The -x flag excludes files matching a pattern. The pattern ^(?!.*\.json$).* uses a negative look-ahead to match names that do not end in .json. As a result, the gsutil rsync call copies only the files that end in .json.
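Adapted to the naming convention in the question, the call might look something like this (the bucket and local directory names are placeholders):
gsutil -m rsync -r -x '^(?!.*some_naming_convention\.csv$).*' gs://mybucket ./localdir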
Rsync lets you include and exclude files matching patterns.
For each file, rsync applies the first pattern that matches, so if you want to sync only selected files you need to include those and then exclude everything else.
Add the following to your rsync options:
--include='*some_naming_convention.csv' --exclude='*'
That's enough if all your files are in one directory. If you also want to search sub folders then you need a little bit more:
--include='*/' --include='*some_naming_convention.csv' --exclude='*'
This will replicate the whole directory tree but only copy the files you want. If that leaves empty directories you don't want, add --prune-empty-dirs.
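Put together, a full invocation might look like this (this assumes the standard rsync tool over ssh; the host and paths are placeholders):
rsync -av --include='*/' --include='*some_naming_convention.csv' --exclude='*' --prune-empty-dirs user@host:/remote/dir/ ./localdir/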

SCP is creating subdirectory... but I just want it to copy directly

I'm trying to use scp to copy recursively from a local directory to a remote directory.... I have created the folders on the remote side:
Remote Location (already created):
/usr/local/www/foosite
I am running scp from the local machine in directory:
/usr/local/web/www/foosite
But it's copying the "foosite" directory as a subdirectory... I just want the contents of the folder, not the folder itself...
Here is the command I'm using:
scp -r /usr/local/web/www/foosite scpuser@216.99.999.99:/usr/local/www/foosite
The problem is that if you don't use the asterisk (*) in the local part of the call, scp will create a new top-level directory on the remote server. It should look like this:
scp -r /usr/local/web/www/foosite/* scpuser@216.99.999.99:/usr/local/www/foosite
This says "Copy the CONTENTS" (but not the directory itself) to the remote location.
Hope this helps... Took me an hour or so to figure this out!!!
Old question, but I think there is a better answer. The trick is to leave the foosite directory off of the destination:
scp -r /usr/local/web/www/foosite scpuser@216.99.999.99:/usr/local/www
This will create the foosite directory on the destination if it does not exist, but will just copy files into foosite if the directory already exists. Basically the -r option will copy the last directory in the path and anything under it. If that last directory already exists on the destination, it just doesn't do the mkdir.

Need help with ssh script

I'm very new to ssh, so I need some help writing some scripts. The idea is that I have files distributed in different folders on a remote server. I want to copy certain folders into another, new folder on the same server. Suppose I know the names of all the folders I want to copy and can list them in a text file. How can I write a script that will automatically transfer all those folders to the place I need?
Also, suppose there is one file in each folder that is encrypted with an individual password, and I know all the passwords. How can I write a script to automatically decrypt them?
If you don't have a direct answer, can you give me a link to a tutorial on writing ssh scripts?
Many thanks
I think you might be a little confused.
SSH is the tool you use to get to the remote server.
Once you're connected to that remote server, the prompt and command-line interface you see is a shell, typically "sh" or "bash".
What you're looking for is a shell scripting tutorial; there are plenty you can find by googling.
The simplest thing to do would be to just turn your list of files into a script. It might look something like this:
#!/bin/sh
# copy the first group of files into the first folder
for file in a b c d; do
  cp "$file" firstFolderName
done
# copy the second group, with verbose output
for file in e f g h; do
  cp -v "$file" secondFolderName
done
decrypt secondFolderName/c "myPassword"
Obviously, the command to decrypt would depend on what encryption tool you used.
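For example, if the files happened to be encrypted with gpg symmetric passphrases (just an assumption; substitute whatever tool was actually used), the decrypt step might look like this with GnuPG 2.1 or newer:
gpg --batch --pinentry-mode loopback --passphrase "myPassword" -o secondFolderName/c -d secondFolderName/c.gpg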
You could save this into a file called myscript.sh and execute it with sh myscript.sh from the command line. You might need to learn about nano, vi, or emacs, or another editor in order to actually edit this script from an ssh terminal session too.
Assuming that by SSH you mean bash accessed through SSH.
Assuming the list of files looks something like this:
/path/tofile1
/path/to/file/2
You can do:
$ cp `cat listOfInputFiles | xargs` destinationDirectory
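Since the question is about copying folders rather than individual files, a recursive variant of the same idea might be (a sketch; it reads one path per line, and destinationDirectory is a placeholder):
while IFS= read -r folder; do
  cp -r "$folder" destinationDirectory/
done < listOfInputFiles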