I need to reinstall one of our servers, and as a precaution I want to move /home, /etc, /opt, and /Services to a backup server.
However, I have a problem: because of the many symbolic links, a lot of files get copied multiple times.
Is it possible to make scp ignore the symbolic links (or rather, copy a link as a link, not as the directory or file it points to)? If not, is there another way to do it?
I knew that it was possible; I had just picked the wrong tool. I did it with rsync:
rsync --progress -avhe ssh /usr/local/ XXX.XXX.XXX.XXX:/BackUp/usr/local/
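As a side note, the -a in that command already implies -l, so symbolic links are copied as symbolic links rather than followed. For the directories in the original question, a minimal sketch (the backup host address is a placeholder) would be:
rsync --progress -avhe ssh /home /etc /opt /Services XXX.XXX.XXX.XXX:/BackUp/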
I found that the rsync method did not work for me, but I found an alternative that did work, on this website (www.docstore.mik.ua/orelly). Specifically, section 7.5.3 of "SSH, The Secure Shell: The Definitive Guide" (O'Reilly).
7.5.3. Recursive Copy of Directories
...
Although scp can copy directories, it isn't necessarily the best method. If your directory contains hard links or soft links, they won't be duplicated. Links are copied as plain files (the link targets), and worse, circular directory links cause scp1 to loop indefinitely. (scp2 detects symbolic links and copies their targets instead.) Other types of special files, such as named pipes, also aren't copied correctly.
A better solution is to use tar, which handles special files correctly, and send it to the remote machine to be untarred, via SSH:
$ tar cf - /usr/local/bin | ssh server.example.com tar xf -
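As written, the book's example recreates usr/local/bin relative to the remote user's home directory. To land under the same absolute path instead, one hedged variant (assuming GNU tar, which strips the leading / when creating the archive, so extraction is relative to -C) is:
tar cpf - /usr/local/bin | ssh server.example.com 'tar xpf - -C /'
The p flags ask tar to preserve permissions; run the receiving end as root if ownership matters.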
Using tar over ssh as both sender and receiver does the trick as well:
cd $DEST_DIR
ssh user@remote-host "cd $REMOTE_SRC_DIR; tar cf - ./" | tar xvf -
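For completeness, a runnable sketch with the placeholders filled in (the host name and both paths are hypothetical):
DEST_DIR=/BackUp/usr/local
REMOTE_SRC_DIR=/usr/local
cd "$DEST_DIR"
ssh user@remote-host "cd $REMOTE_SRC_DIR; tar cf - ./" | tar xvf -
Note the double quotes around the remote command: they let your local shell expand $REMOTE_SRC_DIR before the command is sent to the remote side.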
One solution is to use a shell pipe. I have a situation where I have some *.gz files and symbolic links generated by some software that point to the same *.gz files under slightly shorter names. If I simply use scp, the symbolic links are copied as regular files, resulting in duplicates. I know rsync can ignore symbolic links, but my gz files were not compressed with rsync-friendly options, and rsync is very slow at copying them. So I simply use the following command to copy the files over:
find . -type f -exec scp {} target_host:/directory/name/data \;
The -type f test matches only regular files and ignores symbolic links. You need to run this command on the source host. I hope this helps someone in my situation. Let me know if I missed anything.
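If the one-scp-per-file connections are too slow, a single-connection variant of the same idea (a sketch, assuming GNU find and tar on both ends; target_host and the destination path are the ones from the snippet above) is:
find . -type f -print0 | tar -cf - --null -T - | ssh target_host 'tar xf - -C /directory/name/data'
Here -print0 and --null keep file names with spaces intact, and -type f still filters out the symbolic links.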
A one-liner, executed on the client, to copy a folder from the server using tar + ssh:
ssh user@<server> 'mkdir -p <remote source directory>; cd <remote source directory>; tar cf - ./' | tar xf - -C <local destination directory>
Note: the mkdir is a must. If the remote source directory is not present, the cd fails and the command will simply tar up the entire home directory of the remote server and extract it on the client.
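With hypothetical values filled in (the IP and both paths are made up), the one-liner looks like this:
ssh user@203.0.113.5 'mkdir -p /srv/webapp; cd /srv/webapp; tar cf - ./' | tar xf - -C /home/me/backup/webapp
One caveat: the local destination directory must already exist, since tar's -C does not create it.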
I'm working on a research project in an ML lab, running jobs on their machines remotely over ssh. The machines are Linux and my home laptop is a Mac. The main machine storage is really small, so I'm supposed to use two specific directories that show up under df -i (the inode listing). But how do I redirect files to be stored there?
Appending them to the destination path when I use scp -r doesn't work. My command to send directories from my local computer to the ssh machine is scp -r "/Local directory pwd/" username@server:/home/username, and, for example, sticking the inode directory onto the end of it (so the new command would be scp -r "/Local directory pwd/" username@server:/home/username/inode-directory) doesn't work.
I would appreciate any help.
iTerm2 allows you to click on a link (Cmd+Click) and open it quickly. However, when working over SSH, this doesn't work. Is it possible to enable this functionality, so that I can Cmd+Click a file and it will automatically download into a folder on my local machine?
Thanks!
This is actually possible with Shell Integration installed. Note that Shell Integration needs to be installed on every server you ssh into, not just on your local machine. From this link:
iTerm2 recently introduced a feature called Shell Integration. Using this feature, we can upload and download files conveniently, directly from iTerm2. Dragging a file into the window while pressing the Option key uploads the file over the remote ssh connection. Right-clicking on a file listed by the ls command brings up a context menu that includes downloading the file.
Click “iTerm2 -> Install Shell Integration” while sshed into the remote server.
Ensure the server has a correct FQDN as its hostname and can be reached through this hostname. (You can use hostname -f to check it.)
If you're using private-key authentication, you should have id_rsa in your .ssh directory. However, you should also put id_rsa.pub in your .ssh directory to use this feature.
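If the menu item is unavailable, the iTerm2 documentation also publishes a curl one-liner you can run on the remote host to install the integration for your shell (verify the URL against the current docs before piping anything into a shell):
curl -L https://iterm2.com/shell_integration/install_shell_integration.sh | bash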
Sorry for the late answer, but I was just trying to do the same thing and came across your question. Thought I would post my findings once I found a solution.
I've not had much success with ⌘+Clicking to download via SCP in iTerm2 because I have a complex set of rules involving jump hosts in ~/.ssh/config.
But I have found an elegant work around: a shell function which writes to STDOUT to trigger iTerm2 into capturing the output and saving it as a file!
I keep the following snippet (Toolbelt → Snippets), which I execute to define a download command:
alias download="bash <(base64 -d <<<'IyEvYmluL2Jhc2gKaWYgWyAkIyAtbHQgMSBdOyB0aGVuCiAgZWNobyAiVXNhZ2U6ICQwIGZpbGUg
Li4uIgogIGV4aXQgMQpmaQpmb3IgZmlsZW5hbWUgaW4gIiRAIgpkbwogIGlmIFsgISAtciAiJGZp
bGVuYW1lIiBdIDsgdGhlbgogICAgZWNobyBGaWxlICRmaWxlbmFtZSBkb2VzIG5vdCBleGlzdCBv
ciBpcyBub3QgcmVhZGFibGUuCiAgICBjb250aW51ZQogIGZpCgogIGZpbGVuYW1lNjQ9JChlY2hv
IC1uICIkZmlsZW5hbWUiIHwgYmFzZTY0KQogIGZpbGVzaXplPSggJCh3YyAtYyAiJHtmaWxlbmFt
ZX0iKSApCiAgcHJpbnRmICJcMDMzXTEzMzc7RmlsZT1uYW1lPSR7ZmlsZW5hbWU2NH07c2l6ZT0k
e2ZpbGVzaXplWzBdfToiCiAgYmFzZTY0IDwgIiRmaWxlbmFtZSIKICBwcmludGYgJ1xhJwpkb25l
Cg==')"
The base64-encoded string decodes to:
#!/bin/bash
if [ $# -lt 1 ]; then
  echo "Usage: $0 file ..."
  exit 1
fi
for filename in "$@"
do
  if [ ! -r "$filename" ] ; then
    echo File $filename does not exist or is not readable.
    continue
  fi

  filename64=$(echo -n "$filename" | base64)
  filesize=( $(wc -c "${filename}") )
  printf "\033]1337;File=name=${filename64};size=${filesize[0]}:"
  base64 < "$filename"
  printf '\a'
done
This relies on iTerm2's download protocol (the OSC 1337 "File=" escape sequence emitted by the printf calls above).
Sample session showing the notifications from iTerm2:
I have an Arduino Yun and want to set up the server for the Yun.
What I want is to copy a folder containing a .py file and an index.html to my Yun.
I used the Mac terminal to do this operation; the command looks like this:
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
The terminal then asked for the password, and after I typed it, it showed:
scp: /mnt/sda1/LobsterHeartRate: Not a directory
I didn't type /mnt/sda1/LobsterHeartRate, so why does it show this error?
Your code
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
requires that the remote directory /mnt/sda1 exists. This appears not to be true in your case. Check it using ssh root@192.168.240.1 ls /mnt/sda1.
scp is a simple tool: it does not allow you to rename directories on the fly, and the target directory must exist. You might try
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/
ssh root@192.168.240.1 mv /mnt/LobsterHeartRate /mnt/sda1
or something similar, if that suits your needs. But for copying more files, rsync is usually more suitable. Check its manual page and give it a try next time.
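For example, an rsync equivalent of the fixed copy (same host and paths as in the question; it assumes rsync is actually installed on the Yun's OpenWrt side, which may not be the case out of the box):
rsync -av /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1/
Without a trailing slash on the source, rsync creates LobsterHeartRate under /mnt/sda1/ itself, even if it wasn't there before.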
As @Jens Höpken notes, your post is a bit sparse. But trying to read between the lines, I suspect that LobsterHeartRate is a DIRECTORY on your local system but a FILE named LobsterHeartRate on your target system. This might be happening right at the top of the directory tree, or perhaps you have directories/files of the same name further down the tree. scp -rv might help resolve any confusion here.
Beware: scp -r resolves symbolic links. If you want to preserve symlinks you need to do something else. For historic reasons I use the following, though cpio with a find front end opens up interesting possibilities for fine-grained file selection.
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -xf -'
For a safe "dry run" you could change the -xf to a -tf. The && chains are required to prevent bad things from happening if any prior command fails.
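For instance, the dry run for the command above would be:
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -tf -'
which only lists what would be extracted, writing the names back to your local terminal.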
Disclaimer: any debugging is left as an exercise for the student.
I am trying to copy a file to a certain folder on a remote server.
It's an ADrive backup plan, but it comes with scp. I can copy the file if I don't specify a directory. Yet even when I give a directory that doesn't exist, it says it is a directory.
root@host1 [/usr/src]# scp ftpdelete.sh user@host@scp.adrive.com:/mysql-only/
scp: /mysql-only/: Is a directory
Amazingly enough, in my case it was that the directory didn't exist!! :|
Is the error message a bug, or is it me? I'm tempted to say the latter.
scp doesn't automatically create a new directory for you when you copy a single file (it creates directories only when you do a recursive copy). The error message is simply wrong: it should be No such file or directory or similar.
This is a known problem, and there is an upstream bugzilla report about it [1].
[1] https://bugzilla.mindrot.org/show_bug.cgi?id=1768
You are copying the .sh file to a new directory on the server, and the directory is expected to be there but in fact is not (so the machine thinks you want the file to be a directory). Most probably the directory you specified is wrong.
-r  Recursively copy entire directories. Note that scp follows symbolic links encountered in the tree traversal.
But it doesn't create the target directory for a single file; you can create it first, though:
ssh remote mkdir /directory
root@host1 [/usr/src]# scp -r ftpdelete.sh user@host@scp.adrive.com:/complete_path/mysql-only/
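Putting those two steps together (a sketch; user@remote and the path are placeholders, and it assumes the remote allows arbitrary commands over SSH):
ssh user@remote 'mkdir -p /complete_path/mysql-only' && scp ftpdelete.sh user@remote:/complete_path/mysql-only/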
or
rsync, which can create the destination directory if it does not exist;
its basic command syntax is similar to scp's:
$ rsync -r -e ssh ftpdelete.sh me@my-system:/complete_path/mysql-only/
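Newer rsync releases (3.2.3 and later, if I recall correctly) can also create missing path components themselves with --mkpath:
$ rsync -e ssh --mkpath ftpdelete.sh me@my-system:/complete_path/mysql-only/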
I saw a similar error when I tried to scp to a path relative to the home directory. The error went away after removing the unnecessary leading / in the path:
# scp ftpdelete.sh user@host@scp.adrive.com:mysql-only/
rather than
# scp ftpdelete.sh user@host@scp.adrive.com:/mysql-only/
(note the leading / before mysql-only in the failing command).
scp -r source_location user@servername:/target_location
Don't put "/" after the directory's name.
Try to download the file directly to your local root directory and then copy it from there:
root@host1 [/usr/src]# scp user@host:/root/Desktop/file.txt /root/home/
Whenever I try to SCP files (in bash), they end up in a seemingly random(?) order.
I've found a simple but not-very-elegant way of keeping a desired order, described below. Is there a clever way of doing it?
Edit: deleted my early solution from here, cleaned, adapted using other suggestions, and added as an answer below.
To send files from a local machine (e.g. your laptop) to a remote one (e.g. your calculation server), you can use Merlin2011's clever solution:
Go into the folder on your local machine that you want to copy files from.
Execute the scp command, assuming you have an access key for the remote server:
scp -r $(ls -rt) user#foo.bar:/where/you/want/them/.
Note: if you don't have a public access key, it may be better to do something similar using tar: create the archive with tar -zcvf files.tar.gz $(ls -rt), then send that tar file on its own using scp, as sketched below.
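A sketch of that tar fallback (same destination as above; files.tar.gz is just a throwaway name):
tar -zcvf files.tar.gz $(ls -rt)
scp files.tar.gz user@foo.bar:/where/you/want/them/
ssh user@foo.bar 'cd /where/you/want/them && tar -zxvf files.tar.gz && rm files.tar.gz'
tar extracts files in the order they were packed, which is what preserves the ordering on the remote side.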
But to do it the other way around, you might not be able to run the scp command directly from the remote server to send files to, say, your laptop. Instead, you may need to pull the files onto your laptop. My brute-force solution is:
On the remote server, cd into the folder you want to copy files from.
Create a list of the files in the order you want. For example, for reverse order of creation (most recent copied last):
ls -rt > ../filenames.txt
Now you need to add the path to each file name. Before you go up to the directory where the list is, print the path using pwd. Then do go up: cd ..
Now add this path to each file name in the list. There are many ways to do this; here's one using awk:
cat filenames.txt | awk '{print "path/to/files/" $0}' > delete_me.txt
You need the file names to be on a single line, separated by spaces, so change newlines to spaces:
tr '\n' ' ' < delete_me.txt > filenames.txt
Get filenames.txt onto the local machine, and put it in the folder you want to copy the files into.
The scp run would then be:
scp -r user#foo.bar:"$(cat filenames.txt)" .
As before, this assumes you have a private access key; otherwise it's much simpler to tar the files on the remote machine and bring the archive over.
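If the remote side has GNU tar, the whole list-building dance can be collapsed into one pipeline (a sketch with the same placeholder host and path; it assumes file names without spaces or newlines):
ssh user@foo.bar 'cd /path/to/files && ls -rt | tar -cf - -T -' | tar -xf -
Here tar reads the name list from stdin via -T -, so it packs, and the local tar therefore extracts, the files in exactly that order.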
One can achieve file transfer in alphabetical order using rsync:
rsync -P -e ssh -r user@remote_host:/some_path local_path
-P keeps partially transferred files and shows progress, -e ssh makes rsync use SSH as the transport, and -r downloads recursively.
You can do it in one line, without an intermediate file, using xargs:
ls -r <directory> | xargs -I {} scp <directory>/{} user@foo.bar:folder/
Of course, this will require you to type your password multiple times if you do not have public-key authentication.
You can also use cd and still skip the intermediate file.
cd <directory>
scp $(ls -r) user@foo.bar:folder/