I am trying to download a large number of files from a remote Ubuntu server to my machine, which is also running Ubuntu. I am using scp as below:
for i in *; do $i sshpass -p 'Remote_Server_Passcode' scp root@<Remote_Server_IP>:'/opt/Data/' .; done
This fails with an error message saying command not found.
Any help pointing me in the right direction would be highly appreciated.
Thanks
If I understand correctly, you just want to copy the whole /opt/Data directory. That can be done like this:
scp -r root@<Remote_Server_IP>:/opt/Data/ .
-r means recursive
As for what was going wrong: the for i in *; do $i ... loop iterates over every file in the current local directory and then tries to execute each of them as a command, which is where the command not found error comes from and is probably not what you wanted.
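If you really do need to supply the password non-interactively, the original loop can be reduced to a single sshpass call; a minimal sketch, assuming sshpass is installed and the placeholders are replaced with your real password and host:
# one recursive copy of the remote directory; no local loop is needed
sshpass -p 'Remote_Server_Passcode' scp -r root@<Remote_Server_IP>:/opt/Data/ .
Keep in mind that passing the password on the command line exposes it to other users via the process list, so key-based authentication is generally preferable.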
I am trying to use the SSHOperator to SSH into a remote machine and run an external application through the command line. I have set up the SSH connection via the admin page.
This section of code is used to define the commands and the SSH connection to the external machine.
sshHook = SSHHook(ssh_conn_id='remote_comp')
command_1 ="""
cd /files/232-065/Rans
bash run.sh
"""
where 'run.sh' is the following shell script:
#!/bin/sh
starccm+ -batch run_export.java Rans_Model.sim
Which simply runs the commercial software starccm+ with some options I have specified.
This section defines the task:
inlet_profile = SSHOperator(
    task_id='inlet_profile',
    ssh_hook=sshHook,
    command=command_1
)
I have confirmed the SSH connection works by giving a simple 'ls' command and checking the output.
The error that I get is:
bash run.sh, error: run.sh: line 2: starccm+: command not found
The command in 'run.sh' works when I am logged into the machine myself (it does not require a GUI). This makes me think that the SSH session Airflow opens is not the same as the one I log into interactively, but I am not sure how to solve this problem.
Does anyone have any experience with this?
There is no issue with the SSH connection (at least judging from the error message). The issue is with the starccm+ installation path.
Please check the installation path of starccm+.
Check whether the installation path is part of the $PATH environment variable:
$ echo $PATH
If not, either install it in a standard location like /bin or /usr/bin (provided those are included in $PATH), or export the installation directory into the PATH variable like this:
$ export PATH=$PATH:/<absolute_path>
It is not ideal, but if you struggle with setting the PATH variable you can run starccm+ by stating its full path, like:
/directory/where/star/is/installed/starccm+ -batch run_export.java Rans_Model.sim
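One more thing to keep in mind: a command run through the SSHOperator is typically a non-interactive shell, which may not source the same startup files as your interactive login, so a PATH set in your shell profile can be missing. A minimal sketch of run.sh that sets the path explicitly (the installation directory below is a placeholder, not the real starccm+ location):
#!/bin/sh
# Placeholder path: replace with wherever starccm+ is actually installed
export PATH="$PATH:/opt/CD-adapco/STAR-CCM+/bin"
starccm+ -batch run_export.java Rans_Model.sim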
iTerm2 allows you to click on a link (CMD+click) and open it quickly. However, when working over SSH, this doesn't work. Is it possible to enable this functionality, so that I can CMD+click a file, and it will automatically download into a folder on my local machine?
Thanks!
This is actually possible with Shell Integration installed. Note that Shell Integration will need to be installed on any server that you are ssh'ing into, not just on your local machine. From this link:
iTerm has recently introduced a feature called Shell Integration. Using this feature, we can upload and download files conveniently, directly from iTerm2. Dragging a file into the window while holding the Option key uploads the file over the current ssh connection. Right-clicking on a file name in ls output brings up a context menu that includes an option to download the file.
Click “iTerm2->Install Shell Integration” when sshing into the remote server.
Ensure the server has a correct FQDN as hostname and can be connected through this hostname. (You can use hostname -f to check it)
If you’re using private key authentication, then you should have id_rsa in your .ssh directory. However, you should also put id_rsa.pub in your .ssh directory to use this feature.
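If the menu item is inconvenient (for example when scripting server setup), the shell integration can also be installed by hand; a rough sketch for a remote bash shell, assuming curl is available and using the script URL published in the iTerm2 documentation:
# Run on the remote server, then open a new login shell
curl -L https://iterm2.com/shell_integration/bash -o ~/.iterm2_shell_integration.bash
echo 'source ~/.iterm2_shell_integration.bash' >> ~/.bash_profile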
Sorry for the late answer, but I was just trying to do the same thing and came across your question. Thought I would post my findings once I found a solution.
I've not had much success with ⌘+Clicking to download via SCP in iTerm2 because I have a complex set of rules involving jump hosts in ~/.ssh/config.
But I have found an elegant work around: a shell function which writes to STDOUT to trigger iTerm2 into capturing the output and saving it as a file!
I keep the following snippet (Toolbelt → Snippets) which I execute to define a command download:
alias download="bash <(base64 -d <<<'IyEvYmluL2Jhc2gKaWYgWyAkIyAtbHQgMSBdOyB0aGVuCiAgZWNobyAiVXNhZ2U6ICQwIGZpbGUg
Li4uIgogIGV4aXQgMQpmaQpmb3IgZmlsZW5hbWUgaW4gIiRAIgpkbwogIGlmIFsgISAtciAiJGZp
bGVuYW1lIiBdIDsgdGhlbgogICAgZWNobyBGaWxlICRmaWxlbmFtZSBkb2VzIG5vdCBleGlzdCBv
ciBpcyBub3QgcmVhZGFibGUuCiAgICBjb250aW51ZQogIGZpCgogIGZpbGVuYW1lNjQ9JChlY2hv
IC1uICIkZmlsZW5hbWUiIHwgYmFzZTY0KQogIGZpbGVzaXplPSggJCh3YyAtYyAiJHtmaWxlbmFt
ZX0iKSApCiAgcHJpbnRmICJcMDMzXTEzMzc7RmlsZT1uYW1lPSR7ZmlsZW5hbWU2NH07c2l6ZT0k
e2ZpbGVzaXplWzBdfToiCiAgYmFzZTY0IDwgIiRmaWxlbmFtZSIKICBwcmludGYgJ1xhJwpkb25l
Cg==')"
The base64-encoded string decodes to:
#!/bin/bash
if [ $# -lt 1 ]; then
echo "Usage: $0 file ..."
exit 1
fi
for filename in "$@"
do
if [ ! -r "$filename" ] ; then
echo File $filename does not exist or is not readable.
continue
fi
filename64=$(echo -n "$filename" | base64)
filesize=( $(wc -c "${filename}") )
printf "\033]1337;File=name=${filename64};size=${filesize[0]}:"
base64 < "$filename"
printf '\a'
done
This relies on iTerm2's download protocol (the \033]1337;File=... escape sequence used in the printf calls above).
Sample session showing the notifications from iTerm2:
I have an Arduino Yun and want to set up the server for the Yun.
So what I want is to copy a folder that contains a .py file and an index.html to my Yun.
I used the Mac terminal to do this operation.
The command looks like this:
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
The terminal then asked for the password, and after I typed it, it shows:
scp: /mnt/sda1/LobsterHeartRate: Not a directory
I didn't type /mnt/sda1/LobsterHeartRate, so why does it show this error?
Your code
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1
requires that the remote directory /mnt/sda1 exists, which does not seem to be true in your case. Check it using ssh root@192.168.240.1 ls /mnt/sda1.
scp is a simple tool and does not allow you to rename directories on the fly, and the target directory must exist. You might try
scp -r /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/
ssh root@192.168.240.1 mv /mnt/LobsterHeartRate /mnt/sda1
or something similar, if that suits your needs. But for copying more files, rsync is usually more suitable. Check its manual page and give it a try next time.
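For reference, a minimal rsync sketch of the same copy (assuming rsync is actually installed on both the Mac and the Yun, which may not be the case on a stock image):
# -a preserves permissions and timestamps, -v shows what gets transferred
rsync -av /Users/gudi/Desktop/LobsterHeartRate root@192.168.240.1:/mnt/sda1/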
As @Jens Höpken notes, your post is a bit sparse. But trying to read between the lines, I suspect that LobsterHeartRate is a DIRECTORY on your local system but a FILE named LobsterHeartRate on your target system. This might be happening right at the top of the directory tree, or perhaps you have directories/files of the same name further down the tree. scp -rv might help resolve any confusion here.
Beware: scp -r resolves symbolic links. If you want to preserve symlinks you need to do something else. For historic reasons I use the following, though cpio with a find front-end opens up interesting possibilities for fine-grained file selections.
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -xf -'
For a safe "dry run" you could change the -xf to a -tf. The && chains are required to prevent bad things from happening if any prior command fails.
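For reference, the dry-run variant of the pipeline above, which only lists what would be extracted, looks like this:
( cd /Users/gudi/Desktop && tar -cf - LobsterHeartRate ) |
ssh root@192.168.240.1 'cd /mnt/sda1 && tar -tf -'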
Disclaimer: any debugging is left as an exercise for the student.
Whenever I try to SCP files (in bash), they end up in a seemingly random(?) order.
I've found a simple but not-very-elegant way of keeping a desired order, described below. Is there a clever way of doing it?
Edit: deleted my early solution from here, cleaned, adapted using other suggestions, and added as an answer below.
To send files from a local machine (e.g. your laptop) to a remote (e.g. your calculation server), you can use Merlin2011's clever solution:
Go into the folder on your local machine that you want to copy files from.
Execute the scp command, assuming you have an access key for the remote server:
scp -r $(ls -rt) user@foo.bar:/where/you/want/them/.
Note: if you don't have key-based access, it may be better to bundle the files first with tar, i.e. tar -zcvf files.tar.gz $(ls -rt), and then send that single tar file using scp.
But to do it the other way around, you might not be able to run scp directly from the remote server to push files to, say, your laptop; instead you have to pull them from the laptop side. My brute-force solution is:
In the remote server, cd into the folder you want to copy files from.
Create a list of the files in the order you want. For example, for reverse order of creation (most recent copied last):
ls -rt > ../filenames.txt
Now you need to prepend the path to each file name. Before you leave the directory where the list was made, note its path using pwd, then go up: cd ..
Prepend that path to each file name in the list. There are many ways to do this; here's one using awk:
cat filenames.txt | awk '{print "path/to/files/" $0}' > delete_me.txt
You need the filenames on a single line, separated by spaces, so change newlines to spaces:
tr '\n' ' ' < delete_me.txt > filenames.txt
Get filenames.txt onto your local machine and put it in the folder you want to copy the files into.
The scp run would be:
scp -r user@foo.bar:"$(cat filenames.txt)" .
Similarly, this assumes you have key-based access; otherwise it's much simpler to tar the files on the remote and bring that single archive over.
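The whole remote-to-local procedure can also be collapsed into two commands; a rough sketch, assuming key-based access and with /path/to/files standing in for the real remote directory:
# build the ordered, absolute file list on the remote in one ssh call
files=$(ssh user@foo.bar 'cd /path/to/files && ls -rt | sed "s|^|/path/to/files/|"' | tr '\n' ' ')
# newer OpenSSH versions may need scp -O so the remote shell expands the space-separated list
scp user@foo.bar:"$files" .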
One can achieve file transfer in alphabetical order using rsync:
rsync -P -e ssh -r user@remote_host:/some_path local_path
-P keeps partially transferred files and shows progress, -e specifies the remote shell to use (ssh), and -r downloads recursively.
You can do it in one line without an intermediate using xargs:
ls -r <directory> | xargs -I {} scp <directory>/{} user@foo.bar:folder/
Of course, this would require you to type your password multiple times if you do not have public key authentication.
You can also use cd and still skip the intermediate file.
cd <directory>
scp $(ls -r) user@foo.bar:folder/
I'd like to be able to upload to my remote server only the files that have changed. I am using nanoblogger, and it appears to upload the entire thing every time with plain scp -r, but I can't find any -u option for scp mentioned in the man pages.
I suppose I could try to script the upload with an ls or find that grabs only files updated in the last $n minutes, or something, but that seems heavy-handed.
Use rsync over SSH.
If you can scp, you can very probably rsync over ssh:
rsync -a /some/dir/ user@server:/dest/dir/
scp doesn't have any conditional option, and probably won't get it anytime soon. rsync seems like a very reasonable way, if it is installed on the target system; if not, some find + uniq magic could do the job, but would be serious work. Compiling rsync would probably be faster :-).
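If you specifically want what a hypothetical scp -u would have done (skip files that are already up to date on the server), rsync's --update flag covers it; a small sketch with placeholder paths:
# -a preserves attributes, -u skips files that are newer on the receiver, -v lists transfers
rsync -auv /path/to/blog/ user@server:/dest/dir/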