how to use scp inside ssh in a shell script - scp

I have a server ServerA (userid user1). I want to connect to the same server ServerA with userid user2, and then use scp to get files from another server and keep them on ServerA (as user2).
Could anyone please help me out asap.

Wouldn't it be easier to do the scp as user1, then chown the files to user2? If you need to store them in a place accessible only to user2 you could su to user2 and move them.
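For example, a rough sketch of that approach (the remote host, the paths, and user1's sudo rights are all assumptions here):
# on ServerA, logged in as user1; host name and paths are placeholders
scp remoteuser@otherserver:/path/to/files/* /tmp/incoming/
# hand the files over to user2 (assumes user1 can sudo)
sudo chown user2 /tmp/incoming/*
sudo mv /tmp/incoming/* /home/user2/incoming/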

Say you're on hostC (client) and you want to copy files from userX@hostA to userY@hostB.
You can issue an scp command on hostC:
scp userX@hostA:file-to-copy userY@hostB:destination-folder/
However, you need to authenticate. If you have an ssh key set up for userX@hostA, key authentication will work fine.
However scp will invoke ssh with options to either disable agent forwarding or clear forwarding keys, so that even if you have a key for userY@hostB it won't be available to hostB and you'll be prompted for a password.
One solution to this is to pass a -S <ssh-command> to the invocation of scp on hostC with a wrapper script to strip out the options that prevent agent forwarding, perhaps having to explicitly enable it.
e.g.
ssh-wrapper.py
#!/usr/bin/python
import sys, os

def is_exe(fpath):
    # True if fpath exists and is executable
    return os.path.exists(fpath) and os.access(fpath, os.X_OK)

def which(program):
    # find an executable on the PATH, like the shell's "which"
    fpath, fname = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
    return None

if __name__ == '__main__':
    ssh = which('ssh')
    assert ssh is not None
    args = [ssh] + sys.argv[1:]
    # strip the options scp passes that disable agent forwarding
    for x in ('-a', '-oClearAllForwardings yes'):
        if x in args:
            args.remove(x)
    # explicitly enable agent forwarding
    if '-oForwardAgent yes' not in args:
        args.insert(1, '-oForwardAgent yes')
    os.execl(ssh, *args)
Invoking scp:
scp -S ssh-wrapper.py userX@hostA:file-to-copy userY@hostB:destination-folder/

Related

Robot Framework - SSH library - Editing a file on remote server

I am writing a test case in Robot Framework where I have to either copy a file from the local machine (Windows) to the remote server (Linux) or create a new one at that location.
I have used multiple sudo su - commands to switch users (up to the root user) to reach the desired host. As a result, I am not able to use the Put File keyword from SSH Library to upload the file. I have reached the desired folder location by executing the commands with the Write keyword.
Since there is no option left (that's what I realize with my limited knowledge of Robot Framework), I started creating a new file with the vi <filename> command. I have also reached the INSERT mode of the file, BUT I am not able to enter text into the file.
Can someone please suggest how I can either
Copy the file from the local Windows machine to the remote Linux server AFTER multiple su (switch user) commands, or
Create a new text file and enter the content.
Please note: the new file which is being created / copied is a certificate file, hence I do not wish to write the entire content of the certificate in my test suite file.
The entire test case looks something like this:
First Jump1
    Log    Starting the connection to AWS VM
    # Connection to VM with Public Key
    Connection To VM    ${hostname}    ${username}
    Send Command    sudo su -
    Send Command    su - <ServiceUser1>
    # Reached the destination server
    Send Command    whoami
    Send Command    ss -tln | grep 127.0.0.1:40
    # Connecting to particular ZIP
    Send Command    sudo -u <ServiceUser2> /usr/bin/ssh <ServiceUser2>@localhost -p <port>
    Send Command    sudo su -
    # Check Auth Certificate
    Send Command    mosquitto_pub -h ${mq_host} -p ${mq_port} -u ${mq_username} -P ${mq_password}
In the step Check Auth Certificate, the certificate is checked: if it is present, delete the current certificate and create a new one (either create a new file or upload it from local); if it is not there, create a new certificate.
Though it might not be ideal, I was able to achieve what I wanted with:
echo "content" > newFileName
echo "update content" >> newFileName

How to check SSH credentials are working or not

I have a large number of devices, around 300.
I have different creds for them (SSH creds, API creds).
Since I cannot manually SSH to all those devices and check whether the creds are working or not, I am thinking of writing a script that takes the device IPs and gives me YES as a result if the SSH creds are working and NO if they are not.
I am new to all this stuff! Details will be appreciated!
I will run this script on a server from where I can SSH to all the devices.
Your question isn't clear as to what sort of credentials you use for connecting to each host: do all hosts have the same connection method, for instance?
Let's assume that you use ssh's authorised keys method to log in to each host (i.e. you have a public key on each host within the ~/.ssh/authorized_keys file). You can run ssh with a do-nothing command against each host and look at the exit code to see if the connection was successful.
HOST=1.2.3.4
ssh -i /path/to/my/private.key user@${HOST} true > /dev/null 2>&1
if [ $? -ne 0 ]; then echo "Error, could not connect to ${HOST}"; fi
Now it's just a case of wrapping this in some form of loop where you cycle through each host (and choose the right key for each host; perhaps you could name each private key after the name or IP address of the target host). The script will then report all those hosts for which a connection was not possible. Note that this script assumes that true is available on the target host, otherwise you could use ls or similar. We pipe all output to /dev/null as we're only interested in the ability to connect.
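A minimal sketch of such a loop, assuming the hosts are listed one per line in a file called hosts.txt and each private key is named after its host (both of these are assumptions, not part of the original answer):
#!/bin/bash
# hosts.txt and the key naming scheme are placeholders for this sketch
while read -r HOST; do
    ssh -i "/path/to/keys/${HOST}.key" -o BatchMode=yes -o ConnectTimeout=5 \
        user@${HOST} true > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        echo "Error, could not connect to ${HOST}"
    fi
done < hosts.txt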
EDIT IN RESPONSE TO OP CLARIFICATION:
I'd strongly recommend not using username/password for login, as the username and password will likely be held in your script somewhere, or even in your shell history, if you run the command from the command line. If you must do this, then you could use expect or sshpass, as detailed here: https://srvfail.com/how-to-provide-ssh-password-inside-a-script-or-oneliner/
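For completeness, a hedged sketch of the sshpass approach described at that link (the password and host are placeholders, and the password remains visible in the process list and shell history):
sshpass -p '<password>' ssh -o StrictHostKeyChecking=no user@1.2.3.4 true > /dev/null 2>&1
The same exit-code check as above then tells you whether the login worked.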
The ssh command shown does not spawn a shell, it literally logs in to the remote server, executes the command true (or ls, etc), then exits. You can use the return code ($? in bash) to check whether the command executed correctly. My example shows it printing out an error message for non-zero return codes, but to print out YES on successful connection, you could do this:
if [ $? -eq 0 ]; then echo "${HOST}: YES"; fi

SSH from Synology NAS to remote server

When I run competitions for Icelandic Horses, I want to automatically upload the results from our Synology NAS to a remote webserver. The program we use automatically generates the HTML files that need to be uploaded.
What is the easiest way to achieve this? I have SSH access on both the NAS and the webserver.
Any help is appreciated :)
In this case you can create a cron task in the Synology console with the command:
sudo -i
vi /etc/crontab
Edit the file and add a line like this at the end, with an scp command:
0 0 * * * root scp -r "-i/root/.ssh/mykey" 'root@serverurl.com:/some/remote/path' '/some/local/path'
Finally you have to reload the configuration restarting the service with:
synoservice -restart crond
Before all this you must configure a key pair to avoid the password prompt:
cd to a private directory of the user which will be running the script (typically "$HOME/.ssh", to be created if needed). That directory must be protected against write access from other users; fix the modes if needed.
generate the keypair using the command "ssh-keygen" ("/usr/syno/bin/ssh-keygen" if it is not in your PATH)
at the prompt "Enter file in which to save the key", choose a file name (let's say "mykey")
at the prompt "Enter passphrase (empty for no passphrase):" press return (this will create a passwordless private key)
Two files will be created: "mykey" and "mykey.pub"
copy the contents of mykey.pub into the "$HOME/.ssh/authorized_keys" file of the user account on the remote machine your script is going to connect to
in your script, add "-i" with the path to "mykey" as an argument to the scp command (as in the crontab line above)
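Condensed into commands, those steps might look like this (the key name and remote account are the same placeholders used above, and ssh-keygen is assumed to be in the PATH):
# on the NAS, as the user that will run the cron job
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -f ~/.ssh/mykey -N ""
# append the public key to the remote account's authorized_keys
cat ~/.ssh/mykey.pub | ssh root@serverurl.com "cat >> ~/.ssh/authorized_keys"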
Also in this forum it is explained how to make the copy with rsync instead of scp.

Can't use RSYNC daemon via SSH connection

I have a problem while trying to use rsync with a daemon over an SSH connection.
What I want to do is simply log in to rsync without a password and be able to use the rsync daemon.
Here is my conf file (/etc/rsyncd.conf):
uid = rsync
gid = rsync
[yxz]
path = /home/pierre/xyz
read only = false
auth users = rsync
hosts allow = <myIP>
/home/pierre/xyz has a gid which the rsync user can reach.
This is working (but is not using the daemon):
rsync -rzP --stats --ignore-existing --remove-sent-files rsync@mydomain.fr:/home/pierre/xyz/ /media/xyz --include="*.cfg" --exclude="*"
This is not working (using the daemon): rsync asks me for a pass and then says "@ERROR: auth failed on module xyz", because I haven't configured authentication this way:
rsync -rzP --stats --ignore-existing --remove-sent-files rsync://rsync@mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
This is not working (using the daemon):
rsync -rzP -e "ssh -l rsync" --stats --ignore-existing --remove-sent-files rsync://rsync@mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
Here is the error message:
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [Receiver=3.0.9]
With the -v option to the ssh command, it says the connection is allowed, so I suppose rsync is the problem, not ssh.
Any idea?
Thanks for your help :)
Make sure that you stop and disable the rsync system service. E.g. if you are using systemd: systemctl disable --now rsync.
Remove -l rsync from the rsync command
rsync -rzP -e "ssh" --stats --ignore-existing --remove-sent-files rsync://mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
Remove auth users = rsync from rsyncd.conf
I found that if I was not using root, I had to also add use chroot = no in rsyncd.conf.
Great, it works, but what sort of authentication is used?
The connection is authenticated as usual for the ssh command (specifically, the same as ssh mydomain.fr).
This does not involve the system service rsync. Instead it uses SSH to start and communicate with an instance of rsync --server --daemon .. You can see this command being started if you replace -e "ssh" with -e "ssh -v".
The problem with using the system service rsync is that it does not encrypt the network connection, so the network is able to intercept and modify the data in transit. This somewhat defeats the point of using any authentication.
Often this approach is used with a dedicated SSH key, using the command="" option in authorized_keys to restrict it to rsync only. A side-benefit of doing so is that it overrides the command rsync tries to use, so you can force it to use --config=~/rsyncd.conf instead of creating a global /etc/rsyncd.conf. IMO this is useful to avoid confusion. It is also good practice because if you create the global config file, there is some risk that you will accidentally run the insecure system service. For example, Debian 9 enables the rsync system service by default, and will start it automatically at boot if you have created /etc/rsyncd.conf.
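As a hedged sketch of that pattern (the key material, key type, and config path are placeholders, not from the original answer), the forced-command entry on the server goes on one line of the remote user's ~/.ssh/authorized_keys:
command="rsync --server --daemon --config=/home/pierre/rsyncd.conf .",no-port-forwarding,no-pty,no-agent-forwarding ssh-ed25519 AAAA...key-material... rsync-only-key
The client then connects with -e "ssh -i /path/to/that/key", and whatever command it tries to run is replaced by the forced one.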
https://gist.github.com/trendels/6582e95012f6c7fc6542
https://indico.cern.ch/event/577279/contributions/2354037/attachments/1366772/2071442/Hepsysman-keeping-in-sync.pdf
https://serverfault.com/questions/6367/cant-get-rsync-to-work-in-daemon-over-ssh-mode
Unusual variant using a dedicated user with a custom shell, instead of command="" / ForceCommand, for some reason: http://mennucc1.debian.net/howto-ssh-rsyncd.html
To use rsync daemon without a password, you should remove auth users line from your config file.
uid = rsync
gid = rsync
[yxz]
path = /home/pierre/xyz
read only = false
hosts allow = <myIP>
After starting the daemon, you can refer to the module either using the :: syntax or using the rsync:// prefix, as follows:
rsync -rzv rsync@mydomain.fr::xyz/ /media/xyz
rsync -rzv rsync://rsync@mydomain.fr/xyz/ /media/xyz
More info: man rsyncd.conf

how to ssh / su - by passing the password initially itself?

Does anyone know how to ssh / su - by passing the password initially itself?
Like:
ssh username@hostname -p [password]
pbrun su - unix_owner -p [password]
How can I achieve this?
It shouldn't prompt for a password or any RSA authentication like yes/no.
I think you will probably need a sudoers file to get stuff done in a su like manner without being prompted for a password.
I have never used ssh without a password prompt, but found this which suggests it can be done...
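As a sketch of the sudoers idea (the account names are placeholders, and the file should be edited with visudo):
# /etc/sudoers.d/su-without-password  (edit with: visudo -f /etc/sudoers.d/su-without-password)
someuser ALL=(unix_owner) NOPASSWD: ALL
With that in place, someuser can run sudo -u unix_owner -i (or sudo su - unix_owner) without a password prompt.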
Passing a password in clear text is not intended by ssh.
Try to learn about ssh key authentication (Google will help); you won't need to type your password anymore.
OK, in more detail, try this:
on the remote machine
> mkdir -p ~/.ssh # if necessary
> touch ~/.ssh/authorized_keys2
> chmod go-rwx $HOME/.ssh/authorized_keys2
on your local machine:
> ssh-keygen # if necessary
> cat ~/.ssh/id_rsa.pub | ssh root@remotehost "cat >> .ssh/authorized_keys2 && chmod 0600 ~/.ssh/authorized_keys2"
A better approach would be using SSH keys, as other answers recommend, but if you really need it, you can use expect for that.
Just create a file, say expect.file, like this one:
#!/usr/bin/env expect
set username youruser
set pass yourpassword
set host yourhost
spawn ssh ${username}#${host}
expect -re "password:"
send "${pass}\r"
expect -re "$"
interact
and execute it:
expect expect.file
Can't do it. You're invoking the passwd program on the remote machine. If it had a way to change a password without prompting for the old one, ANYONE could change your password if they got onto your console. You'd still need to pass the password in over the ssh link.
As for SSH, you could use RSA keys, and those won't prompt you for passwords.
As for SU, it would have to be hardcoded or you would have to create your own application to serve as a wrapper of sorts.
I don't think you can pass the password directly to the ssh command (it would be stored in your history otherwise). Why don't you use keys to skip the authentication prompt?