rsync over ssh triggers a "Cannot execute command-line and remote command" error

I want to copy files using rsync over ssh, but a "Cannot execute command-line and remote command." error is raised, while it works fine with scp.
Command: rsync -ravh folder XXX:folder
My ssh config is set up as follows:
Host XXX
Hostname YY
User user
RequestTTY yes
RemoteCommand bash --init-file ~/.bashrc
I noticed that by removing the RemoteCommand option, rsync does the job.
How can I make rsync work while keeping my current ssh host config?
Thanks in advance.
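A common workaround, not from this thread but a sketch assuming your OpenSSH accepts none as a RemoteCommand value (recent releases do), is to override the conflicting options just for the rsync invocation, so the host entry keeps working for interactive logins:
# disable the forced remote command and the TTY request for this transfer only
rsync -ravh -e "ssh -o RemoteCommand=none -o RequestTTY=no" folder XXX:folder
The error appears because ssh refuses to combine the RemoteCommand from the config with the rsync --server command that rsync needs to run on the remote side; if your ssh does not accept RemoteCommand=none, a second Host alias without the RemoteCommand line works as well.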

Related

How to specify RemoteForward in the ssh config file?

I'm trying to set up an ssh tunnel with remote port forwarding. The idea is to have a VPS act as a means to ssh into remotely deployed systems (which currently incorporate a Raspberry Pi). Everything seems to work, but I run into issues when trying to move all arguments into the ~/.ssh/config file.
What does work is setting HostName, User, Port and IdentityFile. However, setting the RemoteForward parameter does not seem to work.
The following works:
ssh -R 5555:localhost:22 ssh-tunnel
However, when using the following in the config file:
Host ssh-tunnel
...
RemoteForward 5555 localhost:22
The following command returns the message "Bad remote forwarding specification 'ssh-tunnel'"
ssh -R ssh-tunnel
Obviously, I found the answer almost immediately after posting the question. The -R flag requires you to give the remote forwarding specification on the command line; since remote forwarding is already set in the config file, you shouldn't pass it to the command at all. One thing is still confusing, though: aside from setting up the tunnel, you also ssh into the remote server. To avoid this, add the -f and -N flags, which results in the following command:
ssh -f -N ssh-tunnel
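For completeness, a minimal sketch of a full host entry (host name, user and key path are placeholders, not taken from the original post):
Host ssh-tunnel
HostName vps.example.com
User pi
Port 22
IdentityFile ~/.ssh/id_rsa
RemoteForward 5555 localhost:22
# open the tunnel in the background (-f) without running a remote command (-N)
ssh -f -N ssh-tunnel
With this in ~/.ssh/config, running ssh -p 5555 localhost on the VPS then reaches the deployed system's sshd through the tunnel.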

Fish shell new function: "ssh: command not found"

I'm trying to create a function that connects me to an ssh server, becomes su, and then sshes into another server, so I did the following:
function test
ssh -t testuser@server1 'sudo ssh -t testuser@server2'
end
When I execute it I get the following error: ssh: command not found
But when I execute the command directly in the terminal it works with no problems.
This sounds like a path issue on server1.
From the command line, what do you see if you type the following?
ssh -t testuser@server1 sudo which ssh
If SSH is not in the path for root, you might need to specify full paths, such as something like:
ssh -t testuser@server1 sudo /usr/bin/ssh testuser@server2
You may need to adjust the paths to match your environment, of course.
Also, if you're trying to connect from root at server1 to server2, can you just ssh directly to root@server1? If so, you could perhaps use the "ProxyJump" functionality that was added in OpenSSH 7.3. This depends on being able to log in remotely as root, which may not be an option, depending on your environment.
ssh -J root@server1 testuser@server2
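If that works, the jump can also be captured in ~/.ssh/config so it does not have to be typed each time. A sketch with a placeholder alias name, again assuming OpenSSH 7.3 or later:
Host server2-via-server1
HostName server2
User testuser
ProxyJump root@server1
Then ssh server2-via-server1 performs both hops in one command.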
My problem was solved when I added each of my functions to the fish functions folder:
~/.config/fish/functions
I just created a file called myfunction.fish and inside of that file I pasted the function definition:
function myfunction
ssh -t testuser@server1 'sudo ssh -t testuser@server2'
end
I saved it, exited fish, and now that function is permanent.

Can't use RSYNC daemon via SSH connection

I have a problem while trying to use RSYNC with daemon and SSH connection.
What I want to do is simply log in to rsync without a password and be able to use the rsync daemon.
Here is my conf file (/etc/rsyncd.conf):
uid = rsync
gid = rsync
[yxz]
path = /home/pierre/xyz
read only = false
auth users = rsync
hosts allow = <myIP>
/home/pierre/xyz has a gid which the rsync user can reach.
This is working (but is not using the daemon):
rsync -rzP --stats --ignore-existing --remove-sent-files rsync@mydomain.fr:/home/pierre/xyz/ /media/xyz --include="*.cfg" --exclude="*"
This is not working (using the daemon): rsync asks me for a password and then says "@ERROR: auth failed on module xyz", because I haven't configured authentication this way:
rsync -rzP --stats --ignore-existing --remove-sent-files rsync://rsync@mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
This is not working (using the daemon):
rsync -rzP -e "ssh -l rsync" --stats --ignore-existing --remove-sent-files rsync://rsync@mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
Here is the error message:
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [Receiver=3.0.9]
With the -v option on the ssh command, it says the connection is allowed, so I suppose rsync is the problem, not ssh.
Any idea?
Thanks for your help :)
Make sure that you stop and disable the rsync system service. E.g. if you are using systemd: systemctl disable --now rsync.
Remove -l rsync from the rsync command
rsync -rzP -e "ssh" --stats --ignore-existing --remove-sent-files rsync://mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
Remove auth users = rsync from rsyncd.conf
I found that if I was not using root, I had to also add use chroot = no in rsyncd.conf.
Great, it works, but what sort of authentication is performed?
The connection is authenticated as usual for the ssh command (specifically, the same as ssh mydomain.fr).
This does not involve the system service rsync. Instead it uses SSH to start and communicate with an instance of rsync --server --daemon .. You can see this command being started if you replace -e "ssh" with -e "ssh -v".
The problem with using the system service rsync is that it does not encrypt the network connection, so the network is able to intercept and modify the data in transit. This somewhat defeats the point of using any authentication.
Often this approach is used with a dedicated SSH key, using the command="" option in authorized_keys to restrict it to rsync only. A side benefit of doing so is that it overrides the command rsync tries to use, so you can force it to use --config=~/rsyncd.conf instead of creating a global /etc/rsyncd.conf, which IMO is useful to avoid confusion. It is also good practice because if you create the global config file, there is some risk that you will accidentally run the insecure system service. For example, Debian 9 enables the rsync system service by default and will start it automatically at boot if you have created /etc/rsyncd.conf.
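As a sketch of that approach (the key type, comment and config path are made up for illustration), the entry in ~/.ssh/authorized_keys on the server could look like:
# force every login with this key to run the rsync daemon with a per-user config
command="rsync --server --daemon --config=/home/rsync/rsyncd.conf .",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... rsync-only-key
The links below describe variations of this setup in more detail.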
https://gist.github.com/trendels/6582e95012f6c7fc6542
https://indico.cern.ch/event/577279/contributions/2354037/attachments/1366772/2071442/Hepsysman-keeping-in-sync.pdf
https://serverfault.com/questions/6367/cant-get-rsync-to-work-in-daemon-over-ssh-mode
Unusual variant using a dedicated user with a custom shell, instead of command="" / ForceCommand, for some reason: http://mennucc1.debian.net/howto-ssh-rsyncd.html
To use the rsync daemon without a password, you should remove the auth users line from your config file.
uid = rsync
gid = rsync
[yxz]
path = /home/pierre/xyz
read only = false
hosts allow = <myIP>
After starting the daemon, you can refer to the module either using the :: syntax or using the rsync:// prefix, as follows:
rsync -rzv rsync@mydomain.fr::xyz/ /media/xyz
rsync -rzv rsync://rsync@mydomain.fr/xyz/ /media/xyz
More info: man rsyncd.conf

rsync remote files over SSH to my local machine, using sudo privileges on local side, and my personal SSH key

I want to sync a directory /var/sites/example.net/ from a remote machine to a directory at the same path on my local machine.
The remote machine only authenticates SSH connections with keys, not passwords.
On my local machine I have an alias set up in ~/.ssh/config so that I can easily run ssh myserver to get in.
I'm trying rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ but it fails because my local user does not have permission to edit the local directory /var/sites/example.net/.
If I try sudo rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ (just adding sudo), I can fix the local permission issue, but then I encounter a different issue -- my local root user does not see the proper ssh key or ssh alias.
Is there a way I can accomplish this file sync by modifying this rsync command? I'd like to avoid changing anything else (e.g. no changes to file perms or ssh setup)
Try this:
sudo rsync -e "sudo -u localuser ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/
This runs rsync as root, but the -e flag causes rsync to run ssh as your local user (using sudo -u localuser), so the ssh command has access to the necessary credentials. Rsync itself is still running as root, so it has the necessary filesystem permissions.
Just improving on top of larsks's response:
sudo rsync -e "sudo -u $USER ssh" ...
So in your case change rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ to sudo rsync -e "sudo -u $USER ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/.
With regard to larsks's answer: if you have your key loaded into the ssh agent, which is my use case, you can instead do:
sudo rsync -e "env SSH_AUTH_SOCK=$SSH_AUTH_SOCK ssh" /source/path /destination/path
Instead of the double sudo.
My use case, if anyone is interested in replicating, is that I'm SSHing to a non-root sudo-er account on remote A, and need to rsync root-owned files between remote A and remote B. Authentication to both remotes is done using keys I have on my real local machine and I use -A to forward the ssh-agent authentication socket to remote A.
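A sketch of that workflow, with hypothetical host names and paths:
# from the real local machine, with the key loaded in ssh-agent, forward the agent to remote A
ssh -A admin@remote-a
# then on remote A, copy root-owned files to remote B using the forwarded agent for authentication
sudo rsync -a -e "env SSH_AUTH_SOCK=$SSH_AUTH_SOCK ssh" /etc/app/ admin@remote-b:/tmp/app/
The $SSH_AUTH_SOCK variable is expanded by the non-root shell before sudo runs, so the ssh started by rsync talks to the forwarded agent even though rsync itself runs as root.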
Guss's answer works well if you want to use sudo rsync for local file permissions but want to utilise your user's SSH session. However, it falls short when you also want to use your SSH config file.
You can follow Wernight's approach by using sudo to switch the user for the SSH connection and supplying a path to the config file, but this won't work if you have to enter a passphrase. So, you can combine both approaches by making use of the --preserve-env flag:
sudo --preserve-env=SSH_AUTH_SOCK rsync -e "sudo --preserve-env=SSH_AUTH_SOCK -u $USER ssh" hostname:/source/path /destination/path
Note that it's necessary to cascade this flag through both sudo commands so it does look a bit messy!
As requested by Derek above:
When sudo asks for a password, you need to modify the sudoers config with sudo visudo and add an entry with NOPASSWD: in front of the rsync command.
For details you could consult man sudoers.
This will work in every mode, even via cron, at, a systemd service + timer, etc.
Test it with: ssh <user>@<your-server> "sudo <your-rsync-command>"
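A sketch of such a sudoers entry on the server (the user name and rsync path are placeholders; always edit with sudo visudo or place a file under /etc/sudoers.d/):
# let the backup user run rsync as root without a password prompt
backup ALL=(root) NOPASSWD: /usr/bin/rsync
With that in place, ssh backup@<your-server> "sudo rsync ..." no longer stops to ask for a password.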

How to make SSH go directly to specific directory?

When you do "ssh second_machine" you connect to second_machine in your home directory.
But usually I am working on my_machine in a directory with a very long path, and I want to connect to second_machine and move to my working directory right away. So every time I have to:
ssh second_machine
cd /very/long/path/to/directory/
Is there a way to make this automatic? (i.e. have ssh go to the desired directory automatically)
This should work for you
ssh -t second_machine "cd /very/long/path/to/directory/; bash"
This assumes you want to run bash; substitute a different shell if required.
To make it permanent, use RemoteCommand in your ~/.ssh/config file, e.g.
Host myhost
HostName IP
User ubuntu
IdentityFile ~/.ssh/id_rsa
RemoteCommand cd /path/to/directory; $SHELL -il
Related:
SSH Config File Alias To Get To a Directory On Server
How can I automatically change directory on ssh login?
Run a remote command using ssh config file
You could do something like what I'm using. Make an alias like the one below.
alias ssh 'ssh -t \!* "cd $PWD; csh"'
(here, csh could also be replaced by bash)
This brings you directly to the 'current' path on the other machine.
The usage would be like: [$] ssh somemachine
However, I find that it works slow. So, I'm looking for an alternative.