I'm trying to make a rootfs backup from ServerA onto ServerB.
The connection is one-way and is initiated from ServerB using rsnapshot.
I have made a backup account on ServerA and enabled passwordless sudo for rsync only.
What I'm trying to accomplish:
Change the authorized_keys file on ServerA so that only the rsync command can be used via ssh.
On ServerB - /etc/rsnapshot.conf is setup to run rsync with the following args:
rsync_long_args --rsync-path="sudo rsync" --delete --numeric-ids --relative --delete-excluded
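For context, the matching backup line in /etc/rsnapshot.conf would point rsnapshot at the backup account on ServerA; a minimal sketch (the user name and destination directory are assumptions, and rsnapshot requires tabs between fields):

backup	backup@ServerA:/	ServerA/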
I have tried the following on ServerA:
from="ServerB",command="sudo rsync *" ssh-ed25519 SSH-KEY
But rsnapshot keeps failing, with rsync returning I/O error codes.
What am I missing here?
!! Problem Solved !!
Found out about rrsync: it ships with rsync at /usr/share/doc/rsync/scripts/rrsync; copy it to wherever suits you (I put it in /usr/local/bin).
ServerA:authorized_keys --- command="sudo /usr/local/bin/rrsync -ro /backup"
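For reference, the source restriction from the earlier attempt can be combined with the forced command in a single authorized_keys entry; a sketch with a placeholder key (restrict, available since OpenSSH 7.2, additionally disables port forwarding, agent forwarding and PTY allocation):

from="ServerB",command="sudo /usr/local/bin/rrsync -ro /backup",restrict ssh-ed25519 AAAA...SSH-KEY... backup@ServerB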
Since I'm keeping a copy of the backups on ServerA, I might as well rsync from those instead of running rsnapshot on ServerB. (This was my initial idea, but it doesn't work on its own: the hard links that rsnapshot creates show up as duplicate files. I ended up running rsnapshot on both ServerA and ServerB: on ServerA to save backups to a local directory, and on ServerB to pull remote snapshots from ServerA.)
Also changed the sudoers file on ServerA:
Defaults!/usr/local/bin/rrsync env_keep += "SSH_ORIGINAL_COMMAND"
backup ALL = (root) NOPASSWD: /usr/local/bin/rrsync
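As a quick sanity check after editing, visudo can validate the file without applying anything:

sudo visudo -cf /etc/sudoers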
rsync -ax --delete --numeric-ids --relative ServerA:/ /ServerB-backup/
Now works as expected.
Note that the path on ServerA in the command above is relative to the rule set in authorized_keys.
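For example (the snapshot directory name here is hypothetical), with rrsync rooted at /backup on ServerA:

rsync -ax ServerA:/ /ServerB-backup/            # actually transfers ServerA's /backup/
rsync -ax ServerA:/daily.0/ /ServerB-backup/    # would resolve to /backup/daily.0/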
Related
I want to copy files using rsync over ssh, but it fails with "Cannot execute command-line and remote command.", while the same transfer works fine with scp.
Command : rsync -ravh folder XXX:folder
My ssh config is set up as follows:
Host XXX
Hostname YY
User user
RequestTTY yes
RemoteCommand bash --init-file ~/.bashrc
I noticed that by removing the RemoteCommand option, rsync does the job.
How could I make rsync work while keeping my current ssh host config?
Thanks in advance.
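One approach that should work: keep the existing entry for interactive use and add a second Host alias without RemoteCommand just for transfers (host names as in the question, the alias name is made up):

Host XXX-rsync
    Hostname YY
    User user

rsync -ravh folder XXX-rsync:folder

Alternatively, options given on the ssh command line take precedence over ssh_config, so something like rsync -ravh -e "ssh -o RemoteCommand=none" folder XXX:folder may work too; I believe recent OpenSSH accepts the special value none here, but treat that as an assumption and test it (RequestTTY may need the same treatment with -o RequestTTY=no).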
I have a playbook that creates an SSH key on a remote serverA and then copies it over to another remote serverB.
I'm looking for a way to test the SSH connection from serverA to serverB, and then maybe run some command in serverB (for example uname -a) to output it as a debug message that confirms the connection is working.
I've been looking around on the Internet and here as well, but I haven't found anything yet...
Any clue?
A quick approach would be to:
On Ansible's control node, use openssh_keypair to create an SSH keypair. Pay attention to the path, to make sure an existing keypair is not overwritten.
Copy the keypair from Ansible's control node to serverA (make sure you set the right permissions on the files), using the copy module.
Copy the public key of the newly generated keypair from the control node to serverB (again, with the right permissions on the file), and delete the source keypair from the control node.
Now the SSH keypair setup between serverA and serverB is ready; a task sketch follows below.
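A minimal sketch of those steps as tasks (module names as in pre-collections Ansible; the paths, user names and delegation targets are assumptions, not taken from the question):

- name: Generate a keypair on the control node
  openssh_keypair:
    path: ~/.ssh/serverA_to_serverB   # a fresh path, so no existing keypair is overwritten
    type: ed25519
  delegate_to: localhost

- name: Copy the private key to serverA   # play assumed to target serverA
  copy:
    src: ~/.ssh/serverA_to_serverB
    dest: /home/user/.ssh/id_ed25519
    owner: user
    mode: "0600"

- name: Authorize the public key on serverB   # appends to serverB's ~/.ssh/authorized_keys
  authorized_key:
    user: user
    key: "{{ lookup('file', '~/.ssh/serverA_to_serverB.pub') }}"
  delegate_to: serverB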
Run the command module on serverA and register its result, e.g.:
- name: Create variable from command
  command: ssh -o StrictHostKeyChecking=no user@serverB 'some_command'
  register: command_output
Print out the output of the registered result:
- debug: msg="{{ command_output.stdout }}"
Very often I need to copy a file over an ssh connection, let's say a MySQL dump. What I do is:
local $ ssh my_server
server$ mysqldump database >> ~/export.sql
server$ exit
local $ scp my_server:~/export.sql .
I know ssh has a lot of features like ssh-agent, port forwarding, etc., and I was wondering if there is any way to execute scp FROM the server to copy files to my local computer (without opening another ssh connection).
First of all, this question is off-topic here, so it will be migrated or put on hold early.
Anyway, I described the solution to a similar problem here, and it should help you: https://stackoverflow.com/a/33266538/2196426
Summed up, yes it is possible using remote port forwarding:
[local] $ ssh -R 2222:xyz-VirtuaBox:22 remote
[remote]$ scp -P 2222 /home/user/test xyz@localhost:/home/user
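If you need this regularly, the same reverse tunnel can be declared once in ~/.ssh/config on the local machine (host names assumed):

Host remote
    RemoteForward 2222 localhost:22

Then every ssh remote brings the tunnel up, and on the remote side scp -P 2222 file user@localhost:/some/dir copies back to your local machine, provided sshd is listening locally.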
I followed this page on Protecting the Docker daemon Socket with HTTPS to generate ca.pem, server-key.pem, server-cert.pem, key.pem and key-cert.pem.
I wanted a remote Docker daemon to use those keys, so I used rsync via ssh to send three of the files (ca.pem, server-key.pem and key.pem) to the remote host's home directory. The identity file for ssh into the remote host is called dl-datatest-internal.pem.
ubuntu@ip-10-3-1-174:~$ rsync -avz -progress -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:~/
sending incremental file list
./
ca.pem
server-cert.pem
server-key.pem
sent 3,410 bytes received 79 bytes 6,978.00 bytes/sec
total size is 4,242 speedup is 1.22
Ever since, the remote host has stopped recognising the identity file and has started asking for a non-existent password.
ubuntu@ip-10-3-1-174:~$ ssh -i dl-datatest-internal.pem core@10.3.1.151
core@10.3.1.151's password:
Does anyone know why and how to fix it? I still have all the keys if that helps.
There are a couple of things about the rsync command that bother me, but I can't put my finger on the problem (if there is one).
The rsync command and the subsequent ssh command reference different hosts: rsync targets core@10.3.1.181 while ssh connects to core@10.3.1.151. Those are different machines, no?
The ~ in the target of the rsync command, core@10.3.1.181:~/. I am pretty sure that the ~/ references core's home directory, but you could just get rid of the ~/ and replace it with a . (dot).
If you can reproduce the environment you did the copy in, you can add --dry-run to the rsync command to see what it would do (see the sketch below). Looking at this command, I can't see it erasing the target's .ssh directory.
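For instance, a rerun of the original command with --dry-run added would be (note the long --progress spelling here; the single-dash -progress in the original is not the same option, as it gets parsed as a bundle of short flags):

rsync -avz --progress --dry-run -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:~/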
I have a problem while trying to use rsync in daemon mode over an SSH connection.
What I want to do is simply log in with rsync without a password and be able to use the rsync daemon.
Here is my conf file (/etc/rsyncd.conf):
uid = rsync
gid = rsync
[xyz]
path = /home/pierre/xyz
read only = false
auth users = rsync
hosts allow = <myIP>
/home/pierre/xyz has a gid which the rsync user can access.
This is working (but is not using the daemon):
rsync -rzP --stats --ignore-existing --remove-sent-files rsync#mydomain.fr:/home/pierre/xyz/ /media/xyz --include="*.cfg" --exclude="*"
This is not working (using the daemon): rsync asks me for a password and then says "@ERROR: auth failed on module xyz", because I have not configured authentication this way:
rsync -rzP --stats --ignore-existing --remove-sent-files rsync://rsync@mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
This is not working (using the daemon):
rsync -rzP -e "ssh -l rsync" --stats --ignore-existing --remove-sent-files rsync://rsync@mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
Here is the error message:
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(605) [Receiver=3.0.9]
With the -v option added to the ssh command, it says the connection is allowed, so I suppose rsync is the problem, not ssh.
Any idea?
Thanks for your help :)
Make sure that you stop and disable the rsync system service. E.g. if you are using systemd: systemctl disable --now rsync.
Remove -l rsync from the rsync command
rsync -rzP -e "ssh" --stats --ignore-existing --remove-sent-files rsync://mydomain.fr/xyz/ /media/xyz --include="*.cfg" --exclude="*"
Remove auth users = rsync from rsyncd.conf
I found that if I was not using root, I had to also add use chroot = no in rsyncd.conf.
Great, it works! But what sort of authentication is being used?
The connection is authenticated as usual for the ssh command (specifically, the same as ssh mydomain.fr).
This does not involve the system service rsync. Instead it uses SSH to start and communicate with an instance of rsync --server --daemon .. You can see this command being started if you replace -e "ssh" with -e "ssh -v".
The problem with using the rsync system service is that it does not encrypt the network connection, so anyone on the network can intercept and modify the data in transit. This somewhat defeats the point of using any authentication.
Often this approach is used with a dedicated SSH key, using the command="" option in authorized_keys to restrict it to rsync only (a sketch follows below). A side benefit of doing so is that it overrides the command rsync tries to run, so you can force it to use --config=~/rsyncd.conf instead of creating a global /etc/rsyncd.conf, which IMO is useful to avoid confusion. It is also good practice because if you create the global config file, there is some risk that you will accidentally run the insecure system service. For example, Debian 9 enables the rsync system service by default, and will start it automatically at boot if you have created /etc/rsyncd.conf.
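A sketch of such an authorized_keys entry on the server, with placeholder key, user and config path (this mirrors the gist linked below):

command="rsync --server --daemon --config=/home/rsync/rsyncd.conf .",restrict ssh-ed25519 AAAA...key... rsync-client

The client side then stays exactly as in the working command above (rsync -e "ssh" rsync://mydomain.fr/xyz/ ...); whatever command the client asks for is replaced by the forced one.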
https://gist.github.com/trendels/6582e95012f6c7fc6542
https://indico.cern.ch/event/577279/contributions/2354037/attachments/1366772/2071442/Hepsysman-keeping-in-sync.pdf
https://serverfault.com/questions/6367/cant-get-rsync-to-work-in-daemon-over-ssh-mode
Unusual variant using a dedicated user with a custom shell, instead of command="" / ForceCommand, for some reason: http://mennucc1.debian.net/howto-ssh-rsyncd.html
To use the rsync daemon without a password, you should remove the auth users line from your config file:
uid = rsync
gid = rsync
[xyz]
path = /home/pierre/xyz
read only = false
hosts allow = <myIP>
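To start the standalone daemon with this file (rsync --daemon reads /etc/rsyncd.conf by default; pass --config=/path/to/rsyncd.conf for another location):

sudo rsync --daemon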
After starting the daemon, you can refer to the module either using the :: syntax or using the rsync:// prefix, as follows:
rsync -rzv rsync@mydomain.fr::xyz/ /media/xyz
rsync -rzv rsync://rsync@mydomain.fr/xyz/ /media/xyz
More info: man rsyncd.conf