How to set up secure communication between nodes for replication? - ssl

I searched for the best way to establish replication between two servers with CouchDB, but I could not find any information on it. The manual at https://docs.couchdb.org/en/stable/setup/cluster.html does not cover this.
I am using a permanent SSH connection between the two servers, authenticated with a key that has no passphrase:
ssh -f -L 127.0.0.1:5985:127.0.0.1:5984 sinccouchdb@100.100.100.100 -N -i id_rsa_sinccouchdb -l sinccouchdb -o ServerAliveInterval=60
But I am not sure this is the best way. Can anyone point me to a better, more secure solution?
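For context, with the tunnel above in place the remote CouchDB is reachable locally on 127.0.0.1:5985, so replication can simply be pointed at that port. A minimal sketch, assuming a local CouchDB on 5984, a database named mydb on both sides, and admin:password credentials (all of these names are illustrative, not from the question):

# Trigger replication locally; the target URL goes through the SSH tunnel.
curl -X POST http://admin:password@127.0.0.1:5984/_replicate \
     -H "Content-Type: application/json" \
     -d '{"source": "mydb", "target": "http://127.0.0.1:5985/mydb", "continuous": true}'

Since the target stays on localhost, the replication traffic itself travels only through the encrypted SSH tunnel.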

Related

Is there a stability advantage to using autossh instead of a while-true loop calling ssh with ServerAliveInterval and ServerAliveCountMax set?

I want to establish a stable ssh tunnel between two machines. I have been using autossh for this in the past. However, the present setup does not allow me to perform local port forwarding (it is disabled in sshd_config on both sides for security reasons). As a consequence, autossh seems to get confused: it cannot set up the double (local and remote) port-forwarding loop it uses to "ping itself", so it appears to reset the ssh tunnel periodically. So I am considering a "pure ssh" solution instead, something like:
while true; do
    echo "start tunnel..."
    ssh -v -o ServerAliveInterval=120 -o ServerAliveCountMax=2 -R remote_port:localhost:local_port user@remote
    echo "ssh returned, there was a problem. sleep a bit and retry..."
    sleep 15
    echo "... ready to retry"
done
My question is: are there guarantees or stability features that I "used to have" with autossh but will not have with the new solution? Anything I should be aware of? Thanks to the two -o options, this solution should detect whether the server is alive and communicating, and restart the tunnel if needed, right?
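One gap worth noting: ServerAliveInterval/ServerAliveCountMax only detect a dead transport; they do not make ssh exit when the remote forwarding itself fails to establish (for example, because remote_port is already bound on the server). A sketch of the same invocation with that case handled, also adding -N since the connection only carries the tunnel (both additions are suggestions, not part of the original setup):

# Exit (so the surrounding loop retries) if the remote port cannot be
# bound, instead of staying connected with a useless tunnel.
ssh -N -o ExitOnForwardFailure=yes \
    -o ServerAliveInterval=120 -o ServerAliveCountMax=2 \
    -R remote_port:localhost:local_port user@remote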

ssh port forwarding with connection sharing

I want to create a port forwarding using ssh's -L option. The problem is that I use connection sharing to the remote host, so depending on whether there is already a connection providing a master, I need either
ssh -O forward -L ... $remotehost
(if there is already a master) or
ssh -N -L ... $remotehost
(if there is not). I could use something like:
if ssh -O check $remotehost 2>/dev/null; then
    ssh -O forward -L ... $remotehost
else
    ssh -N -L ... $remotehost
fi
But this is racy, and from C code it would be easier if there were an option that makes ssh automatically start a master if there is none yet. For "normal" invocations you could use -o "ControlMaster auto", but that doesn't do the right thing here. However, I fail to find such an option in the docs and wonder if I missed something.
So my question is: is there a catch-all command that adds a port forward independent of the connection-sharing settings, and that maybe even works if multiplexing isn't enabled at all?
ssh -N -L ... $remotehost doesn't seem to do anything at all if an already established connection is used. Is this a bug?
(Of course ssh -S none -N -L ... $remotehost works, but the obvious downside is that an existing connection, if there is one, isn't used then.)
Update: it seems this was stale knowledge; ssh -N -L ... $remotehost does the right thing in my setup. It was probably fixed since I last checked this problem for real ... No points for a "well researched question" :-)
Checking the OpenSSH changelog, I didn't find this problem mentioned.
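For reference, connection sharing of this kind is usually configured along the following lines; a minimal ~/.ssh/config sketch (the host alias and socket path are illustrative), under which a plain ssh -N -L ... creates a master if none exists and is otherwise expected to reuse the existing one:

Host example-remote                 # illustrative alias, not from the question
    ControlMaster auto              # become master if none exists, else reuse
    ControlPath ~/.ssh/cm-%r@%h:%p  # socket identifying the shared connection
    ControlPersist 10m              # keep the master alive after clients exit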

Is it possible to copy files over ssh during an active connection?

Very often I need to copy a file from an ssh connection, let's say a MySQL dump. What I do is:
local $ ssh my_server
server$ mysqldump database >> ~/export.sql
server$ exit
local $ scp my_server:~/export.sql .
I know ssh has a lot of features like ssh-agent, port forwarding, etc., and I was wondering if there is any way to execute scp FROM the server to copy a file to my local computer (without creating another ssh connection).
First of all, this question is off-topic here, so it will probably be migrated or put on hold soon.
Anyway, I described the solution to a similar problem here, and it should help you: https://stackoverflow.com/a/33266538/2196426
Summed up: yes, it is possible, using remote port forwarding:
[local] $ ssh -R 2222:xyz-VirtuaBox:22 remote
[remote]$ scp -P 2222 /home/user/test xyz@localhost:/home/user
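If the session is already open, the reverse forward can even be added without reconnecting, via ssh's escape command line (enabled by default; press Enter, then type ~C at the remote prompt). A sketch reusing the same hypothetical ports and paths as above:

# Enter, then ~C opens the "ssh>" prompt of the existing connection;
# add the reverse forward there (localhost resolves on the local side):
ssh> -R 2222:localhost:22
# Back at the remote shell, copy through the new forward:
scp -P 2222 /home/user/test xyz@localhost:/home/user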

Cygrunsrv & autossh: A way to embed remote commands in the command line?

I'm using cygrunsrv and autossh on Windows XP to create a service that builds a tunnel to a remote server, but I also want to create another tunnel from that remote server to another server.
I can achieve this with the following command line:
autossh -M 5432 serverA -t 'autossh -M 4321 serverB -N'
but when I set it up in Cygwin through cygrunsrv to make it work as a service:
cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_NTSERVICE=yes -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30
it's not fully working. The service creates the tunnel to serverA correctly, but it does not send the autossh command "autossh -M 4321 serverB -N" to serverA.
I tried escaping the quotes, but none of my attempts made any difference, and I don't see any command being sent in the autossh logs.
I think the problem is related to the pseudo-terminal, which is not created when running through cygrunsrv.
I'd like to know if there's a way to fix my cygrunsrv command line to make this work, or should I consider a different approach?
Lionel, try removing the AUTOSSH_NTSERVICE=yes from the cygrunsrv invocation. As /usr/share/doc/autossh/README.Cygwin explains:
Setting AUTOSSH_NTSERVICE=yes in the calling environment ... change[s] autossh's behavior in three useful ways:
(1) Add an -N flag to each invocation of ssh, thus disabling shell access. The idea is that if you're running autossh as a system service, you're using it to forward ports; it wouldn't make sense to run a shell session as a system service. (If you think this reasoning is wrong, please send a bug report to the author or Cygwin maintainer, and tell us what you're trying to do.)
Despite what the above says, it seems that you have a good reason for not wanting -N (which suppresses command execution) in your service's ssh invocation. Removing AUTOSSH_NTSERVICE=yes should take care of it. It will have a couple of other minor disadvantages, but you can probably live with them. Read the rest of README.Cygwin for the details.
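Concretely, that would make the invocation the following (identical to yours except for dropping the one environment variable; an untested sketch):

cygrunsrv -I TUNNEL -p /usr/bin/autossh \
    -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" \
    -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30

With AUTOSSH_NTSERVICE=yes gone, autossh no longer forces -N onto the outer ssh, so the -t '...' remote command has a chance to run.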

Disabling unidentified host confirmation when connecting to Amazon EC2 instances using SSH

I am writing a script using boto and Python to automatically launch an Amazon EC2 instance and interact with it using SSH. Everything works fine except that every time I establish the connection, SSH prompts me to confirm the authenticity of the host like this:
The authenticity of host 'ec2-174-129-121-25.compute-1.amazonaws.com (174.129.121.25)' can't be established.
RSA key fingerprint is 26:09:bd:21:4f:55:20:3f:0d:fc:5f:cc:3e:08:30:db.
Are you sure you want to continue connecting (yes/no)?
My SSH command is:
ssh -i ssh2.pem root@ec2-174-129-121-25.compute-1.amazonaws.com
Since every EC2 instance is a new host, I have to confirm this every time, but I want an automatic script without any user input. What is the best solution?
Use -o StrictHostKeyChecking=no and, optionally, set UserKnownHostsFile to /dev/null (if you want to be totally insecure about things). But remember, you're bypassing security measures meant to protect you!
Edit: and probably CheckHostIP=no too. See man ssh for all the gory bits.
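Put together with the command from the question, that gives something like the following (key and hostname taken from the question; as noted above, this disables host authentication entirely, so it is only appropriate for throwaway instances):

ssh -i ssh2.pem \
    -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    -o CheckHostIP=no \
    root@ec2-174-129-121-25.compute-1.amazonaws.com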
For PuTTY on Windows you can use:
echo y | plink -pw yourpassword root@yourservername.com