Test SSH connection between Avi Vantage Controller and Service Engine Host - load-balancing

The Avi docs say to add an SSH public key to the known_hosts file on the SE hosts so the controller can log in, then install and start the Service Engine on the host.
I'm pretty sure this isn't working properly. How can I test the ssh connection between the controller and the service engine host(s)? Where is the controller's private key stored?

We will automatically test the SSH connection and display the status as appropriate. For security reasons, the configured private key is not stored in plain-text form anywhere on the file system.
Did you "create" a ssh key or "import" a ssh key - if you imported, you could use plain ssh -i <path-to-imported-private-key user#host from your workstation where the private key resides.
Refer to @Aziz's comment for details on the host status display. Also note the correction about authorized_keys (not authorized_hosts).

I am guessing this is in reference to creating a "LinuxServer" Cloud in Avi. On Avi, you have to do the following:
1) Configure an SSHUser (Administration > Settings > SSH Key Settings). Alternatively, this can also be created from the UI during LinuxServer cloud creation.
2) Create the LinuxServer cloud (Infrastructure > Clouds) with appropriate hosts and select the SSHUser from the dropdown.
The SSH keys you configure are stored encrypted in the Avi Controller DB and are not exposed via the API/REST or on the file system. The Avi Controller modules use the decrypted key to connect to each host and provision the SE.
I suppose the docs are not clear: you don't add the Avi Controller's public key to each host; instead you add "your" custom SSH key pair to the Avi Controller (via step 1 above) and add the corresponding public key on each host.
With regards to "testing" the SSH connection, since these are keys you own, you can simply run ssh -i <path-to-private-key> username@host to test the SSH. Alternatively, the Cloud status will also tell you if SSH with the configured key failed for any reason.
Please refer: http://kb.avinetworks.com/installing-avi-vantage-for-a-linux-server-cloud/ for complete install guide.
Let me know if your question was related to a different Cloud/Topic.

Adding to what @Siva explained, the status of the connection is displayed on the controller's cloud page (from the menu Infrastructure -> Clouds, click on the cloud where the hosts are added). Also, if you hover the mouse over the State column of a host, you can see the detailed reason for a failure.
The host status is shown for the Linux server cloud. In this case "Default-Cloud" is a Linux server cloud with 3 hosts, and SSH fails on one of them. The host 10.10.99.199 is a fake entry, i.e. there is no host with that IP, hence SSH fails; whereas 10.10.22.71 and 10.10.22.35 are hosts for which the SSH credentials passed, the Service Engine was deployed on them, and they are ready for Virtual Services (load balancing, SSL termination, etc.) to be placed on them.
@Davidn Coleman, in the comment you mentioned that you added the public key to authorized_hosts (you need to add the key to authorized_keys). Also, if the user for whom you added the SSH authorization is not root (i.e. the key is in /home/user/.ssh/authorized_keys), make sure that user is a sudoer (add an entry in /etc/sudoers for the user) and that the permissions on the .ssh directory and authorized_keys are set correctly (for security reasons and good practice).
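If the key lives under a non-root user, a typical way to tighten the permissions and grant sudo is the sketch below (run as root; "user" is a placeholder, and the NOPASSWD sudoers entry is an assumption about your policy rather than something stated above):
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys
chown -R user:user /home/user/.ssh
echo 'user ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/90-avi-user   # or edit with visudo
chmod 440 /etc/sudoers.d/90-avi-user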
The following is the snippet for the host 10.10.22.35.
[root@localhost ~]# ls -lrtha
total 318M
-rw-r--r--. 1 root root 129 Dec 28 2013 .tcshrc
-rw-r--r--. 1 root root 100 Dec 28 2013 .cshrc
-rw-r--r--. 1 root root 176 Dec 28 2013 .bashrc
-rw-r--r--. 1 root root 176 Dec 28 2013 .bash_profile
-rw-r--r--. 1 root root 18 Dec 28 2013 .bash_logout
-rw-------. 1 root root 1.2K May 27 13:56 anaconda-ks.cfg
drwxr-xr-x. 3 root root 17 May 27 14:07 .cache
drwxr-xr-x. 3 root root 17 May 27 14:07 .config
dr-xr-xr-x. 17 root root 4.0K May 31 08:15 ..
drwxr-----. 3 root root 18 May 31 08:25 .pki
-rw-------. 1 root root 1.9K May 31 08:46 .viminfo
drwx------. 2 root root 28 May 31 09:09 .ssh
-rw-r--r--. 1 root root 317M May 31 09:13 se_docker.tgz
-rw-r--r--. 1 root root 1.2M May 31 09:13 dpdk_klms.tar.gz
dr-xr-x---. 6 root root 4.0K May 31 09:14 .
-rw-r--r--. 1 root root 1.1K May 31 09:14 avise.service
-rw-------. 1 root root 3.4K Jun 1 09:14 .bash_history
[root@localhost ~]# ls -lrtha .ssh/
total 8.0K
-rw-r--r--. 1 root root 399 May 31 09:09 authorized_keys
drwx------. 2 root root 28 May 31 09:09 .
dr-xr-x---. 6 root root 4.0K May 31 09:14 ..
[root@localhost ~]# pwd
/root

Related

SSL certificates not loaded on container startup

I have a container from where I am trying to reach an HTTPS URL using:
curl -v https://myserver:7050
The SSL issuer certificate of the server is placed on the VM where I run the container in /etc/ssl/certs. This VM location is volume mapped to /etc/ssl/certs of the container. This means the cert should be available to the container.
However, when I issue the curl command, I get a message saying "unable to get issuer certificate".
Then I need to run
update-ca-certificates --refresh
After this the curl command succeeds.
If I am starting the container with a volume map, why am I required to run the update-ca-certificates command? Shouldn't the container already have all the certs in its cache when it starts up?
Regards
Yash
The files in /etc/ssl/certs are symlinks to other files. If you mount a folder containing symlinks, the container will try to load the files they link to, which probably don't exist inside your container.
You will need to mount the original file locations too.
lrwxrwxrwx. 1 root root 49 Jul 19 06:51 ca-bundle.crt -> /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
lrwxrwxrwx. 1 root root 55 Jul 19 06:51 ca-bundle.trust.crt -> /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt
Or you can mount the original files individually into the container's /etc/ssl/certs.
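For example (the image name is a placeholder; the point is simply to make the symlink targets from the listing above visible inside the container):
docker run --rm \
  -v /etc/ssl/certs:/etc/ssl/certs:ro \
  -v /etc/pki/ca-trust:/etc/pki/ca-trust:ro \
  myimage curl -v https://myserver:7050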

How to authenticate an LDAP user and log in to the server via GUI (it should log in on the server directly via GUI)

I am new to system administration. My problem is: in my department there are 30 students in 1st year and 30 students in 2nd year, divided into two groups, say group1 and group2, who need to log in as LDAP users via the Ubuntu 14.04 GUI from any system connected to the LAN. Every user's home directory should be created on the server side and mounted on GUI login in Ubuntu 14.04, and no user should be able to access anyone else's home directory except their own.
[I don't want to authenticate the user against the LDAP server and create the home directory on the local machine; instead I want a central directory on the server side, so it looks like logging in to the server.]
Server side: Ubuntu 14.04
I tried this and it works fine for me.
Client side: Ubuntu 14.04
I tried this, and it also works,
but the issue is that this tutorial creates the home directory on the local machine instead of mounting the server-side directory. I know where it does this.
What I want: when I log in as an LDAP user, the GUI login should use the home directory on the server, not a home directory on the local machine.
On the client side, the file /var/log/auth.log shows:
Jul 28 11:53:06 issc systemd-logind[674]: System is rebooting.
Jul 28 11:53:23 issc systemd-logind[650]: New seat seat0.
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event4 (Video Bus)
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 28 11:53:24 issc sshd[833]: Server listening on 0.0.0.0 port 22.
Jul 28 11:53:24 issc sshd[833]: Server listening on :: port 22.
Jul 28 11:53:25 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Jul 28 11:53:25 issc lightdm: PAM adding faulty module: pam_kwallet.so
Jul 28 11:53:25 issc lightdm: pam_unix(lightdm-greeter:session): session opened for user lightdm by (uid=0)
Jul 28 11:53:25 issc systemd-logind[650]: New session c1 of user lightdm.
Jul 28 11:53:25 issc systemd-logind[650]: Linked /tmp/.X11-unix/X0 to /run/user/112/X11-display.
Jul 28 11:53:26 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Jul 28 11:53:26 issc lightdm: PAM adding faulty module: pam_kwallet.so
Jul 28 11:53:26 issc lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "scicomp"
Jul 28 11:53:29 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Please help me; I tried many tutorials online and every tutorial looks the same, like this one. I have been trying for the last 2 weeks and it is not working. Thank you for your time.
You need to install and configure autofs for this to work. autofs will automatically mount users' home directories on the client machine from an NFS server. I'm not sure about creating them on the server on the fly, but if that does work, you will likely need to enable the pam_mkhomedir module in the appropriate /etc/pam.d file(s), as described here.
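For reference, a minimal sketch of that PAM change on Ubuntu (the exact file and options may differ on your setup) is a single line in /etc/pam.d/common-session:
session required pam_mkhomedir.so skel=/etc/skel umask=0077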
Yep! I tried it and it worked for me.
**Server Side:** Package required to install:
$ sudo apt-get install nfs-kernel-server
Update the exports file as below:
abdulrahim@issc-ldap:/ldap/batch2016part2$ sudo vi /etc/exports
#/homes 198.1.10.*(fsid=0,rw,insecure,no_subtree_check,sync)
/ldap/batch2015part1/home 198.1.10.*(fsid=1,rw,insecure,no_subtree_check,sync)
/ldap/batch2015part2/home 198.1.10.*(fsid=2,rw,insecure,no_subtree_check,sync)
Export the shares as below:
abdulrahim@issc-ldap:/ldap/batch2016part2$ sudo exportfs -r
root@issc-ldap:/ldap/rnd# showmount -e 198.1.10.45
Export list for 198.1.10.45:
/ldap/batch2015part1/home
/ldap/batch2015part2/home
**On Client Side:** Package required to install:
$ sudo apt-get install nfs-common
Now, on the client side, configure the mount, permissions, and ownership:
$ sudo gedit /etc/fstab
# Below are the partitions mounted from the server
198.1.10.45:/ldap/batch2015part1/home /ldap/batch2015part1/home nfs nfsvers=3,sync 0 3
198.1.10.45:/ldap/batch2015part2/home /ldap/batch2015part2/home nfs nfsvers=3,sync 0 4
# Or like this:
198.1.10.45:/ldap/batch2015part1/home /ldap/batch2015part1/home nfs noauto,x-systemd.automount 0 3
198.1.10.45:/ldap/batch2015part2/home /ldap/batch2015part2/home nfs noauto,x-systemd.automount 0 4
Now mount all partitions from the server side as below:
$ sudo mount -a
Check the mounted partitions with the command below:
$ df -h

Unable to negotiate with XX.XXX.XX.XX: no matching host key type found. Their offer: ssh-dss

I am trying to create a git repository on my web host and clone it on my computer. Here's what I did:
I created a repository on the remote server.
I generated a key pair: ssh-keygen -t dsa.
I added my key to ssh-agent.
I copied the public key to the server, into ~/.ssh.
And then, after an attempt to run the command git clone ssh://user@host/path-to-repository, I get an error:
Unable to negotiate with XX.XXX.XX.XX: no matching host key type found. Their offer: ssh-dss
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
What does that mean?
Recent OpenSSH versions deprecate DSA keys by default. You should suggest to your Git provider that they add a more reasonable host key type; relying only on DSA is not a good idea.
As a workaround, you need to tell your SSH client that you are willing to accept DSA host keys, as described in the official documentation for legacy usage. You have a few possibilities, but I recommend adding these lines to your ~/.ssh/config file:
Host your-remote-host
HostkeyAlgorithms +ssh-dss
Another possibility is to use the environment variable GIT_SSH_COMMAND to specify these options:
GIT_SSH_COMMAND="ssh -oHostKeyAlgorithms=+ssh-dss" git clone ssh://user@host/path-to-repository
You can also add -oHostKeyAlgorithms=+ssh-dss in your ssh line:
ssh -oHostKeyAlgorithms=+ssh-dss user@host
For me this worked (added into .ssh\config):
Host *
HostkeyAlgorithms +ssh-dss
PubkeyAcceptedKeyTypes +ssh-dss
If you would like to contain this security hole to a single repo, you can add a config option to any Git repos that need this by running this command in those repos. (Note: only works with git version >= 2.10, released 2016-09-04)
git config core.sshCommand 'ssh -oHostKeyAlgorithms=+ssh-dss'
This only works after the repo is setup, however. If you're not comfortable adding a remote manually (and just want to clone), then you can run the clone like this:
GIT_SSH_COMMAND='ssh -oHostKeyAlgorithms=+ssh-dss' git clone ssh://user@host/path-to-repository
Then run the first command to make it permanent.
If you don't have the latest Git and still would like to keep the hole as local as possible, I recommend putting
export GIT_SSH_COMMAND='ssh -oHostKeyAlgorithms=+ssh-dss'
in a file somewhere, say git_ssh_allow_dsa_keys.sh, and sourcing it when needed.
I want to contribute a little with a solution for the server side. The error means the server only offers a DSA host key, which the OpenSSH client does not enable by default:
OpenSSH 7.0 and greater similarly disable the ssh-dss (DSA) public key algorithm. It too is weak and we recommend against its use.
So, to fix this on the server side, you should enable other host key algorithms such as RSA or ECDSA. I just had this problem with a server on a LAN.
I suggest the following:
Update the openssh:
yum update openssh-server
Merge the new configuration into sshd_config if an sshd_config.rpmnew was created.
Verify that there are host keys in /etc/ssh/. If not, generate new ones; see man ssh-keygen (a sketch follows after the listing below).
$ ll /etc/ssh/
total 580
-rw-r--r--. 1 root root 553185 Mar 3 2017 moduli
-rw-r--r--. 1 root root 1874 Mar 3 2017 ssh_config
drwxr-xr-x. 2 root root 4096 Apr 17 17:56 ssh_config.d
-rw-------. 1 root root 3887 Mar 3 2017 sshd_config
-rw-r-----. 1 root ssh_keys 227 Aug 30 15:33 ssh_host_ecdsa_key
-rw-r--r--. 1 root root 162 Aug 30 15:33 ssh_host_ecdsa_key.pub
-rw-r-----. 1 root ssh_keys 387 Aug 30 15:33 ssh_host_ed25519_key
-rw-r--r--. 1 root root 82 Aug 30 15:33 ssh_host_ed25519_key.pub
-rw-r-----. 1 root ssh_keys 1675 Aug 30 15:33 ssh_host_rsa_key
-rw-r--r--. 1 root root 382 Aug 30 15:33 ssh_host_rsa_key.pub
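If any of the standard host keys are missing, one way to regenerate the full default set in one go (this exact command is not part of the original steps, but it is standard OpenSSH) is:
$ sudo ssh-keygen -A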
Verify the HostKey configuration in /etc/ssh/sshd_config. It should allow RSA and ECDSA keys. (If all of the HostKey lines are commented out, the defaults still include RSA; see the HostKey section in man sshd_config.)
# HostKey for protocol version 1
#HostKey /etc/ssh/ssh_host_key
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
For the client side, create an SSH key (not a DSA one like in the question) by simply doing this:
ssh-keygen
After this, because there are options other than ssh-dss (DSA), the OpenSSH client (>= v7) should connect with RSA or a better algorithm.
Here is another good article.
This is my first question answered; I welcome suggestions :D
In my case for bitbucket, the following worked.
Host yourhost (ex: bitbucket.com)
User git
PubkeyAcceptedAlgorithms +ssh-rsa
HostkeyAlgorithms +ssh-rsa
Add the following to ~/.ssh/config (vi ~/.ssh/config):
Host YOUR_HOST_NAME
HostkeyAlgorithms ssh-dss
In my case error was:
Unable to negotiate with IP_ADDRESS port 22: no matching host key
type found. Their offer: ssh-rsa,ssh-dss
In this case, the following in ~/.ssh/config helps:
Host YOUR_HOST_NAME
HostKeyAlgorithms ssh-dss
PubkeyAcceptedKeyTypes ssh-rsa
So the host key algorithm is ssh-dss and the public key type is ssh-rsa.
And then I can use ssh to this host normally (without any flags).
You can either follow the approach above or this one:
Create the config file in the .ssh directory and add these lines:
host xxx.xxx
Hostname xxx.xxx
IdentityFile ~/.ssh/id_rsa
User xxx
KexAlgorithms +diffie-hellman-group1-sha1

gerrit ssh login issue

I am having an issue with my ssh login to gerrit. When I use one key file it works, but with the other it does not.
ssh gerrit_admin@<host> -p 29418 -i ~/.ssh/project/prod_rsa
**** Welcome to Gerrit Code Review ****
.....
ssh gerrit_admin@<host> -p 29418 -i ~/.ssh/id_rsa
Permission denied (publickey).
Now the issue seems obvious: one key is on the server and one is not. However, both of these key files are identical. Not just copies, but hard links, meaning they both point to the exact same blocks on disk.
ls -il ~/.ssh/id_rsa ~/.ssh/project/prod_rsa
7603695 -rw------- 2 nellis nellis 1693 Jun 23 13:22 /home/nellis/.ssh/id_rsa
7603695 -rw------- 2 nellis nellis 1693 Jun 23 13:22 /home/nellis/.ssh/project/prod_rsa
Why do these "two" keys which are very much the same produce different reseults?
Not sure exactly what the problem here was, but I ended up just ditching the keys and importing new ones via the web interface.
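For anyone hitting something similar, a common first debugging step (not part of the fix above) is to run both invocations with verbose output and compare which identities the client actually offers and which the server rejects:
ssh -vvv gerrit_admin@<host> -p 29418 -i ~/.ssh/project/prod_rsa
ssh -vvv gerrit_admin@<host> -p 29418 -i ~/.ssh/id_rsa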

How do you automatically set permissions on a file when it is uploaded using SFTP

Hello, I currently have a folder set up that files can be uploaded to using SFTP.
drwxrwxr-x. 2 cypress cypress 4096 Apr 30 15:24 sourceit
But when a file gets uploaded, it gets uploaded as
-rw-r--r--. 1 cypress sftpusrs 7 Apr 30 15:24 test.file
What do I have to do to set it up so that, when a file gets uploaded, its permissions are automatically set to
drwxrwxr-x. 1 cypress sftpusrs 7 Apr 30 15:24 test.file
Thank you for your help.
I currently have everything set up in the OpenSSH sshd_config for SFTP:
Match user cypress
ChrootDirectory /mnt/cypress
AllowTCPForwarding no
X11Forwarding no
ForceCommand internal-sftp
Modify or add this line in your sshd_config:
ForceCommand internal-sftp -u 2
which should apply a umask of 002.
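Merged with the Match block from the question, the whole thing would look roughly like this (a sketch; only the -u flag is new, and -u 002 is equivalent to -u 2 as an octal umask):
Match user cypress
ChrootDirectory /mnt/cypress
AllowTCPForwarding no
X11Forwarding no
ForceCommand internal-sftp -u 002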
With umask and SFTP there is no way to automatically make a file executable; that would be a huge security risk. You must run chmod as a separate command to do that.