Control where the SSH_AUTH_SOCK agent socket gets created - ssh

I run an ssh-agent, which works fine in most cases for ssh'ing to some central host and from there to others.
I have one case, the ssh connection to my ISP, where
ssh -At user@host
creates these environment variables on the target host:
$ env | grep SSH
SSH_CONNECTION=132.174.172.2 51585 178.254.11.41 22
SSH_AUTH_SOCK=/tmp/ssh-2NIOsqUvTc/agent.30537
SSH_CLIENT=132.174.172.2 51585 22
SSH_TTY=/dev/pts/2
$ ls -l /tmp/ssh-2NIOsqUvTc/agent.30537
ls: cannot access '/tmp/ssh-2NIOsqUvTc/agent.30537': No such file or directory
$ ls -l /tmp
total 0
Perhaps the provider puts my session on that host into a chroot'ed environment without access to the socket file the ssh daemon created for me...
The question is: can I somehow control where this socket is created, for example somewhere in my $HOME?
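As far as I know sshd offers no option for where it places that forwarded agent socket, but one possible workaround (a sketch, not tested against this ISP setup, assuming OpenSSH 6.7 or newer on both ends) is to skip -A and forward the agent socket yourself to a path under your remote $HOME with a Unix-socket remote forward:
ssh -o StreamLocalBindUnlink=yes \
    -R /home/user/.ssh/agent.sock:"$SSH_AUTH_SOCK" user@host
export SSH_AUTH_SOCK=/home/user/.ssh/agent.sock   # run this inside the remote session
The remote path /home/user/.ssh/agent.sock is only an example; any writable location on the target works.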

Multiple jumps ssh tunnel, one command line

I'm currently connecting my local machine to the target by running commands on my local machine (mobaxterm), on pivotone, and on pivottwo. This is the flow of data:
mobaxterm <--- pivotone <--- pivottwo <--- target
These are the commands that I run on each machine:
local(mobaxterm)
ssh -L 5601:127.0.0.1:5601 root@pivotone
pivotone
ssh -L 5601:127.0.0.1:5601 root@pivottwo
pivottwo
ssh -L 5601:127.0.0.1:5601 root@target
I was wondering if I could do the same with just one command on my mobaxterm machine?
You don't need the -L option to manage jump hosts.
ssh -J root@pivotone,root@pivottwo root@target
You can automate this in your .ssh/config file
Host target
ProxyJump root@pivotone,root@pivottwo
Then you can simply run
ssh root@target
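If you also want the port 5601 forwarding from the original setup, the jump and the forward can be combined in one command (a sketch, assuming OpenSSH 7.3+ for -J/ProxyJump; users and hosts as in the question):
ssh -J root@pivotone,root@pivottwo -L 5601:127.0.0.1:5601 root@target
or, equivalently, in .ssh/config:
Host target
ProxyJump root@pivotone,root@pivottwo
LocalForward 5601 127.0.0.1:5601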

SSH to remote server refused if done via GitLab CI

We have a RHEL 7 remote server where I created a dummy user called gitlabci.
While SSH'd into the remote server, I generated a public-private key pair (for use when grabbing files from GitLab)
Uploaded the public key as a deploy key for use later when we get our CI set up
Generated another public-private key pair in my local machine (for use when SSH'ing into the remote server from the GitLab Runner)
Added the public key to the remote server's authorized_keys
Added the private key to the project's CI environment variables
The idea is that when the CI runs, the GitLab runner will SSH into the remote server as the gitlabci user I created and then fetch the branch into the web directory using the deploy key.
I thought I had set up the keys properly, but whenever the runner tries to SSH, the connection gets refused.
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )
...
$ eval $(ssh-agent -s)
Agent pid 457
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
Identity added: (stdin) (GitLab CI)
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ ssh gitlabci@random.server.com
Pseudo-terminal will not be allocated because stdin is not a terminal.
ssh: connect to host random.server.com port 22: Connection refused
ERROR: Job failed: exit code 1
When I tried to SSH into the remote server via Git Bash on my local machine using the key pair I generated, it did work.
$ ssh -i ~/.ssh/gitlabci gitlabci@random.server.com
Last login: Mon Nov 4 13:49:59 2019 from machine01.work.server.com
"Connection refused" means that the ssh client transmitted a connection request to the named host and port, and it received in response a so-called "reset" packet, indicating that the remote server was refusing to accept the connection.
If you can connect to random.server.com from one host but get connection refused from another host, a few possible explanations come to mind:
You might have an entry in your .ssh/config file which substitutes a different name or address for random.server.com. For example, an entry like the following would cause ssh to connect to random2.server.com when you request random.server.com:
Host random.server.com
Hostname random2.server.com
The IP address lookup for "random.server.com" is returning the wrong address somehow, so ssh is trying to connect to the wrong server. For example, someone might have added an entry to /etc/hosts for that hostname.
Some firewall or other packet inspection software is interfering with the connection attempt by responding with a fake reset packet.
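To tell which of these applies, a few quick checks from inside the CI job may help (a sketch; ssh -G needs OpenSSH 6.8+, and nc may not be present in the runner image):
ssh -G gitlabci@random.server.com | grep -E '^(hostname|port) '   # name and port ssh will actually use
getent hosts random.server.com                                    # address the resolver returns
nc -vz random.server.com 22                                       # does anything answer on port 22?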

ssh not expanding ~ correctly?

So for permission reasons, I have had to change my default home directory to a non-standard location.
I did export HOME=/non/standard/home and then confirmed this was working with
$ cd ~
$ pwd
/non/standard/home
Even though man ssh says that it looks in ~/.ssh for keys and identity files, it doesn't seem to:
$ ls ~/.ssh
cluster_key cluster_key.pub config
$ ssh host
Could not create directory '/home/myname/.ssh'.
The authenticity of host 'host (<ip address deleted>)' can't be established.
RSA key fingerprint is <fingerprint deleted>.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/myname/.ssh/known_hosts).
Permission denied (publickey,gssapi-with-mic).
Why does it insist on looking in /home/myname? The man page states that it consults the HOME environment variable. Using the -F option also fails to work.
$ ssh -version
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
Bad escape character 'rsion'.
When you run "export" command you actually affect only your process of BASH/SH. When .ssh looks for it it has it's own instance and thus looks in the default directory. You need to run the command usermod -m -d /path/to/new/home/dir userNameHere (change the user that .ssh uses, probably admin)

oneadmin opennebula ssh localhost

We've been trying to use OpenNebula to simulate a cluster, but ssh is driving us crazy.
For some still unknown reason, it is necessary that the user oneadmin (created by OpenNebula) is able to ssh to localhost. The home directory of oneadmin (created by OpenNebula) is /var/lib/one, and inside it we find the .ssh directory. So here's what I've done up to now:
sudo -su oneadmin
oneadmin@pc:$ cd /var/lib/one/.ssh
oneadmin@pc:/var/lib/one/.ssh$ ssh-keygen -t rsa
oneadmin@pc:/var/lib/one/.ssh$ cat id_rsa.pub >> authorized_keys
Moreover, I've changed all the permissions: all files and directories have oneadmin as owner and mode 600 (as I read in the OpenNebula guide),
and finally, as root, I run
service ssh restart
Then I log in from a terminal as oneadmin again, but when I run:
ssh oneadmin@localhost
here's what I get
Permission denied (publickey).
Where am I making this damned mistake? We've lost more than a day on all these permissions!
I've just run into a similar problem - it turns out OpenNebula didn't get on with SELinux.
Finally found the solution over here - http://n40lab.wordpress.com/2012/11/26/69/ - we need to restore the SELinux context on ~/.ssh/authorized_keys:
$ chcon -v --type=ssh_home_t /var/lib/one/.ssh/authorized_keys
$ semanage fcontext -a -t ssh_home_t /var/lib/one/.ssh/authorized_keys
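After the semanage rule is recorded, restorecon can (re)apply it, and ls -Z shows whether the label took (a sketch; assumes the policycoreutils tools are installed):
$ restorecon -Rv /var/lib/one/.ssh
$ ls -Z /var/lib/one/.ssh/authorized_keys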

How do I setup passwordless ssh on AWS

How do I set up passwordless ssh between nodes on an AWS cluster?
The following steps to set up passwordless authentication have been tested thoroughly on CentOS and Ubuntu.
Assumptions:
You already have access to your EC2 machine, either using the .pem key or with credentials for a Unix user that has root permissions.
You have already set up RSA keys on your local machine. The private key and public key are available at "~/.ssh/id_rsa" and "~/.ssh/id_rsa.pub" respectively.
Steps:
Log in to your EC2 machine as the root user.
Create a new user
useradd -m <yourname>
sudo su <yourname>
cd
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
Append the contents of ~/.ssh/id_rsa.pub from your local machine to ~/.ssh/authorized_keys on the EC2 machine (see the one-liner sketched below, after the login test).
chmod -R 700 ~/.ssh
chmod 600 ~/.ssh/*
Make sure the machine permits ssh logins. In /etc/ssh/sshd_config, make sure the line containing "PasswordAuthentication yes" is uncommented. Restart the sshd service if you change anything in this file:
service sshd restart # On Centos
service ssh restart # On Ubuntu
Your passwordless login should work now. Try the following on your local machine:
ssh -A <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
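The public-key append mentioned in the steps can be done in one command from your local machine (a sketch; ~/mykey.pem and the ec2-user login are placeholders for your own .pem file and the instance's default user, and it assumes that user has sudo rights):
# ~/mykey.pem and ec2-user are placeholders, not values from this answer
cat ~/.ssh/id_rsa.pub | ssh -i ~/mykey.pem ec2-user@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com \
  "sudo tee -a /home/<yourname>/.ssh/authorized_keys > /dev/null"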
Making yourself a superuser: open /etc/sudoers and make sure the following two lines are uncommented:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Add yourself to the wheel group:
usermod -aG wheel <yourname>
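A quick check that the wheel membership and the NOPASSWD line took effect (a sketch; sudo -n fails instead of prompting if a password would still be required):
su - <yourname> -c 'sudo -n true && echo "passwordless sudo works"'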
This may help someone.
Copy the .pem file onto the machine, then copy the contents of the .pem file into ~/.ssh/id_rsa. You can use the command below, or your own:
cat my.pem > ~/.ssh/id_rsa
Try ssh localhost; it should work, and the same goes for the other machines in the cluster.
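One caveat worth adding: ssh ignores private keys that other users can read, so tighten the permissions after copying (a small sketch of the same idea):
chmod 600 ~/.ssh/id_rsa   # ssh refuses to use a private key with looser permissions
ssh localhost             # should now log in without a password prompt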
How I made passwordless ssh work between two instances is the following:
Create EC2 instances – they should be in the same subnet and have the same security group.
Open ports between them – make sure the instances can communicate with each other. Use the default security group, which has one rule relevant for this case:
Type: All Traffic
Source: Custom – id of the security group
Log in to the instance from which you want to connect to the other instance.
Run:
ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
to generate a new RSA key.
Copy your private AWS key as ~/.ssh/my.key (or whatever name you want to use)
Make sure you change the permissions to 600:
chmod 600 .ssh/my.key
Copy the public key to the instance you wish to connect to passwordlessly:
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"
If you test the passwordless ssh to the other machine, it should work.
ssh 10.0.0.X
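To confirm the login is really key-based rather than silently falling back to a password prompt, BatchMode makes ssh fail instead of asking (a sketch, using the same placeholder address):
ssh -o BatchMode=yes ubuntu@10.0.0.X 'echo passwordless ssh ok'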
You can use ssh keys as described here:
http://pkeck.myweb.uga.edu/ssh/