setting ssh-agent env vars - ssh

A manual workaround for this issue is provided in:
Could not open a connection to your authentication agent
However, why is ssh-agent not setting the correct environment variables, and how can they be set permanently?

Related

Ansible : Failed to connect to the host via ssh : Permission denied (publickey,password)

I'm new to Ansible; I installed it yesterday and I want to try to ping my remote host (an HPE 5130 switch).
I have an issue: the host is unreachable and I don't know how to fix it.
(The config, the error output, and the host entry were shown in attachments that are not reproduced here.)
SSH works fine but I can't use Ansible :(
How do you ssh to your switch?
If you're using a password, add the "-k" option to the ansible command. It will ask you to enter your ssh password. Alternatively, set the ansible_password variable.
Also, you should set some connection-related variables, such as ansible_connection and ansible_network_os (see the sketch below).
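For reference, a minimal inventory sketch along those lines (the group name, address, and credentials are placeholders, and the exact ansible_network_os value depends on the collection you use for your switch):
[switches]
switch1 ansible_host=192.0.2.10

[switches:vars]
ansible_connection=ansible.netcommon.network_cli
# placeholder - set to the network_os documented by your platform's collection
ansible_network_os=your_platform_os
ansible_user=admin
# or omit ansible_password here and pass -k on the command line instead
ansible_password=your_password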

Ansible asks to verify the SSH fingerprint and fails to SSH into a newly created EC2 instance

I am creating ec2 instances and configuring them using ansible scripts. I have used
[ssh_connection]
pipelining=true
in my ansible.cfg file, but it still asks to verify the SSH fingerprint; when I type yes and press Enter, it fails to log in to the instance.
Just to let you know, I am using an Ansible dynamic inventory and hence am not storing IPs or DNS names in a hosts file.
Any help will be much appreciated.
TIA
Pipelining doesn't have any effect on authentication - it bundles up individual module calls into one bigger file to transfer over once a connection has been established.
In order not to stop execution and prompt you to accept the SSH key, you need to disable strict host key checking, not enable pipelining.
You can set that by exporting ANSIBLE_HOST_KEY_CHECKING=False or by setting it in ansible.cfg with:
[defaults]
host_key_checking=False
The latter is probably better for your use case, because it's persistent.
Note that even though this is a setting that deals with ssh connections, it is in the [defaults] section, not the [ssh_connection] one.
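Putting the two settings side by side, a combined ansible.cfg might look like this (pipelining kept from the question's config, host key checking added under [defaults]):
[defaults]
host_key_checking = False

[ssh_connection]
pipelining = True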
The fact that you fail to log in even after typing yes suggests that this might not be your only problem, but you haven't given enough information to solve the rest.
If you're still having connection issues after disabling host key checking, edit the question to add the output of SSHing into the instance manually, alongside the output of an Ansible play with -vvv for verbose output.
First steps to look through when troubleshooting (an example command follows this checklist):
What are the differences between when I connect and when Ansible does?
Is the ansible_ssh_user set to the right user for the ec2 instance?
Is the ansible_ssh_private_key_file the same as the private part of the keypair you assigned to the instance on creation?
Is ansible_ssh_host set correctly by whatever is generating your dynamic inventory?
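As a hedged example of checking those variables explicitly against a dynamic-inventory host (the inventory script, tag name, user, and key path are all placeholders for your setup):
ansible -i ec2.py tag_Name_web -m ping -vvv \
  -e ansible_ssh_user=ubuntu \
  -e ansible_ssh_private_key_file=~/.ssh/my-ec2-keypair.pem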
I think you can find the answer here: ansible ssh prompt known_hosts issue
Basically, when you run ansible-playbook, you will need to set the environment variable:
ANSIBLE_HOST_KEY_CHECKING=False
Make sure you have your private key added (ssh-add your_private_key).
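Putting both suggestions together, a typical invocation might look like this (the key path, inventory, and playbook names are placeholders):
ssh-add ~/.ssh/your_private_key
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i inventory site.yml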

How do I prevent HG Workbench from asking for a user name when using subrepositories?

I use HG Workbench and subrepositories over an SSH connection. I also use an SSH RSA key to avoid typing my password every time. This works great for the main repository, but when I push or pull, HG Workbench (or the command shell) prompts with a dialog asking me to type my login name for every subrepository. Can I prevent this?
Update:
I use Windows. I also have a [ui] section with username set in my global mercurial.ini and in the hgrc of every subrepository.
The simple way is to use SSH keys and follow the setup steps in this post: Set up SSH for Mercurial. The key step is to add the following content to mercurial.ini (TortoiseHg's global settings file):
[ui]
# Name data to appear in commits
username = -name-you-want-to-show- <-email-@email.com>
ssh = "C:\Program Files\TortoiseHg\TortoisePlink.exe" - ssh - 2 - batch - C
Also, make sure Pageant.exe is running in the background in order to make it work (Add Key > load your key file > enter its passphrase).
On Windows, edit the global mercurial.ini settings file and add an ssh entry to the [ui] section:
[ui]
ssh = tortoiseplink.exe -l <username>
You don't say what OS you are using, but if you are on Linux the hg-ssh command ought to work. There's more info on the Mercurial site here. If you don't get any better answers it might be a good workaround.
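If you use command-line Mercurial with OpenSSH (for example on Linux), a hedged alternative to passing -l is to pin the login name per host in ~/.ssh/config (the host and user below are placeholders), so that neither the main repository nor the subrepositories prompt for it:
Host hg.example.com
    User yourlogin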

SSH to AWS Instance without key pairs

1: Is there a way to log in to an AWS instance without using key pairs? I want to set up a couple of sites/users on a single instance. However, I don't want to give out key pairs for clients to log in.
2: What's the easiest way to set up hosting sites/users in 1 AWS instance with different domains pointing to separate directories?
Answer to Question 1
Here's what I did on a Ubuntu EC2:
A) Log in as root using the key pair
B) Set up the necessary users and their passwords with
# sudo adduser USERNAME
# sudo passwd USERNAME
C) Edit /etc/ssh/sshd_config so that a valid user can log in without a key:
PasswordAuthentication yes
If you also want root to be able to log in without a key:
PermitRootLogin yes
D) Restart the ssh daemon with
# sudo service ssh restart
(Just change ssh to sshd if you are using CentOS.)
Now you can log in to your EC2 instance without key pairs.
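As a quick sanity check after the restart (the username and host are placeholders), an SSH login with public-key auth disabled should now prompt for the password set in step B:
ssh -o PubkeyAuthentication=no USERNAME@ec2-203-0-113-10.compute-1.amazonaws.com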
1) You should be able to change the ssh configuration (on Ubuntu this is typically in /etc/ssh or /etc/sshd) and re-enable password logins.
2) There's nothing really AWS-specific about this - Apache can handle virtual hosts (vhosts) out of the box, allowing you to specify that a certain domain is served from a certain directory. I'd Google that for more info on the specifics.
I came here through Google looking for how to set up cloud-init so that it does not disable PasswordAuthentication on AWS. Neither of the existing answers addresses this: if you create an AMI from an instance configured as above, cloud-init will disable the option again when a new instance is initialized.
The correct method is, instead of manually changing sshd_config, to correct the setting for cloud-init (the open-source tool used to configure an instance during provisioning; read more at https://cloudinit.readthedocs.org/en/latest/). The configuration file for cloud-init is found at:
/etc/cloud/cloud.cfg
This file sets up a lot of the configuration used by cloud-init. Read through it for examples of items you can configure, including things like the default username on a newly created instance.
To enable or disable password login over SSH, you need to change the value of the ssh_pwauth parameter. After changing ssh_pwauth from 0 to 1 in /etc/cloud/cloud.cfg, bake an AMI; instances launched from that newly baked AMI will have password authentication enabled after provisioning.
You can confirm this by checking the value of PasswordAuthentication in the SSH config, as mentioned in the other answers.
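For reference, the relevant setting in /etc/cloud/cloud.cfg ends up looking something like this (excerpt only; some distributions' default files use true/false instead of 1/0):
# /etc/cloud/cloud.cfg (excerpt)
ssh_pwauth: 1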
Recently, AWS added a feature called Session Manager to the Systems Manager service that allows you to SSH into an instance without needing to set up a private key or open port 22. I believe authentication is done with IAM and optionally MFA.
You can find out more about it here:
https://aws.amazon.com/blogs/aws/new-session-manager/
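For example, with the Session Manager plugin installed for the AWS CLI and the SSM agent running on the instance, a session can be opened like this (the instance ID is a placeholder):
aws ssm start-session --target i-0123456789abcdef0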
su - root
Edit /etc/ssh/sshd_config:
vi /etc/ssh/sshd_config
In the authentication section, set:
PermitRootLogin yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes
Save and exit with :x!
Then restart the ssh service:
root@cloudera2:/etc/ssh# service ssh restart
ssh stop/waiting
ssh start/running, process 10978
Now go to the sudoers file (/etc/sudoers) and, under the user privilege specification, add:
root ALL=(ALL) NOPASSWD:ALL
yourinstanceuser ALL=(ALL) NOPASSWD:ALL
(yourinstanceuser is the user with which you launch the instance.)
AWS added a new feature to connect to an instance without any open port: AWS SSM Session Manager.
https://aws.amazon.com/blogs/aws/new-session-manager/
I've created a neat SSH ProxyCommand script that temporarily adds your public SSH key to the target instance for the duration of the connection. The nice thing about this is that you can connect without adding the SSH (22) port to your security groups, because the SSH connection is tunneled through an SSM Session Manager session.
AWS SSM SSH ProxyCommand -> https://gist.github.com/qoomon/fcf2c85194c55aee34b78ddcaa9e83a1
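The gist layers the temporary key installation on top of an SSM tunnel; the underlying ProxyCommand that AWS documents for plain SSH over Session Manager looks like this in ~/.ssh/config:
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"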
Amazon added EC2 Instance Connect.
There is an official tool to automate the process: https://pypi.org/project/ec2instanceconnectcli/
pip install ec2instanceconnectcli
Then just
mssh <instance id>

Can I forward env variables over ssh?

I work with several different servers, and it would be useful to be able to set some environment variables such that they are active on all of them when I SSH in. The problem is, the contents of some of the variables contain sensitive information (hashed passwords), and so I don't want to leave it lying around in a .bashrc file -- I'd like to keep it only in memory.
I know that you can use SSH to forward the DISPLAY variable (via ForwardX11) or an SSH Agent process (via ForwardAgent), so I'm wondering if there's a way to automatically forward the contents of arbitrary environment variables across SSH connections. Ideally, something I could set in a .ssh/config file so that it would run automatically when I need it to. Any ideas?
You can, but it requires changing the server configuration.
Read the entries for AcceptEnv in sshd_config(5) and SendEnv in ssh_config(5).
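A minimal sketch of the two sides, assuming a variable named MYSECRET (a hypothetical name):
# server side: /etc/ssh/sshd_config
AcceptEnv MYSECRET
# client side: ~/.ssh/config
Host myserver
    SendEnv MYSECRET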
update:
You can also pass them on the command line:
ssh foo@host "FOO=foo BAR=bar doz"
Regarding security, note that anybody with access to the remote machine will be able to see the environment variables passed to any running process.
If you want to keep that information secret, it is better to pass it through stdin:
cat secret_info | ssh foo@host remote_program
You can't do it automatically (except for $DISPLAY, which you can forward with -X along with your Xauth info so remote programs can actually connect to your display), but you can use a script with a "here document":
ssh ... <<EOF
export FOO="$FOO" BAR="$BAR" PATH="\$HOME/bin:\$PATH"
runRemoteCommand
EOF
The unescaped variables will be expanded locally and the result transmitted to the remote side. So the PATH will be set with the remote value of $HOME.
THIS IS A SECURITY RISK: don't transmit sensitive information like passwords this way, because anyone can see the environment variables of every process on the same computer.
Something like:
ssh user@host bash -c "set -e; $(env); . thescript.sh"
...might work (untested)
It's a bit of a hack, but if you cannot change the server config for some reason, it might work.