How to tell ansible to use a key - ssh

I'm not very familiar with ansible.
The problem I have at the moment is the following:
I have a master/nodes environment with multiple nodes.
Ansible on my master needs to access my nodes but can't reach them; I get:
SSH Error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I'm able to SSH from my master to each node but only by using a key:
ssh -i key-to-node.pem centos@ec2...
Is it possible to set something up to allow Ansible to connect to the created hosts?

You can define your pem file in your ansible.cfg:
private_key_file=key-to-node.pem
If you don't have one, create one at the same location where your playbook is, or in /etc/ansible/ansible.cfg.
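For example, a minimal ansible.cfg sketch (the key location is an assumption; the setting belongs in the [defaults] section):
[defaults]
private_key_file = ./key-to-node.pem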
If you have different keys for your hosts, you can also define the key in your inventory:
ansible_ssh_private_key_file=key-to-node.pem
Also, if you have configured SSH to work without explicitly passing the private key file (in your ~/.ssh/config), Ansible will work automatically.
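A rough sketch of what such a ~/.ssh/config entry could look like (the alias, hostname, and key path are placeholders, not values from the question):
Host node1
    HostName ec2-203-0-113-10.compute-1.amazonaws.com
    User centos
    IdentityFile ~/keys/key-to-node.pem
The alias (node1 here) then has to match the name you use in your inventory, so that both plain ssh node1 and Ansible pick up the same settings.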
Adding an example from the OpenShift page, as mentioned in the comments.
I personally have never configured it this way (I have set up everything via ~/.ssh/config), but according to the docs it should work like this:
[masters]
master.example.com ansible_ssh_private_key_file=1.pem
# host group for nodes, includes region info
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" ansible_ssh_private_key_file=2.pem
Alternatively, since you have multiple nodes and maybe the same key for all of them, you can define a separate nodes:vars section:
[nodes:vars]
ansible_ssh_private_key_file=2.pem

Related

Can I specify the expected host key fingerprint for Ansible ssh?

I'd like to be able to check the public key fingerprint for each host (the ssh servers) in an inventory into my host_vars directory. All my searches on the topic lead me to various resources describing either how to specify the client key or how to disable host key checking altogether. It seems to me though that once I've established that a host (server) key is authentic, it would be good to share that knowledge with others and keep the security benefit of host key checking. Is this possible?
I would suspect that, in the same way the articles you have read update ansible_ssh_common_args in order to set -o UserKnownHostsFile=/dev/null, you can update that variable to use -o UserKnownHostsFile={{ playbook_dir }}/my_known_hosts, bundling the known hosts file alongside the playbooks that would use it.
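A hedged sketch of what that could look like in a host_vars file (the file name and the my_known_hosts name are assumptions; ansible_ssh_common_args requires Ansible 2.0+):
# host_vars/node1.example.com.yml
ansible_ssh_common_args: "-o UserKnownHostsFile={{ playbook_dir }}/my_known_hosts"
The my_known_hosts file checked in next to the playbook would then hold the verified host key lines for your servers.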

Ansible asks to verify the SSH fingerprint and fails to SSH into newly created EC2 instance

I am creating EC2 instances and configuring them using Ansible scripts. I have used
[ssh_connection]
pipelining=true
in my ansible.cfg file, but it still asks to verify the SSH fingerprint, and when I type yes and press enter it fails to log in to the instance.
Just to let you know, I am using Ansible dynamic inventory and hence am not storing IPs or DNS names in a hosts file.
Any help will be much appreciated.
TIA
Pipelining doesn't have any effect on authentication - it bundles up individual module calls into one bigger file to transfer over once a connection has been established.
In order not to stop execution and prompt you to accept the SSH key, you need to disable strict host key checking, not enable pipelining.
You can set that by exporting ANSIBLE_HOST_KEY_CHECKING=False or set it in ansible.cfg with:
[defaults]
host_key_checking=False
The latter is probably better for your use case, because it's persistent.
Note that even though this is a setting that deals with ssh connections, it is in the [defaults] section, not the [ssh_connection] one.
The fact that when you type yes you fail to log in makes it seem like this might not be your only problem, but you haven't given enough information to solve the rest.
If you're still having connection issues after disabling host key checking, edit the question to add the output of SSHing into the instance manually, alongside the output of an Ansible play with -vvv for verbose output.
First steps to look through when troubleshooting:
What are the differences between when I connect and when Ansible does?
Is the ansible_ssh_user set to the right user for the ec2 instance?
Is the ansible_ssh_private_key_file the same as the private part of the keypair you assigned the instance on creation?
Is ansible_ssh_host set correctly by whatever is generating your dynamic inventory?
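If you want to pin the last three down explicitly rather than rely on the dynamic inventory's output, a minimal static test entry could look like this (the group name, alias, address, and key path are placeholders):
[ec2test]
myinstance ansible_ssh_host=203.0.113.10 ansible_ssh_user=ec2-user ansible_ssh_private_key_file=~/.ssh/my-keypair.pem
Running ansible -i test_inventory myinstance -m ping against such an entry is a quick way to tell whether the problem is the connection details or the dynamic inventory.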
I think you can find the answer here: ansible ssh prompt known_hosts issue
Basically, when you run ansible-playbook, you will need to set the environment variable:
ANSIBLE_HOST_KEY_CHECKING=False
Make sure you have your private key added (ssh-add your_private_key).
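For example, a one-off run could look like this (the inventory script and playbook names are placeholders):
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ec2.py site.yml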

How to connect from one EC2 instance to another EC2 instance through SSH

I have two Amazon EC2 instances.
I can connect to those EC2 instances from my Windows machine using PuTTY (with the public key generated from the private key provided by Amazon).
Now I want to install Tungsten Replicator on my EC2 instances, and Tungsten Replicator needs SSH access from one EC2 instance to the other.
I tried to check whether SSH works from one EC2 instance to another.
I tried:
ssh ec2-user@public-ip-of-destination-instance
// also tried
ssh ec2-user@private-ip-of-destination-instance
but it's not working.
I got the following error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
I have searched on Google and tried some tricks, but none of them worked.
Sometimes I got the following error:
Address public_ip maps to xxxx.eu-west-1.compute.amazonaws.com, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Can anyone please tell me how to connect via SSH from one EC2 instance to another?
I'd suggest you create a dedicated key pair for the tungsten user.
cd tungsten-user-home/.ssh
ssh-keygen -t rsa
mv id_rsa.pub authorized_keys
And then copy both files to the other host in the same place and permissions.
This will allow tungsten to work without requiring your own key.
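A rough sketch of the copy step (the host name is a placeholder, and it assumes you can already reach the other node as the tungsten user with your existing AWS key):
scp -i key-to-node.pem ~/.ssh/id_rsa ~/.ssh/authorized_keys tungsten@other-node:.ssh/
ssh -i key-to-node.pem tungsten@other-node 'chmod 700 .ssh && chmod 600 .ssh/id_rsa .ssh/authorized_keys'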
Just like when you SSH from your local machine to an EC2 instance, you need to provide the ssh command with the proper pem file:
ssh -i my_pem_file.pem ec2-user@private-or-public-ip-or-dns
Just in case anyone ponders this question, here are my two cents.
Connecting to one EC2 instance from another will work as suggested by "Uri Agassi". Considering best practices and security, it is a good idea to create and assign an IAM role to the source EC2 instance.
One way to allow one EC2 instance to connect to another is to set an ingress rule on the target EC2 instance that lets it accept traffic from the source EC2 instance's security group. Here's a Python function that uses Boto3 to do this:
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
ec2 = boto3.resource('ec2')

def allow_security_group_ingress(target_security_group_id, source_security_group_name):
    """Add an ingress rule so the target group accepts traffic from the source group."""
    try:
        ec2.SecurityGroup(target_security_group_id).authorize_ingress(
            SourceSecurityGroupName=source_security_group_name)
        logger.info("Added rule to group %s to allow traffic from instances in "
                    "group %s.", target_security_group_id, source_security_group_name)
    except ClientError:
        logger.exception("Couldn't add rule to group %s to allow traffic from "
                         "instances in %s.",
                         target_security_group_id, source_security_group_name)
        raise
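For example, a hypothetical call (both identifiers are placeholders, not values from the question):
allow_security_group_ingress('sg-0123456789abcdef0', 'my-source-instance-sg')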
After you've set this, put the private key of the key pair on the source instance and use it when you SSH from the source instance:
ssh -i {key_file_name} ec2-user@{private_ip_address_of_target_instance}
There's a full Python example that shows how to do this on GitHub /awsdocs/aws-doc-sdk-examples.
See, it doesn't matter whether you deployed both machines with the same key pair or different ones. Just go to your host EC2 machine, and in the .ssh folder create a key file with the same name as the key that was used to create the second machine. Now run chmod 400 keyname and then try ssh -i keyname user-name@IP.

Unable to connect to the AWS EC2 instance, Host key verification failed

I had set up an Ubuntu instance with the Rails package and also deployed my app, and it is working fine.
But when I try to SSH in, it does not allow me to log in remotely and throws errors like "Host key verification failed".
The problem seems to persist; kindly recommend a solution. I have attached an Elastic IP to the instance, I am not able to see the public DNS, and my instance is running in the Singapore region.
You may need to turn off StrictHostKeyChecking by adding this option to the ssh command line:
-o StrictHostKeyChecking=no
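For instance, a one-off connection attempt could look like this (key file, user, and address are placeholders):
ssh -o StrictHostKeyChecking=no -i your-key.pem ubuntu@your-elastic-ip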
As answered in more detail in your cross posted question on ServerFault: https://serverfault.com/questions/342228/unable-to-connect-to-the-aws-ec2-instance-host-key-verification-failed/342696#342696
Basically, your EC2 Elastic IP has previously been used with another server instance, and your SSH client's known_hosts file no longer matches the key presented for this IP. Remove the offending line from your known_hosts file. (More detail in the Server Fault answer.)
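One way to drop the stale entry, assuming you connect via the Elastic IP (the address is a placeholder), is:
ssh-keygen -R your-elastic-ip
After that, the next connection will prompt you to accept the instance's current host key.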
You need to log in to your instance with the private key that you set it to use.
Depending on your instance, the user might vary
ssh -i [private key file] [user]@[host]
Where user could be one of the following in my experience (or possibly others)
root
ec2-user
ec2user
bitnami
ubuntu

gitolite with non-default port

To clone a repository managed by gitolite, one usually uses the following syntax:
git clone gitolite#server:repository
This tells the SSH client to connect to port 22 of server, using gitolite as the user name. When I try it with the port number:
git clone gitolite#server:22:repository
Git complains that the repository 22:repository is not available. What syntax should be used if the SSH server uses a different port?
The “SCP style” Git URL syntax (user#server:path) does not support including a port. To include a port, you must use an ssh:// “Git URL”. For example:
ssh://gitolite#server:2222/repository
Note: compared to gitolite@server:repository, this presents a slightly different repository path to the remote end (the absolute /repository instead of the relative path repository); Gitolite accepts both types of paths, but other systems may vary.
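For instance, cloning with the port, or repointing an existing clone (the remote name origin is assumed), would look like:
git clone ssh://gitolite@server:2222/repository
git remote set-url origin ssh://gitolite@server:2222/repository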
An alternative is to use a Host entry in your ~/.ssh/config (see your ssh_config(5) manpage). With such an entry, you can create an “SSH host nickname” that incorporates the server name/address, the remote user name, and the non-default port number (as well as any other SSH options you might like):
Host gitolite
User gitolite
HostName server
Port 2222
Then you can use very simple Git URLs like gitolite:repository.
If you have to document (and/or configure) this for multiple people, I would go with ssh:// URLs, since there is no extra configuration involved.
If this is just for you (especially if you might end up accessing multiple repositories from the same server), it might be nice to have the SSH host nickname to save some typing.
It is explained in great detail here: https://github.com/sitaramc/gitolite/blob/pu/doc/ssh-troubleshooting.mkd#_appendix_4_host_aliases
Using a "host" para in ~/.ssh/config lets you nicely encapsulate all this within ssh and give it a short, easy-to-remember, name. Example:
host gitolite
user git
hostname a.long.server.name.or.annoying.IP.address
port 22
identityfile ~/.ssh/id_rsa
Now you can simply use the one word gitolite (which is the host alias we defined here) and ssh will infer all those details defined under it -- just say ssh gitolite and git clone gitolite:reponame and things will work.
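As a quick check that the alias resolves correctly, gitolite's built-in info command works well, assuming the remote side really is gitolite:
ssh gitolite info
It should print the gitolite version and the repositories you have access to.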