Use another command instead of ssh for Ansible

I have an Ansible configuration which I know works on my local machines. However, I'm now trying to set it up on my company's machines, which use a wrapper command similar to ssh (let's call it 'myssh').
For example, to access these machines, instead of writing
ssh myuser@123.123.123.123
you write
myssh myuser@123.123.123.123
which ends up calling ssh, among other things.
My question is: is there a way to swap which command Ansible uses for accessing machines?

You can create a connection type plugin to achieve this. Looking at the ssh plugin, it appears it might be as easy as replacing the ssh_cmd in line 333. Also specify myssh in line 69.
See here for where to place the modified file. In addition, you can specify a custom location and let Ansible know about it via the connection_plugins setting in ansible.cfg.
Finally, again in your ansible.cfg, set the transport setting to your new plugin:
transport = myssh
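Pulling those pieces together, a sketch of the relevant ansible.cfg entries might look like this (the ./connection_plugins directory and the plugin file name myssh.py are assumptions about where and how you store the modified file):

[defaults]
# assumed custom location holding the modified plugin file (myssh.py)
connection_plugins = ./connection_plugins
transport = myssh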
PS: I have never done anything like that before. This is only info from the docs.

Related

Why is the yes command not working in git clone?

I am trying to run a script that clones a repository and then builds it in my Docker image.
It is a private repository, so I have copied my SSH keys into Docker,
but the command below does not seem to work:
yes yes | git clone (ssh link to my private repository.)
When I manually try to run the script on my local system, it shows the same behavior, but it works fine for other commands.
I do have access to the repository, since I can type yes and it works.
But I can't type yes during docker build.
Any help will be appreciated.
This is purely an ssh issue. When ssh is connecting to a host for the "first time",¹ it obtains a "host fingerprint" and prints it, then opens /dev/tty to interact with the human user so as to obtain a yes/no answer about whether it should continue connecting. You cannot defeat this by piping to its standard input.
Fortunately, ssh has about a billion options, including:
the option to obtain the host fingerprint in advance, using ssh-keyscan, and
the option to verify a host key via DNS.
The first is the one to use here: run ssh-keyscan during the image build and create a known_hosts file in the .ssh directory (see the Dockerfile sketch after the footnote). Security considerations will tell you how careful to be about this (i.e., you must decide how paranoid to be).
¹ "First" is determined by whether there's a host key in your .ssh/known_hosts file. Since you're spinning up a Docker image that you then discard, every time is the first time. You could set up a Docker image that has the file already in it, so that no time is the first time.
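As an illustration, the ssh-keyscan step can be baked into the image build. This is only a sketch: the base image, the github.com host name, and the use of the root user's .ssh directory are assumptions to adapt to your own setup.

FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends git openssh-client \
    && rm -rf /var/lib/apt/lists/*
# Record the host fingerprint ahead of time so ssh never needs to ask
# "Are you sure you want to continue connecting?" during the build.
RUN mkdir -p /root/.ssh \
    && ssh-keyscan github.com >> /root/.ssh/known_hosts
# ... your existing steps that copy the deploy key and run git clone go here ...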

gcloud compute ssh connection shows wrong instance name

I'm pretty new to the gcloud environment, but I'm getting the hang of it.
With our first project live on an instance, I've been shuffling some static IPs, instances, and snapshots around for an optimal deployment workflow. But I can't understand what's going on now:
I have two instances, live-1 and dev-2.
Now I can connect to live-1 using gcloud compute ssh live-1 and it's okay.
When I try to connect to dev-2 using gcloud compute ssh dev-2, it logs me in to live-1.
The first time I tried to ssh to dev-2 it took longer than usual. After that it just connects me to the wrong instance immediately.
The goal was (as you might've guessed) to copy the live environment to a testing one. I did create an image of live-1 and cloned it to set up dev-2. In my earlier experience trying this, it worked as expected.
Whenever I use the Compute Console in the browser and the online SSH tool from the instance list, it connects to dev-2 properly. But on my local machine, the aforementioned command connects me to live-1.
I already removed the IP for dev-2 from my known hosts, figuring it was cached somewhere, but no luck. What am I missing here?
Edit: I just found out that the instances are separate though 'named' the same; if I log in to dev-2, I do see myuser@live-1: in the shell, but it appears to be running a separate instance. I created a dummy file on the supposed dev-2, and it doesn't show up on the actual live-1 machine.
So this is very confusing; I rely on the user@host prefix in front of every shell line to know where and what I'm actually working on, and having two instances with the same name but different environments is confusing.
Ok, it was dead simple. Just run sudo hostname [desiredhostname] in the terminal, and restart it.
So in my case I logged in to dev-2 and ran sudo hostname dev-2.
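For completeness, a sketch of that fix from inside the instance (dev-2 is the name from the question; the hostnamectl line assumes a systemd-based image and is not part of the original answer):

# Set the hostname for the running system, as the answer describes.
sudo hostname dev-2
# On systemd-based images this also persists the name across reboots (an assumption).
sudo hostnamectl set-hostname dev-2
# Log out and back in so the shell prompt shows the new name.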

Ansible: how to make Paramiko use ~/.ssh/config?

Ideally, of course, I'd like Ansible to completely take care of this.
If this is not possible (why?!), then, at the very least, I want to be able to extract the ~/.ssh/config contents into some other format and then make Ansible feed this to Paramiko. I am sure I'm not the first one to face this task, so what's the accepted way of doing it?
I need this in order to use the authorized_key module to turn on passwordless authentication.
Btw, I wish Ansible emitted some warning when falling back to a non-default backend (like Paramiko). I lost a couple of hours yesterday and actually had to download the Ansible sources to figure out why a perfectly running Ansible command suddenly stopped working when I added the -k / --ask-pass option (yes, I am completely new to Ansible).
You can define this configuration in the Ansible configuration ini file or via environment variables -- specifically the ssh_args setting in the [ssh_connection] section, or the ANSIBLE_SSH_ARGS environment variable.
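A minimal sketch of that in ansible.cfg (the path is a placeholder, and this affects the OpenSSH-based transport's arguments rather than Paramiko itself):

[ssh_connection]
# Tell the ssh transport to load an explicit client configuration file.
ssh_args = -F /home/youruser/.ssh/config

Or, equivalently, as an environment variable:
export ANSIBLE_SSH_ARGS="-F /home/youruser/.ssh/config"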

Ansible, how to copy host key from control machine to targets

How do I copy the Ansible control machine's host key to the target servers' known_hosts? The problem is that Ansible expects this setup to have already been done in order to connect to those target servers without a prompt.
Should I use tasks with password authentication and secret variables to set up the keys, or configure the host keys manually before provisioning?
You can either set the ansible_ssh_user and ansible_ssh_pass variables, or pass them from the command line when you run the playbook:
--user and --ask-pass.
You can put the variables in a vars file encrypted with Ansible Vault to keep them secret (but don't forget to include this file for your target hosts).
Please check this answer for more details: https://serverfault.com/questions/628989/how-set-to-ansible-a-default-user-pass-pair-to-ssh-connection
You can specify the variable for the SSH key, called ansible_ssh_private_key_file, later in a task (I suppose you could use set_fact). I'm not completely sure, but if you play around with it, it might work.
P.S. On the other hand, if you already specify the username and password, it makes no difference which one you use.
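As a sketch of the first option, a Vault-encrypted vars file plus the command-line alternative could look like this (the group name, file paths, and credential values are placeholders, not from the answer above):

# group_vars/targets/vault.yml -- encrypt it with: ansible-vault encrypt group_vars/targets/vault.yml
ansible_ssh_user: myuser
ansible_ssh_pass: "changeme"

# Or skip the vars file and pass the credentials when running the playbook:
ansible-playbook -i inventory site.yml --user myuser --ask-pass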

Use ssh script return value in Jenkins

We're deploying our application using SSH scripts. For the production stage we need to figure out which out of two clusters is currently active. This can only be achieved reliably by running a command on a remote host and interpreting its output. Unfortunately there's no SSH plugin that does that AFAIK.
They only seem to be able to interpret whether the SSH script's return value was non-zero.
Currently I only see two undesirable solutions:
use SSH from a script in Python, Groovy, etc. (meaning we would have to provide SSH authentication to it somehow)
let the SSH command write to a file that is then copied to Jenkins and interpreted there (inelegant and cumbersome)
OK, based on what you mentioned in the comment, I think you can try something like what is given here, then copy that file back to Jenkins using FTP and read the file contents there.
Or you can orchestrate the whole process in an Ant script using the SSHExec task and capture the command's output in Ant (see the sketch below).
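A rough sketch of that Ant approach, assuming the JSch jar is on Ant's classpath and that a script on the remote host prints which cluster is active; the host name, user, key path, and script path are all placeholders:

<target name="check-active-cluster">
  <!-- Run a command on the remote host and capture its stdout in a property. -->
  <sshexec host="prod-cluster-a.example.com"
           username="deploy"
           keyfile="${user.home}/.ssh/id_rsa"
           command="/opt/app/bin/which-cluster-is-active.sh"
           outputproperty="active.cluster"
           trust="true"/>
  <echo message="Active cluster: ${active.cluster}"/>
</target>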