I used ddev auth ssh to include my main key for projects. But accidentally a key from a different customer was included in the ddev-ssh-agent container.
So how can I remove a single SSH key?
Or how can I remove all keys so I can add back only the key I want?
I know that if I reboot the computer all keys are gone and I have to include them again, but is there any other way, without rebooting?
The keys are never copied into the ddev-ssh-agent container at all. It's an agent for the keys which remain on your host. As in Simon's other answer, you can ddev poweroff to turn off all ddev containers. But it's simpler just to run ddev auth ssh again.
It does no harm for the ddev-ssh-agent to proxy multiple keys; that should work out just fine.
If you really want it to only handle a single key, you can put that one key in a directory by itself. For example, you could copy it to a folder named ~/.ddev-ssh-keys. Then you could set it up with the ddev-ssh-agent using ddev auth ssh -d ~/.ddev-ssh-keys
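For example, a minimal sketch of that whole flow (the key file name id_customer and the folder name are just placeholders for whatever you actually use):
mkdir -p ~/.ddev-ssh-keys
cp ~/.ssh/id_customer ~/.ddev-ssh-keys/   # only the key you want forwarded
ddev poweroff                             # drops the current agent along with the extra key
ddev auth ssh -d ~/.ddev-ssh-keys         # re-adds just the key(s) in that directory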
You have to stop the ssh-agent container. One way is simply to run ddev poweroff, or you can use docker rm.
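For example, assuming the agent container carries the usual ddev-ssh-agent name (check with docker ps if unsure):
ddev poweroff                 # stops all ddev containers, including the ssh agent
# or remove only the agent container:
docker rm -f ddev-ssh-agent   # container name assumed; verify with docker ps
ddev auth ssh                 # then re-add only the key you want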
I am trying to run a script that clones a repository and then builds it in my Docker image.
It is a private repository, so I have copied my SSH keys into the Docker image.
But the command below does not seem to work:
yes yes | git clone (ssh link to my private repository)
When I manually try to run the script on my local system it shows the same prompt, but it works fine for other commands.
I do have access to the repository, because when I type yes it works.
But I can't type yes during docker build.
Any help will be appreciated.
This is purely an ssh issue. When ssh is connecting to a host for the "first time" [1], it obtains a "host fingerprint" and prints it, then opens /dev/tty to interact with the human user so as to obtain a yes/no answer about whether it should continue connecting. You cannot defeat this by piping to its standard input.
Fortunately, ssh has about a billion options, including:
the option to obtain the host fingerprint in advance, using ssh-keyscan, and
the option to verify a host key via DNS.
The first is the one to use here: run ssh-keyscan and create a known_hosts file in the .ssh directory. Security considerations will tell you how careful to be about this (i.e., you must decide how paranoid to be).
1"First" is determined by whether there's a host key in your .ssh/known_hosts file. Since you're spinning up a Docker image that you then discard, every time is the first time. You could set up a docker image that has the file already in it, so that no time is the first time.
I have installed Ansible on one of my GCE Debian VM instances (1). Now I want to connect to another GCE Debian VM instance (2).
I have generated the public key on instance 1 and copied the .pub key manually to the authorized_keys of instance 2.
But when I try to ssh from 1 to 2 it gives permission denied.
Is there any other way round? I am a little new to this and trying to learn.
Is there any step-by-step guide available? Also, what is the exact IP address to ssh to: will it be the internal IP or the external IP assigned by GCE when the instance is started?
I'm an Ansible user too and I manage a set of compute engine servers. My scenario is pretty close to yours so hopefully this will work for you as well. To get this to work smoothly, you just need to realise that ssh public keys are metadata and can be used to tell GCE to create user accounts on instance creation.
SSH public keys are project-wide metadata
To get what you want the ssh public key should be added to the Metadata section under Compute Engine. My keys look like this:
ssh-rsa AAAAB3<long key sequence shortened>Uxh bob
Every time I get GCE to create an instance, it creates /home/bob and puts the key into the .ssh/authorized_keys file with all of the correct permissions set. This means I can ssh into that server if I have the private key. In my scenario I keep the private key in only two places: LastPass and my .ssh directory on my work computer. While I don't recommend it, you could also copy that private key to the .ssh directory on each server that you want to ssh from, but I really recommend getting to grips with ssh-agent.
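If ssh-agent is new to you, the basic flow looks roughly like this (the key path and server name are just examples):
eval "$(ssh-agent -s)"    # start an agent for this shell
ssh-add ~/.ssh/id_rsa     # load the private key into it
ssh -A bob@your-server    # -A forwards the agent, so the key itself never leaves your workstation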
Getting it to work with Ansible
The core of this is to tell Ansible not to validate host keys and to connect as the user specified in the key (bob in this example). To do that you need to set some ssh options when calling ansible:
ansible-playbook --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -u bob
Now Ansible will connect to the servers mentioned in your playbook and use the local private key to negotiate the ssh connection, which should work since GCE set things up for you when the VM was created. Also, since host key checking is off, you can rebuild the VM as often as you like.
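If you would rather not repeat those flags on every run, the same behaviour can be set once in ansible.cfg; a sketch, assuming bob is the user from your key comment:
[defaults]
remote_user = bob
host_key_checking = False

[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no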
Saying it again
I really recommend that you run ansible from a small number of secure computers and not put your private key onto cloud servers. If you really need to ssh between servers, look into how ssh-agent passes identity around. A good place to start is this article.
Where did you say the metadata was?
I kind of glossed over that bit: it's the Metadata section under Compute Engine in the Cloud Console.
From there you just follow the options for adding a public key. Don't forget that this works because the third part of the key is the username that you want GCE and Ansible to use when running plays.
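If you prefer the command line to the console, the same metadata can be set with gcloud; a sketch, where keys.txt is a placeholder file containing lines of the form bob:ssh-rsa AAAA... bob (note that this replaces the existing project-wide ssh-keys value, so include every key you want to keep):
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=keys.txt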
It's quite simple: if you have two instances in Google Cloud Platform, you automatically have the guest environment installed (the gcloud command line), and with it you can ssh to any instance inside your project.
Just run the following command from inside instance A to reach instance B:
[user#Instance(1)]$ gcloud compute ssh Instance(2) --zone [zone]
That's it. If it's not working let me know, and verify that your firewall rules allow internal traffic.
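A quick way to check the firewall side, assuming a default gcloud setup:
gcloud compute firewall-rules list   # look for a rule that allows internal traffic, e.g. default-allow-internal on the default network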
How do I copy the ansible control machine's host key into the target servers' known_hosts? The problem is that ansible expects this setup to already be in place, so it can connect to those target servers without a prompt.
Should I use tasks with password authentication and secret variables to set up the keys, or configure the host keys manually before provisioning?
You can either set the ansible_ssh_user and ansible_ssh_pass variables, or pass them on the command line when you run the playbook: --user and --ask-pass.
You can put the variables in a var file encrypted with Ansible Vault to keep them secret (but don't forget to include this file for your target hosts).
Please check this answer for more details: https://serverfault.com/questions/628989/how-set-to-ansible-a-default-user-pass-pair-to-ssh-connection
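A sketch of both options, with hypothetical file and playbook names (group_vars/all/vault.yml and site.yml) and the variable names from above kept in the vaulted file:
ansible-playbook site.yml -u deploy --ask-pass      # prompt for the SSH password at run time
# or keep ansible_ssh_user / ansible_ssh_pass in a vault-encrypted var file:
ansible-vault encrypt group_vars/all/vault.yml
ansible-playbook site.yml --ask-vault-pass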
You can specify the variable for the SSH key, ansible_ssh_private_key_file, later in the task (I suppose you could use set_fact). I'm not completely sure, but if you play around with it, it might work.
P.S. On the other hand, if you already specify the username and password, it makes no difference which one you use.
Using the new Bitbucket Pipelines feature, how can I SSH into my staging box from the docker container it spins up?
The last step in my pipeline is an .sh file that deploys the necessary code to staging. However, because my staging box uses public key authentication and doesn't know about the Docker container, the SSH connection is being denied.
Any way of getting around this without using password authentication over SSH (which is causing me issues as well, since it keeps choosing to authenticate with the public key instead)?
Bitbucket Pipelines can use a Docker image you've created, with the ssh client set up to run during your builds, as long as it's hosted on a publicly accessible container registry.
Create a Docker image.
Create a Docker image with your ssh key available somewhere. The image also needs to have the host key for your environment(s) saved under the user the container will run as. This is normally the root user but may be different if you have a USER command in your Dockerfile.
You could copy an already-populated known_hosts file in, or configure the file dynamically at image build time with:
RUN mkdir -p /root/.ssh && ssh-keyscan your.staging-host.com >> /root/.ssh/known_hosts
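Pulled together, a minimal Dockerfile sketch might look like this (the base image, key file name, and staging host are placeholders, and it assumes the container runs as root):
FROM debian:stable-slim
RUN apt-get update && apt-get install -y openssh-client git && rm -rf /var/lib/apt/lists/*
COPY deploy_key /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && \
    ssh-keyscan your.staging-host.com >> /root/.ssh/known_hosts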
Publish the image
Publish your image to a registry that is publicly reachable but private to you. You can host your own or use a service like Docker Hub.
Configure Pipelines
Configure pipelines to build with your docker image.
If you use Docker Hub
image:
name: account-name/java:8u66
username: $USERNAME
password: $PASSWORD
email: $EMAIL
Or your own external registry:
name: docker.your-company-name.com/account-name/java:8u66
Restrict access on your hosts
You don't want ssh keys that can access your hosts flying around the world, so I would also restrict these deploy ssh keys to only running your deploy commands.
The authorized_keys file on your staging host:
command="/path/to/your/deploy-script",no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-dss AAAAC8ghi9ldw== deploy#bitbucket
Unfortunately Bitbucket doesn't publish an IP list to restrict access to, as they use shared infrastructure for Pipelines. If they happen to be running on AWS, then Amazon does publish IP lists.
from="10.5.0.1",command="",no-... etc
Also remember to date them and expire them from time to time. I know ssh keys don't enforce dates, but it's a good idea to do it anyway.
You can now set up SSH keys under the Pipelines settings, so you do not need a private Docker image just to store ssh keys. The key is also kept out of your source code, so you don't have it in your repo either.
Under
Settings -> Pipelines -> SSH keys
You can either provide a key pair or generate a new one. The private key will be put into the Docker container (referenced from ~/.ssh/config), and you get a public key that you can add to the ~/.ssh/authorized_keys file on your host. The page also asks for an IP or host name so it can set up the known-hosts fingerprint used when running in Docker.
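Once the key pair and the known-hosts fingerprint are configured there, a deploy step in bitbucket-pipelines.yml can ssh straight to the host; a rough sketch, with the host, user, and script path as placeholders:
pipelines:
  default:
    - step:
        script:
          - ssh deploy@your.staging-host.com './deploy-script.sh'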
Also, Bitbucket has published IP addresses that you can whitelist if necessary for the Docker containers being spun up. They are currently:
34.236.25.177/32
34.232.25.90/32
52.203.14.55/32
52.202.195.162/32
52.204.96.37/32
52.54.90.98/32
34.199.54.113/32
34.232.119.183/32
35.171.175.212/32
I have my Hudson CI server set up. I have a CVS repo that I can only check out via ssh, but I see no way to convince Hudson to check out via ssh. I tried all sorts of options when supplying my connection string.
Has anyone done this? I gotta think it has been done.
If I still remember CVS correctly, you have to set the CVS_RSH environment variable to ssh. I suspect you need to set this so that your Tomcat process inherits the value.
You can check Hudson's system information to see exactly which environment variables the JVM sees (and passes along to the build).
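In practice that combination looks roughly like this (the CVS host, repo path, and module name are made up); CVS_RSH needs to be in the environment of whatever process starts Tomcat/Hudson so the build inherits it:
export CVS_RSH=ssh
cvs -d :ext:builduser@cvs.example.com:/cvsroot checkout mymodule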
I wrote up an article that tackles this; you can find it here:
http://www.openscope.net/2011/01/03/configure-ssh-authorized-keys-for-cvs-access/
Essentially you want to set up passphraseless ssh keys for your build user. This will allow authentication to occur without needing to work out some way to key in your password.
<edit> i.e. Essentially the standard .ssh key client & server install/exchange.
http://en.wikipedia.org/wiki/Secure_Shell#Key_management
For the jenkins user account:
install the user key (public & private parts) in ~/.ssh (generate it fresh or use an existing user key)
On the cvs server:
install the user key (public part) in ~/.ssh
add it to authorized_keys
Back on the jenkins user account:
access cvs from the command line as the jenkins user and accept the remote host key (into known_hosts); a command sketch follows after this note
Note: any time the remote server changes its key/IP you will need to manually access cvs and accept the key again.
</edit>
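A rough command sketch of those steps, with a hypothetical cvs host and module name:
# on the build machine, as the jenkins user:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa         # passphraseless key for the build user
ssh-copy-id jenkins@cvs.example.com              # appends the public key to authorized_keys on the cvs server
# then check out once by hand to accept the host key into known_hosts:
export CVS_RSH=ssh
cvs -d :ext:jenkins@cvs.example.com:/cvsroot checkout mymodule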
There's another way to do it, but you have to manually log in from the build machine to your cvs server and keep the ssh session open so hudson/jenkins can piggyback on the connection. Seemed kind of pointless to me though, since you want your CI server to be as hands-off as possible.