Can a Packer script change an output image built from an ISO? - ssh

I'm trying to use Packer to build a Vagrant box from an ISO, using a boot2docker ISO. All goes well until I try to run vagrant up, which fails with "Error: Authentication failure. Retrying...". The box is OK - I can get in with vagrant ssh by supplying a password. But ssh authentication doesn't work.
This turns out to be a known problem with a known solution - add a public key to .ssh/authorized_keys on the box. If I do this manually after I've accessed the box with a password, I don't need the password for future access. So I updated my Packer script to do that - and found that any changes made to the boot volume are simply discarded. Packer script changes to other volumes work, but not to the boot volume, which is the one I need to update. It looks like it can only ever be an image of the ISO.
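For reference, the provisioner step I'm describing is roughly the following (a sketch only; the paths and user are the standard Vagrant defaults, which may need adjusting for boot2docker, and the actual public key content is elided):

mkdir -p /home/vagrant/.ssh
chmod 700 /home/vagrant/.ssh
echo "ssh-rsa AAAA... vagrant insecure public key" >> /home/vagrant/.ssh/authorized_keys
chmod 600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh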
Is my only option to create my own ISO with the public key preinstalled? Is there any way to use Packer to apply the key to the output box?

This is an old question but since there's no answer, I'll contribute.
I was having the same problem; no matter what I changed in my Kickstart or provisioner scripts, my changes to the vagrant user's authorized_keys were not visible in the final box built by Packer - until I realized that Vagrant kept using a cached (and older!) version of the box I had built, instead of the latest one.
The reason is that Vagrant copies the box once (as "my-box"), and even though the box itself kept changing as I tested fixes, Vagrant kept using the old copy without my fixes: it caches the box and does not check for updates by default. The easiest solution is to add
config.vm.box_check_update = true
to your Vagrantfile. Alternatively, you could have your Vagrantfile give a different name to your box every time in config.vm.box via some Ruby code.
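For illustration, a minimal Vagrantfile along these lines might look like the following (the box name my-box is only a placeholder):

Vagrant.configure("2") do |config|
  config.vm.box = "my-box"
  # check for box updates on every `vagrant up` instead of silently reusing the cached copy
  config.vm.box_check_update = true
end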

Related

Why does the yes command not work with git clone?

I am trying to run a script that clones a repository and then builds it inside my Docker image.
It is a private repository, so I have copied my ssh keys into the Docker image.
But it seems the command below does not work:
yes yes | git clone (ssh link to my private repository.)
When I tried running the script manually on my local system it showed the same thing, although other commands work fine.
I do have access to the repository; if I type yes at the prompt, it works.
But I can't type yes during docker build.
Any help will be appreciated.
This is purely an ssh issue. When ssh is connecting to a host for the "first time" [1], it obtains a "host fingerprint" and prints it, then opens /dev/tty to interact with the human user so as to obtain a yes/no answer about whether it should continue connecting. You cannot defeat this by piping to its standard input.
Fortunately, ssh has about a billion options, including:
the option to obtain the host fingerprint in advance, using ssh-keyscan, and
the option to verify a host key via DNS.
The first is the one to use here: run ssh-keyscan and create a known_hosts file in the .ssh directory. Security considerations will tell you how careful to be about this (i.e., you must decide how paranoid to be).
1"First" is determined by whether there's a host key in your .ssh/known_hosts file. Since you're spinning up a Docker image that you then discard, every time is the first time. You could set up a docker image that has the file already in it, so that no time is the first time.

Ansible: how to make Paramiko use ~/.ssh/config?

Ideally, of course, I'd like Ansible to completely take care of this.
If this is not possible (why?!), then, at least, I want to be able to extract ~/.ssh/config contents into some other format and then make Ansible feed this to Paramiko. I am sure I'm not the first one faced with this task, so what's the accepted way of doing this?
I need this in order to use the authorized_keys module to turn on passwordless authentication.
Btw, I wish Ansible emitted some warning when falling back to a non-default backend (like Paramiko). I lost a couple of hours yesterday and actually had to download the Ansible sources to figure out why a perfectly working Ansible command suddenly stopped working when I added the -k / --ask-pass option (yes, I am completely new to Ansible).
You can define this configuration in the Ansible configuration ini file or via environment variables -- specifically ANSIBLE_SSH_ARGS, which corresponds to the ssh_args setting in the [ssh_connection] section of ansible.cfg.
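For example, assuming you are using the default OpenSSH-based connection rather than Paramiko, something along these lines in ansible.cfg should make ssh pick up your config file (the path is illustrative):

[ssh_connection]
ssh_args = -F /home/me/.ssh/config

Equivalently, you can set ANSIBLE_SSH_ARGS="-F /home/me/.ssh/config" in the environment when invoking ansible or ansible-playbook.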

Vagrant synced file not updating

I have set up a Vagrant box with Ubuntu 12.04 and Apache2 (all very vanilla, as per Vagrant's tutorial). I've been testing it for web development and I stumbled across a weird issue (not sure if it's a bug or a feature):
I have set up a synced folder between my machine and the VM. Apache has been serving the files mostly well, except (up to now) for a JSON file I'm using.
If I edit it locally, it seemingly syncs it to the VM folder. Both copies are the same.
However, if I fetch it via XHR from the browser after modifying it, I still get the previously served version of that file.
At first, I thought the browser had it cached, but after trying two different browsers (Chrome(ium) and Firefox) and clearing their respective caches, the issue remained.
I finally managed to get around it by reloading (vagrant reload) the VM.
What I was wondering is whether this is a bug or a feature, and how I can work around it. Can Apache be configured not to cache server-side for a specific folder/file/filetype?
Vagrant uses the previous settings until you provision again, so after every change you need to re-provision to see the updated output. There is no Apache2 cache problem.
To do that, use the command
vagrant reload vmname --provision
If your VM name is default, then use
vagrant reload default --provision
It will reboot the Vagrant VM and apply the changes to it. After provisioning, you will be able to see the changes.
Finally figured it out. This relates to an issue that occurs with both Apache and Nginx: the sendfile option in the server configuration.
Basically, a new file wasn't being sent/updated client-side even when it had been changed server-side by Vagrant's sync mechanism.
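For reference, the directives in question look roughly like this (exact file locations vary by distribution):

# Apache (e.g. in apache2.conf or a vhost/Directory block)
EnableSendfile Off

# nginx (http or server block)
sendfile off;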
Check this answer for a solution: here.

Windows / Linux automatic key exchange

I have a build box which I use to make continuous builds as well as run nightly unit tests. I'm using Jenkins to run my builds/unit-test scripts; it is running on a Windows box because our compiler is Windows-based.
One of our enterprise solutions uses Python code with RabbitMQ to exchange messages for syncing specific database tables over a faulty network. I have unit tests to help verify that updates are happening correctly.
In order to unit test the Python updates, I need to be able to stop some services running on my Linux box, then restart them after I update the Python code. I set up a key exchange between my Windows box and Linux box so that I don't have to put a password in the batch script.
When I'm remoted into the Windows box, I can successfully run the batch file, which uses plink commands that rely on the key exchange and PuTTY's Pageant (running in the background); e.g. I use plink to execute commands on the Linux box from the command line in my batch file. However, when I try to run the batch file from Jenkins, the batch file doesn't work properly because it is prompted for the SSH password when running the plink commands.
I believe my current problem can be summarized as two issues, which I'm hoping can be verified and rectified:
I think Jenkins may be running as a different user or using different system credentials, so it's not able to connect the way the logged-in user can. If this is the case, what would I need to do so that Jenkins can run the plink commands properly without being prompted for the password?
Pageant looks like it needs a password typed in every time the computer restarts. My research unearthed ways to put Pageant in startup, so you get prompted when you first log in, but I need this to be automatic, like I can do on Linux boxes. If Windows reboots because of a Windows update, the unit tests will fail because they won't be able to connect to the Linux server. Sure, this only happens once a week, but over the course of a year it'll be very annoying.
What can I do to solve the above two issues? If there is a good alternative to PuTTY for the automatic key exchange between Windows and Linux, I'd be interested in hearing about it (I would prefer to stay away from Cygwin with OpenSSH, but might go down this route if the above can't be rectified).
I use plink on my Windows Jenkins box to communicate with Linux machines on a daily basis; there is no problem with it.
Like you theorized, Jenkins runs under its own user (the Windows default, I think, is the SYSTEM user), which is different from your logged-in session, even if you log in as Administrator. Your authentication key is stored in your (Administrator or otherwise) profile directory.
What you need to do is export your key as a .ppk file (using PuTTYgen), then supply the path to this .ppk file to plink:
plink -i "C:\path\to\id.ppk"
Looks like there is a simpler way to do what I'm trying to do: Jenkins's Publish Over SSH plugin, https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin

Generate key files to connect to Bitbucket in Vagrant boxes

We use Vagrant boxes for development. For every project or small snippet we simply start a new box and provision it with Ansible. This is working fantastic; however, we do get into trouble when connecting to a private Bitbucket repository within a bower install run.
The solution we have now is to generate a new key (ssh-keygen), accept all defaults (pressing <return>, <return>, <return>) and then grab the public key (cat ~/.ssh/id_rsa.pub). Copy it, go to Bitbucket, view your account and add this new ssh key. And repeat for every new box you instantiate.
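In shell terms, those manual steps are roughly this (non-interactive form shown for clarity):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # new key, empty passphrase, default location
cat ~/.ssh/id_rsa.pub                      # paste the output into Bitbucket under your account's SSH keys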
We have to do this because of some closed source packages (hosted on Bitbucket) we install via Bower. We do have another experience, which is much better: composer (php's package manager) and private Github repositories. With that setup, you have to enter your username/password/2fa token via the command line and an OAuth token is generated for you. This works great.
So, is there a way we can mitigate this bower/bitbucket/ssh issue? For obvious reasons I don't want to provision the boxes with a standard private key, but there has to be another solution?
While I'm not sure that my situation is as complex as yours (I'm not using Ansible or Bower), I solved this problem by using Vagrant's SSH agent forwarding. This blog post gives the details on how to get it working:
Cloning from GitHub in Vagrant using SSH agent forwarding
So as long as each of the developers has access to the Bitbucket repos from their local machine, it should work.
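The core of it is a single Vagrantfile setting (a minimal sketch, assuming your key is already loaded into your local ssh agent, e.g. via ssh-add):

Vagrant.configure("2") do |config|
  # forward the host's ssh agent into the guest, so git/bower inside the box
  # can authenticate with the key on the developer's machine
  config.ssh.forward_agent = true
end

Inside the guest you can then sanity-check access with something like ssh -T git@bitbucket.org before running bower install.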