Provision remote nixos box without sending private keys - ssh

I am provisioning a NixOS instance on AWS. The instance has to download a repository from a private GitHub repo. Currently I just run a shell script on the remote box, using SSH agent forwarding to download the repository. That way I don't have to copy the private key that gives me access to the repo onto the remote box.
I would like to make this procedure more Nix-like: I want to write a Nix expression that downloads the repo and put that expression in /etc/nixos/configuration.nix, while still not copying my private key to the remote machine. Is this possible? Can nixos-rebuild use SSH agent forwarding?
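Concretely, something like this in configuration.nix is the kind of expression I have in mind (the repo URL and branch are just placeholders):

builtins.fetchGit {
  url = "ssh://git@github.com/my-org/my-private-repo.git";
  ref = "master";
}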

You can explore the --build-host and --target-host options of the nixos-rebuild command. That is, make your local machine the build host and the remote one the target. You do need passwordless root SSH access to the remote machine, though.
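A rough sketch of the invocation (the host name is a placeholder):

# Evaluation and build happen on the local machine, so the private repo is
# fetched with your local SSH key; only the resulting closure is copied over.
nixos-rebuild switch --build-host localhost --target-host root@my-aws-instance.example.com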

Related

SFTP - From WinSCP to Terminal Access

I have been able to set up SSH access to my Google Cloud Platform VM via SFTP using WinSCP, but I now wish to do the same using another VM.
I have tried the ssh-keygen -t rsa / ssh-copy-id demo@198.51.100.0 method but always come up against the "Permission denied (publickey)" error, which from my research seems to be a pretty widespread issue with few reliable fixes (none of the ones I tried worked).
I used PuTTYgen to create the public and private key, and added the public key to the server through the GCP console, under the SSH keys settings for my instance.
I am just confused about what to do with the private key when trying to sftp from the terminal on a separate VM; before, I would simply load the private key into WinSCP's settings. Is there a folder I need to place it in, or something else entirely?
Regarding your first issue, the "Permission denied (publickey)" error, please follow the troubleshooting steps in this link and this one.
About your other question, "what to do with the private key when simply trying to sftp through the terminal": that depends on the specific third-party SFTP tool you are using. To find where your SSH keys are located after generating them, please review this document.
Once you have added the public key to the VM, you may need to reboot the VM for the public key to take effect. Try rebooting it and connecting again.
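If the key was generated with PuTTYgen, one possible approach from the terminal (assuming the putty-tools package is available to convert the key, and reusing the address from your question) is:

# convert the PuTTY key to OpenSSH format, then point sftp at it explicitly
puttygen mykey.ppk -O private-openssh -o ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
sftp -i ~/.ssh/id_rsa demo@198.51.100.0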

Allow CI access to private BitBucket repo

I'm running a CI machine on AppCenter and need to allow read/write access to a private BitBucket repository but I can't figure out how to do this.
My approach is to create an SSH key and, during CI builds, add the private key to the machine's ssh-agent using ssh-add -K (it's a Mac machine).
I've created an SSH key on my local computer (Mac) using ssh-keygen and uploaded the .pub key to Bitbucket. Then, as my CI runs, I try to add the private key to the ssh-agent, but I'm prompted for its passphrase and can't figure out how to supply it in a non-interactive shell.
Is this the right approach to grant access to Bitbucket in CI? If so, how can I add an SSH key without being prompted for a passphrase?
Scripts are in Ruby or Bash.
The repo contains certificates used for Fastlane Match
Answering my own question...
I ended up using a Bitbucket app password and cloning via HTTPS. I think there has to be a better way, but this works for my needs at the moment.
I needed access from my CI to a private Bitbucket certificates repo to use with Fastlane Match. The value for git_url in my Matchfile that allows me to clone the repo is:
git_url "https://{BITBUCKET_USER}:{BITBUCKET_APP_PASSWORD}@bitbucket.org/{BITBUCKET_USER}/{REPO}.git"
You can obtain a Bitbucket app password by clicking your profile (avatar) -> Settings -> App passwords.
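In the CI script that ends up looking roughly like this (assuming the user name and app password are injected as secret environment variables, and the repo name is a placeholder):

git clone "https://${BITBUCKET_USER}:${BITBUCKET_APP_PASSWORD}@bitbucket.org/${BITBUCKET_USER}/certificates.git"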

SSH into staging machine from docker instance using Bitbucket Pipelines

Using the new Bitbucket Pipelines feature, how can I SSH into my staging box from the docker container it spins up?
The last step in my pipeline is a .sh file that deploys the necessary code to staging; however, because my staging box uses public-key authentication and doesn't know anything about the Docker container, the SSH connection is denied.
Is there any way of getting around this without using password authentication over SSH (which is causing me issues as well, since the connection keeps choosing public-key authentication instead)?
Bitbucket Pipelines can run your builds in a Docker image you've created that has the SSH client set up, as long as the image is hosted on a publicly accessible container registry.
Create a Docker image
Create a Docker image with your SSH key available somewhere inside it. The image also needs to have the host key for your environment(s) saved under the user the container will run as. This is normally the root user, but it may be different if you have a USER instruction in your Dockerfile.
You could copy an already populated known_hosts file in, or generate it dynamically at image build time with something like:
RUN mkdir -p /root/.ssh && ssh-keyscan your.staging-host.com >> /root/.ssh/known_hosts
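A minimal Dockerfile sketch of that idea (base image, key file name, and host name are all placeholders):

FROM alpine:3.19
RUN apk add --no-cache openssh-client git
# bake the deploy key into the image and pre-trust the staging host's key
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh && chmod 600 /root/.ssh/id_rsa \
 && ssh-keyscan your.staging-host.com >> /root/.ssh/known_hosts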
Publish the image
Publish your image to a registry that is reachable from Bitbucket Pipelines but kept private (access-controlled). You can host your own or use a service like Docker Hub.
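For example, with Docker Hub (the image name and tag are placeholders):

docker build -t account-name/pipelines-deploy:latest .
docker login
docker push account-name/pipelines-deploy:latest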
Configure Pipelines
Configure pipelines to build with your docker image.
If you use Docker Hub:
image:
  name: account-name/java:8u66
  username: $USERNAME
  password: $PASSWORD
  email: $EMAIL
Or your own external registry:
image:
  name: docker.your-company-name.com/account-name/java:8u66
Restrict access on your hosts
You don't want SSH keys that can access your hosts flying around the world, so I would also restrict these deploy keys so they can only run your deploy commands.
The authorized_keys file on your staging host:
command="/path/to/your/deploy-script",no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-dss AAAAC8ghi9ldw== deploy@bitbucket
Unfortunately, Bitbucket doesn't publish an IP list you could restrict access to, as they use shared infrastructure for Pipelines. If they happen to be running on AWS, then Amazon does publish IP ranges, which you could combine with a from= restriction:
from="10.5.0.1",command="",no-... etc
Also remember to date the keys and expire them from time to time. I know SSH keys don't enforce expiry dates, but it's a good idea to rotate them anyway.
You can now set up SSH keys under the Pipelines settings, so you do not need a private Docker image just to store SSH keys. The key is also kept out of your source code, so you don't have it in your repo either.
Under
Settings -> Pipelines -> SSH keys
You can either provide a key pair or generate a new one. The private key is made available inside the Docker container (it is referenced from ~/.ssh/config), and you get a public key that you can add to the ~/.ssh/authorized_keys file on your host. The page also asks for an IP or host name so it can set up the host's fingerprint in known_hosts for the Docker runs as well.
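With a key configured there, a deploy step in bitbucket-pipelines.yml can ssh straight to the host; a minimal sketch (user, host, and script path are placeholders):

pipelines:
  default:
    - step:
        script:
          - ssh deploy@your.staging-host.com '/path/to/your/deploy-script'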
Also, Bitbucket has published IP addresses you can whitelist if necessary for the Docker containers being spun up. They are currently:
34.236.25.177/32
34.232.25.90/32
52.203.14.55/32
52.202.195.162/32
52.204.96.37/32
52.54.90.98/32
34.199.54.113/32
34.232.119.183/32
35.171.175.212/32

Generate key files to connect to Bitbucket in Vagrant boxes

We use Vagrant boxes for development. For every project or small snippet we simply start a new box and provision it with Ansible. This works fantastically; however, we run into trouble when connecting to a private Bitbucket repository during a bower install run.
The solution we have now is to generate a new key (ssh-keygen), accept all defaults (pressing <return>, <return>, <return>) and then grab the public key (cat ~/.ssh/id_rsa.pub). Copy it, go to Bitbucket, view your account and add this new ssh key. And repeat for every new box you instantiate.
We have to do this because of some closed-source packages (hosted on Bitbucket) that we install via Bower. We do have another experience that is much better: Composer (PHP's package manager) and private GitHub repositories. With that setup, you enter your username/password/2FA token on the command line and an OAuth token is generated for you. This works great.
So, is there a way we can mitigate this Bower/Bitbucket/SSH issue? For obvious reasons I don't want to provision the boxes with a standard private key, but surely there has to be another solution?
While I'm not sure my situation is as complex as yours (I'm not using Ansible or Bower), I solved this problem by using Vagrant's SSH agent forwarding. This blog post gives you the details on how to get it working:
Cloning from GitHub in Vagrant using SSH agent forwarding
So as long as each developer has access to the Bitbucket repos from their local machine, it should work.
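The relevant Vagrantfile setting is agent forwarding, roughly:

Vagrant.configure("2") do |config|
  # forward the host's ssh-agent into the guest
  config.ssh.forward_agent = true
end

With that in place, git (and therefore bower) running inside the box can authenticate against Bitbucket using the key loaded in the developer's local ssh-agent.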

How to get Hudson CI to check out CVS projects over SSH?

I have my Hudson CI server set up. I have a CVS repo that I can only check out from via SSH, but I see no way to convince Hudson to check out over SSH. I tried all sorts of options when supplying my connection string.
Has anyone done this? I gotta think it has been done.
If I still remember CVS correctly, you have to set the CVS_RSH environment variable to ssh. I suspect you need to set it so that your Tomcat process inherits the value.
You can check Hudson's system information page to see exactly which environment variables the JVM sees (and passes along to the build).
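A quick sketch of what that looks like for the build user (CVS server, repository path, and module name are placeholders):

export CVS_RSH=ssh
cvs -d :ext:builduser@cvs.example.com:/var/cvsroot checkout mymodule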
I wrote up an article that tackles this; you can find it here:
http://www.openscope.net/2011/01/03/configure-ssh-authorized-keys-for-cvs-access/
Essentially, you want to set up passphraseless SSH keys for your build user. This allows authentication to happen without having to work out some way to key in a password.
Edit: i.e. essentially the standard SSH key client/server install and exchange (http://en.wikipedia.org/wiki/Secure_Shell#Key_management).
For the Jenkins user account:
- install the user key (public and private parts) in ~/.ssh (generate it fresh or use an existing user key)
On the CVS server:
- install the user key (public part) in ~/.ssh
- add it to authorized_keys
Back on the Jenkins user account:
- access CVS from the command line as the jenkins user and accept the remote host key (into known_hosts); see the command sketch after this list
Note: any time the remote server changes its key or IP, you will need to manually access CVS and accept the key again.
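A command-level sketch of those steps (user names and host name are placeholders):

# on the build machine, as the jenkins user
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # passphraseless key for the build user
ssh-copy-id cvsuser@cvs.example.com          # appends the public key to authorized_keys on the CVS server
ssh cvsuser@cvs.example.com true             # first connection: accept the host key into known_hosts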
There's another way to do it, but you have to manually log in from the build machine to your CVS server and keep the SSH session open so Hudson/Jenkins can piggyback on the connection. That seemed kind of pointless to me, though, since you want your CI server to be as hands-off as possible.
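For reference, the usual OpenSSH mechanism for sharing one long-lived connection like that is connection multiplexing; the answer doesn't spell this out, so treat the ~/.ssh/config sketch below (host name is a placeholder) as an assumption:

Host cvs.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 8h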