I have a need to disable host time syncing selectively on a vagrant/virtualbox vm. I can do that without issue on the host machine like so:
$ VBoxManage setextradata <name> \
> "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1"
The issue is that I only need to disable syncing while some work is done in the guest vm. Specifically, a unit test job needs to be able to toggle the GetHostTimeDisabled value programmatically, effectively giving full control of the vm system time to the test job.
I've considered running the VBoxManage command over ssh from within the unit test job but that brings up additional difficulty. Like, for example, setting up passwordless login to my local machine as the vagrant user.
I have ssh agent forwarding enabled for the vm and it's confirmed working. Unless anyone has suggestions for a "better" way, I'd appreciate some help figuring out how to ssh back to the host machine (OS X) from the vm (Ubuntu Linux) as the vagrant user. Ideally I'd like it to work the same for all users of the Vagrant project. Maybe using the forwarded agent somehow?
Perhaps there's a way to manipulate VBoxService on the guest to temporarily disable time sync?
I've solved this using ssh to run the VBoxManage command on the host machine.
Enable ssh agent forwarding in your Vagrantfile
e.g., config.ssh.forward_agent = true
Enable yourself to ssh to yourself as yourself ;)
In other words, copy your own public key to your own authorized_keys file
e.g., $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Set the VM to ssh as you by default (instead of the vagrant user)
e.g., via the shell provisioner in your Vagrantfile:
require 'etc'
username = Etc.getlogin
config.vm.provision :shell, :inline => "echo \"Host *\n User #{username}\" > /home/vagrant/.ssh/config"
Freely ssh back to the host like so: (the default NAT IP of the host is 10.0.2.2)
$ ssh 10.0.2.2 'hostname'
$ ssh 10.0.2.2 'VBoxManage setextradata <your vm name> \
"VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1"'
$ echo "GetHostTimeDisabled is now " && \
ssh 10.0.2.2 'VBoxManage getextradata <your vm name> \
"VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled"'
I see that you have found an answer, but I've thought of a different way.
VirtualBox has a concept called "shared folders". This means you can set up a folder on the host that is available for mounting on the guest, and which stays the same between host and guest, so you can share data while the guest is running. You can even have it automatically mounted, in which case it appears under /media on the guest (as /media/sf_<folder> on Linux guests with the Guest Additions). Then you can symlink it to wherever you want.
For my idea, you would put VBoxManage.exe into the shared folder from the host, then call it from wherever you put it in the guest, and possibly put it in your PATH. Then you'd call
$ /path/to/VBoxManage setextradata <guest name> \
> "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" "1"
from your script or whatever.
I just tried this, and it didn't work, but I have a feeling that was because my host and guest are different OSes, specifically Windows and Linux respectively. I suspect calling dos2unix on VBoxManage.exe might work, but I'm afraid to try it because it might break VirtualBox. It might work better for you, though.
First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
App with GUI is running in a docker container (CentOS 7.1) under Arch Linux. (machine A)
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine. (machine B)
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
DISPLAY variable is set in the container (to host-ip-addr:10.0, because the SSH X11 proxy on the host listens on TCP port 6010).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things I tried:
starting client ssh with ssh -Y option on machine B
putting "X11ForwardTrusted yes" in ssh_config on machine B
xhost + (so allow any clients to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
Adding the X auth token in the container with xauth add from the login user on machine A
Just copying over the .Xauthority file from a working user into the container
Making sure the .Xauthority file has the correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: How can I get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: The first answer below shows how to effectively resolve this issue. However, I would recommend you look into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is just leagues above what X11 forwarding delivered for me. There might be some instances where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
-VNC server runs on machine A on the host (not inside a docker container).
-Now you just have to figure out how to get a GUI inside a docker container (which is a much simpler undertaking).
-If the docker container was started NOT from the VNC environment, the DISPLAY variable may need adjusting (see the sketch after this list).
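As a rough illustration of that last point (not part of the original setup, so treat it as a sketch): if a TigerVNC server on machine A created display :1, a GUI container can be pointed at it by sharing the host's X socket directory. The display number and image name here are assumptions.
# on machine A (the host): start the VNC server, which creates X display :1
vncserver :1
# run the GUI container against that display
docker run -it --rm \
  -e DISPLAY=:1 \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  my-gui-image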
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus pointed out correctly you also have to set the --net=host option to make this work.
Ok, here is the thing:
1) Log in to remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*so far so good right?
This was nothing new...however here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged docker setup, the X forwarding port is not listening locally inside the container, because the sshd is not running there. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
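Putting steps 1-7 together, a session might look roughly like this (a sketch; the cookie, host IP, and display number are placeholders):
# on machine A, inside the ssh -X login session
$ xauth list
myhost/unix:10  MIT-MAGIC-COOKIE-1  abcdef0123456789abcdef0123456789
# inside the container: add the same cookie, keyed on the host's IP instead of <hostname>/unix
$ xauth add 172.17.0.1:10 MIT-MAGIC-COOKIE-1 abcdef0123456789abcdef0123456789
$ export DISPLAY=172.17.0.1:10.0
$ xterm   # should now appear on machine B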
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file, and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file is important, however, because the incoming connection will come from a "different" machine (the docker container).
This works in any scenario.
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY
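For example, a run command along these lines should then be able to open windows on the local X server through its Unix socket (a sketch; the image name is a placeholder, and it assumes the X socket directory is shared as in the question):
docker run -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-gui-image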
This usually works via https://stackoverflow.com/a/61060528/429476.
But if you are running docker as a different user than the one you used for ssh -X into the server, then copying the .Xauthority file only helped along with volume-mapping the file.
Example: I sshed into the server as the alex user, then ran docker after su - root and got this error:
X11 connection rejected because of wrong authentication.
Copying the .Xauthority file and mapping it as in https://stackoverflow.com/a/51209546/429476 made it work:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on wiring here https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks: the host is A, the local machine is B.
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your docker container is not running interactively but is running sshd, you can use jump hosts or ProxyCommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and passing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box.
If you have openssh-client version 7.3 or greater, you can use the following command:
ssh -X -J user-on-host@hostmachine,user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead
(google says the -X is not needed in the proxy command, but I am suspicious)
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockermachine xeyes
Or ssh -X into host, then ssh -X into docker.
In either of the above cases, you should NOT share .Xauthority with the container
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
If your docker is running sshd, you can open a second ssh -X session on your local machine and use the jumphost method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it once you're in. You might have to export it if you attach to an existing container where this line wasn't used.
Use these docker args for --net=host and X11UseLocalhost yes (a consolidated sketch follows this list):
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
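Put together, the invocation might look roughly like this (a sketch only; the user and image names are placeholders, and it assumes X11UseLocalhost yes on the host and an existing ssh -X session):
docker run -it --rm \
  --net=host \
  -e DISPLAY=$DISPLAY \
  -v $HOME/.Xauthority:/home/someuser/.Xauthority \
  -u someuser \
  some-image xeyes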
What follows is an explanation of how everything works and other approaches to try.
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, then opens a listening port on which it places an X11 proxy that uses the session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jump hosts/ProxyCommand, the keys between the host and the container will again differ from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host's .Xauthority with the container; in that case you can only have one active session per user, since each new session invalidates the previous ones by modifying the host's .Xauthority so that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the sshd X11 proxy to listen on the wildcard address, with --net=host I could not redirect the container display to localhost:X.Y, where X and Y come from the host's $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost, on port 6000+X.
If X11UseLocalhost is no, the DISPLAY variable on the host becomes hostname:X.Y, which causes the proxy to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
This is theoretical; I don't yet have access to docker on a remote host to test it.
But this is the easy way. We bypass the problem by redirecting the DISPLAY variable to always be localhost, and use docker port mapping to move the data from localhost:X+1.Y in the container to localhost:X.Y on the host, where ssh is waiting to forward X traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge.
Setting up container ports requires specifying EXPOSE in the Dockerfile and publishing the ports with the -p option.
Setting everything up manually without ssh -X
This works only with --net=host. This approach works without xauth because we are piping directly to the Unix domain socket on the local machine.
ssh to host without -X
ssh -R6010:localhost:6010 user@host
start docker with -e DISPLAY=localhost:10.1 or export inside
in another terminal on local machine
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
In the original terminal, run your X clients.
If the container is on --net=bridge and you can't use docker ports, enable sshd in the container and use the jump host method.
I am using Docker on Mac OS X with Docker Machine (with the default boot2docker machine), and I use docker-compose to setup my development environment.
Let's say that one of the containers is called "stack". Now what I want to do is call:
docker-compose run stack ssh user@stackoverflow.com
My public key (which has been added to stackoverflow.com and which will be used to authenticate me) is located on the host machine. I want this key to be available to the Docker Machine container so that I will be able to authenticate myself against stackoverflow using that key from within the container. Preferably without physically copying my key to Docker Machine.
Is there any way to do this? Also, if my key is password protected, is there any way to unlock it once so that I will not have to enter the password manually every time?
You can add this to your docker-compose.yml (assuming your user inside container is root):
volumes:
  - ~/.ssh:/root/.ssh
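To sanity-check the mount, you can list the keys inside the container and then try the original command (a quick sketch, reusing the "stack" service name from the question):
$ docker-compose run stack ls -l /root/.ssh
$ docker-compose run stack ssh user@stackoverflow.com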
Also you can check the more advanced solution with an ssh agent (I have not tried it myself).
WARNING: This feature seems to have limited support in Docker Compose and is more designed for Docker Swarm.
(I haven't checked to make sure, but) My current impression is that:
In Docker Compose, secrets are just bind-mounted volumes, so there's no additional security compared to volumes.
The ability to change secret permissions on a Linux host may be limited.
See answer comments for more details.
Docker has a feature called secrets, which can be helpful here. To use it one could add the following code to docker-compose.yml:
---
version: '3.1' # Note the minimum file version for this feature to work
services:
  stack:
    ...
    secrets:
      - host_ssh_key
secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
Then the new secret file can be accessed in Dockerfile like this:
RUN mkdir ~/.ssh && ln -s /run/secrets/host_ssh_key ~/.ssh/id_rsa
Secret files won't be copied into container:
When you grant a newly-created or running service access to a secret, the decrypted secret is mounted into the container in an in-memory filesystem
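A quick way to confirm the secret is actually visible inside the running container (a sketch, reusing the "stack" service name):
$ docker-compose up -d stack
$ docker-compose exec stack ls -l /run/secrets/
# should show host_ssh_key, which the RUN line above links to ~/.ssh/id_rsa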
For more details please refer to:
https://docs.docker.com/engine/swarm/secrets/
https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets
If you're using OS X and encrypted keys, this is going to be a PITA. Here are the steps I went through to figure this out.
Straightforward approach
One might think that there’s no problem. Just mount your ssh folder:
...
volumes:
  - ~/.ssh:/root/.ssh:ro
...
This should be working, right?
User problem
The next thing we'll notice is that we're using the wrong user id. Fine, we'll write a script to copy the ssh keys and change their owner. We'll also set the ssh user in the config so that the ssh server knows who's connecting.
...
volumes:
  - ~/.ssh:/root/.ssh-keys:ro
command: sh -c './.ssh-keys.sh && ...'
environment:
  SSH_USER: $USER
...
# ssh-keys.sh
mkdir -p ~/.ssh
cp -r /root/.ssh-keys/* ~/.ssh/
chown -R $(id -u):$(id -g) ~/.ssh
cat <<EOF >> ~/.ssh/config
User $SSH_USER
EOF
SSH key passphrase problem
In our company we protect SSH keys using a passphrase. That wouldn’t work in docker since it’s impractical to enter a passphrase each time we start a container.
We could remove a passphrase (see example below), but there’s a security concern.
openssl rsa -in id_rsa -out id_rsa2
# enter passphrase
# replace passphrase-encrypted key with plaintext key:
mv id_rsa2 id_rsa
SSH agent solution
You may have noticed that locally you don’t need to enter a passphrase each time you need ssh access. Why is that?
That's what the SSH agent is for. The SSH agent is basically a server which listens on a special file, a Unix socket, called the "ssh auth sock". You can see its location on your system:
echo $SSH_AUTH_SOCK
# /run/user/1000/keyring-AvTfL3/ssh
The SSH client communicates with the SSH agent through this file so that you enter the passphrase only once. Once the key is decrypted, the SSH agent keeps it in memory and answers the SSH client's requests with it.
Can we use that in Docker? Sure, just mount that special file and specify a corresponding environment variable:
environment:
  SSH_AUTH_SOCK: $SSH_AUTH_SOCK
...
volumes:
  - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
We don’t even need to copy keys in this case.
To confirm that keys are available we can use ssh-add utility:
if [ -z "$SSH_AUTH_SOCK" ]; then
echo "No ssh agent detected"
else
echo $SSH_AUTH_SOCK
ssh-add -l
fi
The problem of unix socket mount support in Docker for Mac
Unfortunately for OS X users, Docker for Mac has a number of shortcomings, one of which is its inability to share Unix sockets between Mac and Linux. There’s an open issue in D4M Github. As of February 2019 it’s still open.
So, is that a dead end? No, there is a hacky workaround.
SSH agent forwarding solution
Luckily, this issue isn't new. Long before Docker there was a way to use local ssh keys within a remote ssh session. This is called ssh agent forwarding. The idea is simple: you connect to a remote server through ssh, and from there you can reach further servers using your local keys, without copying them anywhere.
With Docker for Mac we can use a smart trick: share the ssh agent with the docker virtual machine over a TCP ssh connection, and mount the resulting socket file from the virtual machine into whichever container needs the SSH connection. Here's how the solution works:
First, we create an ssh session to an ssh server running inside a container inside the Linux VM, through a TCP port. We use the real ssh auth sock here.
Next, the ssh server forwards our ssh keys to the ssh agent in that container. The ssh agent's Unix socket lives at a location mounted into the Linux VM, i.e. the Unix socket works in Linux; the non-working Unix socket file on the Mac has no effect.
After that we create our useful container with an SSH client. We share the Unix socket file which our local SSH session uses.
There's a set of scripts that simplifies this process:
https://github.com/avsm/docker-ssh-agent-forward
Conclusion
Getting SSH to work in Docker could've been easier, but it can be done, and it'll likely be improved in the future. At least the Docker developers are aware of this issue, and they have even solved it for Dockerfiles with build-time secrets. And there's a suggestion for how to support Unix domain sockets.
You can forward SSH agent:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
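On a Linux host you can confirm the forwarded agent is visible inside the container like this (a sketch; it assumes the openssh client is installed in the image and reuses the "something" service name above):
$ docker-compose run something ssh-add -l
# should list the same keys as ssh-add -l on the host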
You can use a multi-stage build to build containers. This is the approach you can take:
Stage 1: build an image with ssh
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
add env attribute in your compose file:
environment:
  - SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
then pass args from build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
And remove the intermediate image for security. This will help you. Cheers.
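For that removal step, the stage=sshImage label set in stage 1 can be used to prune the intermediate image once the build is done, for example (a sketch):
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
docker image prune --force --filter label=stage=sshImage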
Docker for Mac now supports mounting the ssh agent socket on macOS.
I currently use Vagrant and Chef to provision a VM and setup my PHP based project. This includes running composer install which essentially does a git clone of a number of private repositories.
After setting up ssh agent forwarding as outlined in the docs and the answers here: How to use ssh agent forwarding with "vagrant ssh"? I have successfully got it working.
The problem I'm having is that whenever I boot, provision, or SSH into a VM I'm now asked for vagrant's default password; see the examples below:
==> web: Waiting for machine to boot. This may take a few minutes...
web: SSH address: 192.168.77.185:22
web: SSH username: vagrant
web: SSH auth method: private key
Text will be echoed in the clear. Please install the HighLine or Termios libraries to suppress echoed text.
vagrant@192.168.77.185's password:
Example 2
➜ vagrant git:(master) ✗ vagrant ssh
vagrant@192.168.77.185's password:
This is pretty inconvenient as I work across a number of projects, destroying and recreating some of them several times a day (Chef Test Kitchen). Is there any way to automatically use my key as well so I don't need to continually enter a password?
I ran into a similar issue recently after creating a new Vagrant box from scratch. The problem turned out to be old entries in ~/.ssh/known_hosts (on OS X).
Try the following (assumes OS X or linux):
ssh into your Vagrant machine
type ip addr or ifconfig or the like (depending on your OS)
take note of the IP addresses listed, including 127.0.0.1
on your host machine, run ssh-keygen -R {vm-ip-address} for each of the addresses in step 3, making sure to include both 127.0.0.1 and [127.0.0.1] (see the sketch after these steps)
confirm the relevant entries have been removed from ~/.ssh/known_hosts
vagrant reload
vagrant ssh
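For the addresses in this question, step 4 might look like this (a sketch; the bracketed form covers known_hosts entries recorded with a non-standard port, such as Vagrant's default 2222 forward):
$ ssh-keygen -R 192.168.77.185
$ ssh-keygen -R 127.0.0.1
$ ssh-keygen -R "[127.0.0.1]:2222"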
Alternatively, you can just delete/move/rename the ~/.ssh/known_hosts file, though this will require reconfirming host authenticity for every machine you've already ssh'd to.
I hope that helps.
Reference: http://www.geekride.com/ssh-warning-remote-host-identification-has-changed/
I am testing Google Compute Engine, and I created a VM with Ubuntu. When I connect to it by clicking the Connect SSH button, it opens a console window.
Is that the connection you get?
How do I open a real screen with a GUI on it? I don't want the console.
Much better solution from Google themselves:
https://medium.com/google-cloud/linux-gui-on-the-google-cloud-platform-800719ab27c5
You need to forward the X11 session from the VM to your local machine. This has been covered in the Unix and Linux stack site before:
https://unix.stackexchange.com/questions/12755/how-to-forward-x-over-ssh-from-ubuntu-machine
Since you are connecting to a server that is expected to run compute tasks there may well be no X11 server installed on it. You may need to install X11 and similar. You can do that by following the instructions here:
https://help.ubuntu.com/community/ServerGUI
Since I have needed to do this recently, I am going to briefly write up the required changes here:
Configure the Server
$ sudo vim /etc/ssh/sshd_config
Ensure that X11Forwarding yes is present. Restart the ssh daemon if you change the settings:
$ sudo /etc/init.d/sshd restart
Configure the Client
$ vim ~/.ssh/config
Ensure that ForwardX11 yes is present for the host. For example:
Host example.com
ForwardX11 yes
Forwarding X11
$ ssh -X -C example.com
...
$ gedit example.txt
Trusted X11 Forwarding
http://dailypackage.fedorabook.com/index.php?/archives/48-Wednesday-Why-Trusted-and-Untrusted-X11-Forwarding-with-SSH.html
You may wish to enable trusted forwarding if applications have trouble with untrusted forwarding.
You can enable this permanently by using ForwardX11Trusted yes in the ~/.ssh/config file.
You can enable this for a single connection by using the -Y argument in place of the -X argument.
These instructions are for setting up Ubuntu 16.04 LTS with LXDE (I use SSH port forwarding instead of opening port 5901 in the VM instance firewall)
1. Build a new Ubuntu VM instance using the GCP Console
2. connect to your instance using google cloud shell
gcloud compute --project "project_name" ssh --zone "project_zone" "instance_name"
3. install the necessary packages
sudo apt update && sudo apt upgrade
sudo apt-get install xorg lxde vnc4server
4. setup vncserver (you will be asked to provide a password for the vncserver)
vncserver
5. add LXDE to the vncserver startup script
echo "lxpanel & /usr/bin/lxsession -s LXDE &" >> ~/.vnc/xstartup
6. Reboot your instance (this returns you to the Google cloud shell prompt)
sudo reboot
7. Use the google cloud shell download file facility to download the auto-generated private key stored at $HOME/.ssh/google_compute_engine and save it in your local machine*****
cloudshell download-files $HOME/.ssh/google_compute_engine
8. From your local machine SSH to your VM instance (forwarding port 5901) using your private key (downloaded at step 7)
ssh -L 5901:localhost:5901 -i "google_compute_engine" username@instance_external_ip -v -4
9. Run the vncserver in your VM instance
vncserver -geometry 1280x800
10. In your local machine's Remote Desktop Client (e.g. Remmina) set Server to localhost:5901 and Protocol to VNC
Note 1: to check if the vncserver is working ok use:
netstat -na | grep '[:.]5901'
tail -f /home/user_id/.vnc/instance-1:1.log
Note 2: to restart the vncserver use:
sudo vncserver -kill :1 && vncserver
***** When first connected via the Google cloud shell the public and private keys are auto-generated and stored in the cloud shell instance at $HOME/.ssh/
ls $HOME/.ssh/
google_compute_engine google_compute_engine.pub google_compute_known_hosts
The public key should be added to /home/user_id/.ssh/authorized_keys in the VM instance (this is done automatically when you first SSH to the VM instance from the Google cloud shell, i.e. in step 2).
You can confirm this in the instance metadata.
Chrome Remote Desktop allows you to remotely access applications with a graphical user interface from a local computer or mobile device. For this approach, you don't need to open firewall ports, and you use your Google Account for authentication and authorization.
Check out this google tutorial to use it with Compute Engine : https://cloud.google.com/solutions/chrome-desktop-remote-on-compute-engine
Rather than create a new SSH key pair on a vagrant box, I would like to re-use the key pair I have on my host machine, using agent forwarding. I've tried setting config.ssh.forward_agent to TRUE in the Vagrantfile, then rebooted the VM, and tried using:
vagrant ssh -- -A
...but I'm still getting prompted for a password when I try to do a git checkout. Any idea what I'm missing?
I'm using vagrant 2 on OS X Mountain Lion.
Vagrant.configure("2") do |config|
config.ssh.private_key_path = "~/.ssh/id_rsa"
config.ssh.forward_agent = true
end
config.ssh.private_key_path is your local private key
Your private key must be available to the local ssh-agent. You can check with ssh-add -L; if it's not listed, add it with ssh-add ~/.ssh/id_rsa.
Don't forget to add your public key to ~/.ssh/authorized_keys on the Vagrant VM. You can do it by copy-and-pasting or using a tool like ssh-copy-id.
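To verify the whole chain (a quick sketch): check that the agent on the host lists the key, then run the same check inside the VM through the forwarded agent; both should show the same entry.
# on the host (OS X): the key must be loaded into the agent
$ ssh-add -l
# inside the VM: the forwarded agent should list the same key
$ vagrant ssh -c 'ssh-add -l'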
Add it to the Vagrantfile
Vagrant::Config.run do |config|
# stuff
config.ssh.forward_agent = true
end
See the docs
In addition to adding "config.ssh.forward_agent = true" to the vagrant file make sure the host computer is set up for agent forwarding. Github provides a good guide for this. (Check out the troubleshooting section).
I had this working with the above replies on 1.4.3, but it stopped working on 1.5. I now have to run ssh-add for it to work fully with 1.5.
For now I add the following line to my ansible provisioning script.
- name: Make sure ssh keys are passed to guest.
local_action: command ssh-add
I've also created a gist of my setup: https://gist.github.com/KyleJamesWalker/9538912
If you are on Windows, SSH Forwarding in Vagrant does not work properly by default (because of a bug in net-ssh). See this particular Vagrant bug report: https://github.com/mitchellh/vagrant/issues/1735
However, there is a workaround! Simply auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your VagrantFile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
When we recently tried out the vagrant-aws plugin with Vagrant 1.1.5, we ran into an issue with SSH agent forwarding. It turned out that Vagrant was forcing IdentitiesOnly=yes without an option to change it to no. This forced Vagrant to only look at the private key we listed in the Vagrantfile for the AWS provider.
I wrote up our experiences in a blog post. It may turn into a pull request at some point.
Make sure that the VM does not launch its own SSH agent. I had this line in my ~/.profile
eval `ssh-agent`
After removing it, SSH agent forwarding worked.
The real problem is that Vagrant uses 127.0.0.1:2222 as the default port forward.
You can add another one (not 2222; 2222 is already occupied by default):
config.vm.network "forwarded_port", guest: 22, host:2333, host_ip: "0.0.0.0"
"0.0.0.0" is way take request from external connection.
then
ssh -p 2333 vagrant#192.168.2.101 (change to your own host ip address, dud)
will working just fine.
Do thank me, Just call me Leifeng!
On Windows, the problem is that Vagrant doesn't know how to communicate with git-bash's ssh-agent. It does, however, know how to use PuTTY's Pageant. So, as long as Pageant is running and has loaded your SSH key, and as long as you've set config.ssh.forward_agent, this should work.
See this comment for details.
If you use Pageant, then the workaround of updating the Vagrantfile to copy SSH keys on Windows is no longer necessary.