`ssh` executable not found in any directories in the %PATH%

ERROR:
c:\Users\dhawal.vora>vagrant ssh
`ssh` executable not found in any directories in the %PATH% variable. Is an
SSH client installed? Try installing Cygwin, MinGW or Git, all of which
contain an SSH client. Or use your favorite SSH client with the following
authentication information shown below:
Host: 127.0.0.1
Port: 2222
Username: vagrant
Private key: c:/Users/dhawal.vora/.vagrant/machines/default/virtualbox/private_key
Kindly help?
My Vagrantfile is below:
# -*- mode: ruby -*-
# vi: set ft=ruby :
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.
  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "precise32"
  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  config.vm.box_check_update = false
  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080
  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"
  # Create a public network, which generally maps to a bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  config.vm.network "public_network"
  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  config.vm.synced_folder "../data", "/vagrant_data"
  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.
  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  # config.push.define "atlas" do |push|
  #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
  # end
  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   sudo apt-get install apache2
  # SHELL
end

Try adding C:\Program Files\Git\usr\bin to the PATH environment variable.
Add it manually, or I believe you could run this in cmd:
set PATH=%PATH%;C:\Program Files\Git\usr\bin
(updated from @Ygor Thomaz's comments)
If this doesn't fix your problem, go through :
Get SSH working on Vagrant/Windows/Git
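To verify that an SSH client is now reachable, and to make the change survive new command prompts (a plain set only lasts for the current cmd session), something like the following should work; the path below assumes a default 64-bit Git for Windows install:
where ssh
setx PATH "%PATH%;C:\Program Files\Git\usr\bin"
Note that setx writes the user-level PATH and only affects command prompts opened afterwards (it also truncates very long values), so editing the variable through System Settings, as mentioned in another answer below, is often the safer route.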

Alternatively, you can install OpenSSH from here and then add ssh.exe to your PATH with:
set PATH=%PATH%;C:\Program Files (x86)\OpenSSH\bin
or
set PATH=%PATH%;C:\Program Files\OpenSSH\bin

With Windows 10 I also couldn't get the 'set PATH' option to work, but when I amended the PATH variable through System Settings and started a new command prompt it worked fine.
Also, PuTTY worked perfectly after I read the screen, which told me to use a username of 'core'.
'core' was a requirement of my configuration, which was trying to launch CoreOS.

Adding C:\Program Files\Git\usr\bin to the PATH environment variable didn't work for me.
So I configured PuTTY for the SSH connection instead.

This well-written, illustrative tutorial gives a great overview of the ways to set up Vagrant SSH. The first way is via Git and the second describes how to use PuTTY. It is very easy to follow.
Running Vagrant SSH on Windows

In my case even adding ssh to the PATH didn't solve the problem. What I had to do was connect to the VM with ssh manually. After executing vagrant up, instead of running vagrant ssh, I do this:
ssh vagrant@127.0.0.1 -p 2222
And the password is "vagrant"
For getting all the information about the ip, port and user you can use
vagrant ssh-config
Hope this helps somebody...
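If you would rather not copy those values by hand, one small sketch of an alternative (not part of the steps above): dump the generated configuration to a file and point ssh at it:
vagrant ssh-config > vagrant-ssh.config
ssh -F vagrant-ssh.config default
Here default is the machine name shown on the Host line of the ssh-config output; substitute your machine's name if your Vagrantfile defines differently named machines.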

Related

Selenium 4 Dynamic Grid setup using different VMs

In the official documentation of the Selenium Docker setup, I see a config.toml file which contains the info below:
[docker]
# Configs have a mapping between the Docker image to use and the capabilities that need to be matched to
# start a container with the given image.
configs = [
"selenium/standalone-firefox:4.3.0-20220706", "{\"browserName\": \"firefox\"}",
"selenium/standalone-chrome:4.3.0-20220706", "{\"browserName\": \"chrome\"}",
"selenium/standalone-edge:4.3.0-20220706", "{\"browserName\": \"MicrosoftEdge\"}"
]
# URL for connecting to the docker daemon
# Most simple approach, leave it as http://127.0.0.1:2375, and mount /var/run/docker.sock.
# 127.0.0.1 is used because internally the container uses socat when /var/run/docker.sock is mounted
# If /var/run/docker.sock is not mounted:
# Windows: make sure Docker Desktop exposes the daemon via tcp, and use http://host.docker.internal:2375.
# macOS: install socat and run the following command, socat -4 TCP-LISTEN:2375,fork UNIX-CONNECT:/var/run/docker.sock,
# then use http://host.docker.internal:2375.
# Linux: varies from machine to machine, please mount /var/run/docker.sock. If this does not work, please create an issue.
url = "http://127.0.0.1:2375"
# Docker image used for video recording
video-image = "selenium/video:ffmpeg-4.3.1-20220706"
# Uncomment the following section if you are running the node on a separate VM
# Fill out the placeholders with appropriate values
[server]
host = <ip-from-node-machine>
port = <port-from-node-machine>
What do the bottom two parameters, host and port, represent?
FYI: I am planning to run the hub container in one VM and the node containers in other VMs.
Correct me if I am wrong, but I am guessing the config.toml file should be present in the VMs where the nodes will be running.
So, for host=, should we give the IP of the machine where the hub is up and running?
And for port=, where do we get the port number?
Thanks in advance.
Yes, the host and port values are the details of where your Hub is running. The port number is 4444 if your hub is running on the default port.
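Purely as an illustration of the syntax (the address below is a made-up example), following the answer above, the uncommented [server] section in the node VM's config.toml would then look like this:
[server]
host = "192.168.1.50"
port = 4444
Note that the host value has to be a quoted TOML string, while the port can be a plain integer.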

Access vagrant VM with puphpet from local network?

I have used Vagrant and PuPHPet for two weeks and it works great. In my case I just use http://myserver.dev, which I added to my hosts file as PuPHPet suggests:
192.168.56.101 myserver.dev
Now I want to access my VM's Apache www folder from another computer on my local network.
This post suggests uncommenting some lines in the Vagrantfile, but as I use PuPHPet my autogenerated Vagrantfile looks like this:
# -*- mode: ruby -*-
dir = File.dirname(File.expand_path(__FILE__))
require 'yaml'
require "#{dir}/puphpet/ruby/deep_merge.rb"
require "#{dir}/puphpet/ruby/to_bool.rb"
require "#{dir}/puphpet/ruby/puppet.rb"
configValues = YAML.load_file("#{dir}/puphpet/config.yaml")
provider = ENV['VAGRANT_DEFAULT_PROVIDER'] ? ENV['VAGRANT_DEFAULT_PROVIDER'] : 'local'
if File.file?("#{dir}/puphpet/config-#{provider}.yaml")
  custom = YAML.load_file("#{dir}/puphpet/config-#{provider}.yaml")
  configValues.deep_merge!(custom)
end
if File.file?("#{dir}/puphpet/config-custom.yaml")
  custom = YAML.load_file("#{dir}/puphpet/config-custom.yaml")
  configValues.deep_merge!(custom)
end
data = configValues['vagrantfile']
Vagrant.require_version '>= 1.8.1'
Vagrant.configure('2') do |config|
  eval File.read("#{dir}/puphpet/vagrant/Vagrantfile-#{data['target']}")
end
But there are no lines to uncomment.
I thought maybe I need to do something in PuPHPet's config.yaml?
Here's what I've found about the IP and port:
machines:
  vflm_azud9vpjzelv:
    id: machine1
    hostname: myserver.puphpet
    network:
      private_network: 192.168.56.101
      forwarded_port:
        vflmnfp_rkr38vlo4vcb:
          host: '6597'
          guest: '22'
    memory: '512'
    cpus: '1'
You have two simple choices:
Vagrant comes with the vagrant share command, which opens up a publicly accessible, random URL to your VM.
Create a forwarded port from your host to your VM. For example, forward port 1080 on your host to port 80 in the VM, so that when you go to http://localhost:1080 the traffic is forwarded to your VM. For this you need to set * as your Apache vhost's alias so it catches all traffic on the port you choose (in this case, 80). A sketch of the matching config.yaml entry follows below.
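If you go with the second option, in PuPHPet that mapping lives in config.yaml rather than in the Vagrantfile. A sketch based on the structure shown in the question (the my_http_forward key name is arbitrary; PuPHPet normally generates its own IDs):
forwarded_port:
  vflmnfp_rkr38vlo4vcb:
    host: '6597'
    guest: '22'
  my_http_forward:
    host: '1080'
    guest: '80'
Run vagrant reload afterwards so the new forwarding is applied.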

Can't ssh to vagrant VMs using the insecure private key (vagrant 1.7.2)

I have a cluster of 3 VMs. Here is the Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
hosts = {
  "host0" => "192.168.33.10",
  "host1" => "192.168.33.11",
  "host2" => "192.168.33.12"
}
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  config.vm.box_url = "http://files.vagrantup.com/precise64.box"
  config.ssh.private_key_path = File.expand_path('~/.vagrant.d/insecure_private_key')
  hosts.each do |name, ip|
    config.vm.define name do |machine|
      machine.vm.hostname = "%s.example.org" % name
      machine.vm.network :private_network, ip: ip
      machine.vm.provider "virtualbox" do |v|
        v.name = name
        # v.customize ["modifyvm", :id, "--memory", 200]
      end
    end
  end
end
This used to work until I upgraded recently:
ssh -i ~/.vagrant.d/insecure_private_key vagrant@192.168.33.10
Instead, vagrant asks for a password.
It seems that recent versions of vagrant (I'm on 1.7.2) create a secure private key for each machine. I discovered it by running
vagrant ssh-config
The output shows different keys for each host. I verified the keys are different by diffing them.
I tried to force the insecure key by setting config.ssh.private_key_path in the Vagrantfile, but it doesn't work.
The reason I want to use the insecure key for all machines is that I want to provision them from the outside using ansible. I don't want to use the Ansible provisioner, but treat the VMs as remote servers. So, the Vagrantfile is just used to specify the machines in the cluster and then provisioning will be done externally.
The documentation still says that by default machines will use the insecure private key.
How can I make my VMs use the insecure private key?
Vagrant changed this behaviour between versions 1.6 and 1.7 and now inserts an auto-generated key instead of the default insecure one.
You can cancel this behaviour by setting config.ssh.insert_key = false in your Vagrantfile.
Vagrant shouldn't replace the insecure key if you specify private_key_path as you did; however, the internal logic checks whether private_key_path points to the default insecure_private_key, and if it does, Vagrant will replace it.
More info can be found here.
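Applied to the multi-machine Vagrantfile from the question, the change is a single extra line. A minimal sketch (not the full file):
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"
  # Keep using the shared insecure key instead of per-machine generated keys
  config.ssh.insert_key = false
  config.ssh.private_key_path = File.expand_path('~/.vagrant.d/insecure_private_key')
  # ... machine definitions as before ...
end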
When Vagrant creates a new SSH key it is saved, with the default configuration, below the Vagrantfile directory at .vagrant/machines/default/virtualbox/private_key.
Using the autogenerated key, you can log in from the same directory as the Vagrantfile like this:
ssh -i .vagrant/machines/default/virtualbox/private_key -p 2222 vagrant@localhost
To learn all the details about the actual SSH configuration of a Vagrant box, use the vagrant ssh-config command:
# vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /Users/babo/src/centos/.vagrant/machines/default/virtualbox/private_key
IdentitiesOnly yes
LogLevel FATAL
After adding config.ssh.insert_key = false to the Vagrantfile and removing the new per-VM private key (.vagrant/machines/default/virtualbox/private_key), Vagrant automatically updates vagrant ssh-config to use the correct private key, ~/.vagrant.d/insecure_private_key. The last thing I had to do was ssh into the VM and update the authorized_keys file on the VM:
curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub > ~/.ssh/authorized_keys
tldr;
ssh vagrant@127.0.0.1 -p 2222 -i ~/www/vw/vw-environment/.vagrant/machines/default/virtualbox/private_key
I couldn't get this to work, so in the end I added the following to the ssh.rb Ruby script (/opt/vagrant/embedded/gems/gems/vagrant-1.7.1/lib/vagrant/util/ssh.rb):
print(*command_options)
just before this line that executes the ssh call
SafeExec.exec("ssh", *command_options)
That prints out all the command options passed to the ssh call; from there you can work out something that works for you, based on what Vagrant calculates to be the correct ssh parameters.
If you are specifically using Ansible (not the Vagrant Ansible provisioner), you might want to consider using the vagrant dynamic inventory script from Ansible's repo:
https://github.com/ansible/ansible/blob/devel/contrib/inventory/vagrant.py
Alternatively, you can handcraft your own script and dynamically build your own Vagrant inventory file:
SYSTEMS=$(vagrant status | grep running | cut -d ' ' -f1)
echo '[vagrant_systems]' > vagrant.ini
for SYSTEM in ${SYSTEMS}; do
  SSHCONFIG=$(vagrant ssh-config ${SYSTEM})
  IDENTITY_FILE=$(echo "${SSHCONFIG}" | grep -o "\/.*${SYSTEM}.*")
  PORT=$(echo "${SSHCONFIG}" | grep -oE '[0-9]{4,5}')
  echo "${SYSTEM} ansible_ssh_host=127.0.0.1 ansible_ssh_port=${PORT} ansible_ssh_private_key_file=${IDENTITY_FILE}" >> vagrant.ini
done
Then run your playbooks with ansible-playbook -i vagrant.ini.
If you try to use ~/.ssh/config instead, you'll have to dynamically create or edit existing entries, as the ssh ports can change (due to the collision detection in Vagrant).

Apache doesn't start after Vagrant reload

I'm trying to set up a simple dev environment with Vagrant. The base box (that I created) has CentOS 6.5 64bit with Apache and MySQL.
The issue is, the httpd service doesn't start on boot after I reload the VM (vagrant reload or vagrant halt then up).
The problem only occurs when I run a provision script that alters the DocumentRoot and only after the first time I halt the machine.
More info:
httpd is on chkconfig on levels 2, 3, 4 and 5
There are no errors written to the error_log (on /etc/httpd/logs).
If I ssh into the machine and start the service manually, it starts with no problem.
I had the same issue with other CentOS boxes (like the chef/centos-6.5 available on vagrantcloud.com), which is why I created one myself.
Other services, like mysql, start fine, so it's a problem specific to apache.
To sum up:
httpd always starts on first boot, even with the provision script (e.g. after vagrant destroy)
httpd always starts when I don't run a provision script (but I need one to set the DocumentRoot)
httpd doesn't start after the first halt, with a provision script that messes with the DocumentRoot (not sure if that's the problem)
This is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "centos64_lamp"
  config.vm.box_url = "<url>/centos64_lamp.box"
  config.vm.hostname = "machine.dev"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.synced_folder ".", "/vagrant", owner: "root", group: "root"
  config.vm.provision :shell, :path => "vagrant_files/bootstrap.sh"
end
I tried to create the vagrant folder with owner/group root and apache. Same problem with both (as with owner vagrant).
These are the provision scripts (bootstrap.sh) that I tried. The only thing that I want them to do is to change the DocumentRoot to the vagrant folder. Neither worked.
Try 1
#!/usr/bin/env bash
sudo rm -rf /var/www/html
sudo ln -fs /vagrant/app/webroot /var/www/html
Try 2
#!/usr/bin/env bash
sudo cp /vagrant/vagrant_files/httpd.conf /etc/httpd/conf
sudo service httpd restart
The httpd.conf on the second try is equal to the default one, except for the DocumentRoot path. This second alternative allows me to do vagrant up --provision to force the restart of the service, but that should be an unnecessary step.
What else can I try to solve this? Thank you.
Apparently the problem was due to the vagrant folder not being mounted when Apache tries to start, although I still don't understand why no error is thrown.
I solved it by creating an Upstart script (in the folder /etc/init) to start the service after Vagrant mounts its folder (it emits an event called vagrant-mounted).
This is the script I used (with the filename httpd.conf, but I don't think that name is required).
# start apache on vagrant mounted
start on vagrant-mounted
exec sudo service httpd start
Upstart can do much more but this solves it.
First of all, check whether httpd is supposed to be started for the specific runlevels (at least 2-5), which you already did, by running:
chkconfig | grep httpd
If it is, the issue may be that your DocumentRoot or its symlink points to the Vagrant synced folder, which is not yet available when the service starts.
The workaround is to add a service-start command at the end of your shell provisioning script, e.g.:
service httpd status || service httpd start
That should fix it.
For a more bullet-proof workaround, put it into a trap function (for a Bash script), e.g.:
trap onerror 1 2 3 15 ERR
#--- onerror()
onerror() {
  service httpd status || service httpd start
}
This may not be enough, so to make it start in the halt & up cases you need to run your shell provisioner with run: "always" in your Vagrantfile, for example:
config.vm.provision :shell, run: "always", :inline => "service httpd status || service httpd start"
or provide a script, e.g.:
config.vm.provision :shell, run: "always", path: "scripts/check_vm_services.sh"
Then the script may look like:
#!/usr/bin/env bash
# Script to re-check VM state.
# Run each time when vagrant command is invoked.
# Check if httpd service is running.
echo Checking services...
service httpd status || service httpd start
Alternatively, check Launching services after Vagrant mount, which uses the vagrant-mounted Upstart event that Vagrant emits each time it mounts a synced folder; you can modify the Upstart configuration of services that depend on the synced folder so they start (or restart) only after the vagrant-mounted event is emitted.
I confirm that the above solution absolutely works.
I added a file named vagrant-mounted.conf within /etc/init, containing:
start on vagrant-mounted
exec sudo sh /etc/startup.sh
I had already added the shell script /etc/startup.sh as a means of manually starting up httpd, mysqld and sendmail, but that required logging in via vagrant ssh after vagrant up. Now it's automatic. Great!
My nginx was not starting up on vagrant reload or vagrant up, so this is my solution (tee is used so the file is written as root, and the restarts go into a script stanza because Upstart allows only one exec per job):
sudo tee /etc/init/vagrant-mounted.conf > /dev/null << 'EOL'
# restart services on vagrant-mounted
start on vagrant-mounted
script
  service php5-fpm restart
  service mysql restart
  service memcached restart
  service nginx restart
end script
EOL

How to use ssh agent forwarding with "vagrant ssh"?

Rather than create a new SSH key pair on a Vagrant box, I would like to re-use the key pair I have on my host machine, using agent forwarding. I've tried setting config.ssh.forward_agent to true in the Vagrantfile, then rebooted the VM, and tried using:
vagrant ssh -- -A
...but I'm still getting prompted for a password when I try to do a git checkout. Any idea what I'm missing?
I'm using vagrant 2 on OS X Mountain Lion.
Vagrant.configure("2") do |config|
config.ssh.private_key_path = "~/.ssh/id_rsa"
config.ssh.forward_agent = true
end
config.ssh.private_key_path is your local private key.
Your private key must be available to the local ssh-agent. You can check with ssh-add -L; if it's not listed, add it with ssh-add ~/.ssh/id_rsa.
Don't forget to add your public key to ~/.ssh/authorized_keys on the Vagrant VM. You can do that by copy-and-pasting or by using a tool like ssh-copy-id, as sketched below.
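As a concrete sketch (assuming the default forwarded SSH port 2222 and an id_rsa key; adjust both to your setup), ssh-copy-id can push the public key into the VM's authorized_keys in one step:
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 2222 vagrant@127.0.0.1
When prompted, the default password for the vagrant user is "vagrant".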
Add it to the Vagrantfile
Vagrant::Config.run do |config|
  # stuff
  config.ssh.forward_agent = true
end
See the docs
In addition to adding config.ssh.forward_agent = true to the Vagrantfile, make sure the host computer is set up for agent forwarding. GitHub provides a good guide for this (check out the troubleshooting section).
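On the host side that usually means the key is loaded into ssh-agent (ssh-add) and, if you connect to the VM directly rather than through vagrant ssh, forwarding is enabled for that host in ~/.ssh/config. A sketch, with a made-up host alias and the default forwarded port:
Host vagrant-vm
    HostName 127.0.0.1
    Port 2222
    User vagrant
    ForwardAgent yes
For a single session, vagrant ssh -- -A achieves the same thing by passing -A straight to ssh.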
I had this working with the above replies on 1.4.3, but it stopped working on 1.5. I now have to run ssh-add for it to work fully with 1.5.
For now I add the following task to my Ansible provisioning script:
- name: Make sure ssh keys are passed to guest.
  local_action: command ssh-add
I've also created a gist of my setup: https://gist.github.com/KyleJamesWalker/9538912
If you are on Windows, SSH Forwarding in Vagrant does not work properly by default (because of a bug in net-ssh). See this particular Vagrant bug report: https://github.com/mitchellh/vagrant/issues/1735
However, there is a workaround! Simply auto-copy your local SSH key to the Vagrant VM via a simple provisioning script in your VagrantFile. Here's an example:
https://github.com/mitchellh/vagrant/issues/1735#issuecomment-25640783
When we recently tried out the vagrant-aws plugin with Vagrant 1.1.5, we ran into an issue with SSH agent forwarding. It turned out that Vagrant was forcing IdentitiesOnly=yes without an option to change it to no. This forced Vagrant to only look at the private key we listed in the Vagrantfile for the AWS provider.
I wrote up our experiences in a blog post. It may turn into a pull request at some point.
Make sure that the VM does not launch its own SSH agent. I had this line in my ~/.profile
eval `ssh-agent`
After removing it, SSH agent forwarding worked.
The real problem is that Vagrant uses 127.0.0.1:2222 as the default port forward.
You can add another one (not 2222; 2222 is already occupied by the default):
config.vm.network "forwarded_port", guest: 22, host: 2333, host_ip: "0.0.0.0"
"0.0.0.0" is what makes it accept requests from external connections.
Then
ssh -p 2333 vagrant@192.168.2.101 (change to your own host IP address)
will work just fine.
Do thank me; just call me Leifeng!
On Windows, the problem is that Vagrant doesn't know how to communicate with Git Bash's ssh-agent. It does, however, know how to use PuTTY's Pageant. So, as long as Pageant is running and has loaded your SSH key, and as long as you've set config.ssh.forward_agent, this should work.
See this comment for details.
If you use Pageant, then the workaround of updating the Vagrantfile to copy SSH keys on Windows is no longer necessary.
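For completeness, a hedged example of loading a key into Pageant from the command line (the install path and key name are assumptions; keys can also be added via Pageant's tray icon, and they must be in PuTTY's .ppk format, which PuTTYgen can convert from an OpenSSH key):
"C:\Program Files\PuTTY\pageant.exe" %USERPROFILE%\.ssh\id_rsa.ppk
With the key loaded and config.ssh.forward_agent = true, vagrant ssh should then pick up the forwarded agent.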