Selenium Grid, Vagrant, unable to run tests from Eclipse

I am attempting to automate our testing using Selenium and Selenium Grid 2. To do this I have created a VirtualBox VM and packaged it with Vagrant into a box. Using simple batch scripts (I eventually want to run this on a Jenkins CI server), I can start the Vagrant box, but I get:
c:\seleniumServer>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'IE_Vagrant.box'...
==> default: Matching MAC address for NAT networking...
==> default: Setting the name of the VM:seleniumServer_default_1436811491763_573
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: bridged
==> default: Forwarding ports...
default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: password
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
I can start the Selenium hub and Selenium node, and they register. I can even SSH into the Vagrant box after it finishes telling me it cannot connect. I have set up Cygwin and OpenSSH on the box.
When I try to run the TestNG test from Eclipse I get:
Error forwarding the new session Error forwarding the request Connect to 10.0.2.15:5566 [/10.0.2.15] failed: Connection timed out: connect.
Here are the relevant bits.
Start node with
java -jar lib/selenium-server-standalone-2.46.0.jar -role webdriver -hub http://localhost:4444/grid/register -browser browserName="chrome",version=ANY,platform=WINDOWS,maxInstances=5 -Dwebdriver.chrome.driver="c:\seleniumDrivers\chromedriver.exe"
Start the Hub with
java -jar selenium-server-standalone-2.46.0.jar -role hub
VagrantFile:
Vagrant.configure(2) do |config|
  config.vm.boot_timeout = "300"
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
  config.vm.network "public_network"
  config.vm.box = "IE_Vagrant.box"
  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    vb.gui = true
    # # Customize the amount of memory on the VM:
    # vb.memory = "1024"
  end
end
And here is my test:
package com.hiiq.qa.testing.gen2;
import static org.junit.Assert.assertEquals;
import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.By;
import org.openqa.selenium.Platform;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
public class GridTest {
    private static RemoteWebDriver driver;

    @BeforeClass
    public void setUp() throws MalformedURLException {
        DesiredCapabilities capability = new DesiredCapabilities();
        //capability.setBrowserName("chrome");
        capability.setBrowserName(DesiredCapabilities.chrome().getBrowserName());
        capability.setPlatform(Platform.WINDOWS);
        //capability.setVersion("");
        capability.setJavascriptEnabled(true);
        driver = new RemoteWebDriver(new URL("http://10.70.1.28:4444/wd/hub"), capability);
        driver.get("http://10.1.6.112:8383");
    }

    @Test
    public void loginTest(){

Check this tutorial to verify the box is properly set up, especially the VirtualBox Guest Additions: https://dennypc.wordpress.com/2014/06/09/creating-a-windows-box-with-vagrant-1-6/
vagrant up and vagrant ssh should work properly.
Then set up your Vagrantfile for port forwarding:
Vagrant.configure(2) do |config|
  config.vm.boot_timeout = "300"
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"
  config.vm.network "public_network"
  config.vm.box = "IE_Vagrant.box"
  config.vm.network "forwarded_port", guest: 4444, host: 4444
  config.vm.network "forwarded_port", guest: 8383, host: 8383
  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    vb.gui = true
    # # Customize the amount of memory on the VM:
    # vb.memory = "1024"
  end
end
Reach your services in your tests at localhost:4444 and localhost:8383.
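With the forwarded ports above, the guest-side services become reachable on the host. A minimal sketch (plain Python, not Vagrant or Selenium API; the guest ports are the ones from the answer's Vagrantfile) of how the endpoints translate, so the test points at localhost instead of a guest IP:

```python
# Guest port -> host port, mirroring the forwarded_port lines above.
forwarded = {4444: 4444, 8383: 8383}

def host_url(guest_port, mapping, host="localhost"):
    """Return the host-side URL for a guest port forwarded by Vagrant."""
    return f"http://{host}:{mapping[guest_port]}"

hub_url = host_url(4444, forwarded)  # Selenium hub endpoint for RemoteWebDriver
app_url = host_url(8383, forwarded)  # application under test
print(hub_url)  # http://localhost:4444
print(app_url)  # http://localhost:8383
```

The same substitution applies in the TestNG test: the RemoteWebDriver URL and the driver.get() target would use these localhost addresses rather than the guest's NAT IP.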

Related

Vagrant multi vm ssh connection setup works on one but not the others

I have searched many similar issues but can't seem to figure out the one I'm having. I have a Vagrantfile with which I set up 3 VMs. I add a public key to each VM so I can run Ansible against the boxes after the vagrant up command (I don't want to use the Ansible provisioner). I forward all the SSH ports on each box.
I can vagrant ssh <server_name> on to each box successfully.
With the following:
ssh vagrant@192.168.56.2 -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@192.168.56.3 -p 2712 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.3 port 2712: Connection refused
ssh vagrant@192.168.56.4 -p 2713 -i ~/.ssh/ansible <-- connection error
ssh: connect to host 192.168.56.4 port 2713: Connection refused
And
ssh vagrant@localhost -p 2711 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2712 -i ~/.ssh/ansible <-- successful connection
ssh vagrant@localhost -p 2713 -i ~/.ssh/ansible <-- successful connection
Ansible can connect to the first one (vagrant@192.168.56.2) but not the other two. I can't figure out why it connects to one and not the others. Any ideas what I could be doing wrong?
The Ansible inventory:
{
  "all": {
    "hosts": {
      "kubemaster": {
        "ansible_host": "192.168.56.2",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2711
      },
      "kubenode01": {
        "ansible_host": "192.168.56.3",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2712
      },
      "kubenode02": {
        "ansible_host": "192.168.56.4",
        "ansible_user": "vagrant",
        "ansible_ssh_port": 2713
      }
    },
    "children": {},
    "vars": {}
  }
}
The Vagrantfile:
# Define the number of master and worker nodes
NUM_MASTER_NODE = 1
NUM_WORKER_NODE = 2

PRIV_IP_NW = "192.168.56."
MASTER_IP_START = 1
NODE_IP_START = 2

# Vagrant configuration
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # default box
  config.vm.box = "ubuntu/jammy64"

  # automatic box update checking.
  config.vm.box_check_update = false

  # Provision master nodes
  (1..NUM_MASTER_NODE).each do |i|
    config.vm.define "kubemaster" do |node|
      # Name shown in the GUI
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubemaster"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubemaster"
      node.vm.network :private_network, ip: PRIV_IP_NW + "#{MASTER_IP_START + i}"
      node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"
      # argo and traefik access
      node.vm.network "forwarded_port", guest: 8080, host: "#{8080}"
      node.vm.network "forwarded_port", guest: 9000, host: "#{9000}"
      # synced folder for kubernetes setup yaml
      node.vm.synced_folder "sync_folder", "/vagrant_data", create: true, owner: "root", group: "root"
      node.vm.synced_folder ".", "/vagrant", disabled: true
      # setup the hosts, dns and ansible keys
      node.vm.provision "setup-hosts", :type => "shell", :path => "vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", :path => "vagrant/update-dns.sh"
      node.vm.provision "shell" do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end
  end

  # Provision Worker Nodes
  (1..NUM_WORKER_NODE).each do |i|
    config.vm.define "kubenode0#{i}" do |node|
      node.vm.provider "virtualbox" do |vb|
        vb.name = "kubenode0#{i}"
        vb.memory = 2048
        vb.cpus = 2
      end
      node.vm.hostname = "kubenode0#{i}"
      node.vm.network :private_network, ip: PRIV_IP_NW + "#{NODE_IP_START + i}"
      node.vm.network :forwarded_port, guest: 22, host: "#{2711 + i}"
      # synced folder for kubernetes setup yaml
      node.vm.synced_folder ".", "/vagrant", disabled: true
      # setup the hosts, dns and ansible keys
      node.vm.provision "setup-hosts", :type => "shell", :path => "vagrant/setup-hosts.sh" do |s|
        s.args = ["enp0s8"]
      end
      node.vm.provision "setup-dns", type: "shell", :path => "vagrant/update-dns.sh"
      node.vm.provision "shell" do |s|
        ssh_pub_key = File.readlines("#{Dir.home}/.ssh/ansible.pub").first.strip
        s.inline = <<-SHELL
          echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
          echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
        SHELL
      end
    end
  end
end
Your Vagrantfile confirms what I suspected:
You define port forwarding as follows:
node.vm.network :forwarded_port, guest: 22, host: "#{2710 + i}"
That means port 22 of the guest is made reachable on the host at port 2710+i. For your 3 VMs, from the host's point of view, this means:
192.168.2.1:22 -> localhost:2711
192.168.2.2:22 -> localhost:2712
192.168.2.3:22 -> localhost:2713
As IP addresses for your VMs you have defined the range 192.168.2.0/24, but you try to access the range 192.168.56.0/24.
If a private IP address is defined (e.g. 192.168.2.2 for your 1st node), Vagrant implements this in the VirtualBox VM as follows:
Two network adapters are defined for the VM:
NAT: this gives the VM Internet access
Host-Only: this gives the host access to the VM via IP 192.168.2.2.
For each /24 network, VirtualBox (and Vagrant) creates a separate VirtualBox Host-Only Ethernet Adapter, and the host is .1 on each of these networks.
What this means for you is that if you use an IP address from the 192.168.2.0/24 network, an adapter is created on your host that always gets the IP address 192.168.2.1/24, so you have the addresses 192.168.2.2 - 192.168.2.254 available for your VMs.
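The /24 layout just described can be sketched with Python's stdlib ipaddress module (a hypothetical helper, not part of Vagrant or VirtualBox): the host takes .1, leaving .2 through .254 for guests.

```python
import ipaddress

def hostonly_layout(cidr):
    """For a host-only /24 network, return the host's own address
    (VirtualBox assigns .1 on each host-only network to the host)
    plus the first and last addresses left over for guest VMs."""
    hosts = list(ipaddress.ip_network(cidr).hosts())  # .1 .. .254 for a /24
    return str(hosts[0]), str(hosts[1]), str(hosts[-1])

host_ip, first_guest, last_guest = hostonly_layout("192.168.2.0/24")
print(host_ip, first_guest, last_guest)
# 192.168.2.1 192.168.2.2 192.168.2.254
```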
This means that your master's IP address collides with your host's!
But why does access to your first VM work?
ssh vagrant@192.168.56.1 -p 2711 -i ~/.ssh/ansible <-- successful connection
That is relatively simple: The network 192.168.56.0/24 is the default network for Host-Only under VirtualBox, so you probably have a VirtualBox Host-Only Ethernet Adapter with the address 192.168.56.1/24.
Because you have defined port forwarding in your Vagrantfile, the 1st VM is mapped to localhost:2711. If you now access 192.168.56.1:2711, this is your own host, thus localhost, and the SSH of the 1st VM is mapped to port 2711 on this host.
So what do you have to do now?
Change the IP addresses of your VMs, e.g. use 192.168.2.11 - 192.168.2.13.
The access to the VMs is possible as follows:
Node        via Guest-IP       via localhost
kubemaster  192.168.2.11:22    localhost:2711
kubenode01  192.168.2.12:22    localhost:2712
kubenode02  192.168.2.13:22    localhost:2713
Note: If you want to access via the guest IP address, use port 22; if you want to access via localhost, use the port 2710+i that you defined.
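The table above follows directly from the Vagrantfile's 2710+i port scheme and the answer's suggested .11-.13 guest range; a small sketch (illustrative helper, names are made up) of the resulting mapping:

```python
def node_endpoints(names, ip_prefix="192.168.2.", ip_start=10, port_base=2710):
    """Mirror the access table: node i gets guest IP ip_prefix + (ip_start+i)
    on SSH port 22, and localhost port port_base + i via port forwarding."""
    return {
        name: {"guest": f"{ip_prefix}{ip_start + i}:22",
               "host": f"localhost:{port_base + i}"}
        for i, name in enumerate(names, start=1)
    }

table = node_endpoints(["kubemaster", "kubenode01", "kubenode02"])
print(table["kubemaster"])
# {'guest': '192.168.2.11:22', 'host': 'localhost:2711'}
```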

Vagrant ssh stuck with "default: Warning: Connection timeout. Retrying..."

I am running Vagrant (1.7.4) with Salt on VirtualBox 4.3 on a headless Ubuntu 14.04. Salt is standalone. The reason I am using these versions is that they work on my local Ubuntu.
On vagrant up I get the following output:
==> default: Importing base box 'phusion/ubuntu-14.04-amd64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'phusion/ubuntu-14.04-amd64' is up to date...
==> default: Setting the name of the VM: drupal_default_1452863894453_19933
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2201.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 22 => 2201 (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2201
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection timeout. Retrying...
default: Warning: Connection timeout. Retrying...
vagrant ssh-config gives:
Host default
HostName 127.0.0.1
User vagrant
Port 2201
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/user/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
My Vagrantfile is:
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.host_name = "site#{rand(0..999999)}"

  config.vm.provider "virtualbox" do |v|
    config.ssh.insert_key = false
    v.memory = 2048
    v.cpus = 1
  end

  ## For masterless, mount your salt file root
  config.vm.synced_folder "salt/roots/", "/srv/salt/"

  # Network
  config.vm.network :private_network, ip: "172.16.0.100"

  # Server provisioner
  config.vm.provision :salt do |salt|
    salt.masterless = true
    salt.minion_config = "salt/minion"
    salt.run_highstate = true
    salt.bootstrap_options = "-P"
  end

  # Provisioning scripts
  config.vm.provision "dbsync", type: "shell", path: "provision/db.sh"
end
What could I have missed? Any Ubuntu network configuration? Any SSH configuration?

How to forward port in apache2 cookbook

I tried the code below in the apache cookbook to move the default port 80 to 443, but I still get an error when running Chef. Can you please advise? I tried to map to a port other than 80 because I also have an nginx recipe in my cookbook, so I would like to set up apache2 to listen on a different port.
* apache/attribute/default.rb
default['apache']['dir'] = '/etc/apache2'
default['apache']['listen_ports'] = [ '80','443' ]
* apache/recipes/default.rb
package "apache2" do
action :install
end
service "apache2" do
action [:enable, :start]
end
template "/var/www/index.html" do
source "index.html.erb"
mode "0644"
end
Vagrant provision error -
================================================================================
==> default:
==> default: Error executing action `start` on resource 'service[apache2]'
==> default:
================================================================================
==> default:
==> default: Mixlib::ShellOut::ShellCommandFailed
==> default:
==> default: ------------------------------------
==> default:
==> default: Expected process to exit with [0], but received '1'
This time I used attribute/default.rb with the content below, but I am still getting an error:
default['apache']['dir'] = '/etc/apache2'
default['apache']['listen_ports'] = [ '81' ]
Error
==> default: STDOUT: * Starting web server apache2
==> default:
==> default: Action 'start' failed.
==> default: The Apache error log may have more information.
==> default: ...fail!
==> default: STDERR: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
==> default: (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
listen_ports is not a mapping; it sets which ports Apache listens on. If you don't want to listen on 80, do not include it in that array. Note too that the recipe shown only installs the package, the service, and an index page; unless something actually renders ports.conf from that attribute (as the community apache2 cookbook's templates do), the stock Listen 80 stays in effect and collides with whatever already holds port 80, such as nginx.
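Conceptually, the cookbook turns each entry of listen_ports into one Listen directive. A rough illustration (a hypothetical helper, not the cookbook's actual template code) of that attribute-to-config rendering:

```python
def render_listen_directives(listen_ports):
    """Render one Apache 'Listen' line per entry of a listen_ports-style
    attribute, which is conceptually what a ports.conf template does."""
    return "\n".join(f"Listen {port}" for port in listen_ports)

# With nginx already bound to 80, leave 80 out of the array entirely:
print(render_listen_directives(["81"]))           # Listen 81
print(render_listen_directives(["443", "8080"]))  # two Listen lines
```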

Cannot ssh into a Docker-provided container with Vagrant; vagrant ssh doesn't work either

I am fairly new to both Vagrant and Docker.
What I am trying to do here is to get a container provided via docker in Vagrant and install a small webapp using the shell provisioner.
Here is my Vagrantfile
Vagrant.configure(2) do |config|
  # config.vm.provision :shell, path: "bootstrap.sh"
  config.vm.provision :shell, inline: 'echo Hi there !!!'

  config.vm.provider :docker do |d|
    d.name = "appEnvironment"
    d.image = "phusion/baseimage"
    d.remains_running = true
    d.has_ssh = true
    d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
  end
end
The problem I am facing is that after the container is created it keeps retrying the following and eventually just stops.
I can see a running Docker container when I type docker ps, but it hasn't run the provisioning part. I am assuming that is because the SSH connection wasn't successful.
==> default: Creating the container...
default: Name: appEnvironment
default: Image: phusion/baseimage
default: Cmd: /sbin/my_init --enable-insecure-key
default: Volume: /home/devops/vagrantBoxForDemo:/vagrant
default: Port: 127.0.0.1:2222:22
default:
default: Container created: 56a87b7cd10c22fe
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 172.17.0.50:22
default: SSH username: vagrant
default: SSH auth method: private key
default: Warning: Connection refused. Retrying...
default: Warning: Connection refused. Retrying...
default: Warning: Connection refused. Retrying...
Can someone let me know where I might be wrong? I tried changing the image as well, but without success.
First download the insecure key provided by phusion from:
https://github.com/phusion/baseimage-docker/blob/master/image/insecure_key
* Remember the insecure key should be used only for development purposes.
Now you need to enable ssh by adding the following to your Dockerfile:
FROM phusion/baseimage
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
RUN /usr/sbin/enable_insecure_key
Enable ssh and specify the key file in your Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.define "app" do |app|
    app.vm.provider "docker" do |d|
      d.build_dir = "."
      d.cmd = ["/sbin/my_init", "--enable-insecure-key"]
      d.has_ssh = true
    end
  end

  config.ssh.username = "root"
  config.ssh.private_key_path = "path/to/your/insecure_key"
end
Bring up your environment:
vagrant up
Now you should be able to access your container via ssh:
vagrant ssh app
phusion/baseimage does not have the insecure private key enabled by default. You have to create your own base image FROM phusion/baseimage with the following:
RUN /usr/sbin/enable_insecure_key

In a custom AMI, sshd is not getting started

I created my own AMI, and when I start my instance sshd does not get started. What might be the problem?
Please find below a snippet of the system log:
init: rcS main process (199) terminated with status 1
Entering non-interactive startup
NET: Registered protocol family 10
lo: Disabled Privacy Extensions
Bringing up loopback interface: OK
Bringing up interface eth0:
Determining IP information for eth0...type=1400 audit(1337940238.646:4): avc: denied { getattr } for pid=637 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
martian source 255.255.255.255 from 169.254.1.0, on dev eth0
ll header: ff:ff:ff:ff:ff:ff:fe:ff:ff:ff:ff:ff:08:00
type=1400 audit(1337940239.023:5): avc: denied { getattr } for pid=647 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
type=1400 audit(1337940239.515:6): avc: denied { getattr } for pid=674 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
type=1400 audit(1337940239.560:7): avc: denied { getattr } for pid=690 comm="dhclient-script" path="/etc/sysconfig/network" dev=xvde1 ino=136359 scontext=system_u:system_r:dhcpc_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=file
done.
OK
Starting auditd: OK
Starting system logger: OK
Starting system message bus: OK
Retrigger failed udev events OK
Starting sshd: FAILED
The problem was due to SELinux. Once I disabled SELinux during boot by passing selinux=0 as a kernel argument in GRUB, the machine booted with the sshd service started and I was able to connect to it.