Using translations of Behat predefined steps (Phar install) - behat

I've run some tests with the predefined step definitions of MinkExtension. They work as long as they're written in English.
Now I've tried the following scenario with German steps:
# language: de
Funktionalität: Demo

  @javascript
  Szenario: Test 1
    Angenommen I am on "/"
    Angenommen ich bin auf "/"
    ...
Behat now tells me that the German step definition is undefined, while the English version works.
According to the CLI help, behat --lang de -dl should display the translated definitions, but it only shows me the English ones.
What am I doing wrong here?
Edit:
Here's a script to reproduce the scenario. It follows the install steps from the docs (http://extensions.behat.org/mink/#through-phar) in a temporary directory and runs the test feature file.
#!/bin/bash
set -e
TEMPDIR=/tmp/behat-$$
mkdir $TEMPDIR
cd $TEMPDIR
curl http://behat.org/downloads/behat.phar > behat.phar
curl http://behat.org/downloads/mink.phar > mink.phar
curl http://behat.org/downloads/mink_extension.phar > mink_extension.phar
cat > behat.yml <<EOF
default:
  extensions:
    mink_extension.phar:
      mink_loader: 'mink.phar'
      base_url: 'http://behat.org'
      goutte: ~
EOF
mkdir features
cat > features/test.feature <<EOF
# language: de
Funktionalität: Demo

  Szenario: Öffne Startseite DE + EN
    Angenommen I am on "/"
    Angenommen ich bin auf "/"
EOF
php behat.phar

Basically, you didn't do anything wrong.
Although the translations of Behat/Gherkin itself are included in the behat.phar file, the translations of the step definitions from MinkExtension are missing from the mink_extension.phar archive.
This seems to be because the build script only includes the files in MinkExtension/src/ and leaves out MinkExtension/i18n/. You could open an issue for MinkExtension to get this fixed.
As a workaround, I suggest installing Behat/Mink with Composer instead of working with phar archives.
Create the following composer.json file:
{
  "require": {
    "behat/behat": "2.4.*@stable",
    "behat/mink": "1.4.*@stable",
    "behat/mink-extension": "*",
    "behat/mink-goutte-driver": "*",
    "behat/mink-selenium2-driver": "*"
  },
  "minimum-stability": "dev",
  "config": {
    "bin-dir": "bin/"
  }
}
and then install it with:
curl http://getcomposer.org/installer | php
php composer.phar install
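With the Composer-based install, the extension is registered in behat.yml by its class name instead of the phar file name. A minimal configuration for this setup might look like the following sketch (the base_url value is only an example, matching the one used earlier):

```yaml
default:
  extensions:
    Behat\MinkExtension\Extension:
      base_url: 'http://behat.org'
      goutte: ~
```

Because the i18n files are shipped in the Composer package, behat --lang de -dl should then list the translated definitions as well.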


Unable to install Sylius 1.3 in prod env, nelmio_alice error

Unable to install Sylius in the prod environment (I edited APP_ENV=prod in the .env file); I get this error when executing: sudo php bin/console sylius:install
I am able to install in the dev environment, though. Please help!
Error log:
ubuntu@ip:/var/www/pwsstore_prod/Sylius$ sudo php bin/console sylius:install

In FileLoader.php line 168:

  There is no extension able to load the configuration for "nelmio_alice" (in /var/www/pwsstore_prod/Sylius/config/packages/nelmio_alice.yaml). Looked for namespace "nelmio_alice", found "framework", "monolog", "security", "swiftmailer", "twig", "doctrine", "doctrine_cache", "sylius_order", "sylius_money", "sylius_currency", "sylius_locale", "sylius_product", "sylius_channel", "sylius_attribute", "sylius_taxation", "sylius_shipping", "sylius_payment", "sylius_mailer", "sylius_promotion", "sylius_addressing", "sylius_inventory", "sylius_taxonomy", "sylius_user", "sylius_customer", "sylius_ui", "sylius_review", "sylius_core", "sylius_resource", "sylius_grid", "winzou_state_machine", "sonata_core", "sonata_block", "sonata_intl", "bazinga_hateoas", "jms_serializer", "fos_rest", "knp_gaufrette", "knp_menu", "liip_imagine", "payum", "stof_doctrine_extensions", "white_october_pagerfanta", "doctrine_migrations", "doctrine_fixtures", "sylius_fixtures", "sylius_payum", "sylius_theme", "sylius_admin", "sylius_shop", "fos_oauth_server", "sylius_admin_api" in /var/www/pwsstore_prod/Sylius/config/packages/nelmio_alice.yaml (which is loaded in resource "/var/www/pwsstore_prod/Sylius/config/packages/nelmio_alice.yaml").

In YamlFileLoader.php line 657:

  (the same "There is no extension able to load the configuration for "nelmio_alice"" message, with the same namespace list, is repeated here)
Move the nelmio_alice.yaml file from config/packages/nelmio_alice.yaml to config/packages/dev/nelmio_alice.yaml
(create the dev folder in the packages directory and move the nelmio_alice file into it).
Then run the installation command as usual!
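The fix boils down to two commands. Here is a runnable sketch using a scratch directory under /tmp with a stand-in file; in a real project you would run the mkdir/mv pair from the Sylius project root against the actual config file:

```shell
set -e
# scratch project layout standing in for the Sylius root
rm -rf /tmp/sylius-demo
mkdir -p /tmp/sylius-demo/config/packages
touch /tmp/sylius-demo/config/packages/nelmio_alice.yaml
cd /tmp/sylius-demo

# the actual fix: create the dev folder and move the config there,
# so the nelmio_alice bundle config is only loaded in the dev environment
mkdir -p config/packages/dev
mv config/packages/nelmio_alice.yaml config/packages/dev/
ls config/packages/dev   # prints: nelmio_alice.yaml
```

This works because Symfony only loads config/packages/&lt;env&gt;/ files for the matching environment, and NelmioAliceBundle is typically registered for dev/test only.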

Creating a Perl 6 module containing Perl 5 utility scripts in bin/

Perl 6 script in a Perl 5 module distribution
I can include a Perl 6 script in a Perl 5 module distribution:
# Create a new module
dzil new My::Dist
cd My-Dist/
# Add necessary boilerplate
echo '# ABSTRACT: boilerplate' >> lib/My/Dist.pm
# Create Perl 6 script in bin directory
mkdir bin
echo '#!/usr/bin/env perl6' > bin/hello.p6
echo 'put "Hello world!";' >> bin/hello.p6
# Install module
dzil install
# Test script
hello.p6
# Hello world!
# See that it is actually installed
which hello.p6
# ~/perl5/perlbrew/perls/perl-5.20.1/bin/hello.p6
Perl 5 script in a Perl 6 module distribution
However, I'm having a hard time including Perl 5 scripts in a Perl 6 distribution.
In the module directory are a META6.json file and a subdirectory called bin. In bin is a Perl 5 file called hello.pl.
zef install . runs without error in the top directory, but when I try to run hello.pl, I get an error. It turns out that a Perl 6 wrapper script had been installed for hello.pl, and that wrapper is what gives the error. If I run the original hello.pl directly, it works fine.
META6.json
{
  "perl" : "6.c",
  "name" : "TESTING1234",
  "license" : "Artistic-2.0",
  "version" : "0.0.2",
  "auth" : "github:author",
  "authors" : ["First Last"],
  "description" : "TESTING module creation",
  "provides" : {
  },
  "depends" : [ ],
  "test-depends" : [ "Test", "Test::META" ]
}
bin/hello.pl
#!/usr/bin/env perl
use v5.10;
use strict;
use warnings;
say 'Hello world!';
This installs without error, but when I try to run hello.pl, I get the following error:
===SORRY!===
Could not find Perl5 at line 2 in:
/home/username/.perl6
/path/to/perl6/rakudo-star-2017.07/install/share/perl6/site
/path/to/perl6/rakudo-star-2017.07/install/share/perl6/vendor
/path/to/perl6/rakudo-star-2017.07/install/share/perl6
CompUnit::Repository::AbsolutePath<64730416>
CompUnit::Repository::NQP<43359856>
CompUnit::Repository::Perl5<43359896>
Running which hello.pl on the command line indicated that it was installed in /path/to/perl6/rakudo-star-2017.07/install/share/perl6/site/bin/hello.pl. That file actually contains the following code:
/path/to/perl6/rakudo-star-2017.07/install/share/perl6/site/bin/hello.pl
#!/usr/bin/env perl6
sub MAIN(:$name is copy, :$auth, :$ver, *@, *%) {
    CompUnit::RepositoryRegistry.run-script("hello.pl", :dist-name<TESTING1234>, :$name, :$auth, :$ver);
}
I filed a Rakudo bug report (https://rt.perl.org/Ticket/Display.html?id=131911), but I'm not totally convinced that there isn't a simple workaround.
As an example, I created a simple cat replacement in Perl 5 and a Perl 6 module that "wraps" it (see the GitHub repository for it if you'd like to download the code and try it yourself).
Below are copies of the relevant files. After creating these files, zef install . runs fine with my Rakudo Star 2017.07 installation and installs a run_cat executable in your Rakudo bin directory.
The secret seemed to be to write a Perl 6 module that wraps the Perl 5 script, plus a corresponding Perl 6 script that uses the module.
Perl 5 script
resources/scripts/cat.pl
#!/bin/env perl
use v5.10;
use strict;
use warnings;

while (<>) {
    print;
}
Wrapper scripts
module: lib/catenate.pm6
unit module catenate;

sub cat ($filename) is export {
    run('perl', %?RESOURCES<scripts/cat.pl>, $filename);
}
executable: bin/run_cat
#!/bin/env perl6
use catenate;

sub MAIN ($filename) {
    cat($filename);
}
Boilerplate and tests
META6.json
{
  "perl" : "6.c",
  "name" : "cat",
  "license" : "Artistic-2.0",
  "version" : "0.0.9",
  "auth" : "github:author",
  "authors" : ["First Last"],
  "description" : "file catenation utility",
  "provides" : { "catenate" : "lib/catenate.pm6" },
  "test-depends" : [ "Test", "Test::META" ],
  "resources" : [ "scripts/cat.pl" ]
}
t/cat.t
#!/bin/env perl6
use Test;

constant GREETING = 'Hello world!';
my $filename = 'test.txt';
spurt($filename, GREETING);

my $p5 = qqx{ resources/scripts/cat.pl $filename };
my $p6 = qqx{ bin/run_cat $filename };

is $p6, $p5, 'wrapped script gives same result as original';
is $p6, GREETING, "output is '{GREETING}' as expected";

unlink $filename;
done-testing;
Thanks @moritz and @ugexe for getting me pointed in the right direction!

Packer.io fails using puppet provisioner: /usr/bin/puppet: line 3: rvm: command not found

I'm trying to build a Vagrant box file using Packer.io and Puppet.
I have this template as a starting point:
https://github.com/puphpet/packer-templates/tree/master/centos-7-x86_64
I added the Puppet provisioner after the shell provisioner:
{
  "type": "puppet-masterless",
  "manifest_file": "../../puphpet/puppet/site.pp",
  "manifest_dir": "../../puphpet/puppet/nodes",
  "module_paths": [
    "../../puphpet/puppet/modules"
  ],
  "override": {
    "virtualbox-iso": {
      "execute_command": "echo 'vagrant' | {{.FacterVars}}{{if .Sudo}} sudo -S -E bash {{end}}/usr/bin/puppet apply --verbose --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
    }
  }
}
When I start building the image with
packer-io build -only=virtualbox-iso template.json
Then I get this error:
==> virtualbox-iso: Provisioning with Puppet...
virtualbox-iso: Creating Puppet staging directory...
virtualbox-iso: Uploading manifest directory from: ../../puphpet/puppet/nodes
virtualbox-iso: Uploading local modules from: ../../puphpet/puppet/modules
virtualbox-iso: Uploading manifests...
virtualbox-iso:
virtualbox-iso: Running Puppet: echo 'vagrant' | sudo -S -E bash /usr/bin/puppet apply --verbose --modulepath='/tmp/packer-puppet-masterless/module-0' --manifestdir='/tmp/packer-puppet-masterless/manifests' --detailed-exitcodes /tmp/packer-puppet-masterless/manifests/site.pp
virtualbox-iso: /usr/bin/puppet: line 3: rvm: command not found
==> virtualbox-iso: Unregistering and deleting virtual machine...
==> virtualbox-iso: Deleting output directory...
Build 'virtualbox-iso' errored: Puppet exited with a non-zero exit status: 127
If I log into the box via tty, I can run both rvm and puppet commands as the vagrant user.
What did I do wrong?
I am trying out the exact same route as you are:
Use the relevant scripts from this repo for provisioning the VM.
Use the puppet scripts from a puphpet.com configuration to further provision the VM using the puppet-masterless provisioner in packer.
Still working on it, no successful build yet, but I can share the following:
Inspect line 50 of puphpet/shell/install-puppet.sh: the installed puppet command is a wrapper that triggers rvm when executed.
Inspect your packer output during provisioning. You read something along the lines of:
...
Creating alias default for ruby-1.9.3-p551
To start using RVM you need to run `source /usr/local/rvm/scripts/rvm` in all
your open shell windows, in rare cases you need to reopen all shell windows.
Cleaning up rvm archives
....
Apparently the command source /usr/local/rvm/scripts/rvm is needed for each user that runs rvm. It is executed and added to the bash profiles by the script puphpet/shell/install-ruby.sh. However, this does not affect the context/scope of packer's puppet-masterless execute_command, which is the reason for the line /usr/bin/puppet: line 3: rvm: command not found in your output.
My current way forward is the following configuration in template.json (the packer template); the second and third lines (prevent_sudo and execute_command) will help you get beyond the point where you are currently stuck:
{
  "type": "puppet-masterless",
  "prevent_sudo": true,
  "execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\"",
  "manifest_file": "./puphpet/puppet/site.pp",
  "manifest_dir": "./puphpet/puppet",
  "hiera_config_path": "./puphpet/puppet/hiera.yaml",
  "module_paths": [
    "./puphpet/puppet/modules"
  ],
  "facter": {
    "ssh_username": "vagrant",
    "provisioner_type": "virtualbox",
    "vm_target_key": "vagrantfile-local"
  }
},
Note the following things:
Running puppet as the vagrant user will probably not complete provisioning due to permission issues. In that case we need a way to run source /usr/local/rvm/scripts/rvm under sudo so that it affects the scope of the puppet provisioning command.
The puphpet.com output scripts have /vagrant/puphpet hardcoded in their puppet scripts (e.g. the first line of puphpet/puppet/nodes/Apache.pp). So you might need a packer file provisioner to upload these files to your VM before executing puppet masterless, so that it finds its dependencies in /vagrant/.... My packer.json conf for this:
{
  "type": "shell",
  "execute_command": "sudo bash '{{.Path}}'",
  "inline": [
    "mkdir /vagrant",
    "chown -R vagrant:vagrant /vagrant"
  ]
},
{
  "type": "file",
  "source": "./puphpet",
  "destination": "/vagrant"
},
Puppet will need some Facter variables, as they are expected by the puphpet/puppet/nodes/*.pp scripts; refer to my template.json above.
As said, no success with a complete puppet provisioning run on my side yet, but the above got me beyond the point where you are currently stuck. Hope it helps.
Update:
I replaced my old execute command for puppet provisioner
"execute_command": "source /usr/local/rvm/scripts/rvm && {{.FacterVars}}{{if .Sudo}} sudo -E{{end}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}"
with a new one
"execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\""
This will ensure puppet (and rvm) runs as root and provisioning finishes successfully.
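The scoping issue can be demonstrated without packer or rvm at all. In the sketch below, /tmp/fake-rvm.sh is a stand-in for /usr/local/rvm/scripts/rvm: the source has to happen inside the same bash -c invocation as the command that needs it, which is exactly what the fixed execute_command does.

```shell
# stand-in for /usr/local/rvm/scripts/rvm: defines an `rvm` shell function
cat > /tmp/fake-rvm.sh <<'EOF'
rvm() { echo "rvm ran: $*"; }
EOF

# sourcing and running in one bash -c puts rvm in scope for the command,
# mirroring: sudo -E bash -c "source /usr/local/rvm/scripts/rvm; puppet apply ..."
bash -c "source /tmp/fake-rvm.sh; rvm use ruby"
# prints: rvm ran: use ruby
```

Run the same rvm call in a fresh bash -c without the source and you get "command not found", which is the failure mode in the question.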
As an alternative to my other answer, here are my steps and configuration to get this provisioning scenario working with packer & puphpet.
Assuming the following to be in place:
./: a local directory acting as your own repository, containing
./ops/: a directory which holds the packer scripts and required files
./ops/template.json: the packer template used to build the VM
./ops/template.json expects the following to be in place:
./ops/packer-templates/: a clone of this repo
./ops/ubuntu-14.04.2-server-amd64.iso: the iso for the ubuntu you want running in your VM
./puphpet: the output of walking through the configuration steps on puphpet.com (so this is one level up from ops)
The contents of template.json:
{
  "variables": {
    "ssh_name": "vagrant",
    "ssh_pass": "vagrant",
    "local_packer_templates_dir": "./packer-templates/ubuntu-14.04-x86_64",
    "local_puphput_dir": "../puphpet",
    "local_repo_dir": "../",
    "repo_upload_dir": "/vagrant"
  },
  "builders": [
    {
      "name": "ubuntu-14.04.amd64.virtualbox",
      "type": "virtualbox-iso",
      "headless": false,
      "boot_command": [
        "<esc><esc><enter><wait>",
        "/install/vmlinuz noapic preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
        "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
        "hostname={{ .Name }} ",
        "fb=false debconf/frontend=noninteractive ",
        "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false ",
        "initrd=/install/initrd.gz -- <enter>"
      ],
      "boot_wait": "10s",
      "disk_size": 20480,
      "guest_os_type": "Ubuntu_64",
      "http_directory": "{{user `local_packer_templates_dir`}}/http",
      "iso_checksum": "83aabd8dcf1e8f469f3c72fff2375195",
      "iso_checksum_type": "md5",
      "iso_url": "./ubuntu-14.04.2-server-amd64.iso",
      "ssh_username": "{{user `ssh_name`}}",
      "ssh_password": "{{user `ssh_pass`}}",
      "ssh_port": 22,
      "ssh_wait_timeout": "10000s",
      "shutdown_command": "echo '/sbin/halt -h -p' > shutdown.sh; echo '{{user `ssh_pass`}}'|sudo -S bash 'shutdown.sh'",
      "guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
      "virtualbox_version_file": ".vbox_version",
      "vboxmanage": [
        ["modifyvm", "{{.Name}}", "--memory", "2048"],
        ["modifyvm", "{{.Name}}", "--cpus", "4"]
      ]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "echo '{{user `ssh_pass`}}'|sudo -S bash '{{.Path}}'",
      "scripts": [
        "{{user `local_packer_templates_dir`}}/scripts/base.sh",
        "{{user `local_packer_templates_dir`}}/scripts/virtualbox.sh",
        "{{user `local_packer_templates_dir`}}/scripts/vagrant.sh",
        "{{user `local_packer_templates_dir`}}/scripts/puphpet.sh",
        "{{user `local_packer_templates_dir`}}/scripts/cleanup.sh",
        "{{user `local_packer_templates_dir`}}/scripts/zerodisk.sh"
      ]
    },
    {
      "type": "shell",
      "execute_command": "sudo bash '{{.Path}}'",
      "inline": [
        "mkdir {{user `repo_upload_dir`}}",
        "chown -R vagrant:vagrant {{user `repo_upload_dir`}}"
      ]
    },
    {
      "type": "file",
      "source": "{{user `local_repo_dir`}}",
      "destination": "{{user `repo_upload_dir`}}"
    },
    {
      "type": "shell",
      "execute_command": "sudo bash '{{.Path}}'",
      "inline": [
        "rm -fR {{user `repo_upload_dir`}}/.vagrant",
        "rm -fR {{user `repo_upload_dir`}}/ops"
      ]
    },
    {
      "type": "puppet-masterless",
      "execute_command": "{{if .Sudo}}sudo -E {{end}}bash -c \"source /usr/local/rvm/scripts/rvm; {{.FacterVars}} puppet apply --verbose --parser future --modulepath='{{.ModulePath}}' {{if ne .HieraConfigPath \"\"}}--hiera_config='{{.HieraConfigPath}}' {{end}} {{if ne .ManifestDir \"\"}}--manifestdir='{{.ManifestDir}}' {{end}} --detailed-exitcodes {{.ManifestFile}}\"",
      "manifest_file": "{{user `local_puphput_dir`}}/puppet/site.pp",
      "manifest_dir": "{{user `local_puphput_dir`}}/puppet",
      "hiera_config_path": "{{user `local_puphput_dir`}}/puppet/hiera.yaml",
      "module_paths": [
        "{{user `local_puphput_dir`}}/puppet/modules"
      ],
      "facter": {
        "ssh_username": "{{user `ssh_name`}}",
        "provisioner_type": "virtualbox",
        "vm_target_key": "vagrantfile-local"
      }
    },
    {
      "type": "shell",
      "execute_command": "sudo bash '{{.Path}}'",
      "inline": [
        "echo '{{user `repo_upload_dir`}}/puphpet' > '/.puphpet-stuff/vagrant-core-folder.txt'",
        "sudo bash {{user `repo_upload_dir`}}/puphpet/shell/important-notices.sh"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "output": "./build/{{.BuildName}}.box",
      "compression_level": 9
    }
  ]
}
Narration of what happens:
execute the basic provisioning of the VM using the scripts that are used to build puphpet boxes (first shell provisioner block)
create a directory /vagrant in the VM and set permissions for the vagrant user
upload the local repository to /vagrant (important, as puphpet/puppet expects it to exist at that location in its scripts)
remove some unneeded stuff from /vagrant after the upload
start the puppet provisioner with the custom execute_command and facter configuration
process the remaining provisioning scripts (to be extended with the exec once/always and start once/always files)
Note: you might need to prepare some more things before the puppet provisioner kicks off. E.g. I needed a directory in place that will be the docroot of a vhost in apache. Use shell provisioning to complete the template for your own puphpet configuration.

What is the cause of this exception?

I am trying to run Behat\Mink using this command: bin\behat --format html --out report.html --profile firefox. But I am getting this error:
[RuntimeException]
MinkExtension 1.3 only supports Goutte 1.x for MinkGoutteDriver, not Goutte 2.x.
composer.json looks like this
{
  "require": {
    "behat/behat": "2.5.*@stable",
    "behat/mink": "1.6.*@stable",
    "behat/mink-extension": "*",
    "behat/mink": "~1.5@dev",
    "behat/mink": "~1.6@dev",
    "behat/mink-goutte-driver": "*",
    "behat/mink-selenium2-driver": "*"
  },
  "minimum-stability": "dev",
  "config": {
    "bin-dir": "bin/"
  }
}
behat.yml
firefox:
  context:
    parameters:
      Browser_Name: firefox
  extensions:
    Behat\MinkExtension\Extension:
      base_url: https://google.com
      javascript_session: selenium2
      browser_name: firefox
      selenium2:
        wd_host: http://127.0.0.1:4444/wd/hub
It would be very helpful if you could tell me where I have gone wrong.
I would say that your first problem lies within your composer.json file: it appears you are attempting to load both development and stable versions of the same library (behat/mink is required three times).
Unless you are attempting to test or load some dev code, you can simplify your require section to:
"require": {
"behat/mink-selenium2-driver" : "~1.2",
"behat/mink-goutte-driver" : "~1.1",
"behat/mink-extension" : "~2.0"
}
Your behat/behat and behat/mink libraries will automatically be pulled in by Composer to fulfil the requirements of those libraries.
Information on the tilde operator (for example, ~1.2 means >=1.2 <2.0.0) can be found in the Composer documentation.
Again, unless you are using dev-based releases, you might want to look at omitting:
"minimum-stability": "dev",

protractor could not find protractor/selenium/chromedriver.exe at codeship

I'm trying to configure the Codeship integration to run Protractor tests.
I'm using the grunt-protractor-runner task
with the following configuration:
protractor: {
  options: {
    configFile: "protractor.conf.js", // your protractor config file
    keepAlive: true, // If false, the grunt process stops when the test fails.
    noColor: false, // If true, protractor will not use colors in its output.
    args: {
      // Arguments passed to the command
    }
  },
  run: {},
  chrome: {
    options: {
      args: {
        browser: "chrome"
      }
    }
  }
}
and here is the grunt task I use to run protractor once the server is running:
grunt.registerTask('prot', [
  'connect:test',
  'replace:includemocks', // for uncommenting the angular-mocks reference
  'protractor:run',
  'replace:removemocks', // for commenting out the angular-mocks reference
]);
It runs well on my local machine, but at Codeship I get the following error:
Error: Could not find chromedriver at /home/rof/src/bitbucket.org/myrepo/myFirstRepo/node_modules/grunt-protractor-runner/node_modules/protractor/selenium/chromedriver.exe
Which, I guess, is a result of not having chromedriver.exe at this path.
How can I solve this in the Codeship environment?
Thanks in advance
Add a postinstall script to your package.json file; that way npm install will take care of placing the binaries for you ahead of time:
"scripts": {
"postinstall": "echo -n $NODE_ENV | \
grep -v 'production' && \
./node_modules/protractor/bin/webdriver-manager update || \
echo 'will skip the webdriver install/update in production'",
...
},
And don't forget to set NODE_ENV: not setting it at all will cause the echo 'will skip the webdriver install/update in production' branch to run. Setting it to dev or staging will get the desired result.
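The branch logic of that postinstall line can be exercised on its own. In this sketch the webdriver-manager call is replaced by an echo so it runs anywhere; check() is a hypothetical helper standing in for the script with NODE_ENV passed as an argument:

```shell
# mirrors: echo -n $NODE_ENV | grep -v 'production' && <update> || <skip>
check() {
  echo -n "$1" | grep -v 'production' >/dev/null && echo "update" || echo "skip"
}

check dev          # prints: update
check production   # prints: skip
check ""           # prints: skip  (empty/unset NODE_ENV also skips)
```

The last case is why an unset NODE_ENV triggers the skip branch: grep -v over empty input matches no lines and exits non-zero, so the || side runs.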
Short answer (pulkitsinghal gave the original solution):
./node_modules/grunt-protractor-runner/node_modules/protractor/bin/webdriver-manager update
I'm one of the founders at Codeship.
The error seems to occur because you are trying to use the .exe file, but our system runs on Linux. Did you hardcode that executable?
Could you send us an in-app support request so we have a link to look at and can help you fix this?