Codeception Acceptance Testing issue using session snapshot - selenium

Update 10 Jun, 2021
When the populator is removed from codeception.yml, the session problem goes away.
BUT: there is nothing in dump.sql that affects users, sessions or cookies. It only contains a few tables of demo data, and those are needed!
The relevant part in the file is this:
codeception.yml
...
modules:
    enabled: [Db]
    config:
        Db:
            dsn: "mysql:host=%HOST%;dbname=%DBNAME%"
            user: "root"
            password: "root"
            populate: true
            cleanup: true
            # populator: "mysql -u$user -p$password $dbname < tests/codeception/_data/dump.sql"
...
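If the demo tables are still needed without the external populator, the Db module can load the dump itself. A minimal sketch, assuming dump.sql really contains only those demo tables (whether this helps with the session issue is untested):
modules:
    enabled: [Db]
    config:
        Db:
            dsn: "mysql:host=%HOST%;dbname=%DBNAME%"
            user: "root"
            password: "root"
            dump: "tests/codeception/_data/dump.sql"   # imported by the Db module instead of an external mysql call
            populate: true    # import the dump once before the suite
            cleanup: false    # do not reload the dump between tests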
Original Post
I think I have read almost every similar resource on this issue, but nothing has helped so far.
I am moving our Codeception tests to GitHub Actions. The whole build process runs, but the acceptance tests do not, because the session snapshot can't be restored.
The same workflow works on a local server where I use the Selenium WebDriver. I tried to run Selenium in Actions (commented out in build.yml), but that caused port problems.
What I'm doing in this short example is installing Joomla (which works) and then creating a content category.
The second step (creating a content category) tries to pick up the session created by the first step.
It's very simple and works without problems locally, but on Actions the created session cannot be read.
The relevant report part:
InstallCest: Install joomla
Signature: InstallCest:installJoomla
Test: tests/codeception/acceptance/install/InstallCest.php:installJoomla
... works
InstallCest: createCategory
Signature: InstallCest:createcategory
Test: tests/codeception/acceptance/install/InstallCest.php:createcategory
Scenario --
[Db] Executing Populator: `mysql -uroot -proot test < tests/codeception/_data/dump.sql`
[Db] Populator Finished.
I create category "test 123"
Category creation in /administrator/
I open Joomla Administrator Login Page
[GET] http://127.0.0.1:8000/administrator/index.php
[Cookies] [{"name":"9d4bb4a09f511681369671a08beff228","value":"fail5495jbd01q6dc2nm06i7gf","path":"/","domain":"127.0.0.1","expiry":1623346855,"secure":false,"httpOnly":false},{"name":"8b5558aac8008f05fd8f8e59a3244887","value":"irhlqlj8jabat2n5746ba0sb5r","path":"/","domain":"127.0.0.1","expiry":1623346855,"secure":false,"httpOnly":false}]
[Snapshot] Restored "admin" session snapshot
[GET] http://127.0.0.1:8000/administrator/index.php?option=com_categories
Screenshot and page source were saved into '/home/runner/work/project_b/project_b/tests/codeception/_output/' dir
ERROR
The report:
session not created: No matching capabilities found
The HTML Snapshot:
Warning: session_start(): Failed to read session data: user (path: /var/lib/php/sessions) in /home/runner/work/project_b/project_b/joomla/libraries/joomla/session/handler/native.php on line 260
Error: Failed to start application: Failed to start the session
The php.log part
[Wed Jun 9 18:24:13 2021] 127.0.0.1:41972 Accepted
[Wed Jun 9 18:24:13 2021] 127.0.0.1:41972 [200]: GET /media/jui/fonts/IcoMoon.woff
[Wed Jun 9 18:24:13 2021] 127.0.0.1:41972 Closing
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41982 Accepted
[Wed Jun 9 18:24:16 2021] PHP Warning: session_start(): Failed to read session data: user (path: /var/lib/php/sessions) in /home/runner/work/project_b/project_b/joomla/libraries/joomla/session/handler/native.php on line 260
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41982 [500]: GET /administrator/index.php
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41982 Closing
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41986 Accepted
[Wed Jun 9 18:24:16 2021] PHP Warning: session_start(): Failed to read session data: user (path: /var/lib/php/sessions) in /home/runner/work/project_b/project_b/joomla/libraries/joomla/session/handler/native.php on line 260
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41986 [500]: GET /administrator/index.php?option=com_categories
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41986 Closing
I tried changing session.save_path, without effect.
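For reference, a sketch of how that attempt can be wired into the workflow via setup-php; the /tmp/php-sessions path and the extra step are assumptions, not part of the original build.yml, and they did not change the outcome:
- name: Setup PHP
  uses: shivammathur/setup-php@v2
  with:
    php-version: ${{ matrix.php }}
    ini-values: session.save_path=/tmp/php-sessions   # assumed writable path
    extensions: mbstring, intl, zip, json
    tools: composer:v2
- name: Create session directory
  run: mkdir -p /tmp/php-sessions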
Posting relevant pieces:
composer.json
{
    "name": "company/tests",
    "description": "Company Product",
    "license": "GPL-2.0+",
    "require": {},
    "require-dev": {
        "codeception/codeception": "^4",
        "fzaninotto/faker": "^1.6",
        "behat/gherkin": "^4.4.1",
        "phing/phing": "2.*",
        "codeception/module-asserts": "^1.3",
        "codeception/module-webdriver": "^1.2",
        "codeception/module-filesystem": "^1.0",
        "codeception/module-db": "^1.1"
    }
}
build.yml
name: Codeception Tests
on: [push]
jobs:
  tests:
    runs-on: ${{ matrix.operating-system }}
    strategy:
      fail-fast: false
      matrix:
        operating-system: [ubuntu-latest]
        php: ["7.4"]
    name: PHP ${{ matrix.php }} Test on ${{ matrix.operating-system }}
    env:
      php-ini-values: post_max_size=32M
      DB_DATABASE: test
      DB_NAME: test
      DB_ADAPTER: mysql
      DB_USERNAME: root
      DB_PASSWORD: root
      DB_HOST: 127.0.0.1
      DB_PORT: 3306
      APP_URL: http://127.0.0.1:8000
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Checkout Joomla 3
        uses: actions/checkout@v2
        with:
          repository: joomla/joomla-cms
          ref: "3.9.27"
          path: joomla
      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: ${{ matrix.php }}
          # ini-values: session.save_path=/tmp
          extensions: mbstring, intl, zip, json
          tools: composer:v2
      - name: Start MySQL
        run: |
          sudo /etc/init.d/mysql start
          mysql -e 'CREATE DATABASE test;' -uroot -proot
          mysql -e 'SHOW DATABASES;' -uroot -proot
      # Composer stuff ...
      - name: Run chromedriver
        run: nohup $CHROMEWEBDRIVER/chromedriver --url-base=/wd/hub > /dev/null 2>&1 &
      # - name: Start ChromeDriver (was a try)
      #   run: |
      #     google-chrome --version
      #     xvfb-run --server-args="-screen 0, 1280x720x24" --auto-servernum \
      #       chromedriver --port=4444 --url-base=/wd/hub &> chromedriver.log &
      - name: Run PHP webserver
        run: |
          php -S 127.0.0.1:8000 -t joomla/ &> php.log.txt &
          sleep 1;
      - name: Install Tests
        run: |
          php vendor/bin/codecept run "tests/codeception/acceptance/install/InstallCest.php" -vv --html
        env:
          DB_PORT: ${{ job.services.mysql.ports[3306] }}
      - name: Upload Codeception output
        if: ${{ always() }}
        uses: actions/upload-artifact@v2
        with:
          name: codeception-results
          # path: Tests/Acceptance/_output/
          path: tests/codeception/_output/
      - name: Upload PHP log
        if: ${{ failure() }}
        uses: actions/upload-artifact@v2
        with:
          name: php-log
          path: php.log.txt
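A small diagnostic step that could be appended to the workflow above to confirm that both chromedriver and the PHP webserver are reachable before the tests run (a sketch only; the ports are taken from the configs in this post):
      - name: Check services (diagnostic only)
        run: |
          curl -sS http://127.0.0.1:9515/status
          curl -sS -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8000/administrator/index.php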
Acceptance Suite
class_name: AcceptanceTester
modules:
    enabled:
        - Asserts
        - JoomlaBrowser
        - Helper\Acceptance
        - DbHelper
        - Filesystem
    config:
        JoomlaBrowser:
            url: "http://127.0.0.1:8000/"
            browser: "chrome"
            restart: true
            clear_cookies: true
            # window_size: 1280x1024
            window_size: false
            port: 9515
            capabilities:
                unexpectedAlertBehaviour: "accept"
                chromeOptions:
                    args: ["--headless", "--disable-gpu"] # Run Chrome in headless mode
                    # prefs:
                    #     download.default_directory: "..."
            username: "admin"                 # UserName for the Administrator
            password: "admin"                 # Password for the Administrator
            database host: "127.0.0.1:3306"   # place where the Application is Hosted #server Address
            database user: "root"             # MySQL Server user ID, usually root
            database password: "root"         # MySQL Server password, usually empty or root
            database name: "test"             # DB Name, at the Server
            database type: "mysqli"           # type in lowercase one of the options: MySQL\MySQLi\PDO
            database prefix: "jos_"           # DB Prefix for tables
            install sample data: "no"         # Do you want to Download the Sample Data Along with Joomla Installation, then keep it Yes
            sample data: "Default English (GB) Sample Data"  # Default Sample Data
            admin email: "admin@mydomain.com" # email Id of the Admin
            language: "English (United Kingdom)"  # Language in which you want the Application to be Installed
        Helper\Acceptance:
            url: "http://127.0.0.1:8000/"     # the url that points to the joomla installation at /tests/system/joomla-cms - we need it twice here
            MicrosoftEdgeInsiders: false      # set this to true, if you are on Windows Insiders
error_level: "E_ALL & ~E_STRICT & ~E_DEPRECATED"
InstallCest.php
<?php
/**
 * Install Joomla and create Category
 *
 * @since 3.7.3
 */
class InstallCest
{
    /**
     * Install Joomla, disable statistics and enable Error Reporting
     *
     * @param AcceptanceTester $I The AcceptanceTester Object
     *
     * @since 3.7.3
     *
     * @return void
     */
    public function installJoomla(\AcceptanceTester $I)
    {
        $I->am('Administrator');
        $I->installJoomlaRemovingInstallationFolder();
        $I->doAdministratorLogin();
        $I->disableStatistics();
        $I->setErrorReportingToDevelopment();
    }

    /**
     * Just create Category
     *
     * @param AcceptanceTester $I The AcceptanceTester Object
     *
     * @since 3.7.3
     *
     * @return void
     */
    public function createCategory(\AcceptanceTester $I)
    {
        $I->createCategory('test 123');
    }
}
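A possible workaround, sketched only to illustrate the dependency on the snapshot: let the second test log in explicitly with the same JoomlaBrowser helper instead of reusing the stored "admin" session.
    /**
     * Create the Category after an explicit login (workaround sketch,
     * avoids relying on the restored "admin" session snapshot).
     *
     * @param AcceptanceTester $I The AcceptanceTester Object
     *
     * @return void
     */
    public function createCategory(\AcceptanceTester $I)
    {
        $I->doAdministratorLogin();      // provided by JoomlaBrowser, as used in installJoomla()
        $I->createCategory('test 123');
    }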

Related

How to deal with multiple when condition for registered variable in ansible

I have a playbook with 3 (or more) raw tasks, with sample commands like below:
Playbook mytest.yml
- hosts: remotehost
  gather_facts: no
  tasks:
    - name: Execute command1
      raw: "ls -ltr"
      register: cmdoutput
      when: remcmd == "list"
    - name: Execute command2
      raw: "hostname"
      register: cmdoutput
      when: remcmd == "host"
    - name: Execute command3
      raw: "uptime"
      register: cmdoutput
      when: remcmd == "up"

- hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg: "Printing {{ hostvars['remotehost']['cmdoutput'] }}"
This is my inventory, myhost.yml:
[remotehost]
myserver1
Here is how I run the playbook:
ansible-playbook -i myhost.yml mytest.yml -e remcmd="host"
PLAY [remotehost] ***************************************************************************************************************
TASK [Execute command1] *********************************************************************************************************
Thursday 06 October 2022 07:06:06 -0500 (0:00:00.013) 0:00:00.013 ******
skipping: [myserver1]
TASK [Execute command2] *********************************************************************************************************
Thursday 06 October 2022 07:06:06 -0500 (0:00:00.023) 0:00:00.036 ******
changed: [myserver1]
TASK [Execute command3] *********************************************************************************************************
Thursday 06 October 2022 07:06:06 -0500 (0:00:00.521) 0:00:00.557 ******
skipping: [myserver1]
PLAY [localhost] ****************************************************************************************************************
TASK [debug] ********************************************************************************************************************
Thursday 06 October 2022 07:06:06 -0500 (0:00:00.032) 0:00:00.590 ******
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['remotehost']\" is undefined\n\nThe error appears to be in '/home/wladmin/mytest.yml': line 22, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - debug:\n ^ here\n"}
PLAY RECAP **********************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
myserver1 : ok=1 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
My requirement is that, no matter what value is passed for remcmd, my localhost play should print the stdout_lines of cmdoutput.
Preliminary notes:
Using raw is evil.
Don't use raw unless it is to install prerequisites (i.e. Python) on the target host. Then switch to modules, or at the very least command/shell.
If you still intend to use raw, go back to point 1 above.
In case you forgot to go back to point 1: using raw is evil.
Don't register several tasks with the same var name (the last one always wins, even if skipped). Don't create tasks you can avoid up front.
As an illustration of the above principles:
- hosts: remotehost
  gather_facts: no
  vars:
    cmd_map:
      list: ls -ltr
      host: hostname
      up: uptime
  tasks:
    - name: Make sure remcmd is known
      assert:
        that: remcmd in cmd_map.keys()
        fail_msg: "remcmd must be one of: {{ cmd_map.keys() | join(', ') }}"
    - name: Execute command
      command: "{{ cmd_map[remcmd] }}"
      register: cmdoutput
    - name: Show entire result from above task
      debug:
        var: cmdoutput
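For completeness, a usage sketch with the same inventory as in the question (exact output varies per host):
# valid value: runs `uptime` and shows the full registered result
ansible-playbook -i myhost.yml mytest.yml -e remcmd=up
# unknown value: fails early on the assert task with the fail_msg above
ansible-playbook -i myhost.yml mytest.yml -e remcmd=foo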
my localhost play should print stdout_lines of cmdoutput
As far as I understand "How the debug module works", it can only print on the Control Node.
Therefore you could just remove three (3) lines in your example
- hosts: localhost
  gather_facts: no
  tasks:
and give it a try with
- hosts: remotehost
  gather_facts: no
  tasks:
    - name: Execute command1
      raw: "ls -ltr"
      register: cmdoutput
      when: remcmd == "list"
    - name: Execute command2
      raw: "hostname"
      register: cmdoutput
      when: remcmd == "host"
    - name: Execute command3
      raw: "uptime"
      register: cmdoutput
      when: remcmd == "up"
    - debug:
        msg: "Printing {{ cmdoutput }}"
Independently of which task was executed, the result will be printed.
Apart from this answer about how the debug module works, I recommend proceeding with Zeitounator's answer, since it addresses your possible use case more completely.

K3s Vault Cluster -- http: server gave HTTP response to HTTPS client

I am trying to set up a 3-node Vault cluster with raft storage enabled. I am currently at a loss as to why the readiness probe (and also the liveness probe) is returning:
Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
I am using Helm 3 with 'helm install vault hashicorp/vault --namespace vault -f override-values.yaml':
global:
  enabled: true
  tlsDisable: false

injector:
  enabled: false

server:
  image:
    repository: "hashicorp/vault"
    tag: "1.5.5"

  resources:
    requests:
      memory: 1Gi
      cpu: 2000m
    limits:
      memory: 2Gi
      cpu: 2000m

  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60

  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt

  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    # holds the cert file and the key file
    - type: secret
      name: tls-server
    # holds the ca certificate
    - type: secret
      name: tls-ca

  auditStorage:
    enabled: true

  standalone:
    enabled: false

  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
          tls_key_file = "/vault/userconfig/tls-server/tls.key"
          tls_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
        }
        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
        }
        service_registration "kubernetes" {}

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
Output of kubectl describe pod vault-0:
Name: vault-0
Namespace: vault
Priority: 0
Node: node4/10.211.55.7
Start Time: Wed, 11 Nov 2020 15:06:47 +0700
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-5c4b47bdc4
helm.sh/chart=vault-0.8.0
statefulset.kubernetes.io/pod-name=vault-0
vault-active=false
vault-initialized=false
vault-perf-standby=false
vault-sealed=true
vault-version=1.5.5
Annotations: <none>
Status: Running
IP: 10.42.4.82
IPs:
IP: 10.42.4.82
Controlled By: StatefulSet/vault
Containers:
vault:
Container ID: containerd://6dfde76051f44c22003cc02a880593792d304e74c56d717eef982e0e799672f2
Image: hashicorp/vault:1.5.5
Image ID: docker.io/hashicorp/vault@sha256:90cfeead29ef89fdf04383df9991754f4a54c43b2fb49ba9ff3feb713e5ef1be
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
[ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
[ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Wed, 11 Nov 2020 15:25:21 +0700
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 11 Nov 2020 15:19:10 +0700
Finished: Wed, 11 Nov 2020 15:20:20 +0700
Ready: False
Restart Count: 8
Limits:
cpu: 2
memory: 2Gi
Requests:
cpu: 2
memory: 1Gi
Liveness: http-get https://:8200/v1/sys/health%3Fstandbyok=true delay=60s timeout=3s period=5s #success=1 #failure=2
Readiness: http-get https://:8200/v1/sys/health%3Fstandbyok=true&sealedcode=204&uninitcode=204 delay=5s timeout=3s period=5s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: vault (v1:metadata.namespace)
VAULT_ADDR: https://127.0.0.1:8200
VAULT_API_ADDR: https://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
VAULT_RAFT_NODE_ID: vault-0 (v1:metadata.name)
HOME: /home/vault
VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-lfgnj (ro)
/vault/audit from audit (rw)
/vault/config from config (rw)
/vault/data from data (rw)
/vault/userconfig/tls-ca from userconfig-tls-ca (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-vault-0
ReadOnly: false
audit:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: audit-vault-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
userconfig-tls-ca:
Type: Secret (a volume populated by a Secret)
SecretName: tls-ca
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-lfgnj:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-lfgnj
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned vault/vault-0 to node4
Warning Unhealthy 17m (x2 over 17m) kubelet Liveness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true": http: server gave HTTP response to HTTPS client
Normal Killing 17m kubelet Container vault failed liveness probe, will be restarted
Normal Pulled 17m (x2 over 18m) kubelet Container image "hashicorp/vault:1.5.5" already present on machine
Normal Created 17m (x2 over 18m) kubelet Created container vault
Normal Started 17m (x2 over 18m) kubelet Started container vault
Warning Unhealthy 13m (x56 over 18m) kubelet Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
Warning BackOff 3m41s (x31 over 11m) kubelet Back-off restarting failed container
Logs from vault-0
2020-11-12T05:50:43.554426582Z ==> Vault server configuration:
2020-11-12T05:50:43.554524646Z
2020-11-12T05:50:43.554574639Z Api Address: https://10.42.4.85:8200
2020-11-12T05:50:43.554586234Z Cgo: disabled
2020-11-12T05:50:43.554596948Z Cluster Address: https://vault-0.vault-internal:8201
2020-11-12T05:50:43.554608637Z Go Version: go1.14.7
2020-11-12T05:50:43.554678454Z Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
2020-11-12T05:50:43.554693734Z Log Level: info
2020-11-12T05:50:43.554703897Z Mlock: supported: true, enabled: false
2020-11-12T05:50:43.554713272Z Recovery Mode: false
2020-11-12T05:50:43.554722579Z Storage: raft (HA available)
2020-11-12T05:50:43.554732788Z Version: Vault v1.5.5
2020-11-12T05:50:43.554769315Z Version Sha: f5d1ddb3750e7c28e25036e1ef26a4c02379fc01
2020-11-12T05:50:43.554780425Z
2020-11-12T05:50:43.672225223Z ==> Vault server started! Log data will stream in below:
2020-11-12T05:50:43.672519986Z
2020-11-12T05:50:43.673078706Z 2020-11-12T05:50:43.543Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-11-12T05:51:57.838970945Z ==> Vault shutdown triggered
I am running a 6-node Rancher k3s cluster (v1.19.3+k3s2) on my Mac.
Any help would be appreciated.

"ChromeHeadless have not captured in 60000 ms, killing." occuring only in Gitlab hosted CI/CD pipeline

When running a CI/CD pipeline on Gitlab, my Karma tests are timing out with the error:
ℹ 「wdm」: Compiled successfully.
05 08 2019 22:25:31.483:INFO [karma-server]: Karma v4.2.0 server started at http://0.0.0.0:9222/
05 08 2019 22:25:31.485:INFO [launcher]: Launching browsers ChromeHeadlessNoSandbox with concurrency 1
05 08 2019 22:25:31.488:INFO [launcher]: Starting browser ChromeHeadless
05 08 2019 22:26:31.506:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
05 08 2019 22:26:31.529:INFO [launcher]: Trying to start ChromeHeadless again (1/2).
05 08 2019 22:27:31.580:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
05 08 2019 22:27:31.600:INFO [launcher]: Trying to start ChromeHeadless again (2/2).
05 08 2019 22:28:31.659:WARN [launcher]: ChromeHeadless have not captured in 60000 ms, killing.
05 08 2019 22:28:31.689:ERROR [launcher]: ChromeHeadless failed 2 times (timeout). Giving up.
npm ERR! Test failed. See above for more details.
This problem does not occur when running tests locally, and it does not occur when running the tests using the same Docker image with Gitlab Runner locally.
I feel like I have tried every possible configuration with karma.conf.js. I have Googled this issue relentlessly and have tried every suggestion from proxy servers, to environment variables, to flags... but alas, no luck. I have tried multiple Docker images as this was initially failing on local Gitlab Runner but I have found that the Docker image selenium/standalone-chrome:latest works fine in local Gitlab Runner.
Here is my karma.conf.js file:
const process = require('process');
process.env.CHROME_BIN = require('puppeteer').executablePath();
module.exports = function(config) {
  config.set({
    // base path that will be used to resolve all patterns (eg. files, exclude)
    basePath: '',
    // frameworks to use
    frameworks: [ 'jasmine' ],
    // list of files / patterns to load in the browser
    files: [
      'src/**/*.spec.js'
    ],
    // list of files / patterns to exclude
    exclude: [],
    // preprocess matching files before serving them to the browser
    preprocessors: {
      'src/**/*.spec.js': [ 'webpack' ]
    },
    webpack: {
      // webpack configuration
      mode: 'development',
      module: {
        rules: [
          {
            test: /\.js$/,
            loader: 'babel-loader',
            exclude: /node_modules/,
            query: {
              presets: ['env']
            }
          }
        ]
      },
      stats: {
        colors: true
      }
    },
    // test results reporter to use
    reporters: [ 'spec' ],
    // web server port
    port: 9222,
    // enable / disable colors in the output (reporters and logs)
    colors: true,
    // level of logging
    // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
    logLevel: config.LOG_INFO,
    // enable / disable watching file and executing tests whenever any file changes
    autoWatch: true,
    // plugins for karma
    plugins: [
      'karma-chrome-launcher',
      'karma-webpack',
      'karma-jasmine',
      'karma-spec-reporter'
    ],
    // start these browsers
    browsers: ['ChromeHeadlessNoSandbox'],
    customLaunchers: {
      ChromeHeadlessNoSandbox: {
        base: 'ChromeHeadless',
        flags: [
          '--headless',
          '--no-sandbox',
          '--disable-gpu'
        ]
      }
    },
    captureTimeout: 60000,
    browserDisconnectTolerance: 5,
    browserDisconnectTimeout: 30000,
    browserNoActivityTimeout: 30000,
    // Continuous Integration mode
    // if true, Karma captures browsers, runs the tests and exits
    singleRun: true,
    // Concurrency level
    // how many browsers should be started simultaneously
    concurrency: 1
  })
}
And here is my .gitlab-ci.yml file:
.prereq_scripts: &prereq_scripts |
  sudo apt -y update && sudo curl -sL https://deb.nodesource.com/setup_10.x | sudo bash && sudo apt -y install nodejs

image: 'selenium/standalone-chrome:latest'

stages:
  - test

test:
  stage: test
  script:
    - *prereq_scripts
    - npm install
    - npm test
I expect the tests to run successfully in all three environments (local npm, local GitLab Runner, and the remote GitLab CI/CD pipeline). Currently they only run successfully in the first two.
In your karma.conf.js file you need to declare the CHROME_BIN variable inside the module.exports function:
module.exports = function(config) {
  const process = require('process');
  process.env.CHROME_BIN = require('puppeteer').executablePath();
  config.set({
    ...
Currently, Puppeteer has an issue with Karma on Linux machines (see the GitHub issue).
There are plenty of solutions for making it work without Puppeteer if you use it just to install headless Chromium.
I have installed it on my Jenkins Alpine machine using only two bash lines:
apk add chromium
export CHROME_BIN=/usr/bin/chromium-browser
Alternatively, you can use Docker with the same setup. One of the examples is here
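A hedged sketch of how those two lines could look in a GitLab CI job; the node:16-alpine image is an assumption, not something from the original answer:
image: node:16-alpine   # assumed Alpine-based Node image
test:
  stage: test
  before_script:
    - apk add --no-cache chromium
    - export CHROME_BIN=/usr/bin/chromium-browser   # picked up by karma-chrome-launcher
  script:
    - npm install
    - npm test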
Docker image with Chrome headless
For example, use the angular/ngcontainer Docker image, which ships with headless Chrome, for testing UI apps.
image: 'angular/ngcontainer:latest'
Also, I created a Docker image with the latest Chrome:
image: 'anulals/angular'
https://hub.docker.com/r/angular/ngcontainer
https://hub.docker.com/r/anulals/angular

traefik with systemd doesn't see Docker containers

I want to start traefik through systemd, but I don't get the same results with systemd as with a manual start.
Here is an example of when I start traefik manually:
$ traefik --web \
--docker \
--docker.domain=docker
$ docker ps -q
164f73add870
$ # check traefik api
$ http http://localhost:8080/api/providers
HTTP/1.1 200 OK
Content-Length: 377
Content-Type: application/json; charset=UTF-8
Date: Sun, 15 Oct 2017 10:26:09 GMT
{
    "docker": {
        "backends": {
            "backend-rancher": {
                "loadBalancer": {
                    "method": "wrr"
                },
                "servers": {
                    "server-rancher": {
                        "url": "http://172.17.0.2:8080",
                        "weight": 0
                    }
                }
            }
        },
        "frontends": {
            "frontend-Host-rancher-docker": {
                "backend": "backend-rancher",
                "basicAuth": [],
                "entryPoints": [
                    "http"
                ],
                "passHostHeader": true,
                "priority": 0,
                "routes": {
                    "route-frontend-Host-rancher-docker": {
                        "rule": "Host:rancher.docker"
                    }
                }
            }
        }
    }
}
And when I use systemd:
$ sudo systemctl status traefik
● traefik.service - Traefik reverse proxy
Loaded: loaded (/usr/lib/systemd/system/traefik.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-10-15 12:27:35 CEST; 4s ago
Main PID: 12643 (traefik)
Tasks: 9 (limit: 4915)
Memory: 14.6M
CPU: 256ms
CGroup: /system.slice/traefik.service
└─12643 /usr/bin/traefik --web --docker --docker.domain=docker
Oct 15 12:27:35 devbox systemd[1]: Started Traefik reverse proxy.
$ docker ps -q
164f73add870
$ # check traefik api
$ http http://localhost:8080/api/providers
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=UTF-8
Date: Sun, 15 Oct 2017 10:28:18 GMT
{}
Any idea why I don't see my Docker container?
By adding this with my user/group, it works!
[Service]
User=...
Group=...
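A sketch of how this override can be applied without editing the packaged unit file, assuming the unit is named traefik.service as in the status output above (the user/group values stay placeholders):
# systemctl edit traefik   -> creates /etc/systemd/system/traefik.service.d/override.conf
[Service]
User=...
Group=...
# afterwards: systemctl daemon-reload && systemctl restart traefik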

Codeception - " Curl error thrown for http POST to /session with params:"

First of all, sorry for my poor English.
I have a problem with test automation.
I am a beginner. I have a VirtualBox instance with my app, reachable from the outside at the standard VirtualBox address (192.168.56.101). I connect NetBeans to this instance, and I have the project with my app on it. And the last thing: I'm trying to build test automation for it.
Unfortunately, I have a problem. The test is created, Selenium is started, PhantomJS is also started. I go to the project folder, run "codecept run" and...
acceptance.suite config:
class_name: AcceptanceTester
modules:
    enabled:
        - WebDriver
        - \Helper\Acceptance
        - Db:
            dsn: 'mysql:host=localhost;dbname=xxx'
            user: 'root'
            password: 'xxx'
            dump: 'tests/_data/dump.sql'
            populate: true
            cleanup: false
            reconnect: true
    config:
        WebDriver:
            url: 'http://xxx.app/'
            browser: chrome
            host: '192.168.56.101'
            port: 22
            window_size: 'maximize'
env:
    phantom:
        modules:
            config:
                - WebDriver:
                    browser: 'phantomjs'
Please help.