Traefik started via systemd doesn't see Docker containers - traefik

I want to start Traefik through systemd, but I don't get the same results with systemd as with a manual start.
Here is an example of starting Traefik manually:
$ traefik --web \
--docker \
--docker.domain=docker
$ docker ps -q
164f73add870
$ # check traefik api
$ http http://localhost:8080/api/providers
HTTP/1.1 200 OK
Content-Length: 377
Content-Type: application/json; charset=UTF-8
Date: Sun, 15 Oct 2017 10:26:09 GMT
{
  "docker": {
    "backends": {
      "backend-rancher": {
        "loadBalancer": {
          "method": "wrr"
        },
        "servers": {
          "server-rancher": {
            "url": "http://172.17.0.2:8080",
            "weight": 0
          }
        }
      }
    },
    "frontends": {
      "frontend-Host-rancher-docker": {
        "backend": "backend-rancher",
        "basicAuth": [],
        "entryPoints": [
          "http"
        ],
        "passHostHeader": true,
        "priority": 0,
        "routes": {
          "route-frontend-Host-rancher-docker": {
            "rule": "Host:rancher.docker"
          }
        }
      }
    }
  }
}
And when I use systemd:
$ sudo systemctl status traefik
● traefik.service - Traefik reverse proxy
Loaded: loaded (/usr/lib/systemd/system/traefik.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-10-15 12:27:35 CEST; 4s ago
Main PID: 12643 (traefik)
Tasks: 9 (limit: 4915)
Memory: 14.6M
CPU: 256ms
CGroup: /system.slice/traefik.service
└─12643 /usr/bin/traefik --web --docker --docker.domain=docker
Oct 15 12:27:35 devbox systemd[1]: Started Traefik reverse proxy.
$ docker ps -q
164f73add870
$ # check traefik api
$ http http://localhost:8080/api/providers
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=UTF-8
Date: Sun, 15 Oct 2017 10:28:18 GMT
{}
Any idea why I don't see my Docker container?

By adding this to the unit file with my user/group, it works!
[Service]
User=...
Group=...
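
For reference, a complete unit-file sketch with that fix applied. The User/Group values are placeholders; the assumption is that this account is the one that can actually talk to your Docker daemon (e.g. it is in the docker group, or has DOCKER_HOST set in its environment):

[Unit]
Description=Traefik reverse proxy
After=network.target docker.service

[Service]
# Placeholders: use the account that works with `docker ps`
User=youruser
Group=docker
ExecStart=/usr/bin/traefik --web --docker --docker.domain=docker
Restart=on-failure

[Install]
WantedBy=multi-user.target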


Codeception Acceptance Testing issue using session snapshot

Update 10 Jun, 2021
So when I remove the populator from codeception.yml, the session problem goes away.
BUT: there is nothing in dump.sql that influences users, sessions, or cookies. It only contains a few tables with demo data, and they are needed!
The relevant part in the file is this:
codeception.yml
...
modules:
  enabled: [Db]
  config:
    Db:
      dsn: "mysql:host=%HOST%;dbname=%DBNAME%"
      user: "root"
      password: "root"
      populate: true
      cleanup: true
      # populator: "mysql -u$user -p$password $dbname < tests/codeception/_data/dump.sql"
...
Original Post
I think I have read almost all similar resources on this issue, but nothing has helped so far.
I am moving our Codeception tests to GitHub Actions. The whole build process runs, but the acceptance tests do not, because the session snapshot can't be restored.
The same workflow works on a local server where I use the Selenium webdriver. I tried to run Selenium in Actions (commented out in build.yml), but that caused some port problems.
What I'm doing in this short example is installing Joomla (works) and then creating a content category.
The second step (creating a content category) tries to pick up the session created in the first step.
It's very simple and no problem locally, but on Actions the created session cannot be read.
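For context, the session hand-off relies on Codeception's WebDriver session snapshots. A minimal sketch of the pattern (the snapshot name "admin" matches the report below; the login steps are illustrative, not the actual JoomlaBrowser internals):

// Sketch: log in once, snapshot the session, reuse it in later tests
public function doAdministratorLogin(\AcceptanceTester $I)
{
    if ($I->loadSessionSnapshot('admin')) {
        return; // cookies restored from the first test
    }
    // ... open /administrator/, fill in credentials, submit ...
    $I->saveSessionSnapshot('admin'); // store cookies for the next test
}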
The relevant report part:
InstallCest: Install joomla
Signature: InstallCest:installJoomla
Test: tests/codeception/acceptance/install/InstallCest.php:installJoomla
... works
InstallCest: createCategory
Signature: InstallCest:createcategory
Test: tests/codeception/acceptance/install/InstallCest.php:createcategory
Scenario --
[Db] Executing Populator: `mysql -uroot -proot test < tests/codeception/_data/dump.sql`
[Db] Populator Finished.
I create category "test 123"
Category creation in /administrator/
I open Joomla Administrator Login Page
[GET] http://127.0.0.1:8000/administrator/index.php
[Cookies] [{"name":"9d4bb4a09f511681369671a08beff228","value":"fail5495jbd01q6dc2nm06i7gf","path":"/","domain":"127.0.0.1","expiry":1623346855,"secure":false,"httpOnly":false},{"name":"8b5558aac8008f05fd8f8e59a3244887","value":"irhlqlj8jabat2n5746ba0sb5r","path":"/","domain":"127.0.0.1","expiry":1623346855,"secure":false,"httpOnly":false}]
[Snapshot] Restored "admin" session snapshot
[GET] http://127.0.0.1:8000/administrator/index.php?option=com_categories
Screenshot and page source were saved into '/home/runner/work/project_b/project_b/tests/codeception/_output/' dir
ERROR
The report:
session not created: No matching capabilities found
The HTML Snapshot:
Warning: session_start(): Failed to read session data: user (path: /var/lib/php/sessions) in /home/runner/work/project_b/project_b/joomla/libraries/joomla/session/handler/native.php on line 260
Error: Failed to start application: Failed to start the session
The php.log part
[Wed Jun 9 18:24:13 2021] 127.0.0.1:41972 Accepted
[Wed Jun 9 18:24:13 2021] 127.0.0.1:41972 [200]: GET /media/jui/fonts/IcoMoon.woff
[Wed Jun 9 18:24:13 2021] 127.0.0.1:41972 Closing
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41982 Accepted
[Wed Jun 9 18:24:16 2021] PHP Warning: session_start(): Failed to read session data: user (path: /var/lib/php/sessions) in /home/runner/work/project_b/project_b/joomla/libraries/joomla/session/handler/native.php on line 260
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41982 [500]: GET /administrator/index.php
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41982 Closing
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41986 Accepted
[Wed Jun 9 18:24:16 2021] PHP Warning: session_start(): Failed to read session data: user (path: /var/lib/php/sessions) in /home/runner/work/project_b/project_b/joomla/libraries/joomla/session/handler/native.php on line 260
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41986 [500]: GET /administrator/index.php?option=com_categories
[Wed Jun 9 18:24:16 2021] 127.0.0.1:41986 Closing
I tried changing session.save_path, without effect.
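The way I tried it was through setup-php's ini-values option (the commented line in build.yml below), pointing the save path at a directory the runner user can write to:

- name: Setup PHP
  uses: shivammathur/setup-php@v2
  with:
    php-version: ${{ matrix.php }}
    ini-values: session.save_path=/tmp
    extensions: mbstring, intl, zip, json
    tools: composer:v2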
Here are the relevant pieces:
composer.json
{
  "name": "company/tests",
  "description": "Company Product",
  "license": "GPL-2.0+",
  "require": {},
  "require-dev": {
    "codeception/codeception": "^4",
    "fzaninotto/faker": "^1.6",
    "behat/gherkin": "^4.4.1",
    "phing/phing": "2.*",
    "codeception/module-asserts": "^1.3",
    "codeception/module-webdriver": "^1.2",
    "codeception/module-filesystem": "^1.0",
    "codeception/module-db": "^1.1"
  }
}
build.yml
name: Codeception Tests
on: [push]
jobs:
  tests:
    runs-on: ${{ matrix.operating-system }}
    strategy:
      fail-fast: false
      matrix:
        operating-system: [ubuntu-latest]
        php: ["7.4"]
    name: PHP ${{ matrix.php }} Test on ${{ matrix.operating-system }}
    env:
      php-ini-values: post_max_size=32M
      DB_DATABASE: test
      DB_NAME: test
      DB_ADAPTER: mysql
      DB_USERNAME: root
      DB_PASSWORD: root
      DB_HOST: 127.0.0.1
      DB_PORT: 3306
      APP_URL: http://127.0.0.1:8000
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Checkout Joomla 3
        uses: actions/checkout@v2
        with:
          repository: joomla/joomla-cms
          ref: "3.9.27"
          path: joomla
      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: ${{ matrix.php }}
          # ini-values: session.save_path=/tmp
          extensions: mbstring, intl, zip, json
          tools: composer:v2
      - name: Start MySQL
        run: |
          sudo /etc/init.d/mysql start
          mysql -e 'CREATE DATABASE test;' -uroot -proot
          mysql -e 'SHOW DATABASES;' -uroot -proot
      # Composer stuff ...
      - name: Run chromedriver
        run: nohup $CHROMEWEBDRIVER/chromedriver --url-base=/wd/hub > /dev/null 2>&1 &
      # - name: Start ChromeDriver (was a try)
      #   run: |
      #     google-chrome --version
      #     xvfb-run --server-args="-screen 0, 1280x720x24" --auto-servernum \
      #       chromedriver --port=4444 --url-base=/wd/hub &> chromedriver.log &
      - name: Run PHP webserver
        run: |
          php -S 127.0.0.1:8000 -t joomla/ &> php.log.txt &
          sleep 1;
      - name: Install Tests
        run: |
          php vendor/bin/codecept run "tests/codeception/acceptance/install/InstallCest.php" -vv --html
        env:
          DB_PORT: ${{ job.services.mysql.ports[3306] }}
      - name: Upload Codeception output
        if: ${{ always() }}
        uses: actions/upload-artifact@v2
        with:
          name: codeception-results
          # path: Tests/Acceptance/_output/
          path: tests/codeception/_output/
      - name: Upload PHP log
        if: ${{ failure() }}
        uses: actions/upload-artifact@v2
        with:
          name: php-log
          path: php.log.txt
Acceptance Suite
class_name: AcceptanceTester
modules:
  enabled:
    - Asserts
    - JoomlaBrowser
    - Helper\Acceptance
    - DbHelper
    - Filesystem
  config:
    JoomlaBrowser:
      url: "http://127.0.0.1:8000/"
      browser: "chrome"
      restart: true
      clear_cookies: true
      # window_size: 1280x1024
      window_size: false
      port: 9515
      capabilities:
        unexpectedAlertBehaviour: "accept"
        chromeOptions:
          args: ["--headless", "--disable-gpu"] # Run Chrome in headless mode
          # prefs:
          #   download.default_directory: "..."
      username: "admin"                 # username for the administrator
      password: "admin"                 # password for the administrator
      database host: "127.0.0.1:3306"   # place where the application is hosted
      database user: "root"             # MySQL server user ID, usually root
      database password: "root"         # MySQL server password, usually empty or root
      database name: "test"             # DB name on the server
      database type: "mysqli"           # type in lowercase, one of: MySQL/MySQLi/PDO
      database prefix: "jos_"           # DB prefix for tables
      install sample data: "no"         # set to yes to install sample data along with Joomla
      sample data: "Default English (GB) Sample Data" # default sample data
      admin email: "admin@mydomain.com" # email ID of the admin
      language: "English (United Kingdom)" # language in which to install the application
    Helper\Acceptance:
      url: "http://127.0.0.1:8000/" # the url that points to the joomla installation at /tests/system/joomla-cms - we need it twice here
      MicrosoftEdgeInsiders: false  # set this to true if you are on Windows Insiders
error_level: "E_ALL & ~E_STRICT & ~E_DEPRECATED"
InstallCest.php
<?php
/**
 * Install Joomla and create Category
 *
 * @since 3.7.3
 */
class InstallCest
{
    /**
     * Install Joomla, disable statistics and enable Error Reporting
     *
     * @param AcceptanceTester $I The AcceptanceTester Object
     *
     * @since 3.7.3
     *
     * @return void
     */
    public function installJoomla(\AcceptanceTester $I)
    {
        $I->am('Administrator');
        $I->installJoomlaRemovingInstallationFolder();
        $I->doAdministratorLogin();
        $I->disableStatistics();
        $I->setErrorReportingToDevelopment();
    }

    /**
     * Just create Category
     *
     * @param AcceptanceTester $I The AcceptanceTester Object
     *
     * @since 3.7.3
     *
     * @return void
     */
    public function createCategory(\AcceptanceTester $I)
    {
        $I->createCategory('test 123');
    }
}

K3s Vault Cluster -- http: server gave HTTP response to HTTPS client

I am trying to set up a 3-node Vault cluster with raft storage enabled. I am currently at a loss as to why the readiness probe (and also the liveness probe) is returning:
Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
I am using Helm 3: helm install vault hashicorp/vault --namespace vault -f override-values.yaml
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: false
server:
  image:
    repository: "hashicorp/vault"
    tag: "1.5.5"
  resources:
    requests:
      memory: 1Gi
      cpu: 2000m
    limits:
      memory: 2Gi
      cpu: 2000m
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt
  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    # holds the cert file and the key file
    - type: secret
      name: tls-server
    # holds the ca certificate
    - type: secret
      name: tls-ca
  auditStorage:
    enabled: true
  standalone:
    enabled: false
  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
          tls_key_file = "/vault/userconfig/tls-server/tls.key"
          tls_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
        }
        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
        }
        service_registration "kubernetes" {}
# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
Output from kubectl describe pod vault-0:
Name: vault-0
Namespace: vault
Priority: 0
Node: node4/10.211.55.7
Start Time: Wed, 11 Nov 2020 15:06:47 +0700
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-5c4b47bdc4
helm.sh/chart=vault-0.8.0
statefulset.kubernetes.io/pod-name=vault-0
vault-active=false
vault-initialized=false
vault-perf-standby=false
vault-sealed=true
vault-version=1.5.5
Annotations: <none>
Status: Running
IP: 10.42.4.82
IPs:
IP: 10.42.4.82
Controlled By: StatefulSet/vault
Containers:
vault:
Container ID: containerd://6dfde76051f44c22003cc02a880593792d304e74c56d717eef982e0e799672f2
Image: hashicorp/vault:1.5.5
Image ID: docker.io/hashicorp/vault@sha256:90cfeead29ef89fdf04383df9991754f4a54c43b2fb49ba9ff3feb713e5ef1be
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
[ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
[ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Wed, 11 Nov 2020 15:25:21 +0700
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 11 Nov 2020 15:19:10 +0700
Finished: Wed, 11 Nov 2020 15:20:20 +0700
Ready: False
Restart Count: 8
Limits:
cpu: 2
memory: 2Gi
Requests:
cpu: 2
memory: 1Gi
Liveness: http-get https://:8200/v1/sys/health%3Fstandbyok=true delay=60s timeout=3s period=5s #success=1 #failure=2
Readiness: http-get https://:8200/v1/sys/health%3Fstandbyok=true&sealedcode=204&uninitcode=204 delay=5s timeout=3s period=5s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: vault (v1:metadata.namespace)
VAULT_ADDR: https://127.0.0.1:8200
VAULT_API_ADDR: https://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
VAULT_RAFT_NODE_ID: vault-0 (v1:metadata.name)
HOME: /home/vault
VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-lfgnj (ro)
/vault/audit from audit (rw)
/vault/config from config (rw)
/vault/data from data (rw)
/vault/userconfig/tls-ca from userconfig-tls-ca (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-vault-0
ReadOnly: false
audit:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: audit-vault-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
userconfig-tls-ca:
Type: Secret (a volume populated by a Secret)
SecretName: tls-ca
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-lfgnj:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-lfgnj
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned vault/vault-0 to node4
Warning Unhealthy 17m (x2 over 17m) kubelet Liveness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true": http: server gave HTTP response to HTTPS client
Normal Killing 17m kubelet Container vault failed liveness probe, will be restarted
Normal Pulled 17m (x2 over 18m) kubelet Container image "hashicorp/vault:1.5.5" already present on machine
Normal Created 17m (x2 over 18m) kubelet Created container vault
Normal Started 17m (x2 over 18m) kubelet Started container vault
Warning Unhealthy 13m (x56 over 18m) kubelet Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
Warning BackOff 3m41s (x31 over 11m) kubelet Back-off restarting failed container
Logs from vault-0
2020-11-12T05:50:43.554426582Z ==> Vault server configuration:
2020-11-12T05:50:43.554524646Z
2020-11-12T05:50:43.554574639Z Api Address: https://10.42.4.85:8200
2020-11-12T05:50:43.554586234Z Cgo: disabled
2020-11-12T05:50:43.554596948Z Cluster Address: https://vault-0.vault-internal:8201
2020-11-12T05:50:43.554608637Z Go Version: go1.14.7
2020-11-12T05:50:43.554678454Z Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
2020-11-12T05:50:43.554693734Z Log Level: info
2020-11-12T05:50:43.554703897Z Mlock: supported: true, enabled: false
2020-11-12T05:50:43.554713272Z Recovery Mode: false
2020-11-12T05:50:43.554722579Z Storage: raft (HA available)
2020-11-12T05:50:43.554732788Z Version: Vault v1.5.5
2020-11-12T05:50:43.554769315Z Version Sha: f5d1ddb3750e7c28e25036e1ef26a4c02379fc01
2020-11-12T05:50:43.554780425Z
2020-11-12T05:50:43.672225223Z ==> Vault server started! Log data will stream in below:
2020-11-12T05:50:43.672519986Z
2020-11-12T05:50:43.673078706Z 2020-11-12T05:50:43.543Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-11-12T05:51:57.838970945Z ==> Vault shutdown triggered
I am running a 6-node Rancher k3s cluster (v1.19.3+k3s2) on my Mac.
Any help would be appreciated.
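One detail that stands out in the log above: Listener 1 reports tls: "disabled" even though the listener stanza sets certificate files, which matches the probe error. For reference, a hedged sketch of the listener with TLS made explicit (tls_disable already defaults to false in Vault, so if the log still says disabled, the chart is presumably not applying this config block at all; the ConfigMap name below comes from the describe output):

listener "tcp" {
  address = "[::]:8200"
  cluster_address = "[::]:8201"
  # Explicit, to rule out an override elsewhere
  tls_disable = false
  tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
  tls_key_file = "/vault/userconfig/tls-server/tls.key"
  tls_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
}

# Inspect what the chart actually rendered:
# kubectl get configmap vault-config -n vault -o yaml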

PHP CURL failed with Operation timeout in Kubernetes CronJob

I have a CronJob that runs at some interval to download images from remote servers. I am using the php:7.2-fpm-alpine Docker image. It works fine with some of the URLs, but it fails with others.
Here is the cURL code:
$fp = fopen($fileNameWithPath, 'w');
$ch = curl_init();
curl_setopt_array($ch, array(
    CURLOPT_URL => $url,
    CURLOPT_FILE => $fp,
    CURLOPT_ENCODING => "",
    CURLOPT_MAXREDIRS => 10,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
    CURLOPT_CUSTOMREQUEST => "GET",
    CURLOPT_CONNECTTIMEOUT => 90,
    CURLOPT_TIMEOUT => 180,
    CURLOPT_SSL_VERIFYHOST => 0,
    CURLOPT_SSL_VERIFYPEER => 0,
    CURLOPT_VERBOSE => 1
));
$result = curl_exec($ch);
$statusCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
fclose($fp);
I enabled verbose mode, and the logs in the Kubernetes pods give the following output:
* TCP_NODELAY set
* Connected to images.asos-media.com (23.32.5.80) port 443 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=GB; L=London; O=ASOS.com Limited; CN=*.asos-media.com
* start date: Feb 26 00:00:00 2020 GMT
* expire date: May 27 12:00:00 2021 GMT
* issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=DigiCert Secure Site ECC CA-1
* SSL certificate verify ok.
> GET /products/wonderbra-new-ultimate-strapless-bra-a-g-cup/5980845-1-beige?$XXL$ HTTP/1.1
Host: images.asos-media.com
Accept: */*
Accept-Encoding: deflate, gzip
* old SSL session ID is stale, removing
* Operation timed out after 180000 milliseconds with 0 bytes received
* Closing connection 0
If I run this code from the Docker image locally, it works fine.
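To rule out cluster egress being the difference, the same request can be reproduced from inside the cluster network with a throwaway pod. A sketch (curlimages/curl is an assumed convenience image; any image shipping curl works):

kubectl run curl-debug --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v --max-time 180 -o /dev/null \
  'https://images.asos-media.com/products/wonderbra-new-ultimate-strapless-bra-a-g-cup/5980845-1-beige?$XXL$'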
Kubernetes Deployment Files
CronJob
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  namespace: scheduleApp
  name: imagedownlload
  labels:
    app: scheduleApp
spec:
  schedule: "5 */4 * * *" # Specify schedule using linux cron syntax
  concurrencyPolicy: Allow
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 2
  jobTemplate:
    spec:
      parallelism: 1 # Number of Pods started together with the Job
      template:
        metadata:
          labels:
            tier: cronservice
        spec:
          volumes:
            - name: pv-restorage
              persistentVolumeClaim:
                claimName: pipeline-volumeclaim
          containers:
            - name: imagedownload
              image: gcr.io/{project_id}/{image_name}:v1.0.2 # Image to be used in the container, with full repository URL
              envFrom:
                - configMapRef:
                    name: app-config
                - secretRef:
                    name: app-secret
              volumeMounts:
                - name: pv-restorage
                  mountPath: /var/www/html/restorage
          restartPolicy: Never
Service file
apiVersion: v1
kind: Service
metadata:
  name: cron-loadbalancer
  namespace: scheduleApp
spec:
  selector:
    tier: cronservice
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
  sessionAffinity: None
  type: LoadBalancer
Dockerfile
FROM php:7.2-fpm-alpine
RUN apk update && apk add \
        libzip-dev \
        unzip \
    && docker-php-ext-configure zip --with-libzip \
    && docker-php-ext-install mysqli zip \
    && rm -rf /var/cache/apk/*
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
COPY composer.* /var/www/html/
RUN cd /usr/local/etc/php/conf.d/ \
    && echo 'memory_limit = -1' >> /usr/local/etc/php/conf.d/docker-php-memlimit.ini
WORKDIR /var/www/html
RUN composer install && composer clear-cache
COPY . /var/www/html/
ENTRYPOINT ["php","console"]
CMD ["-V"]

Arquillian Cube: issue starting multiple images

I have a Compose file which I am able to run using the docker-compose command:
m-c02wt0e3htdg:arquillian-cub r0s0164$ docker-compose -f docker_compose.yml up -d
Creating network "arquillian-cub_default" with the default driver
Creating arquillian-cub_fake_1 ... done
Creating arquillian-cub_tomcat_1 ... done
m-c02wt0e3htdg:arquillian-cub r0s0164$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ef8693bc7006 tutum/tomcat:7.0 "/run.sh" 10 seconds ago Up 9 seconds 0.0.0.0:8181->8080/tcp arquillian-cub_tomcat_1
8b11de635750 cicd/my-fake-service:latest "java -cp app:app/li…" 10 seconds ago Up 9 seconds 8081-8082/tcp, 0.0.0.0:9191->8080/tcp arquillian-cub_fake_1
m-c02wt0e3htdg:arquillian-cub r0s0164$ curl -I http://localhost:8181
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=ISO-8859-1
Transfer-Encoding: chunked
Date: Tue, 11 Dec 2018 06:01:25 GMT
m-c02wt0e3htdg:arquillian-cub r0s0164$ curl -I http://localhost:9191
HTTP/1.1 404
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Tue, 11 Dec 2018 06:01:35 GMT
I am specifying the same docker-compose file in arquillian.xml:
<?xml version="1.0" encoding="UTF-8"?>
<arquillian
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://jboss.org/schema/arquillian"
    xsi:schemaLocation="http://jboss.org/schema/arquillian
        http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
    <extension qualifier="docker">
        <property name="serverVersion">1.30</property>
        <property name="definitionFormat">COMPOSE</property>
        <property name="dockerContainersFile">docker_compose.yml</property>
    </extension>
</arquillian>
Console:
CubeDockerConfiguration:
  serverVersion = 1.30
  serverUri = unix:///var/run/docker.sock
  tlsVerify = false
  dockerServerIp = localhost
  definitionFormat = COMPOSE
  clean = false
  removeVolumes = true
  dockerContainers = containers:
    tomcat:
      alwaysPull: false
      image: tutum/tomcat:7.0
      killContainer: false
      manual: false
      networkMode: arquillian-cub_default
      networks: [arquillian-cub_default]
      portBindings: !!set {8181->8080/tcp: null}
      readonlyRootfs: false
      removeVolumes: true
    fake:
      alwaysPull: false
      exposedPorts: !!set {8082/tcp: null}
      image: cicd/my-fake-service:latest
      killContainer: false
      manual: false
      networkMode: arquillian-cub_default
      networks: [arquillian-cub_default]
      portBindings: !!set {9191->8080/tcp: null}
      readonlyRootfs: false
      removeVolumes: true
  networks:
    arquillian-cub_default: {driver: bridge}
I am getting an ERROR while running the tests:
Caused by: java.lang.IllegalArgumentException: No port was specified and in all containers there are more than one bind port.
 at org.arquillian.cube.docker.impl.util.SinglePortBindResolver.resolvePortBindPort(SinglePortBindResolver.java:161)
I would appreciate help on this. I know I am missing something here.
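For what it's worth, Cube's SinglePortBindResolver kicks in when a test asks for "the" bound port without naming one, which is ambiguous here because both containers declare port bindings. A hedged sketch of naming the container and port explicitly via Cube's enrichers (annotation names as I understand Cube's API; verify against your Cube version):

import org.arquillian.cube.HostIp;
import org.arquillian.cube.HostPort;

public class MyServiceIT {

    @HostIp
    private String dockerHost; // IP where the containers are exposed

    // Ask for the binding of container "tomcat", internal port 8080
    // (bound to host port 8181 in the compose file above)
    @HostPort(containerName = "tomcat", value = 8080)
    private int tomcatPort;

    // ... build the URL as "http://" + dockerHost + ":" + tomcatPort ...
}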

Trying to use curl to create product but keep getting Error 500

I am trying to generate some test data for my Rails app that interfaces with Shopify through the shopify_api gem. I am using curl (the command-line utility) on an OS X machine. I keep getting Error 500 from Shopify (see below). I am at my wits' end, as I can't see what I'm doing wrong. Any help would be greatly appreciated.
* About to connect() to [edited out].myshopify.com port 80 (#0)
* Trying 204.93.213.44...
* connected
* Connected to [edited out].myshopify.com (204.93.213.44) port 80 (#0)
* Server auth using Basic with user '[edited out]'
> POST /admin/products.json HTTP/1.1
> Authorization: Basic [edited out]
> User-Agent: curl/7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r zlib/1.2.5
> Host: [edited out].myshopify.com
> Accept: */*
> Content-Type: application/json
> Content-Length: 694
>
* upload completely sent off: 694 out of 694 bytes
< HTTP/1.1 500 Internal Server Error
< Server: nginx
< Date: Sat, 22 Sep 2012 10:06:22 GMT
< Content-Type: application/json; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Status: 500 Internal Server Error
< X-Shopify-Shop-Api-Call-Limit: 1/500
< HTTP_X_SHOPIFY_SHOP_API_CALL_LIMIT: 1/500
< Cache-Control: no-cache
< X-Request-Id: 3a03a70617be67e89ab103a9b8053da9
< X-UA-Compatible: IE=Edge,chrome=1
< X-Runtime: 1.566916
<
* Connection #0 to host [edited out].myshopify.com left intact
{"errors":"Error"}* Closing connection #0
This is how I invoke curl:
curl -v -X POST -d @ss12-absolute.json -H 'Content-Type: application/json' http://some_key:some_password@myshop.myshopify.com/admin/products.json
The POSTed data file looks like this:
{
  "product": {
    "title": "absolute",
    "handle": "ss12-absolute",
    "vendor": "deNada",
    "product_type": "top",
    "tags": "top,ss12,knits,casual,sleeveless",
    "body_html": "",
    "variants": [
      {
        "title": "absolute 08 eggplant",
        "sku": 555647,
        "price": "245.0",
        "compare_at_price": "245.0",
        "option1": "eggplant",
        "option2": "08",
        "option3": null
      }
    ],
    "options": [
      {
        "name": "Colour"
      },
      {
        "name": "Size"
      }
    ],
    "metafields": [
      {
        "namespace": "retail_pro",
        "key": "rp_style_sid",
        "value": -5642228920827310084,
        "value_type": "integer"
      }
    ]
  }
}
Sku is a text field, but you are submitting an integer. Switching that will fix it for you; we will also fix it on our end so you can submit either.
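Concretely, that means quoting the sku value in the variant; a corrected fragment of the payload above:

"variants": [
  {
    "title": "absolute 08 eggplant",
    "sku": "555647",
    "price": "245.0",
    "compare_at_price": "245.0",
    "option1": "eggplant",
    "option2": "08",
    "option3": null
  }
]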
The server is returning "500 Internal Server Error", so it's not an error in your client code; this needs to be debugged on the server. If you don't have access to the server code that processes your call (myshop.myshopify.com/admin/products.json), then you have to open an issue with the administrators or support team of that service so they can look at what's wrong.
It's possible that a parameter in your POST data causes the error, but it's impossible to figure out exactly what's wrong without looking at the server code.