Httpd service not starting in Alpine docker image - apache

Trying to run both a Java and an httpd service in a single Alpine image. The httpd service is somehow failing.
Dockerfile
FROM java:8-jdk-alpine
RUN apk add --no-cache apache2-proxy apache2-ssl apache2-utils
WORKDIR /var/www/
COPY html/ .
WORKDIR /var/backed
COPY backed-0.0.1-SNAPSHOT.jar .
EXPOSE 80/tcp
EXPOSE 8085/tcp
CMD ["sh","-c","/usr/sbin/httpd -D FOREGROUND && java -jar /var/backed/backed-0.0.1-SNAPSHOT.jar"]
Docker run command:
$ sudo docker run -p 8080:80 -p 8085:8085 server:1.0
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
[JHipster ASCII-art banner]
:: JHipster 🤓 :: Running Spring Boot 2.1.8.RELEASE ::
:: https://www.jhipster.tech ::
Posted a snippet of the container output above; I think the httpd service quits immediately.
I am able to view the JHipster home page on 172.17.0.2:8085, but the frontend at 172.17.0.2:8080 or 172.17.0.2 gets "connection timed out".
How can I make the front end work?
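One observation worth checking: `A && B` in a shell only runs B after A exits, so with `httpd -D FOREGROUND && java …` the Java process can only start once httpd has terminated. The fact that the JHipster banner appears at all suggests httpd exited right away. A sketch of one common workaround (not a confirmed fix, and it assumes Alpine's httpd daemonizes when run without `-D FOREGROUND`):

```dockerfile
# Sketch: let httpd fork into the background, then keep the Java process
# in the foreground so the container stays alive as long as the backend runs.
CMD ["sh", "-c", "httpd && exec java -jar /var/backed/backed-0.0.1-SNAPSHOT.jar"]
```

If httpd itself is crashing, its reason should show up via `httpd -t` or in `/var/log/apache2/error.log` inside the container; a process supervisor (e.g. supervisord) is the more robust way to run two services in one container.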

Related

Rsync error in Vagrant 2.2.3 (IPC code) when updating

I have an issue when updating a Vagrant box (Vagrant 2.2.3 on Windows 10).
The cause of the error is rsync; it can't synchronize (so my shared folders are not working, I think):
Command: "rsync" "--verbose" "--archive" "--delete" "-z" "--copy-links" "--chmod=ugo=rwX" "--no-perms" "--no-owner" "--no-group" "--rsync-path" "sudo rsync" "-e" "ssh -p 2222 -o LogLevel=FATAL -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/my_user/boxes-puphpet/debian/.vagrant/machines/default/virtualbox/private_key'" "--exclude" ".vagrant/" "/cygdrive/c/Users/my_user/boxes-puphpet/debian/" "vagrant@127.0.0.1:/vagrant"
Error: rsync: pipe: Connection timed out (116)
rsync error: error in IPC code (code 14) at pipe.c(59) [sender=3.1.3]
INFO interface: Machine: error-exit ["Vagrant::Errors::RSyncError", "There was an error when attempting to rsync a synced folder.\nPlease inspect the error message below for more info.\n\nHost path: /cygdrive/c/Users/my_user/boxes-puphpet/debian/\nGuest path: /vagrant\nCommand: \"rsync\" \"--verbose\" \"--archive\" \"--delete\" \"-z\" \"--copy-links\" \"--chmod=ugo=rwX\" \"--no-perms\" \"--no-owner\" \"--no-group\" \"--rsync-path\" \"sudo rsync\" \"-e\" \"ssh -p 2222 -o LogLevel=FATAL -o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i 'C:/Users/my_user/boxes-puphpet/debian/.vagrant/machines/default/virtualbox/private_key'\" \"--exclude\" \".vagrant/\" \"/cygdrive/c/Users/my_user/boxes-puphpet/debian/\" \"vagrant@127.0.0.1:/vagrant\"\nError: rsync: pipe: Connection timed out (116)\nrsync error: error in IPC code (code 14) at pipe.c(59) [sender=3.1.3]\n"]
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "debian/jessie64"
  config.vm.box_version = "8.10.0"
  config.vm.network "private_network", ip: "192.168.56.222"
  config.vm.synced_folder "C:/Users/f.pestre/www/debian.vm/www/", "/var/www"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4048"
  end
  #config.vm.provision :shell, path: "bootstrap.sh"
end
I can log in with vagrant ssh, but the synced folder doesn't work at all.
Thanks.
F.
Add the line below to your Vagrantfile:
config.vm.synced_folder '.', '/vagrant', disabled: true
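In context, that line disables the default `/vagrant` rsync share (the one failing with the IPC error) while keeping the explicitly defined folder. A sketch of where it goes, based on the Vagrantfile in the question:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "debian/jessie64"
  # Disable the implicit '.' -> /vagrant rsync share that triggers the error;
  # the explicitly configured synced folder below is unaffected.
  config.vm.synced_folder '.', '/vagrant', disabled: true
  config.vm.synced_folder "C:/Users/f.pestre/www/debian.vm/www/", "/var/www"
end
```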

Selenium 'Chrome failed to start: exited abnormally' error

I am following https://github.com/RobCherry/docker-chromedriver/blob/master/Dockerfile as an example and I have the following in my docker file:
RUN CHROMEDRIVER_VERSION=`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE` && \
    mkdir -p /opt/chromedriver-$CHROMEDRIVER_VERSION && \
    curl -sS -o /tmp/chromedriver_linux64.zip http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip && \
    unzip -qq /tmp/chromedriver_linux64.zip -d /opt/chromedriver-$CHROMEDRIVER_VERSION && \
    rm /tmp/chromedriver_linux64.zip && \
    chmod +x /opt/chromedriver-$CHROMEDRIVER_VERSION/chromedriver && \
    ln -fs /opt/chromedriver-$CHROMEDRIVER_VERSION/chromedriver /usr/local/bin/chromedriver
RUN curl -sS -o - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
    echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list && \
    apt-get -yqq update && \
    apt-get -yqq install google-chrome-stable && \
    rm -rf /var/lib/apt/lists/*
ENV DISPLAY :20.0
ENV SCREEN_GEOMETRY "1440x900x24"
ENV CHROMEDRIVER_PORT 4444
ENV CHROMEDRIVER_WHITELISTED_IPS "127.0.0.1"
ENV CHROMEDRIVER_URL_BASE ''
EXPOSE 4444
To create the driver I am doing:
webdriver.Chrome()
But I get:
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.33.506092 (733a02544d189eeb751fe0d7ddca79a0ee28cce4),platform=Linux 4.4.27-boot2docker x86_64)
Do I have to do anything else to allow Chrome to start?
Got it working. The key is to add:
options = webdriver.ChromeOptions()
options.add_argument('--disable-extensions')
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('--no-sandbox')
driver = webdriver.Chrome(chrome_options=options)
I got it working just by adding -
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--headless");
driver = new ChromeDriver(chromeOptions);

Nginx configure SSL error "ERR_SSL_VERSION_OR_CIPHER_MISMATCH"

I configured my nginx for an HTTPS service, but it does not work. Chrome shows the error "ERR_SSL_VERSION_OR_CIPHER_MISMATCH". It seems like the browser wants an SSL protocol that my nginx can't support? However, I followed the official document from where I got the CA to configure my nginx.conf file as below.
Running curl https://ltp-cloud.com gives the error "curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure"
Here is my nginx.conf:
```
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    server {
        listen 443;
        server_name ltp-cloud.com;
        ssl on;
        root html;
        index index.html index.htm;
        ssl_certificate /etc/nginx/ssl/cert/214234087720826.pem;
        ssl_certificate_key /etc/nginx/ssl/cert/214234087720826.key;
        ssl_session_timeout 5m;
        ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        location / {
            root html;
            index index.html index.htm;
        }
    }
}
```
My nginx version is below, and I checked that the "--with-http_ssl_module" option is enabled.
➜ nginx nginx -V
nginx version: nginx/1.4.6 (Ubuntu)
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/build/nginx-hzyca8/nginx-1.4.6/debian/modules/nginx-auth-pam --add-module=/build/nginx-hzyca8/nginx-1.4.6/debian/modules/nginx-dav-ext-module --add-module=/build/nginx-hzyca8/nginx-1.4.6/debian/modules/nginx-echo --add-module=/build/nginx-hzyca8/nginx-1.4.6/debian/modules/nginx-upstream-fair --add-module=/build/nginx-hzyca8/nginx-1.4.6/debian/modules/ngx_http_substitutions_filter_module
Request your help on this.
Thanks!
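Two things worth trying (assumptions based on the symptoms, not a confirmed diagnosis): a handshake alert like this often means the server cannot present a usable certificate, e.g. because the .pem contains only the leaf certificate without the intermediate chain, and the modern way to enable TLS is the `ssl` parameter on `listen` rather than `ssl on;`. A sketch:

```nginx
# Sketch, with the paths from the question: enable TLS on the listen
# directive and serve the full chain (leaf + intermediates concatenated
# into one .pem), which many clients require to complete the handshake.
server {
    listen 443 ssl;
    server_name ltp-cloud.com;
    ssl_certificate     /etc/nginx/ssl/cert/214234087720826.pem;
    ssl_certificate_key /etc/nginx/ssl/cert/214234087720826.key;
}
```

`openssl s_client -connect ltp-cloud.com:443` on the server shows exactly which certificate (if any) nginx is presenting.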

Running Nightwatch test inside docker - Selenium server doesn't start

I'm trying to integrate my e2e tests into our CI pipeline.
We are using Jenkins as CI; we build a Docker image and all the tests run inside the container.
When trying to run the e2e tests I receive an error stating: "Connection refused! Is selenium server started?"
After building the image and installing all the npm packages, I use this command in the Jenkinsfile:
run_in_stage('End2End test', {
    image.inside("-u root") {
        sh '''
            npm run build:dev
            http-server ./dist -p 3001 -s &
            xvfb-run --server-args="-screen 0 1600x1200x24" npm run test:e2e:smoke
        '''
    }
})
In the docker file I set up Chrome with xvfb.
RUN \
    wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
    echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google.list && \
    apt-get update && \
    apt-get install -y xvfb google-chrome-stable
This is how I set up the selenium in the nightwatch.conf.js file:
const seleniumServer = require('selenium-server-standalone-jar');
const chromeDriver = require('chromedriver');
selenium: {
  start_process: true,
  server_path: seleniumServer.path,
  host: '127.0.0.1',
  port: 4444,
  cli_args: {
    'webdriver.chrome.driver': chromeDriver.path
  }
},
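One possible cause (an assumption, not confirmed by the logs shown): the Dockerfile installs Chrome and xvfb but no Java runtime, and selenium-server-standalone-jar is a jar, so without a JRE the server never starts and Nightwatch sees "Connection refused". A hedged Dockerfile addition:

```dockerfile
# Sketch: install a JRE alongside Chrome and xvfb so the standalone
# Selenium server jar can actually launch inside the container.
RUN apt-get update && \
    apt-get install -y xvfb google-chrome-stable default-jre && \
    rm -rf /var/lib/apt/lists/*
```

Running `java -version` inside the image is a quick way to confirm whether this is the problem.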

Vagrant and Ansible provisioning from Cygwin

I run Ansible as a provisioning tool from Vagrant in Cygwin. The ansible-playbook runs correctly from the command line, and also from Vagrant with a small hack.
My question is: how do I specify a hosts file to Vagrant, to work around the issue below?
[16:18:23 ~/Vagrant/Exercice 567 ]$ vagrant provision
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Reading package lists...
==> haproxy1: Building dependency tree...
==> haproxy1: Reading state information...
==> haproxy1: curl is already the newest version.
==> haproxy1: 0 upgraded, 0 newly installed, 0 to remove and 66 not upgraded.
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: shell...
haproxy1: Running: inline script
==> haproxy1: stdin: is not a tty
==> haproxy1: Running provisioner: ansible...
PYTHONUNBUFFERED=1 ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_NOCOLOR=true ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --user=vagrant --connection=ssh --timeout=30 --limit='haproxy' --inventory-file=C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory --extra-vars={"ansible_ssh_user":"root"} -vvvv ./haproxy.yml
No config file found; using defaults
Loaded callback default of type stdout, v2.0
PLAYBOOK: haproxy.yml **********************************************************
1 plays in ./haproxy.yml
PLAY [haproxy] *****************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************
[WARNING]: Host file not found:
C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory
[WARNING]: provided hosts list is empty, only localhost is available
Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.provision :shell, :inline => 'rm -fr /root/.ssh && sudo mkdir /root/.ssh'
  config.vm.provision :shell, :inline => 'apt-get install -y curl'
  config.vm.provision :shell, :inline => 'curl -sS http://www.ngstones.com/id_rsa.pub >> /root/.ssh/authorized_keys'
  config.vm.provision :shell, :inline => "chmod -R 644 /root/.ssh"
  #config.vm.synced_folder ".", "/vagrant", type: "rsync"
  config.vm.provider "virtualbox" do |v|
    v.customize ["modifyvm", :id, "--memory", 256]
  end
  config.vm.define :haproxy1, primary: true do |haproxy1_config|
    haproxy1_config.vm.hostname = 'haproxy1'
    haproxy1_config.vm.network :public_network, ip: "192.168.1.10"
    haproxy1_config.vm.provision "ansible" do |ansible|
      ansible.groups = {
        "web" => ["web1, web2"],
        "haproxy" => ["haproxy"]
      }
      ansible.extra_vars = { ansible_ssh_user: 'root' }
      ansible.limit = ["haproxy"]
      ansible.verbose = "vvvv"
      ansible.playbook = "./haproxy.yml"
      #ansible.inventory_path = "/etc/ansible/hosts"
    end
    # https://docs.vagrantup.com/v2/vagrantfile/tips.html
    (1..2).each do |i|
      config.vm.define "web#{i}" do |node|
        #node.vm.box = "ubuntu/trusty64"
        #node.vm.box = "ubuntu/precise32"
        node.vm.hostname = "web#{i}"
        node.vm.network :private_network, ip: "192.168.1.1#{i}"
        node.vm.network "forwarded_port", guest: 80, host: "808#{i}"
        node.vm.provider "virtualbox" do |vb|
          vb.memory = "256"
        end
      end
    end
  end
end
It's due to the inventory path that starts with a C:/ drive letter and ansible-in-cygwin can't handle that.
See related issue here:
https://github.com/mitchellh/vagrant/issues/6607
I just discovered this "ansible-playbook-shim"; PR #5 is supposed to fix that (but I haven't tried it):
https://github.com/rivaros/ansible-playbook-shim/pull/5
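Since ansible-in-cygwin chokes on the C:/ drive-letter path, one workaround is to rewrite it into the /cygdrive form before handing it to ansible-playbook. A sketch using GNU sed (inside Cygwin itself, `cygpath -u` does this natively):

```shell
# Rewrite a Windows drive-letter path into the /cygdrive form that
# Cygwin tools expect (GNU sed; \L lowercases the captured drive letter).
win='C:/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory'
echo "$win" | sed -E 's#^([A-Za-z]):#/cygdrive/\L\1#'
# -> /cygdrive/c/Vagrant/Exercice/.vagrant/provisioners/ansible/inventory
```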
I believe your inventory is not accessible to the Vagrant environment. I think all you need to do is put the inventory in the Vagrant shared folder; it will then be available in the guest under /vagrant.
Hope this helps.