How to interact between multiple Docker containers, e.g. an Ubuntu container with a Selenium Hub container

I have the following three Docker containers:
1. Ubuntu container with Mono that contains the Selenium test scripts (DLL)
2. Selenium Hub container
3. Selenium Chrome node container
When I build and bring up the Docker Compose file, all three containers are up and running, but the Ubuntu container exits after some time without executing any tests. Any idea how to implement this?
I am executing the tests in the Ubuntu container using Mono and would like to create a Docker image once this works. Any explanation or sample code on this would be really helpful.
I have created a bridge network and assigned a static IP to each of the three containers.
Docker Compose File:
version: '3.7'
services:
  seleniumhub:
    image: selenium/hub
    container_name: hubcontainer
    networks:
      ynetwork:
        ipv4_address: 172.21.0.2
    ports:
      - "4444:4444"
    privileged: true
  nodechrome:
    image: selenium/node-chrome-debug
    container_name: chromecontainer
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - seleniumhub
    environment:
      - HUB_HOST=seleniumhub
      - HUB_PORT=4444
      - NODE_MAX_INSTANCES=5
      - NODE_MAX_SESSION=5
      - START_XVFB=false
    networks:
      ynetwork:
        ipv4_address: 172.21.0.10
  Mytests:
    container_name: Myubuntutests
    depends_on:
      - seleniumhub
      - nodechrome
    networks:
      ynetwork:
        ipv4_address: 172.21.0.11
    build:
      context: .
      dockerfile: ubuntu.Dockerfile
networks:
  ynetwork:
    name: ytestsnetwork
    driver: bridge
    ipam:
      config:
        - subnet: 172.21.0.0/16
Dockerfile (ubuntu.Dockerfile):
FROM ubuntu
COPY /bin/Debug/ /MyTests
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=Asia/Tokyo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone && \
    apt-get update && apt-get clean && \
    apt-get install -y wget curl nuget mono-complete && \
    apt-get update && \
    nuget update -self && nuget install testrunner
WORKDIR "/MyTests"
ENTRYPOINT mono /TestRunner.1.8.0/tools/testrunner.exe MyTests.dll
Docker Compose commands used (tried):
docker-compose up --build
docker-compose up --build -d
I expect Docker Compose to build all three containers, execute the tests, and exit once done.
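Two things usually decide whether the Mytests container does anything useful: the tests have to address the hub by its Compose service name (http://seleniumhub:4444/wd/hub, or the static IP 172.21.0.2), and the container should wait until the hub is actually accepting requests before starting the run, because depends_on only waits for the container to start, not for the grid to be ready. A minimal sketch of a wrapper entrypoint, reusing the testrunner path from ubuntu.Dockerfile (curl is already installed there); the script name wait-and-run.sh and the two-second poll interval are made up for illustration:

#!/bin/sh
# wait-and-run.sh -- poll the hub until it answers, then run the tests
HUB_URL="http://seleniumhub:4444/wd/hub/status"
until curl -sf "$HUB_URL" > /dev/null; do
  echo "Waiting for Selenium hub at $HUB_URL ..."
  sleep 2
done
exec mono /TestRunner.1.8.0/tools/testrunner.exe MyTests.dll

The ENTRYPOINT in ubuntu.Dockerfile would then point at this script instead of calling mono directly, and running docker-compose up --exit-code-from Mytests makes the whole run finish with the test runner's exit code once the tests are done.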

Related

Recv failure when I use docker-compose to set up Redis

Sorry, but I'm new to Redis and Docker and I'm getting stuck.
I want to connect Redis to my localhost with docker-compose. When I use docker-compose, my web and redis services show that they are up, but when I try curl -L http://localhost:8081/ping to test it, I get this message: "curl: (56) Recv failure:"
I tried changing my docker-compose.yaml but it is not working.
docker-compose:
version: '3'
services:
  redis:
    image: "redis:latest"
    ports:
      - "6379:6379"
  web:
    build: .
    ports:
      - "8081:6379"
    environment:
      REDIS_HOST: 0.0.0.0
      REDIS_PORT: 6379
      REDIS_PASSWORD: ""
    depends_on:
      - redis
Dockerfile
FROM python:3-onbuild
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
CMD ["python", "main.py"]
These are my expected results:
curl -L http://localhost:8081/ping
pong
curl -L http://localhost:8081/redis-status
{"redis_connectivity": "OK"}

Build failure while trying to use docker-compose build step

I am trying to execute my Selenium tests on a Jenkins node (Ubuntu) which already has Docker installed. I added the Docker Compose Build Step plugin to my Jenkins project. When I try to build the project, I get an error in the console:
$ docker-compose -f /home/jenkins/workspace/OM/TestWDM/docker-compose.yml up -d
Build step 'Docker Compose Build Step' changed build result to FAILURE
I am able to execute the project successfully on my local machine. I do have a docker-compose.yml file in the root directory. I tried the docker ps -a command just to see if it's partially working, but it's not.
docker-compose file:
version: "3"
services:
selenium-hub:
restart: always
image: selenium/hub:latest
ports:
- "4444:4444"
#selenium-chrome
selenium-chrome:
restart: always
image: selenium/node-chrome-debug
stdin_open: true
links:
- selenium-hub:hub
#selenium-firefox
selenium-firefox:
restart: always
image: selenium/node-firefox-debug
links:
- selenium-hub:hub
chrome:
image: selenium/node-chrome
depends_on:
- selenium-hub
environment:
- HUB_PORT_4444_TCP_ADDR=selenium-hub
- HUB_PORT_4444_TCP_PORT=4444
firefox:
image: selenium/node-firefox
depends_on:
- selenium-hub
environment:
- HUB_PORT_4444_TCP_ADDR=selenium-hub
- HUB_PORT_4444_TCP_PORT=4444
The reason I am trying to use Docker here is that I was facing a "chrome binary not found" issue without it. My expectation was to have my tests run successfully on the Jenkins node.
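The plugin output above only records that the build result changed to FAILURE, so one way to see the underlying error is to run the same command from an "Execute shell" build step on that node (the workspace path is the one shown in the console output; the rest is a suggestion, not taken from the question):

cd /home/jenkins/workspace/OM/TestWDM
docker-compose -f docker-compose.yml config   # validate the compose file first
docker-compose -f docker-compose.yml up -d
docker-compose -f docker-compose.yml ps

If this fails with a permission error on /var/run/docker.sock, the jenkins user usually needs to be added to the docker group on the node.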

How to set up maxSession in Selenium Docker

docker run -d -p 4444:4444 --name selenium-hub selenium/hub
docker run -d --link selenium-hub:hub -v /dev/shm:/dev/shm selenium/node-chrome
Ubuntu 16.04
After these two commands, I have successfully set up a Selenium hub and a Selenium node. However, the current maxSession of this node is set to 1, and I need to increase it to 5. How can I do that?
Thanks.
I highly recommend you use a Docker Compose file.
Below is a simple example you can use.
Just change NODE_MAX_SESSION and NODE_MAX_INSTANCES to what you need (a plain docker run variant follows the Compose example).
version: "3.1"
services:
hub:
image: selenium/hub
container_name: "hub"
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome
volumes:
- /dev/shm:/dev/shm
shm_size: 2gb
depends_on:
- hub
environment:
- HUB_HOST=hub
- HUB_PORT=4444
- SCREEN_WIDTH=1920
- SCREEN_HEIGHT=1080
- NODE_MAX_INSTANCES=2
- NODE_MAX_SESSION=2
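If you would rather keep the two docker run commands from the question, the same NODE_MAX_SESSION and NODE_MAX_INSTANCES environment variables used in the Compose example can be passed with -e (assuming the same selenium/node-chrome image):

docker run -d -p 4444:4444 --name selenium-hub selenium/hub
docker run -d --link selenium-hub:hub -v /dev/shm:/dev/shm \
  -e NODE_MAX_SESSION=5 -e NODE_MAX_INSTANCES=5 \
  selenium/node-chrome

Once the node registers, the grid console at http://localhost:4444/grid/console should show five browser slots on that node.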

Module (nodemon) not found (package.json not found) Docker issue

I'm trying to dockerize my Express app, but when I try to run the CMD in the container, Docker tells me "Command \"nodemon\" not found", as if it doesn't find package.json in the container. This is my Dockerfile:
FROM node:8
WORKDIR /express-app/
COPY package.json .
RUN yarn
COPY . .
ARG MONGO_DB_DATABASE
ENV MONGO_DB_DATABASE ${MONGO_DB_DATABASE}
ARG MONGO_DB_USERNAME
ENV MONGO_DB_USERNAME ${MONGO_DB_USERNAME}
ARG MONGO_DB_PASSWORD
ENV MONGO_DB_PASSWORD ${MONGO_DB_PASSWORD}
EXPOSE 3000
CMD ["yarn", "start"]
and this is my docker-compose.yml
express-app:
  build: ../../express-app
  command:nodemon
  environment:
    - MONGO_DB_DATABASE=testDb
    - MONGO_DB_USERNAME=test
    - MONGO_DB_PASSWORD=test
  expose:
    - 3000
  ports:
    - "3000:3000"
  volumes:
    - ../../express-app:/express-app
  depends_on:
    - mongodb
  links:
    - mongodb
  restart: always
Somewhere in your Dockerfile, throw in a RUN npm install nodemon -g. That installs nodemon globally and adds it to your PATH.
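For example, slotted into the Dockerfile from the question right after the yarn install (everything else unchanged), so the nodemon binary is on the PATH when Compose overrides the command:

FROM node:8
WORKDIR /express-app/
COPY package.json .
RUN yarn
RUN npm install nodemon -g   # global install puts the nodemon binary on the PATH
COPY . .
# ... rest of the Dockerfile as in the question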

Docker: image build failed

When building the Docker Apache image, the build fails at this step:
Step n/m : COPY httpd-foreground /usr/local/bin/
ERROR: Service 'apache' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder511740141/httpd-foreground: no such file or directory
This is my docker_compose.yml file:
version: '3'
services:
  mysql:
    image: mysql:5.7
    container_name: mysql_octopus_dev
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: app
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  apache:
    build: .
    container_name: apache_octopus_dev
    volumes:
      - .:/var/www/html/
    ports:
      - "8000:80"
    depends_on:
      - mysql
This is my Dockerfile:
FROM debian:jessie-backports
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
#RUN groupadd -r www-data && useradd -r --create-home -g www-data www-data
...
COPY httpd-foreground /usr/local/bin/
EXPOSE 80
CMD ["httpd-foreground"]
any help please?
Paths in a Dockerfile are always relative to the context directory. The context directory is the positional argument passed to docker build (often .).
I should place the httpd-foreground file in the same folder as the Dockerfile.
From: https://github.com/docker/for-linux/issues/90
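Concretely, with build: . in the compose file the context is the directory that holds docker-compose.yml, so httpd-foreground has to sit there next to the Dockerfile. A quick check before rebuilding (the service name apache comes from the compose file above):

ls ./httpd-foreground          # must exist inside the build context
docker-compose build apache    # rebuild only the apache service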