Unable to run Springboot app using Selenium RemoteWebDriver with docker-compose - selenium

I am containerising my Spring Boot app, which uses selenium/standalone-firefox-debug. I have created a docker-compose file, but when I bring it up it gives me this error:
Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failure.
If I run the Spring Boot app directly and selenium/standalone-firefox-debug separately, it works. I want to run it with docker-compose.
Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT exec java -jar /app.jar
docker-compose.yml:
version: '2.2'
services:
  employer-url:
    image: "adib/employer-url"
    ports:
      - "8080:8080"
    depends_on:
      - firefox
  firefox:
    image: "selenium/standalone-firefox-debug"
    ports:
      - "4444:4444"
    environment:
      - no_proxy=localhost
This is how I create the driver in the Spring app:
RemoteWebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), DesiredCapabilities.firefox());

Problem
The http://localhost:4444/wd/hub URL refers to localhost inside the Spring Boot container itself. That container has nothing listening on port 4444, which is why it's complaining.
Solution
You should access the Selenium service by its hostname (the compose service name), not localhost. In the Spring Boot application, use the http://firefox:4444/wd/hub URL and you'd be good to go.
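For illustration, here is a minimal sketch of the corrected driver creation, assuming the compose service name firefox from the file above (the class and method names are just for the example):

import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DriverFactory {

    public static RemoteWebDriver createDriver() throws Exception {
        // "firefox" is the docker-compose service name; Docker's embedded DNS
        // resolves it to the selenium container on the compose network.
        // 4444 is the container port, so this works even without the
        // "4444:4444" host mapping.
        return new RemoteWebDriver(
                new URL("http://firefox:4444/wd/hub"),
                DesiredCapabilities.firefox());
    }
}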
Rationale
You are missing the core networking concept of containers here. Both of these images (the Spring Boot app and Selenium) run inside containers, so each has its own separate network environment. If you refer to localhost inside any container, it means the localhost of that container. You are expecting localhost to refer to the localhost of the docker host machine. You exposed port 4444 on the docker host machine, so if you run your jar from the docker host (while Selenium is containerized), localhost:4444 works; but if you access it from inside a container, it does not.
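If you want the same jar to work both inside compose and when run directly on the docker host, one option is to make the hub URL configurable. A sketch, assuming a made-up environment variable SELENIUM_URL (not anything Selenium itself defines) that you would set to http://firefox:4444/wd/hub under the employer-url service's environment section:

import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class ConfigurableDriverFactory {

    public static RemoteWebDriver createDriver() throws Exception {
        // SELENIUM_URL is a made-up variable name; set it in docker-compose to
        // http://firefox:4444/wd/hub. When it is absent (jar running directly
        // on the docker host), fall back to the published port on localhost.
        String hubUrl = System.getenv().getOrDefault(
                "SELENIUM_URL", "http://localhost:4444/wd/hub");
        return new RemoteWebDriver(new URL(hubUrl), DesiredCapabilities.firefox());
    }
}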

Related

React works from docker image on local machine but is unreachable from kubernetes; service looks to be configured correctly

I want to put my docker image running React into Kubernetes and be able to hit the main page. I am able to get the main page just by running docker run --rm -p 3000:3000 reactdemo locally. When I try to deploy to my Kubernetes (running locally via Docker Desktop) I get no response until eventually a timeout.
I tried this same process below with a Spring Boot docker image and I am able to get a simple JSON response in my browser.
Below are my Dockerfile, deployment yaml (with the service inside it), and the commands I'm running to try and get my results. Morale is low, any help would be appreciated!
Dockerfile:
# pull official base image
FROM node
# set working directory
RUN mkdir /app
WORKDIR /app
# install app dependencies
COPY package.json /app
RUN npm install
# add app
COPY . /app
# Command to build ReactJS application for deploy, might not need this...
RUN npm run build
# start app
CMD ["npm", "start"]
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: reactdemo
        image: reactdemo:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  selector:
    app: demo
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
    nodePort: 31000
  selector:
    app: demo
I then open a port on my local machine to the nodeport for the service:
PS C:\WINDOWS\system32> kubectl port-forward pod/demo-854f4d78f6-qv4mt 31000:3000
Forwarding from 127.0.0.1:31000 -> 3000
My assumption is that everything is in place at this point and I should be able to open a browser to hit localhost:31000. I expected to see that spinning react symbol for their landing page just like I do when I only run a local docker container.
Here it is all running:
$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/demo-854f4d78f6-7dn7c   1/1     Running   0          4s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/demo         NodePort    10.111.203.209   <none>        3000:31000/TCP   4s
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          9d

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo   1/1     1            1           4s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-854f4d78f6   1         1         1       4s
Some extra things to note:
Although I don't have it set up currently, I did have my Spring Boot service in the deployment file. I logged into its pod and ensured the react container was reachable. It was.
I haven't done anything with my firewall settings (but sort of assume I don't have to, since the run with the Spring Boot service worked?)
I see this in Chrome developer tools and so far don't think it's related to my problem: crbug/1173575, non-JS module files deprecated. I see this response in the main browser page after some time:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
If you are using Kubernetes via minikube on your local system, it will not work with localhost:3000, because it runs in the minikube cluster, which has its own private IP address. So instead of trying localhost:3000, run minikube service <servicename> in your terminal and it will show the URL of your service.
Thanks for all the feedback, peeps! In trying out the solutions presented I found my error, and it was pretty silly. I tried removing the service and trying the different port configs mentioned above. What solved it was using 127.0.0.1:31000 instead of localhost. Not sure why that fixed it, but it did!
That being said, a few comments on the answers above:
I found that I couldn't hit the cluster without doing the port forwarding, regardless of whether I had a service defined or not.
containerPort, to my understanding, is for Kubernetes pod-to-pod communication and doesn't impact application function from a user perspective (could be wrong!).
Good to know on minikube. I'm thinking about trying it out, and if I do I'll know why port 3000 stops working.
Thanks
You don't need a Service to port-forward to a pod. You can just port-forward straight to it:
kubectl port-forward <pod-name> <local-port>:<pod-port>
This creates a "tunnel" between your local machine on a chosen port (<local-port>) and your pod (<pod-name>), on a chosen pod port (<pod-port>).
Then you can curl your pod with
curl localhost:<local-port>
If you really want to use a Service, then port-forward to the Service (service/demo in your case) plus the service port, but it will be translated to the pod's IP eventually.
Change .spec.ports[0].targetPort in your Service to be the same as .spec.template.spec.containers[0].ports[0].containerPort in your deployment, so in your case:
...
- port: 3000
  targetPort: 80
...
Port-forward to the Service, using the Service port:
kubectl port-forward service/demo 8080:3000
Then curl your Service with
curl localhost:8080
This has a side effect if there are more pods under the same Service: port-forward will almost always stay connected to the same pod.

How to make REST calls between Frontend and Backend using Docker containers

I have 3 docker containers:
Backend (Spring boot rest api)
Frontend (Js and html in the apache image)
Mongodb
I'm orchestrating them through docker-compose and it works nicely.
However, I don't know how to let my frontend JavaScript client know the backend container's host/IP in order to reach it.
This is my docker-compose.yml:
version: '3.1'
services:
  project-server:
    build: .
    restart: always
    container_name: project-server
    ports:
      - 8200:8200
    working_dir: /opt/app
    depends_on:
      - mongo
  httpd:
    image: project-ui
    container_name: project-ui
    ports:
      - 8201:80
  mongo:
    image: project-mongo
    container_name: project-mongo
    ports:
      - 27018:27017
    volumes:
      - $HOME/data/mongo-data:/data/db
      - $HOME/data/mongo-bkp:/data/bkp
    restart: always
So I've tried this in my JS client app:
export default {
  REMOTE_HOST: 'http://project-server:8200'
}
But it doesn't work (Failed to load resource: net::ERR_NAME_NOT_RESOLVED).
And I'm pretty sure it's because the JS runs locally in the browser, so it has no way to resolve that name.
What's the right way to do this? Is there any way for the frontend service (apache) to pass/render the real host to the JavaScript and get it somehow?
Thanks a lot
project-server can be resolved only within the network created by docker-compose. As you mentioned, to connect from the outside world you need to expose the IP of your host instead of project-server. The problem is that the container doesn't know the IP of its host. Here is a detailed discussion about that: How to get the IP address of the docker host from inside a docker container
What you probably need in your situation is to run the container passing the IP of the host as an environment variable:
docker run --env <NAME>=<value>
Then in Node you can just read that variable.
Hope it helps

browse postgres in a docker container

I am using docker-compose to work across multiple docker containers; these are mostly individual Django REST framework applications. I have downloaded all the containers and am able to build the whole application using them.
Each container has a postgres db running. I now want to browse the db using any UI tool. I know pgadmin can do the work here, but how can I configure my pgadmin to show the postgres database from these containers?
It should be possible to also expose your database port to your local network.
Normally you connect your application containers internally to the database container. In that case you don't need to declare the ports section in your compose file for the database, but if you have that entry you bind your database to your local host in addition.
After you have exposed the postgres port to a host port, it should be no problem to connect with the GUI tool of your choice.
version: '3.2'
services:
  httpd:
    image: "oth/d_apache2.4:0.2"
    ports:
      # container port 80 of the webserver to localhost 80
      - "80:80"
  keycloak:
    # keycloak uses keycloak_db
    image: "jboss/keycloak-postgres:3.2.1.Final"
    environment:
      # internal network reference to db container
      - POSTGRES_PORT_5432_TCP_ADDR=keycloak_db
      - POSTGRES_PORT_5432_TCP_PORT=5432
  keycloak_db:
    image: "postgres:alpine"
    ports:
      # container port 5432 to localhost 5432
      # the port is still available inside the stack
      - "5432:5432"
Make sure that the port of the postgres container is mapped to the host system. The default postgres port is 5432. You can do that with the ports directive in your docker-compose.yml. You are only able to map each host port once, so your config file would look like:
services:
  postgres_1:
    ports:
      - "49000:5432"
    [...]
  postgres_2:
    ports:
      - "49001:5432"
    [...]
After that you should be able to access the desired database with the IP of your docker host and the above-specified port.
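As a quick cross-check that the mapping works before reaching for pgadmin, here is a minimal JDBC sketch against the postgres_1 mapping above. The database name and credentials are placeholders, and it assumes the org.postgresql JDBC driver is on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PostgresSmokeTest {

    public static void main(String[] args) throws Exception {
        // 49000 is the host port mapped to postgres_1 above; "mydb",
        // "postgres" and "secret" are placeholders for your own database
        // name and credentials.
        String url = "jdbc:postgresql://localhost:49000/mydb";
        try (Connection conn = DriverManager.getConnection(url, "postgres", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version()")) {
            if (rs.next()) {
                System.out.println(rs.getString(1)); // prints the server version
            }
        }
    }
}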
If you still encounter problems connecting with a client like pgadmin, check the following configuration files inside your container.
Is there anything blocking your connection attempt? Is your docker host behind a firewall?
postgresql.conf under the section connections and authentication:
listen_addresses
port
Check your pg_hba.conf, which controls client authentication.
For debug purposes you can set it to the following:
Don't do the following in production:
host all all all trust

Running PhantomJS Selenium Node on Kubernetes

Does anyone have a Dockerfile or advice for getting a Selenium grid node with PhantomJS running on Kubernetes? I'm able to get the docker images running locally in Docker and registering to a grid hub, but the same node does not appear to connect to the grid hub when run in Kubernetes. The same setup works fine for other docker images running in Kubernetes with Selenium grid nodes having Chrome and Firefox.
Two example images I've been battling with to try to get running are: this and this. Each works in Docker locally (at least to the point of connecting to the hub; the latter has a likely unrelated bug in Selenium after it connects), but when run in Kubernetes it spits out only the first of the usual three log messages:
[INFO - 2017-03-06T15:28:42.018Z] GhostDriver - Main - running on port 4444
But it never connects to the hub, even though I can wget to the hub container from this node if I connect to it and exec bash.
seluser@selenium-node-phantomjs-f8vj6:/$ wget selenium-hub:4444
--2017-03-06 15:33:29--  http://selenium-hub:4444/
Resolving selenium-hub (selenium-hub)... 100.68.165.77
Connecting to selenium-hub (selenium-hub)|100.68.165.77|:4444... connected.
HTTP request sent, awaiting response... 200 OK
...
Locally, it connects:
[INFO - 2017-03-06T15:31:56.443Z] GhostDriver - Main - running on port 4444
[INFO - 2017-03-06T15:31:56.443Z] GhostDriver - Main - registering to Selenium HUB 'http://172.17.0.2:4444' using '172.17.0.3:4444'
[INFO - 2017-03-06T15:31:56.454Z] HUB Register - register - Registered with grid hub: http://172.17.0.2:4444/ (ok)

Testing Codeception on Gitlab CI with Selenium service

I'm trying to set up a selenium standalone-chrome service to test my Codeception suite.
I run chrome standalone as a service:
services:
  - mysql:latest
  - selenium/standalone-chrome:latest
And then I set up the connection for my Codeception tests, which use WebDriver with an extension for WordPress:
WPWebDriver:
  url: 'http://localhost'
  host: 'selenium__standalone-chrome'
  browser: chrome
  port: 4444
  restart: true
  wait: 2
  adminUsername: admin
  adminPassword: 1234
  adminUrl: /wp-admin
All other tests run well, but when it comes to the suite where I use Selenium it refuses to connect:
Time: 7.55 seconds, Memory: 16.00MB
There was 1 failure:
---------
1) SampleTestCept: Test if wp is working in selenium
Test tests/php/acceptance/SampleTestCept.php
Step See "Just another WordPress site"
Fail Failed asserting that on page /
--> This site can’t be reached
localhost refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Reload
DETAILS
--> contains "this site can't be reached".
Scenario Steps:
2. $I->see("This site can't be reached") at tests/php/acceptance/SampleTestCept.php:6
1. $I->amOnPage("/") at tests/php/acceptance/SampleTestCept.php:4
Any ideas of what I am doing wrong?
Use the environment variable HOSTNAME to find the GitLab runner's actual hostname.
I worked around this by replacing 'localhost' in your webdriver config with the IP address of the GitLab runner. You might want to check out my blog post about running Codeception tests on GitLab CI.
Probably the issue is that you are using the http://localhost url while running the Selenium server on a separate host.
Selenium tries to connect to port 80 of its own host, not of the machine which is running the tests.