Docker swarm - Selenium VNC port - how to make it distinct?

I'm coming from a VM background, where each VM has a different IP, so there's no issue connecting to a specific node in a group on a VNC port.
With containers, looking at https://github.com/SeleniumHQ/docker-selenium/blob/master/README.md , "Version 3 with Swarm support"
I can see that I can publish a port for a service corresponding to a specific container image, but I think that would be a single value shared across all replicas.
So if I run, say, 20 containers, and each container whose image is suffixed "debug" exposes VNC on port 5900, how can I access a specific container? I assume the container is identified in the output of a Jenkins job, which sends a Selenium test script to one of the nodes on the grid.
That is, if there's an issue with the test script and I see a container identifier, how can I access that specific container over VNC to see what's going on there? Since there's a single host IP for multiple containers, each would need a different external port mapped to 5900 to be distinguishable, but I don't see how this can be done in docker-compose/swarm. Is this doable?
As an alternative, would this be any easier with Kubernetes rather than Docker Swarm? (I have not done much research on it yet.)
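(Not from the original thread, just a sketch of one common workaround: give up on replicas for the debug nodes and declare each one as its own service with a distinct published port, since Swarm publishes a single port per service. The image name and HUB_HOST variable follow the Selenium 3 docker-selenium conventions; adjust to whatever you actually run.)

version: "3"
services:
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
  chrome-debug-1:
    image: selenium/node-chrome-debug   # the -debug images expose VNC on 5900
    environment:
      - HUB_HOST=hub
    ports:
      - "5901:5900"   # VNC for this node only
  chrome-debug-2:
    image: selenium/node-chrome-debug
    environment:
      - HUB_HOST=hub
    ports:
      - "5902:5900"   # a distinct external port per service

With each service fixed at one replica, the external-port-to-container mapping is stable, so you can VNC to host:5901, host:5902, and so on; the cost is that you lose docker service scale for the debug nodes.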

Related

Cross module direct communication (via IP / sockets)

If we have two custom modules that need to communicate directly via sockets, is there a way to know what IP address is assigned to each module?
After reading this article I was under the impression the azure-iot-edge network bridge would possibly support referencing the running module by the module name as the hostname. This doesn't seem to work.
We'd like to avoid having to scan the network or use some local storage option, and we don't want to join the host network, so any ideas how one running module can find the IP of another module that is expected to be running?
Here is a picture showing the two containers I am testing with. One container is just an Alpine instance that I can attach a console to and use to try to ping / access other containers. I can ping by IP address, but I want to ping by container name instead.
After further study of this issue, it turns out the problem was that the arm32v7 image I was using had some issues when deployed. One of the oddities was that the date on the container was "Sun Jan 0 00:100:4174038 1900", and some other commands that should have worked were failing.
I ended up switching over to an Ubuntu image with iputils-ping installed and confirmed that the azure-iot-edge bridge allows accessing other containers by their module name, which serves as the hostname. So all good here, works as expected, user error!
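(A quick way to verify the same thing from inside a module, assuming an Ubuntu-based image and a peer module deployed under the name moduleA - the name is a placeholder for whatever your deployment manifest uses:)

# from a shell inside one module's container
ping -c 3 moduleA      # the azure-iot-edge bridge resolves module names
getent hosts moduleA   # prints the IP the bridge assigned to that module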

TCP route binding to specific hosts in traefik

We are using Traefik to simulate our production environment. We have multiple services running in Kubernetes on Docker; a few of them are Java applications. In this stack, a developer can come and deploy code for whatever git branch they are working on, so at a given point we can have hundreds of full-fledged stacks running. We use Traefik for certificate resolution so that each stack can be hosted based on its branch name.
Now I want to give developers the ability to debug their Java applications. It's fairly simple to do in Java: you attach a Java agent when starting up the Docker image for the application. Basically, we need to pass -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=37000 as a JVM argument, and the JVM is ready for remote debuggers to attach.
The JVM uses the JDWP protocol, which, as far as I understand, is a TCP protocol. Now my problem: I want Traefik to create routes dynamically based on my Docker service labels. That part I was also able to figure out; I used these labels in the Docker service.
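(The labels themselves did not survive into this post; for reference, a Traefik v2 TCP router declared via Docker labels typically looks like the following - the router name jdwp and entrypoint name jdwp-ep are illustrative:)

labels:
  - "traefik.tcp.routers.jdwp.rule=HostSNI(`*`)"
  - "traefik.tcp.routers.jdwp.entrypoints=jdwp-ep"
  - "traefik.tcp.services.jdwp.loadbalancer.server.port=37000"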
And this is how you connect to the JVM remotely (the connection-settings screenshot is omitted here).
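(From the command line, an equivalent attach would look roughly like this - the hostname is a placeholder:)

jdb -connect com.sun.jdi.SocketAttach:hostname=myapp.example.com,port=37000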
Now, if in the rule I use HostSNI(*), then I am able to connect to the container. But the problem is that when I make a remote connection for debugging, Traefik can direct my request to any container, so this whole thing won't work as expected.
I believe there must be some other supported matcher for TCP rules as well, apart from only HostSNI. What is your opinion on this? Or have I missed something here?

Distributed selenium grid and http proxy

I have seen many questions about using Selenium behind a proxy, where Selenium nodes connect to the internet via a proxy. The solution indicated everywhere is to specify proxy settings in the code when creating the WebDriver instance.
Unfortunately, in my case this is not going to work, as I am using a distributed Selenium grid where different nodes require different proxy settings. When a test is run, the test runner only communicates with the grid hub and has no control over which node it will run on - thus setting the proxy from inside the test is not possible. Each node is a Linux machine with both Firefox and Chrome running in a virtual framebuffer. Presently the grid has about 25 nodes distributed across multiple data centers, but this number may grow to anywhere up to 1000 in the future.
There are business reasons for such a setup - and I am not in a position (both technically and politically) to change them.
Is there any way to set a proxy at the node level and have it apply to everything that happens on that node only?
Apparently, all I need to do is define the http_proxy and https_proxy environment variables, which Chrome will then honour.
For Firefox, proxy parameters can be added to /etc/firefox-$version/pref/firefox.js, where $version can be determined by running firefox -v | awk '{print substr($3,1,3)}'.
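A sketch of both settings, assuming a proxy at proxy.example.com:3128 (host and port are placeholders):

# Chrome honours the standard proxy environment variables
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128

And for Firefox, in the JS pref file named above:

// appended to /etc/firefox-$version/pref/firefox.js
pref("network.proxy.type", 1);  // 1 = manual proxy configuration
pref("network.proxy.http", "proxy.example.com");
pref("network.proxy.http_port", 3128);
pref("network.proxy.ssl", "proxy.example.com");
pref("network.proxy.ssl_port", 3128);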

Jenkins selenium docker and application files

I have a Docker hub (the Selenium Grid hub) and a Docker node up and running. I also have a Docker container which includes my application, with the same setup as my PC. I get the following error.
[ConnectionException] Can't connect to Webdriver at http://ip:4444/wd/hub. Please make sure that Selenium Server or PhantomJS is running.
The IP is correct, since I can see the Selenium grid there as it should be. What might be the problem? When I get inside the container that I have in Jenkins, it runs my tests as well.
Have you explicitly instructed the hub Docker container to expose its internal port 4444 as 4444 externally?
Instructing a container to expose ports does not enforce the same port numbers to be used. So in your case, while internally it is running on 4444, externally it could be whatever port Docker thought was the best choice when it started.
How did you start your container? If via the Docker command line, did you use -P or -p 4444:4444? (Note the difference in case.) -P simply exposes ports with no guarantee of number, whereas -p allows you to map as you wish.
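For example, with the standard Grid hub image (the image name is assumed; adjust to what you actually run):

# -p pins the mapping: host port 4444 -> container port 4444
docker run -d -p 4444:4444 selenium/hub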
There are many ways to orchestrate Docker which may allow you to control this in a different way.
For example, if you used Docker Compose, that has the potential to allow your containers to communicate via 4444 even if those are not the actual published ports. It achieves this through some clever networking but is very simple to set up and use.
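A minimal sketch of that idea, assuming the Selenium 3 images (service names are illustrative):

version: "3"
services:
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"    # published so clients outside Docker can reach it
  chrome:
    image: selenium/node-chrome
    environment:
      - HUB_HOST=hub   # the node reaches hub:4444 over the compose network,
                       # regardless of what is published externally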

Docker: Direct subdomains to specific containers

I'm new to Docker, so apologies if this has already been answered; however, I looked and didn't really know how to search for it, so I thought I'd ask a question - and if it's already answered, at least someone who knows how this works in Docker terms can help me.
So here is what I want to do.
Subdomain x.x.com (IP A)
- Container A
- Container B
- Container C (webserver)
Subdomain y.x.com (IP B, or it could even be A; I don't know what's best)
- Container D (same as container A, but a different user)
- Container E (same as container B, but a different user)
- Container F (webserver; same as container C, but a different user)
And here are my questions
For subdomain y.x.com should I use the same IP or a different one?
How can I point these subdomains to the specific containers, so that if you have a container at y.x.com:8000, you can't access the container at x.x.com:8001 by simply doing y.x.com:8001?
How can I make sure that both webservers are accessible through the different subdomains (assuming that they both run at port 80?)
I'm not 100% sure I've understood the way networks work when using Docker, so any pointers would be really helpful. Should I use --link? Should I use --net=bridge? Is there any simpler way to do any of this? What's the best way?
Thank you in advance
First, it is important to clarify what you are trying to configure. Are you configuring an Apache server as the frontend to the two sub-domains? Are you running Apache in a container? What do you have in containers A, B, D, and E? Are they providing support services to the web servers (e.g., a database)?
Independently of these clarifications, the most important thing you need to understand about Docker networking is that containers, by default, receive an IP belonging to a 'virtual network' that exists only in the host in which they run. Because of that, they cannot be accessed from the "outside world" (even though they can access the outside world by using the host as a gateway).
In this case, the most straightforward way to access containers from the "outside world" is to use port mapping, in which you map a port from your physical host to a container port.
For example, let's say your host has IP 10.0.0.1, and your container runs a web server on port 80. In order to access this container, the first thing you need to do is to start the container and map its port 80 to some port in the physical host. This will look like:
docker run -d -p 8000:80 <image> <command>
where -p is the relevant option that you use to map ports (in this case, you are mapping port 8000 in the physical host to port 80 in the container). Therefore, to access the container web server, you will need to use the host IP with the mapped port (10.0.0.1:8000) - and the request will be redirected to port 80 of the container.
So, assuming you are running all containers on the same host, you could map each subdomain to the same IP but different ports, and map each of these ports to the port 80 of containers C and F.
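Concretely, following the example above (image names are placeholders, as before):

docker run -d -p 8000:80 --name webserver-c <image-for-C>
docker run -d -p 8001:80 --name webserver-f <image-for-F>
# x.x.com -> 10.0.0.1:8000 -> port 80 of container C
# y.x.com -> 10.0.0.1:8001 -> port 80 of container F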
Having said all of this, recent Docker versions have been adding many new ways of configuring the network, but I feel it is really important to understand the basic behaviour before moving to more complicated scenarios.
Have a look at the basic configuration instructions here:
https://docs.docker.com/engine/userguide/containers/networkingcontainers/