I use Drone as my CI tool. I have a Drone server and a Drone agent running as Docker containers. I connected Drone to my GitHub repository, and it works perfectly: it responds to each push and builds a Docker container. I can see the built container in the output of:
docker ps
In this container I have a Node server. It listens on port 3001, and I want to expose that port. I want to do something like:
ports:
- 3001:3001
in a docker-compose.yml file.
Is it possible to expose ports in the .drone.yml file? If so, how do I do it?
You cannot expose ports on Drone, because each test should be isolated from the outside environment.
Are you trying to run E2E tests against a web server built on a Node server?
If so, a services section is available: http://docs.drone.io/services/
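As a sketch of how that might look (image names and the test command are assumptions; the exact syntax depends on your Drone version, so check the services docs linked above):

```yaml
# Hypothetical .drone.yml sketch: the Node app runs as a service
# next to the test step instead of publishing a port to the host.
pipeline:
  test:
    image: node:8                 # assumed test image
    commands:
      - npm install
      # build steps reach the service by name on the shared build network
      - npm run e2e -- --base-url http://web:3001

services:
  web:
    image: my-node-app            # assumed application image
    # listens on 3001 inside the build network; no host mapping needed
```

The key point is that service containers share a network with the build steps, so the tests address the server as web:3001 rather than through a port exposed on the host.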
I want to be able to ssh into a container within an OpenShift pod.
I know that I can simply do so using oc rsh, but this assumes that I have the OpenShift CLI installed on the node I want to ssh into the container from.
What I actually want to achieve is to ssh into a container from a node that does not have the OpenShift CLI installed. The node is on the same network as the OpenShift cluster, and it does have access to web applications hosted in a container (just as an example). Instead of web access, I would like to have ssh access.
Is there any way that this can be achieved?
Unlike a server, which runs an entire operating system on real or virtualized hardware, a container is nothing more than a single Linux process encapsulated by a few kernel features: cgroups, namespaces, and SELinux. A "fancy" process, if you will.
Opening a shell session into a container is not quite the same as opening an ssh connection to a server. Opening a shell into a container requires starting a shell process and assigning it to the same cgroups and namespaces on the same host as the container process and then presenting that session to you, which is not something ssh is designed for.
Using the oc exec, kubectl exec, podman exec, or docker exec CLI commands to open a shell session inside a running container is the method that should be used to connect to running containers.
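All four exec commands share the same shape; illustrative invocations (pod and container names here are placeholders) look like:

```shell
# OpenShift: interactive shell in the first container of a pod
oc exec -it my-pod -- /bin/sh

# Kubernetes: same, but targeting a specific container in the pod
kubectl exec -it my-pod -c my-container -- /bin/bash

# Plain Docker/Podman, run on the container's host
docker exec -it my-container /bin/sh
```

Note that each of these must run from a machine that has the respective CLI and access to the cluster or container host, which is exactly the constraint the question is trying to avoid.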
I have a Selenium Grid hub and a node up and running as Docker containers. I also have a Docker container that runs my application with the same setup as my PC. I get the following error:
[ConnectionException] Can't connect to Webdriver at http://ip:4444/wd/hub. Please make sure that Selenium Server or PhantomJS is running.
The IP is correct, since I can see the Selenium Grid there as expected. What might the problem be? When I get inside the container that I have in Jenkins, it runs my tests as well.
Have you explicitly instructed the hub Docker container to expose its internal port 4444 as 4444 externally?
Instructing a container to expose ports does not enforce the same port numbers to be used. So in your case, while internally it is running on 4444, externally it could be whatever port Docker thought was the best choice when it started.
How did you start your container? If via the docker command line, did you use -P or -p 4444:4444? (Note the difference in case.) -P simply exposes ports with no guarantee of the numbers, whereas -p lets you map them as you wish.
There are many ways to orchestrate Docker which may allow you to control this in a different way.
For example, Docker Compose has the potential to allow your containers to communicate via 4444 even if those are not the actually exposed ports. It achieves this through some clever networking, but is very simple to set up and use.
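As a hedged sketch (image names assumed from the commonly used Selenium images), a Compose file that pins the hub's external port might look like:

```yaml
# Hypothetical docker-compose.yml: the explicit "4444:4444" mapping is
# the Compose equivalent of -p, so the external port is fixed, not random.
version: "2"
services:
  hub:
    image: selenium/hub           # assumed image
    ports:
      - "4444:4444"               # host:container
  chrome:
    image: selenium/node-chrome   # assumed image
    depends_on:
      - hub
    # nodes can reach the hub as hub:4444 over the Compose network,
    # independent of what is published to the host
```

With this, clients outside Docker connect to http://docker-host-ip:4444/wd/hub, while containers on the same Compose network use the service name instead of the host IP.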
I have a squid proxy container on my local Docker for Mac (datadog/squid image). Essentially I use this proxy so that app containers on my local docker and the browser pod (Selenium) on another host use the same network for testing (so that the remote browser can access the app host). But with my current setup, when I run my tests the browser starts up on the remote host and then after a bit fails the test. The message on the browser is ERR_PROXY_CONNECTION_FAILED right before it closes. So I assume that there is an issue with my squid proxy config. I use the default config and on the docker hub site it says
Please note that the stock configuration available with the container is set for local access, you may need to tweak it if your network scenario is different.
I'm not really sure how my network scenario is different. What should I be looking into for more information? Thanks!
I'm trying to create an Arquillian unit test using the http://arquillian.org/arquillian-cube extension, where you can set a breakpoint on the server side.
I've created a project which executes a simple test successfully (all details are here):
https://github.com/scetix/arquillian-cube-wildfly-quickstart
Is there any way of automatically attaching IntelliJ IDEA debugger to Wildfly running in Docker container when the test starts?
Automatically? I don't think so. In the case of the Docker example, from the IDE's point of view that is considered a remote server.
So what you need to do is, first of all, start Wildfly with debug enabled (http://tools.jboss.org/blog/2015-03-17-debugging-an-externally-launched-wildfly.html) and expose the debugger port correctly (https://github.com/scetix/arquillian-cube-wildfly-quickstart/blob/master/src/test/resources/Dockerfile#L12). Put these lines into your Dockerfile:
# Expose JBoss/Wildfly management port
EXPOSE 9990
# Expose JBoss/Wildfly debug port
EXPOSE 8787
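Note that EXPOSE only documents the port; WildFly itself must also be started with the debug agent enabled. A sketch of the relevant Dockerfile line, assuming the official jboss/wildfly image layout:

```dockerfile
# Bind to all interfaces and start the JPDA debug agent on port 8787
# (standalone.sh --debug enables remote debugging)
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "--debug", "8787"]
```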
You also need to set the port binding for Docker Compose (https://github.com/scetix/arquillian-cube-wildfly-quickstart/blob/master/src/test/resources/docker-compose.yml#L5). Add port 8787 for the debugger to the YAML file (the first number, 58787 here, may be any port you prefer):
ports:
- 58787:8787/tcp
And finally, start an IntelliJ remote-debug session, setting the IP to that of the Docker host (localhost for native Docker, or the Docker Machine IP if you use Docker Machine).
You can see how to do this with IntelliJ: http://blog.trifork.com/2014/07/14/how-to-remotely-debug-application-running-on-tomcat-from-within-intellij-idea/. The example is for Tomcat; specify JBoss instead.
Currently I can use rsub with Sublime to edit remotely, but the container is a second layer of ssh that is only accessible from the host machine.
Just curious: how do you use your remote host machine if you don't even have ssh running on it?
Regarding your question: I think you need to install openssh-server directly inside the container and map the container's port 22 to a custom port on the host. Inside your container you'll have to run some initial process that launches all the processes you need (such as openssh-server).
Consider this comprehensive example of using supervisord inside a Docker container.
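A minimal sketch of that pattern (the paths and the application entry point are assumptions):

```ini
; supervisord.conf: run supervisord in the foreground as the container's
; main process so it can supervise sshd and the application together
[supervisord]
nodaemon=true

[program:sshd]
; -D keeps sshd in the foreground so supervisord can manage it
command=/usr/sbin/sshd -D

[program:app]
; assumed application entry point
command=node /app/server.js
```

Starting the container with something like docker run -d -p 2222:22 my-image (image name assumed) then makes the container's sshd reachable as ssh -p 2222 user@docker-host from any machine on the network, without going through the host's own ssh.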