Selenium cannot find file running in separate container

I have 4 containers running in the same Docker network
mongodb
our api server
a selenium server
our tests themselves
I get this error from our test container:
WebDriverError: File not found: /root/cdt-tests/csv-data/IT-DE-Jasper.csv
However, from my test logs, this file totally exists...in the test container.
The problem, I think, is that the selenium server is not looking at the same filesystem as our test container, because they are running in different containers.
What is the best way to solve this problem?
a. Should I try to run them in the same container?
b. Can I somehow get them to share the same filesystem?
c. ?

Just because you are sharing the network doesn't mean you are sharing the volumes. See In Docker, how can I share files between containers and then save them to an image? for how to do that, as @alex-blex suggested.
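A minimal sketch of the shared-volume idea, using plain docker run commands; the network, host path, and image names are assumptions, only the CSV path comes from the question:

# bind-mount the same host directory into both containers at the path the test uses,
# so the Selenium server sees exactly the same files as the test container
docker run -d --name selenium --network test-net -v /host/csv-data:/root/cdt-tests/csv-data selenium/standalone-chrome
docker run --rm --name cdt-tests --network test-net -v /host/csv-data:/root/cdt-tests/csv-data cdt-tests-image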

You may be able to connect the containers using a user-defined network, as explained on Docker's site.
If you've already done that, it might be an issue with the path to the file you're using in your test. Perhaps it wants an absolute path, because the containers are considered different entities on the Docker network.
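A rough sketch of that, with made-up names; the SELENIUM_URL variable is just an illustration of how the test could reach the server by container name:

# create a user-defined bridge network and attach both containers to it,
# so they can resolve each other by container name
docker network create test-net
docker run -d --name selenium --network test-net selenium/standalone-chrome
docker run --rm --name cdt-tests --network test-net -e SELENIUM_URL=http://selenium:4444/wd/hub cdt-tests-image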

Related

Netdata api for traffic of specific docker image

I have a server running several Docker containers. I want to know the network usage of individual, specific Docker containers.
I was able to get the overall traffic using the API below:
http://<server-ip>:19999/api/v1/data?chart=net.docker0&after=-60&before=0&points=1&group=median&gtime=0&format=json&options=seconds&options=jsonwrap
I went through the documentation and didn't find anything helpful.
Usually, the issue with such questions comes from Netdata not being granted the access required to identify the docker container.
I'd take a look at https://learn.netdata.cloud/docs/agent/packaging/docker/#docker-container-names-resolution and go through https://github.com/netdata/netdata/issues/6882 as well.
If you know the specific container name then, I think, you should just be able to pull the data directly from the container-specific chart.
For example, I have a container called airbyte-webapp, so I can get its network usage via its own specific chart at /api/v1/data?chart=cgroup_airbyte-webapp.net_eth0
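Putting the two together, a hedged example of the full request; the container name and host are assumptions, and the query parameters are the ones from the question:

# query the container-specific cgroup chart instead of the aggregate net.docker0 chart
curl "http://<server-ip>:19999/api/v1/data?chart=cgroup_airbyte-webapp.net_eth0&after=-60&before=0&points=1&group=median&format=json&options=jsonwrap"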

Copy / update the code in docker container without stopping container

I have a docker-compose setup in which I am uploading source code for a server, say a Flask API. Now when I change my Python code, I have to follow steps like these:
stop the running containers (docker-compose stop)
build and load the updated code into the container (docker-compose up --build)
This takes quite a long time. Is there a better way? For example, updating the code in the running container and then restarting the Apache server without stopping the whole container?
There are a few dirty ways you can modify the filesystem of a running container.
First you need to find the path of the directory that is used as the runtime root for the container. Run docker container inspect <id/name> and look for the key UpperDir in the JSON output. You can edit/copy/delete files in that directory.
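For example, a rough sketch, assuming a container named my-container and the overlay2 storage driver:

# dump the container's JSON metadata and pull out the writable-layer path
docker container inspect my-container | grep UpperDir
# edit/copy/delete files under that host path; the changes show up inside the container immediately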
Another way is to get the process ID of the process running within the container and go to the /proc/<process_id>/root directory. This is the root directory for the process running inside Docker. You can edit it on the fly and the changes will appear in the container.
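A short sketch of that approach (the container name is an assumption):

# find the PID of the container's main process, then browse its root filesystem from the host
PID=$(docker inspect --format '{{ .State.Pid }}' my-container)
ls /proc/$PID/root/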
You can run the docker build while the old container is still running, and your downtime is limited to the changeover period.
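With the docker-compose setup from the question, that could look roughly like this; the service name api is an assumption:

# build the new image while the old container keeps serving requests
docker-compose build api
# then recreate only that service from the freshly built image
docker-compose up -d --no-deps api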
It can be helpful for a couple of reasons to put a load balancer in front of your container. Depending on your application this could be a "dumb" load balancer like HAProxy, a full web server like nginx, or something your cloud provider makes available. This allows you to have multiple copies of the application running at once, possibly on different hosts (which helps with scaling and reliability). In this case the sequence becomes (a rough sketch follows the list):
docker build the new image
docker run it
Attach it to the load balancer (now traffic goes to both old and new containers)
Test that the new container works correctly
Detach the old container from the load balancer
docker stop && docker rm the old container
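A hedged sketch of that sequence; the image and container names are made up, and the load-balancer attach/detach steps depend on your balancer, so they are only noted as comments:

docker build -t myapp:v2 .                    # build the new image
docker run -d --name myapp-v2 myapp:v2        # run it alongside the old container
# attach myapp-v2 to the load balancer, test it, then detach the old container
docker stop myapp-v1 && docker rm myapp-v1    # retire the old container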
If you don't mind heavier-weight infrastructure, this sequence is basically exactly what happens when you change the image tag in a Kubernetes Deployment object, but adopting Kubernetes is something of a substantial commitment.
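For reference, the Kubernetes version of that changeover is roughly a one-liner (the Deployment and container names are assumptions):

# point the Deployment at the new image tag; Kubernetes performs the rolling swap for you
kubectl set image deployment/myapp app=myapp:v2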

How to configure Redis cache on local?

I have implemented a Redis cache with a .NET Core 2.1 application. Now the issue is that I only have the development connection string. I want to configure and test the Redis cache somehow on my local PC. I have read somewhere that it is possible using Chocolatey. Can anybody refer me to a link?
PS: When I tried to run the Redis cache from the development server using a VPN, it showed me a popup to select a "ResultBox.cs" file. So I created a new ResultBox.cs file and gave it the path, but when I call the rediscache.Get() method it opens the ResultBox.cs file and then nothing happens. Can anybody tell me what ResultBox.cs is for?
I have found a way to configure Redis locally using Chocolatey. Use this link. If you face MISCONF issues while testing with redis-cli, this link will be helpful.
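In rough outline, assuming the redis-64 Chocolatey package and an elevated prompt:

choco install redis-64    # install Redis for Windows via Chocolatey
redis-server              # start Redis listening on localhost:6379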
You can run a local docker redis image. See this and this for reference.
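For example, a minimal sketch:

# run Redis in a local container and expose the default port
docker run -d --name local-redis -p 6379:6379 redis
docker exec -it local-redis redis-cli ping    # should answer PONG
# then point the application's connection string at localhost:6379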

Dockerized Gitlab Container Backup

I am using a GitLab docker image for integration testing of a service I'm helping to develop. Ideally, the image would be a preconfigured snapshot of GitLab with different users and repos available to run tests against. So the problem ends up being, what is a good way to automate the creation of 'snapshots' of GitLab (that can then be versioned etc.)?
My current solution to this problem is to use GitLab's built in backup utility via gitlab-rake gitlab:backup:create after getting GitLab to a state that I want. This then lets me use GitLab's gitlab-rake gitlab:backup:restore in a hook when the container is starting up to get the container back to the state that I expect (the backup having been ADDed in the Dockerfile for the image). This has the advantage of being relatively lightweight (backups are on the order of MBs) and the backups can be checked in to version control.
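A hedged sketch of that flow; the container name, paths, and placeholder timestamp are assumptions, while the backup/restore rake tasks are GitLab's own:

# after seeding users and repos, create a backup inside the running GitLab container
docker exec gitlab gitlab-rake gitlab:backup:create
# copy the small tarball out so it can be version-controlled and ADDed in the Dockerfile
docker cp gitlab:/var/opt/gitlab/backups/ ./seed-backups/
# the start-up hook in the test image then restores it
docker exec gitlab gitlab-rake gitlab:backup:restore BACKUP=<timestamp> force=yes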
I have tried using docker export along with docker import to save the state of the container and then create an image based on that state. This has the advantage of being easy to automate since it is directly supported by Docker, but ends up being fairly expensive considering what the goal is (having users and repos available to test against). It also would require the images to be pushed to a registry of some kind in order to be easily distributed. Perhaps this is the best solution because it is well supported though.
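For comparison, the export/import route looks roughly like this (image and registry names are made up):

# snapshot the whole container filesystem as an image; much larger than a GitLab backup tarball
docker export gitlab > gitlab-snapshot.tar
docker import gitlab-snapshot.tar registry.example.com/gitlab-test:seeded
docker push registry.example.com/gitlab-test:seeded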
I suppose my question is, what is the Docker way of approaching a problem like this?

Apache Hama on Amazon Elastic MapReduce

I am trying to run Apache Hama on Amazon Elastic MapReduce using the https://github.com/awslabs/emr-bootstrap-actions/tree/master/hama script. However, when trying it out with one master node and two slave nodes, peer.getNumPeers() in the BSP code reports only 1 peer. I suspect that Hama is running in local mode.
Moreover, looking at the configuration at https://hama.apache.org/getting_started_with_hama.html, my understanding is that the list of all the servers should go in the hama-site.xml file for the property hama.zookeeper.quorum and also in the groomservers file. However, I wonder whether these are being configured properly in the install script. I would really appreciate it if anyone could point out whether it's a limitation of the script or whether I am doing something wrong.
@Madhura
Hama doesn't always need the groomservers file to run in fully distributed mode.
The groomservers file is needed to run a Hama cluster using only start-bspd.sh. But the emr-bootstrap-action for Hama runs groomservers on each slave node using the hama-daemon.sh file. The code executed in the install script is as follows:
$ /bin/hama-daemon.sh --config ${HAMA_HOME}/conf start groom
I think you need to check the EMR logs to see whether they contain errors or not.