Add a link to docker run - testing

I am making a test file. I need to have a docker image and run it like this:
docker run www.google.com
Every time that URL changes, I need to pass it into a file inside the Docker container. Is that possible?

Sure. You need a custom Docker image, but this is definitely possible.
Let's say you want to execute the command "ping -c 3" and pass it the parameter you send on the command line.
You can build a custom image with the following Dockerfile:
FROM alpine:latest
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
The entrypoint.sh file contains the following:
#!/bin/sh
ping -c 3 "$WEBSITE"
Then, you have to build your image by running:
docker build -t pinger .
Now, you can run your image with this command:
docker run --rm -e WEBSITE=www.google.com pinger
By changing the value of the WEBSITE env variable in the last step, you can get what you requested.
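If you would rather keep the exact "docker run <image> www.google.com" style from the question instead of an environment variable, a minimal variant (assuming the same alpine image, pinger tag and entrypoint.sh as above) is to read the URL from the first command-line argument; with the exec-form ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile above, anything you put after the image name is passed to the script as arguments:
#!/bin/sh
# entrypoint.sh: ping whatever host is passed as the first argument
ping -c 3 "$1"
Then run it with:
docker run --rm pinger www.google.com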

I just solved it by adding this:
--env="url=test"
to the docker run, but I guess your way of doing it is better.
Thank you

Related

Tf Serving - Docker from source or build from git?

Struggling to understand the workflow here for tf serving.
Official docs say to “docker pull tensorflow/serving”. But they also say to “git clone https://github.com/tensorflow/serving.git”
Which one should I use? I assume the git version is so I can build my own custom serving image?
When I pull the official image from docker and run the container, why can’t I access the root? Is it because I haven’t “built it” properly yet?
If you have added some custom code, then clone the repo first and then build the image.
If you want to deploy the image directly, just pull it and run it.
BTW, what do you mean by "access the root"? AFAIK, root is the default user in a container.
I think that is a good observation.
The only place where I feel cloning the GitHub repository, "https://github.com/tensorflow/serving.git", is required is if you want to run examples like 'half_plus_two' or 'half_plus_three', or the examples mentioned in the link,
https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example.
Apart from that, as far as I know, pulling the Docker image should do everything needed.
Even building a custom Docker image with our custom model doesn't require us to clone the GitHub repo.
The code for building a custom Docker image is shown below:
sudo docker run -d --name sb tensorflow/serving #Start a base serving container
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export sb:/models/Premade_Estimator_Export #Copy the exported model into it
sudo docker commit --change "ENV MODEL_NAME Premade_Estimator_Export" sb iris_container #Commit the container as a new image
sudo docker kill sb
Alternatively, you can serve the model straight from the official image with a bind mount:
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/TF_Serving/Premade_Estimator_Export,target=/models/Premade_Estimator_Export -e MODEL_NAME=Premade_Estimator_Export -t tensorflow/serving &
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export/1556272508 --all #Inspect the SavedModel's signatures
curl http://localhost:8501/v1/models/Premade_Estimator_Export #To get the status of the model
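To actually serve from the committed iris_container image built above (rather than via the bind mount), a minimal sketch would be to run it the same way; the tensorflow/serving base image already serves whatever model MODEL_NAME points to under /models:
sudo docker run -p 8501:8501 -t iris_container & #Serve the committed image on the REST port
curl http://localhost:8501/v1/models/Premade_Estimator_Export #Should report the version state as AVAILABLE once loaded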
Regarding access to root: if I understand correctly, you don't want to have to run each docker command with sudo at the start. Please follow the steps below to run Docker commands without sudo.
i. Add the docker group if it does not already exist.
ii. Add the connected user $USER to the docker group. Below are the commands to be run in the terminal:
sudo groupadd docker
sudo usermod -aG docker $USER
iii. Reboot your PC and you should be able to execute Docker commands without sudo.

docker file to run automation test in JS files

I am trying to create a Dockerfile to run Selenium tests for a JavaScript-based project. Below is my Dockerfile so far:
#base image
FROM selenium/standalone-chrome
#access to the project within docker container - Bundle app source
COPY ./seleniumTest/project /app
# Install Node.js
RUN sudo apt-get update
RUN sudo apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_8.x | sudo bash -
RUN sudo apt-get install --yes nodejs
#binding
EXPOSE 8080
#Define runtime
ENTRYPOINT ["node", "/app/login.test.js"]
While running it with $ docker run -p 4000:80 lamgadekamal/dockertest
it returns: Unable to find image 'lamkam/dockertest:latest' locally docker: Error response from daemon: manifest for lamkam/dockertest:latest not found. I could not figure out why I am getting this.
I suspect that you need to build your image first, since the image cannot be found.
Run this command from the same directory where your Dockerfile is located. This will build the image.
docker build -t lamgadekamal/dockertest .
You can then verify that the image exists by running docker images
EDIT: After looking at this again, it appears that you are trying to run the wrong image. You are trying to run lamgadekamal/dockertest, but you built the image with the tag lamkam/dockertest? Seems like you have a typo. I would suggest running docker images to see exactly what is there, but in all likelihood, you need to run lamkam/dockertest.
docker run -p 4000:80 lamkam/dockertest
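If it does turn out to be a tag mismatch, one option (a sketch, assuming the image really was built as lamkam/dockertest) is to re-tag it so that the name you want to run points at the same image:
docker images | grep dockertest # see which repository name the build actually produced
docker tag lamkam/dockertest lamgadekamal/dockertest # give the image the name you intended to run
docker run -p 4000:80 lamgadekamal/dockertest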

Create docker image to run selenium tests on different browser versions

I am currently learning to use Docker to run Selenium tests.
However, running tests on different versions of the browser requires creating our own image.
I tried a few ways but failed to run them.
I used the Dockerfile at the path below:
https://hub.docker.com/r/selenium/node-chrome/~/dockerfile/
and tried to build the image by using the following command:
docker build -t my-chrome-image --build-arg CHROME_DRIVER_VERSION=2.23 --build-arg CHROME_VERSION=google-chrome-beta=53.0.2785.92-1 NodeChrome
Can anyone guide me on how to implement the same?
Regards,
Ashwin Karangutkar
Use
docker build -t my-chrome-image --build-arg CHROME_DRIVER_VERSION=2.23 --build-arg CHROME_VERSION=google-chrome-beta <path_to_the_directory_containing_the_Dockerfile>
I am using elgalu/selenium.
docker run -d --name=grid -p 4444:24444 -p 5900:25900 --shm-size=1g elgalu/selenium
And looking at elgalu's documentation, it looks like you can change the browser version by adding -e FIREFOX_VERSION=38.0.6 to the docker run command.
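Putting the two together, a sketch of the full command (the Firefox version number is just the one from the example above) would be:
docker run -d --name=grid -p 4444:24444 -p 5900:25900 -e FIREFOX_VERSION=38.0.6 --shm-size=1g elgalu/selenium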

How to test docker image with external script

Assume that I have a code repo. That code repo has the following files in it:
Dockerfile
test.sh
Let's say I now build a Docker image from the Dockerfile:
docker build -t my-image .
Now I want to execute test.sh in the context of my-image in a container, yet I have not added test.sh to the Docker image during the build.
How do I run the Docker image and execute test.sh in it? Do I have to mount the repo as a volume first, or is there a quicker way?
A couple of options:
Copy it in (docker cp test.sh <container_id>:<path_file_should_go_inside>) - but then you have to run the file as a separate step; see the sketch after this list.
Mount it in (docker run -v $(pwd)/test.sh:<path_file_should_go_inside> my-image <path_file_should_go_inside>)
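For the copy-in option, a minimal end-to-end sketch (the container name test-target and the /tmp path are just placeholders, and it assumes my-image has a shell and sleep available) looks like this:
docker run -d --name test-target my-image sleep 3600 # keep a container alive to copy into
docker cp test.sh test-target:/tmp/test.sh # copy the script in
docker exec test-target sh /tmp/test.sh # run it as a separate step
docker rm -f test-target # clean up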
If test.sh is not part of the image, then you will have to mount the local repository as a volume.
docker run -v /path/to/local/repo:/tmp my-image /tmp/test.sh

"docker run -dti" with a dumb terminal

updated: added the missing docker attach.
Hi, I am trying to run a Docker container with -dti, but I cannot access it with a terminal set to dumb. Is there a way to change this? (It is currently set to xterm, even though my SSH client is dumb.)
example:
create the container
docker run -dti --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs
cd /my-folder
npm install -g gulp
The output of the last command always contains ANSI escape characters that move the cursor.
I have tried "export TERM=dumb" inside the running container, but it does not work.
Is there a way to "run" this using the dumb terminal?
I am running this from a script on another computer, via (dumb) ssh.
Using -t is what sets this (see https://docs.docker.com/engine/reference/run/#env-environment-variables); however, removing it affects the command prompt (the prompt is not shown).
Possible solution 1: remove the -t and keep the -i. To see if a command has completed, echo out a known token (ENDENDEND), i.e.
docker run -di --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs;echo ENDENDEND
cd /my-folder;echo ENDENDEND
npm install -g gulp;echo ENDENDEND
Not pretty, but it works (there are no escape sequences in the results).
Possible solution 2: use the journal. Docker can log out to the Linux journal, and this can be gathered as commands are executed in the container. (I have yet to fully test this one out; however, the log seems to be a nicer record of what happened.)
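For the journal route, a minimal sketch (assuming a systemd host with journald available, and reusing the container name test from above) is to start the container with the journald log driver and then read its output with journalctl:
docker run -di --log-driver=journald --name test -v /my-folder alpine /bin/ash
# after running commands via docker attach or docker exec, the container's output is in the journal:
journalctl CONTAINER_NAME=test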
update:
Yep -t is the problem.
However, if you want to see the entire process while running a command, maybe this way is better:
docker run -di --name test -v /my-folder alpine /bin/ash
docker exec -it test /bin/ash
Finally, you need to kill the container after all the jobs have finished.
docker run -d means "run the container in the background and print the container ID", not "start the container as a daemon".
I was hitting this issue running Docker on macOS; I had to do two things to stop the terminal/ANSI escape sequences (see the sketch after these notes):
remove the "t" option on the docker run command (from docker run -it ... to docker run -i ...)
ensure bash or sh is used when running the command from a script file on macOS, not the default zsh
Also:
the escape sequences were not always visible in the terminal
even so, they still usually caused content corruption, even with sed brought to bear
they were always shown in my editor
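As a sketch of those two points combined (the packages installed are just the ones from the question), the script on the other machine would force sh and drop -t, feeding the commands over stdin:
#!/bin/sh
# no -t means no pseudo-TTY is allocated, so no cursor-movement escape sequences are emitted
docker run -i --rm alpine /bin/ash <<'EOF'
apk --update add nodejs
npm install -g gulp
EOF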