I have installed TensorFlow on Windows through Docker Toolbox. Everything works well except that I can't use TensorBoard. The command line shows 'Starting TensorBoard 29 on port 6006. You can navigate to http://localhost:6006/'. However, when I open this address in my web browser, it just cannot connect. Does anyone know how to solve this problem?
If you're running TensorBoard inside a Docker container, and trying to use a web browser in Windows to view it, you will need to set up port forwarding from the container to your Windows machine. See this answer for a longer discussion about port forwarding for TensorBoard, but you should be able to make progress by using the following command:
docker run -p 0.0.0.0:6006:6006 -it b.gcr.io/tensorflow/tensorflow
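Note that with Docker Toolbox the container actually runs inside a VirtualBox VM, so localhost on Windows may not reach it; you may need the VM's IP address instead, which you can look up with (assuming the default machine name):
docker-machine ip default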
However, it may be easier to install TensorFlow directly on Windows, and run TensorBoard there. If you install Python 3.5 for Windows, you can install TensorFlow and TensorBoard by running:
pip install tensorflow
You can then run TensorBoard directly from the command prompt, and you will not need to worry about port forwarding. See the Windows installation instructions for more details.
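For example, assuming your event files live under C:\path\to\logs (a placeholder path), something like this from the command prompt should be enough:
tensorboard --logdir C:\path\to\logs
and then open http://localhost:6006/ in your browser.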
I'd like to update the answer here, since I just ran into the same problem on Ubuntu 20.04 and the latest-gpu tensorflow docker image (03e706e09b04).
What worked for me was the following docker run:
docker run -p 8888:8888 -p 6006:6006 -it --rm -v <path_to_summaries>:/opt/summaries tensorflow/tensorflow bash
And then from inside the container:
tensorboard --logdir /opt/summaries/ --bind_all
The server is then accessible at localhost:6006 as one would expect.
The main difference here is, I guess, adding the --bind_all flag to the tensorboard call, which exposes the server to external networks and thus allows the host machine to access it.
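For reference, the same thing should also work without the interactive shell, by passing the tensorboard command straight to docker run (a sketch along the same lines, not tested here):
docker run -p 6006:6006 --rm -v <path_to_summaries>:/opt/summaries tensorflow/tensorflow tensorboard --logdir /opt/summaries --bind_all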
Maybe you should map a volume to the folder with the logs and enter the container with bash:
docker run -v //c/pathto/tf_logs:/tf_logs \
-p 0.0.0.0:6006:6006 -p 8888:8888 -it b.gcr.io/tensorflow/tensorflow bash
cd ..
tensorboard --logdir tf_logs/
Then hit the mapped address in your browser:
http://192.168.99.100:6006
On Windows 10 + WSL2 + Docker using the official tensorflow/tensorflow:latest-gpu-py3-jupyter image, I had to tell TensorBoard to bind to the wildcard address. That is, in the Jupyter notebook, I called:
%tensorboard --logdir logs/ --host 0.0.0.0
After this, I was able to see the embedded dashboard in my notebook.
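For completeness, the %tensorboard magic assumes the notebook extension has been loaded first:
%load_ext tensorboard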
I am having some difficulties in starting airflow using docker-compose with appropriate GPU libraries to run my machine learning tasks.
The airflow-scheduler throws this error:
airflow-scheduler_1 | 2022-03-21 12:33:36.919960: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
Basically, there are no CUDA libraries installed under /usr/local within the airflow container, hence the error. I have installed the nvidia-container-runtime and set the default runtime in the daemon.json file:
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
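For reference, by setting the default runtime in daemon.json I mean roughly the standard nvidia-container-runtime configuration (exact paths may differ on your system):
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl restart docker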
And I have managed to use runtime: nvidia in the docker-compose.yaml file. This way, within the airflow container, I can see nvidia-smi. However, the CUDA libraries are still missing.
Is there a way to install these libraries automatically (ideally FROM tensorflow/tensorflow:latest-gpu), since that image already sets up the CUDA libraries within the container?
On the other hand, if I am not using docker-compose I can start a container with docker:
docker run -it --gpus all tensorflow/tensorflow:latest-gpu
This container has all the libraries that I need. However, I would like to use docker-compose, as running multiple containers and setting up the network is much easier that way. So I would like to avoid this approach.
Also, I could use Docker inside Airflow by mounting the Docker socket into the airflow container, so that I can launch a new container from Airflow. This way I would have all the CUDA libraries installed as well; however, it feels very counter-intuitive, and I am having difficulty understanding why I can't set all of this up within the airflow container in the first place.
import docker

# Connect to the local Docker daemon
client = docker.from_env()

# Run the container
response = client.containers.run(
    # The image you wish to run
    'tensorflow/tensorflow:latest-gpu',
    # The command to run inside the container
    'find / -name "libcudart.so.11.0"',
    # Pass through GPU access
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[['gpu']])
    ]
)
I would appreciate it if you could point me in the right direction.
First of all, I would like to say that I'm new to Docker and everything around it.
I have been wanting to make a container with Apache, PHP and Firebird installed. So far, so good; everything seems to work, and I can get my default page when I type my IP address and :8080 into my browser's address bar. I do so by first starting my container like this:
docker run -p 8080:80 -d apps
Where "apps" is the name of my container.
I have achieved this with my Dockerfile, which looks like this (it might be a bit messy, I'm still learning good practices!):
# Download of base image - ubuntu 20.04
FROM ubuntu:20.04
# Updating/upgrading
RUN apt-get update -y && apt-get upgrade -y
# Installing apache2, php and firebird with modules
RUN DEBIAN_FRONTEND="noninteractive" apt-get install apache2 php libapache2-mod-php -y && \
apt-get install php-curl php-gd php-intl php-json php-mbstring php-xml php-zip -y && \
DEBIAN_FRONTEND="noninteractive" apt-get install firebird3.0-server -y && apt-get install firebird->
# Start up apache in foreground by default
CMD /usr/sbin/apache2 -D FOREGROUND
ENTRYPOINT service apache2 restart && /bin/bash
# Expose apache
EXPOSE 80
Now, my idea was to export this container to another computer and try the same thing. I followed a few tutorials and managed to import my container on the new machine. My problem here is that somehow, the command I previously used doesn't work; it shows me this error:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
Which is odd, because it works just fine on the other machine. I also tried this command, WHICH WORKS:
docker run -i -t -p 8080:80 apps /bin/bash
This one works alright, but I don't want to have to open a bash shell every time I want my Apache page to load. I would want my container to run without me having to get inside it, if that makes sense.
In my opinion, it probably comes from the fact that I only loaded the container, and not the image used to build it (maybe a bad practice? I couldn't find anything about it on Google).
Here is my setup, just in case:
On the first machine (which is the one where I created the image and the container):
Ubuntu 20.04 LTS
Apache/2.4.41
Docker 19.03.8
On the other machine, on which I'm trying to make my container work:
Ubuntu 18.04 LTS
Apache/2.4.29
Docker 19.03.6
Thank you for your patience and time!
apps is your Docker image. If you want to give a name to your container, you can specify --name in the run command, i.e.,
docker run --name container_name -p 8080:80 -d apps
You can use sudo docker save -o apps.tar apps to create a tar file of the image,
then change the permissions of the tar file: sudo chmod 777 apps.tar.
Copy this tar file to the other system you want to try it on, then run:
sudo docker load --input apps.tar
This will load the image; then you can use the previous command to start the container:
docker run -p 8080:80 -d apps
Where "apps" is the name of my container. <- This statement is incorrect and perhaps the misunderstood concept that leads you to the problem.
apps is the name of the image, not the name of the container. On the host on which you can run the container, you must have built that image from the Dockerfile that you shared using the command:
docker build -t apps .
Copy the Dockerfile to the host where you cannot run the container, build the image there as well, and try running the container again.
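For example, on the second machine (assuming the Dockerfile sits in the current directory):
docker build -t apps .
docker run -p 8080:80 -d apps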
I am rather new to docker images and am trying to set up a selenium/standalone-firefox image linked to a local folder.
I'm running Docker version 19.03.2, build 6a30dfc on Windows 10 and have unsuccessfully tried to figure out the correct use of the docker run -v syntax, because what I found is either unspecific (i.e. too little context for me to make sense of it) or for the wrong platform.
Running docker as admin in the cmd, I used docker run -d -v LOCAL_PATH:C:\Users\Public.
This throws docker: Error response from daemon: invalid mode: \Users\Public as an error message.
I want to bind the running container to the folder C:\Users\Public (or another folder on the host machine - this is for illustration purposes).
Can someone point me to the (I fear obvious) mistake I'm making? Essentially, I want the container's output data (for later scraping) to be stored in the host machine's folder C:\Users\Public. The container's output folder should be named myfolder.
** EDIT **
Digging around, I found this (see Volume Mapping).
I have thus tried the following code:
>docker run -d -p 4444:4444 --name selenium-hub selenium/hub
>docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
While the former works fine (it only runs the container), the latter throws the error:
docker: Error response from daemon: Drive has not been shared.
Docker for Windows (and Mac) requires you to share drives to be able to volume mount - https://docs.docker.com/docker-for-windows/ (under Shared drives).
You should be able to find it under your Docker Settings > Shared Drives. Ensure your C:\ is selected and restart the daemon. After that, you can run:
docker run -d --link selenium-hub:hub -v C:/Users/Public:/home/seluser/Downloads selenium/node-chrome
Based on the documentation:
https://github.com/SeleniumHQ/docker-selenium
this path does not exist in the container, and it is a Linux container:
"C:\Users\Public\Documents\TMP_DOCKERS\firefox selenium/standalone-firefo"
I'm struggling to understand the workflow here for TF Serving.
Official docs say to “docker pull tensorflow/serving”. But they also say to “git clone https://github.com/tensorflow/serving.git”
Which one should I use? I assume the git version is so I can build my own custom serving image?
When I pull the official image from docker and run the container, why can’t I access the root? Is it because I haven’t “built it” properly yet?
If you have added some custom code, then clone first and then build the image.
If you want to deploy the image directly, pull the image and run it.
BTW, what do you mean by "access the root"? AFAIK, root is the default user in a container.
I think that is a good observation.
The only place where I feel cloning the GitHub repository ("https://github.com/tensorflow/serving.git") is required is if you want to run the examples like 'half_plus_two' or 'half_plus_three', or the examples mentioned in this link:
https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example.
Apart from that, as far as I know, pulling the Docker image should do everything needed.
Even building a custom Docker image from our custom model doesn't require cloning the GitHub repo.
The commands for building a custom Docker image are shown below:
sudo docker run -d --name sb tensorflow/serving
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export sb:/models/Premade_Estimator_Export
sudo docker commit --change "ENV MODEL_NAME Premade_Estimator_Export" sb iris_container
sudo docker kill sb
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/TF_Serving/Premade_Estimator_Export,target=/models/Premade_Estimator_Export -e MODEL_NAME=Premade_Estimator_Export -t tensorflow/serving &
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export/1556272508 --all
curl http://localhost:8501/v1/models/Premade_Estimator_Export #To get the status of the model
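Once the model is up, a prediction request can be sent to the REST API. The exact payload depends on your model's signature; the feature names below are only an assumption based on the iris premade estimator:
curl -d '{"instances": [{"SepalLength": 5.1, "SepalWidth": 3.3, "PetalLength": 1.7, "PetalWidth": 0.5}]}' \
  -X POST http://localhost:8501/v1/models/Premade_Estimator_Export:predict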
Regarding access to root: if I understand correctly, you don't want to have to prefix every docker command with sudo. Please follow the steps mentioned below to run Docker commands without sudo.
i. Add docker group if it does not already exist
ii. Add the connected user $USER to the docker group. Below are the commands to be run in the Terminal:
sudo groupadd docker
sudo usermod -aG docker $USER
iii. Reboot your PC and you should be able to execute Docker commands without sudo.
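Alternatively (at least on most Linux setups), you can pick up the new group membership in the current shell without rebooting:
newgrp docker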
When I run the example from the Docker doc in the "Viewing our web application container" section, i.e.,
docker run -d -P training/webapp python app.py
...I'm able to view the "Hello World" output in a browser. Success. This seems to indicate that the network I'm on may not be the problem.
Now I'm trying to view a container that runs a webdriver suite (test automation of a browser). Based on the output of docker logs -f, the webdriver suite runs to completion. But when I try to point a browser at the webdriver container (which is running the browser), I get an error saying:
ERR_CONNECTION_REFUSED
Here are the steps I'm following:
Start the webdriver container with this command:
docker run -d -p 8080:5000 "/bin/bash" "-c" "/dir1/dir2/filename.sh $PARAMETER1 $PARAMETER2"
Point a browser to:
http://subdomain.mydomain.com:5000
Docker output:
user#server$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2fa83fc0401a 65525ab9ad78 "/bin/bash -c '/opt/y" 55 minutes ago Up 55 minutes 2222/tcp, 0.0.0.0:8080->5000/tcp
user#server$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' 2fa83fc0401a
111.22.33.4444
Other info:
Server config: Ubuntu 14.04
Docker version: 1.8.1, build d12ea79
I've reviewed the following questions but I'm not running on a VM and I'm not running NodeJS.
Unable to view rails app running in docker container from browser
Docker: Unable to specify port for a running container
Does anyone have suggestions on how I might troubleshoot this problem? Any assistance gratefully accepted.
:) jay
Update 1:
Based on the NodeJS question noted above, I'm thinking that I'm not setting a port correctly in the Dockerfile. Maybe this is as simple as setting the correct port for Selenium?
Update 2: as @hunter noted, I had the ports in the wrong order, but switching the ports does not resolve the problem. I think the bigger problem is that I was assigning the wrong port. So, I changed docker run -d -p 8080:5000 to docker run -d -P. When I did that, I got the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
f375251b61d7 65525ab9ad78 "/bin/bash -c '/opt/y" About an hour ago Up About an hour 0.0.0.0:33073->2222/tcp
I then pointed the browser to that port: http://subdomain.mydomain.com:33073
But I still get the same error: ERR_CONNECTION_REFUSED
I think you're using the wrong port - the external port is 8080 not 5000.
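That is, with -p 8080:5000 the container's port 5000 is published on host port 8080, so the URL should be:
http://subdomain.mydomain.com:8080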