The Docker image does not provide an updated version of TensorFlow. How should I upgrade to the 0.12.0 CPU version?
I tried getting the latest-devel CPU version using:
docker run -it -p 8888:8888 -v /notebooks_proj b.gcr.io/tensorflow/tensorflow:latest-devel
but it is version 0.8.0. How do I get a 0.12.0 Docker image?
The latest-devel and latest tags on gcr.io and Docker Hub should both be up to date (0.12.0-rc1 currently).
For gcr.io:
docker run -it --rm gcr.io/tensorflow/tensorflow:latest-devel python -c "import tensorflow as tf; print(tf.__version__)"
gives 0.12.0-rc1
For Docker Hub:
docker run -it --rm tensorflow/tensorflow:latest-devel python -c "import tensorflow as tf; print(tf.__version__)"
gives 0.12.0-rc1
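If you want to verify programmatically that the version an image reports is new enough, the printed version string can be turned into a comparable tuple; a minimal sketch, where the `parse_version` helper is hypothetical and not part of TensorFlow:

```python
def parse_version(version_string):
    """Parse a version like '0.12.0-rc1' into a comparable tuple (0, 12, 0).

    Pre-release suffixes such as '-rc1' are dropped for the comparison.
    """
    core = version_string.split("-")[0]
    return tuple(int(part) for part in core.split("."))

# The latest-devel image above reports 0.12.0-rc1, which is newer than 0.8.0
assert parse_version("0.12.0-rc1") > parse_version("0.8.0")
```

In practice the string fed to a check like this would come from the `tf.__version__` printed by the docker commands above.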
Related
I'm trying to run the Pix2Pix tutorial for TensorFlow, using the official Docker container. This is how I start my container:
docker run --gpus all -it -p 8888:8888 --rm -v $PWD:/tf -w /tmp tensorflow/tensorflow:latest-gpu-py3-jupyter
I'm not able to get past this cell:
generator = Generator()
tf.keras.utils.plot_model(generator, show_shapes=True, dpi=64)
# output -> Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work.
I have also tried installing pydot and graphviz using pip as well as apt-get. Even though these libraries are installed, I get the same error.
I had the same problem and followed this link.
In short:
Run these commands in the command prompt:
pip install pydot
pip install graphviz
From the Graphviz website, download and install the Graphviz software.
Note: during installation, check the "add to system path" option to add the
bin folder to the PATH variable; otherwise you will have to do it manually.
Then restart Windows.
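After installing, you can check whether the Graphviz `dot` executable is actually reachable on PATH, which is what `plot_model` ultimately needs; a minimal sketch, where the `graphviz_available` helper is hypothetical:

```python
import shutil

def graphviz_available():
    """Return True if the Graphviz 'dot' executable can be found on PATH."""
    return shutil.which("dot") is not None

# If this prints False, the Graphviz bin folder is not on PATH yet
print(graphviz_available())
```

A False result here explains the pydot error even when the pip packages are installed, since pydot only wraps the external `dot` binary.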
As Python 2.7 will be deprecated on 01/01/2020, I was planning to start using Python 3. I installed tensorflow==1.14.0 on the Raspberry Pi successfully, but when I load TensorFlow for further operations it throws a load error.
Python: 3.7 (installed by default with Raspbian OS)
Any suggestions why am I facing this issue?
Thanks for your time
You can't install later versions of TensorFlow on the Raspberry Pi using pip; you have to install from source. I made a video showing the process: https://youtu.be/GNRg2P8Vqqs
Installing TensorFlow requires some extra steps on the Pi's ARM architecture.
This is how I installed tf 2.0 on my Pi 4:
Make your project directory:
cd Desktop
mkdir tf_pi
cd tf_pi
Make a virtual environment:
python3 -m pip install virtualenv
virtualenv env
source env/bin/activate
Run the commands based on https://github.com/PINTO0309/Tensorflow-bin/#usage:
sudo apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev
python3 -m pip install keras_applications==1.0.8 --no-deps
python3 -m pip install keras_preprocessing==1.1.0 --no-deps
python3 -m pip install h5py==2.9.0
sudo apt-get install -y openmpi-bin libopenmpi-dev
sudo apt-get install -y libatlas-base-dev
python3 -m pip install -U six wheel mock
Pick a TensorFlow release from https://github.com/lhelontra/tensorflow-on-arm/releases (I picked 2.0.0). Picking a higher version of TensorFlow (like 2.1.0) requires a higher version of scipy that wasn't compatible with my Raspberry Pi:
wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v2.0.0/tensorflow-2.0.0-cp37-none-linux_armv7l.whl
python3 -m pip uninstall tensorflow
python3 -m pip install tensorflow-2.0.0-cp37-none-linux_armv7l.whl
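Note that the wheel filename encodes which interpreter and platform it targets, which is why this cp37 build matches Raspbian's default Python 3.7. A quick sketch of pulling those tags apart, where the `wheel_tags` helper is hypothetical:

```python
def wheel_tags(filename):
    """Split a wheel filename into its name, version, and compatibility tags."""
    stem = filename[:-len(".whl")] if filename.endswith(".whl") else filename
    name, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {"name": name, "version": version, "python": python_tag,
            "abi": abi_tag, "platform": platform_tag}

tags = wheel_tags("tensorflow-2.0.0-cp37-none-linux_armv7l.whl")
# cp37 means the wheel is built for CPython 3.7; linux_armv7l matches the Pi
assert tags["python"] == "cp37"
assert tags["platform"] == "linux_armv7l"
```

If pip refuses to install the wheel, a mismatch between these tags and your interpreter or architecture is the usual cause.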
RESTART YOUR TERMINAL
Reactivate your virtual environment:
cd Desktop
cd tf_pi
source env/bin/activate
Test:
Open a python interpreter by executing:
python3
import tensorflow
tensorflow.__version__
This should run without errors and output: 2.0.0
I got the same issue today when trying to run a fresh TF installation on my Pi 3+.
I have some questions about cuDNN and the tensorflow-gpu Docker image. Here is what I did on my computer:
I installed docker-CE
I installed nvidia-docker
I pulled the tensorflow:latest-gpu-py3-jupyter image from Docker Hub.
I created a container with the following command:
sudo docker create -ti --runtime=nvidia -p 21001:22 -v
/home/project:/project tensorflow:latest-gpu-py3-jupyter /bin/bash
I started and got into the container, then trained my model, which is built using Keras, but I got an error message: 'Segmentation fault (core dumped)'.
It seems that cuDNN is not installed in my container. Does anybody know why this happened? Does the tensorflow image not include the cuDNN library?
I am trying to run this statement on macOS:
conda install -c conda-forge tensorflow
It just gets stuck at
Solving environment:
and never finishes.
$ conda --version
conda 4.5.12
Nothing worked until I ran this in the conda terminal:
conda upgrade conda
Note that this was for poppler (conda install -c conda-forge poppler)
On Windows 10 I waited about 5-6 minutes, but it depends on the number of installed Python packages and your internet connection.
You can also install it via Anaconda Navigator.
One can also resolve the "Solving environment" issue by using the mamba package manager.
I installed tensorflow-gpu==2.6.2 on Linux (CentOS Stream 8) using the following commands
conda create --name deeplearning python=3.8
conda activate deeplearning
conda install -c conda-forge mamba
mamba install -c conda-forge tensorflow-gpu
To check that the GPU is being used successfully, run either of these commands:
python -c "import tensorflow as tf;print('\n\n\n====================== \n GPU Devices: ',tf.config.list_physical_devices('GPU'), '\n======================')"
python -c "import tensorflow as tf;print('\n\n\n====================== \n', tf.reduce_sum(tf.random.normal([1000, 1000])), '\n======================' )"
References
Conda Forge blog post
mamba install instead of conda install
The same error happened to me. I tried to install TensorBoard from the Anaconda prompt, but it got stuck on solving the environment, so I added these paths to my environment variables:
C:\Anaconda3
C:\Anaconda3\Library\mingw-w64\bin
C:\Anaconda3\Library\usr\bin
C:\Anaconda3\Library\bin
C:\Anaconda3\Scripts
and it worked well.
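A quick way to see which of those directories are still missing from PATH is to check the environment programmatically; a minimal sketch, where the `missing_from_path` helper is hypothetical and the Anaconda root may differ on your machine:

```python
import os

ANACONDA_DIRS = [
    r"C:\Anaconda3",
    r"C:\Anaconda3\Library\mingw-w64\bin",
    r"C:\Anaconda3\Library\usr\bin",
    r"C:\Anaconda3\Library\bin",
    r"C:\Anaconda3\Scripts",
]

def missing_from_path(directories):
    """Return the directories that do not appear in the PATH environment variable."""
    path_entries = os.environ.get("PATH", "").split(os.pathsep)
    return [d for d in directories if d not in path_entries]

# Any directory printed here still needs to be added to PATH
print(missing_from_path(ANACONDA_DIRS))
```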
I followed the instructions by nekomatic and left it running for 1 hour. Yes, it finally finished.
But now I get these conflicts:
Solving environment: failed
UnsatisfiableError: The following specifications were found to be in conflict:
- anaconda==2018.12=py37_0 -> bleach==3.0.2=py37_0
- anaconda==2018.12=py37_0 -> html5lib==1.0.1=py37_0
- anaconda==2018.12=py37_0 -> numexpr==2.6.8=py37h7413580_0
- anaconda==2018.12=py37_0 -> scikit-learn==0.20.1=py37h27c97d8_0
- tensorflow
Use "conda info <package>" to see the dependencies for each package.
I am working through the TensorFlow Serving serving_basic example at:
https://tensorflow.github.io/serving/serving_basic
Setup
Following: https://tensorflow.github.io/serving/setup#prerequisites
Within a Docker container based on ubuntu:latest, I have installed:
bazel:
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install bazel
sudo apt-get upgrade bazel
grpcio:
pip install grpcio
all packages:
sudo apt-get update && sudo apt-get install -y build-essential curl libcurl3-dev git libfreetype6-dev libpng12-dev libzmq3-dev pkg-config python-dev python-numpy python-pip software-properties-common swig zip zlib1g-dev
tensorflow serving:
git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
cd tensorflow
./configure
cd ..
I've built the source with bazel and all tests ran successfully:
bazel build tensorflow_serving/...
bazel test tensorflow_serving/...
I can successfully export the mnist model with:
bazel-bin/tensorflow_serving/example/mnist_export /tmp/mnist_model
And I can serve the exported model with:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model/
The problem
When I test the server and try to connect a client to the model server with:
bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000
I see this output:
root@dc3ea7993fa9:~/serving# bazel-bin/tensorflow_serving/example/mnist_client --num_tests=2 --server=localhost:9000
Extracting /tmp/train-images-idx3-ubyte.gz
Extracting /tmp/train-labels-idx1-ubyte.gz
Extracting /tmp/t10k-images-idx3-ubyte.gz
Extracting /tmp/t10k-labels-idx1-ubyte.gz
AbortionError(code=StatusCode.NOT_FOUND, details="FeedInputs: unable to find feed output images")
AbortionError(code=StatusCode.NOT_FOUND, details="FeedInputs: unable to find feed output images")
Inference error rate is: 100.0%
The "--use_saved_model" flag defaults to "true"; pass --use_saved_model=false when starting the server. This should work:
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --use_saved_model=false --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model/
I mentioned this on the TensorFlow GitHub, and the solution was to remove the original model that had been created. If you're running into this, run
rm -rf /tmp/mnist_model
and rebuild it.