I'm wondering whether it is possible to use both TensorFlow 1.0 and TensorFlow 2.0 in ROS 2.
How can I run an open-source project (which uses TensorFlow 2.0 or later) in a TensorFlow 1.0 environment? Is that even possible?
What is the best way to run TensorFlow 2.0 with AWS SageMaker?
As of today (Aug 7th, 2019) AWS does not provide TensorFlow 2.0 SageMaker containers, so my understanding is that I need to build my own.
What is the best base image to use? Is there an example Dockerfile?
EDIT: Amazon SageMaker now supports TF 2.0 and higher.
SageMaker + TensorFlow docs: https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/using_tf.html
Supported TensorFlow versions (and Docker URIs): https://aws.amazon.com/releasenotes/available-deep-learning-containers-images
Original answer
Here is an example Dockerfile that uses the underlying SageMaker Containers library (this is what is used in the official pre-built Docker images):
FROM tensorflow/tensorflow:2.0.0b1
RUN pip install sagemaker-containers
# Copies the training code inside the container
COPY train.py /opt/ml/code/train.py
# Defines train.py as script entrypoint
ENV SAGEMAKER_PROGRAM train.py
For more information on this approach, see https://docs.aws.amazon.com/sagemaker/latest/dg/build-container-to-train-script-get-started.html
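For illustration, here is a rough sketch (not part of the original answer) of how such a custom image could be used to launch a training job with the SageMaker Python SDK v2; the image URI, IAM role, instance type, and S3 path are placeholders you would replace with your own:
# Sketch only: launch a training job using the custom image built from the
# Dockerfile above (assumes SageMaker Python SDK v2; all identifiers below
# are placeholders).
import sagemaker
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/tf2-train:latest",  # placeholder ECR image
    role="arn:aws:iam::123456789012:role/SageMakerRole",                        # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    sagemaker_session=sagemaker.Session(),
)

# SAGEMAKER_PROGRAM in the Dockerfile points at train.py, so fit() runs it.
estimator.fit({"training": "s3://my-bucket/training-data"})  # placeholder S3 path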
Update on 10 Nov 2019:
There is now a way to use TensorFlow 2 in SageMaker, even though there is no shortcut to start TF 2 directly from the SageMaker console.
Start a conda Python 3 kernel
Make some updates (one in each code cell):
!pip install --upgrade pip # pip 19.0 or higher is required for TF 2
!pip install --upgrade setuptools # Otherwise you'll get annoying warnings about bad installs
Install TensorFlow 2
!pip install --user --upgrade tensorflow
According to the docs, this installs TensorFlow under $HOME (because of the --user flag).
Note:
If you are using a GPU-based SageMaker instance, replace tensorflow with tensorflow-gpu.
You can now use TF 2 in your instance. This only needs to be done once, as long as the instance stays up.
To test, just run in the next cell:
import tensorflow as tf
print(tf.__version__)
You should see 2.0.0 or higher.
As of now, the best image you can use for TensorFlow 2.0 is 2.0.0b1, which is the latest TensorFlow 2.0 image currently available. Please find the link here. There is also a 2.0.0b1-py3 image, which comes with Python 3 (3.5 for Ubuntu 16-based images; 3.6 for Ubuntu 18-based images).
Here from the SageMaker team. To use SageMaker seamlessly, it's recommended that you try out Amazon SageMaker Studio. It's similar to a Jupyter Notebook instance but has a variety of SageMaker services already integrated within it. A huge plus point is that you can select from a variety of kernels and for TensorFlow 2.0 you can select any of these that already have all the required packages installed:
Python 3 (TensorFlow 2.1 Python 3.6 CPU Optimized)
Python 3 (TensorFlow 2.1 Python 3.6 GPU Optimized)
Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized)
Python 3 (TensorFlow 2.3 Python 3.7 GPU Optimized)
In Studio, you can select a kernel from the Kernel drop-down located in the top-right corner of your SageMaker Studio instance:
A screenshot of the SageMaker Studio UI shows the available kernels that are compatible with TensorFlow 2.0. Note that all kernels within SageMaker Studio are continuously tested and work seamlessly with all SageMaker services.
In inference mode, is the C++ API faster than the Python API? In some issues, I've seen reports that C++ is slower than Python. Why? Is this right?
OS: Ubuntu 16.04
TensorFlow: 1.12.0 (binary)
CUDA/cuDNN: 9.0 / 7.0
Issue 1
Issue 2
I think you just need to do a few warm-up predictions. After the second or third prediction, you will notice that the time decreases.
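To illustrate, here is a minimal sketch of the warm-up pattern with the TF 1.x Python API (a toy graph stands in for the real model; shapes and iteration counts are arbitrary):
# Sketch: run a few warm-up predictions before timing (TF 1.x style).
import time
import numpy as np
import tensorflow as tf

# Toy graph standing in for the real model.
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3), name="input")
y = tf.identity(tf.reduce_mean(x), name="output")

with tf.Session() as sess:
    dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)

    # Warm-up: the first runs pay for CUDA context creation, memory allocation
    # and kernel autotuning, so they are much slower than steady state.
    for _ in range(3):
        sess.run(y, feed_dict={x: dummy})

    # Steady-state timing.
    start = time.time()
    for _ in range(100):
        sess.run(y, feed_dict={x: dummy})
    print("avg latency: %.2f ms" % ((time.time() - start) / 100 * 1000))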
I want to convert my TensorFlow 1.1-based model to TensorFlow Lite in order to serve the model locally and remotely for a PWA. The official guide only offers Python APIs for 1.11 at the earliest. The command-line tools only seem to work starting at 1.7. Is it possible to convert a 1.1 model to TensorFlow Lite? Has anyone had experience with this?
The TF model is an out-of-the-box pre-trained model using BiDAF. I am having difficulty serving the full TF app on Heroku, which is unable to run it. I would like to try a TF Lite app to see if hosting it locally will make it faster and easier to set up as a PWA.
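For reference, this is roughly what the Python conversion API mentioned above looks like (a sketch assuming a TF release new enough to ship tf.lite.TFLiteConverter, roughly 1.13+, and a SavedModel export; the paths are placeholders):
# Sketch of the TF Lite Python converter (paths are placeholders; the 1.1
# model would first have to be re-exported as a SavedModel in a newer TF).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("export/saved_model")
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)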
I would like to use TensorFlow 1.3 (and maybe 1.4) on Cloud ML. I'm running jobs on multi-GPU machines on Cloud ML.
I do that by specifying the TensorFlow version in setup.py, as shown below:
from setuptools import setup

REQUIRED_PACKAGES = ['tensorflow==1.3.0']

setup(
    name='my-image-classification',
    install_requires=REQUIRED_PACKAGES,
    version='1.0',
    packages=['my_image_classification',
              'my_image_classification/foo',
              'my_image_classification/bar',
              'my_image_classification/utils'],
)
What cuDNN version is installed on Cloud ML? Is it compatible with TensorFlow 1.3 and later?
I was able to start the jobs, but performance is 10x lower than expected, and I'm curious whether there is a problem with how the underlying libraries are linked.
Edit:
I'm pretty confident now that the cuDNN version on Cloud ML doesn't match what TensorFlow 1.3 requires. I noticed that TensorFlow 1.3 jobs are missing the "Creating TensorFlow device (/gpu:0...)" logs that appear when I run a job with the default TensorFlow available on Cloud ML.
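As a quick sanity check (not from the original post), device-placement logging in the TF 1.x API shows whether ops actually land on the GPU:
# Sketch: log which device each op runs on (TF 1.x API).
import tensorflow as tf

config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0, 3.0], name="a")
    b = tf.constant([4.0, 5.0, 6.0], name="b")
    # With a working GPU setup, the log should place this op on /device:GPU:0.
    print(sess.run(a + b))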
DISCLAIMER: using anything other than 1.0 or 1.2 is not officially supported as of 2017/11/01.
You need to specify the GPU-enabled version of TensorFlow:
REQUIRED_PACKAGES = ['tensorflow-gpu==1.3.0']
But the version of pip is out of date, so you need to force it to update first.
I just upgraded TF from 1.0 to 1.3 (pip install --upgrade tensorflow). I know Keras 2.0 became part of TensorFlow as of TF 1.2. However, when I import keras and check its version, it still shows 1.2. Am I supposed to upgrade Keras as well? If so, what does "the Keras API will now become available directly as part of TensorFlow, starting with TensorFlow 1.2" mean?
Nope, you don't need to install keras 2.0 separately. (See: https://www.tensorflow.org/guide/keras)
Do this:
import tensorflow as tf
model = tf.keras.Sequential()
Don't do this (unless you really need framework-independent code):
import keras
model = keras.Sequential()
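To see the difference in practice, you can check which Keras actually ships inside your TensorFlow install (the standalone keras package keeps its own version number, which is why it can still report 1.2). A small check, assuming a TF version where tf.keras is available:
import tensorflow as tf

print(tf.__version__)        # version of TensorFlow itself
print(tf.keras.__version__)  # version of the Keras API bundled with it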