I'm trying to run the following Google Colab notebook:
https://colab.research.google.com/gist/zsyzzsoft/5fbb71b9bf9a3217576bebae5de46fc2/data-efficient-gans.ipynb?authuser=1#scrollTo=Re5R6VX8VNgo
Colab no longer recognises GPUs with TensorFlow 1.x, so is there any way to get this notebook working again?
I have tried reinstalling TensorFlow 1.x and also upgrading the code to TensorFlow 2, but nothing seems to work.
Google Colab removed support for TensorFlow 1, and the %tensorflow_version 1.x magic no longer works. You have to install a specific TensorFlow 1.x version yourself, for example 1.15 (the last 1.x release):
!pip install tensorflow==1.15
and then create the session explicitly:
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
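If the install succeeds, a quick sanity check (a minimal sketch; whether a 1.x wheel is even installable depends on the Python version of the current Colab runtime) is:
import tensorflow as tf
print(tf.__version__)              # should print the pinned 1.x version
print(tf.test.is_gpu_available())  # True only if TF 1.x can see the runtime's GPU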
For more details please refer to this link. Thank You.
I'm trying to import spacy into Google Colab but getting the error: name '_C' is not defined. Is it a problem with torch somewhere inside?
torch version: 1.13.1
spacy version: 3.5.0
I've tried to upgrade spacy and cython, but it didn't work. What should I do?
As I said, I've tried the solutions from here, here, and here.
When I try to downgrade numpy, I get a dependency error with scipy.
I've also tried to install torch 1.9.0 as in here but it also didn't work.
And I've tried to downgrade to spacy 3.4.3 which runs perfectly on my local machine -- didn't work.
UPD: I just discovered that importing torch shows the same error.
UPD 2: this happened when I uploaded my own notebook. When I open a new, clean notebook, everything works just fine.
I'm new to Kaggle and deep learning. I want to use scikeras on Kaggle, but it seems that it requires TensorFlow >= 2.7.0, while Kaggle is using version 2.6.4.
I tried to upgrade it, but it didn't work. So how can I use a higher version of TensorFlow? Thanks for your help!
I am a university professor trying to learn deep learning for a possible class in the future. I have been using Google Colab with GPU support for the past couple of months. Just recently, the GPU device is not being found. But I am doing everything the same way I have in the past. I can't imagine that I have done anything wrong, because I am just working through tutorials from books and the TensorFlow 2.0 tutorials site.
TensorFlow 2 GPU support on Colab was broken recently due to an upgrade from CUDA 10.0 to CUDA 10.1. As of this afternoon, the issue should be resolved for the TensorFlow builds bundled with Colab. That is, if you run the following magic command:
%tensorflow_version 2.x
then import tensorflow will give you a working, GPU-compatible TensorFlow 2.0 build.
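To confirm that the GPU is actually visible afterwards, a minimal check (assuming a GPU runtime is selected under Runtime > Change runtime type) is:
import tensorflow as tf
print(tf.__version__)
# In TF 2.0 the device-listing API is still under tf.config.experimental:
print(tf.config.experimental.list_physical_devices('GPU'))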
Note, however, if you attempt to install a version of tensorflow using pip install tensorflow-gpu or similar, the result may not work in Colab due to system incompatibilities.
See https://colab.research.google.com/notebooks/tensorflow_version.ipynb for more information.
What is the best way to run TensorFlow 2.0 with AWS SageMaker?
As of today (Aug 7th, 2019) AWS does not provide TensorFlow 2.0 SageMaker containers, so my understanding is that I need to build my own.
What is the best base image to use? Is there an example Dockerfile?
EDIT: Amazon SageMaker does now support TF 2.0 and higher.
SageMaker + TensorFlow docs: https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/using_tf.html
Supported Tensorflow versions (and Docker URIs): https://aws.amazon.com/releasenotes/available-deep-learning-containers-images
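Since TF 2.x images are now provided, the simplest route is the built-in TensorFlow estimator from the SageMaker Python SDK. A rough sketch (assuming SDK v2; the entry point, role ARN, framework version, and S3 path below are placeholders to adapt):
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train.py",                                # your training script
    role="arn:aws:iam::111122223333:role/SageMakerRole",   # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.3",                               # any supported TF 2.x version
    py_version="py37",
)
estimator.fit("s3://my-bucket/training-data")              # placeholder S3 prefix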
Original answer
Here is an example Dockerfile that uses the underlying SageMaker Containers library (this is what is used in the official pre-built Docker images):
FROM tensorflow/tensorflow:2.0.0b1
RUN pip install sagemaker-containers
# Copies the training code inside the container
COPY train.py /opt/ml/code/train.py
# Defines train.py as script entrypoint
ENV SAGEMAKER_PROGRAM train.py
For more information on this approach, see https://docs.aws.amazon.com/sagemaker/latest/dg/build-container-to-train-script-get-started.html
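Once that custom image is built and pushed to ECR, launching a training job with it might look roughly like this (a sketch assuming the SageMaker Python SDK v2; the image URI, role ARN, and S3 path are placeholders):
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/tf2-custom:latest",  # placeholder ECR image
    role="arn:aws:iam::111122223333:role/SageMakerRole",                         # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",
)
# SAGEMAKER_PROGRAM in the Dockerfile tells the container to run train.py
estimator.fit("s3://my-bucket/training-data")                                    # placeholder S3 prefix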
Update on 10 Nov 2019:
There is now a way to use TensorFlow 2 in SageMaker, even though there is no shortcut to start TF 2 directly from the SageMaker console.
Start a conda Python 3 kernel
Make some updates (one in each code cell):
!pip install --upgrade pip # pip 19.0 or higher is required for TF 2
!pip install --upgrade setuptools # Otherwise you'll get annoying warnings about bad installs
Install Tensorflow 2
!pip install --user --upgrade tensorflow
According to the docs, this will install into $HOME.
Note:
If you are using a GPU-based SageMaker instance, replace tensorflow with tensorflow-gpu.
You can now use TF 2 in your instance. This only needs to be done once, as long as the instance stays up.
To test, just run in the next cell:
import tensorflow as tf
print(tf.__version__)
You should see 2.0.0 or higher.
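On a GPU instance you can also confirm that the tensorflow-gpu build actually sees the device (a minimal check):
import tensorflow as tf
print(tf.config.experimental.list_physical_devices('GPU'))  # a non-empty list means the GPU is visible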
As of now, the best image you can use to build on TensorFlow 2.0 is 2.0.0b1, which is the latest TensorFlow 2.0 image available. Please find the link here. There is also a 2.0.0b1-py3 image, which comes with Python 3 (3.5 for Ubuntu 16-based images; 3.6 for Ubuntu 18-based images).
Here from the SageMaker team. To use SageMaker seamlessly, it's recommended that you try out Amazon SageMaker Studio. It's similar to a Jupyter Notebook instance but has a variety of SageMaker services already integrated with it. A huge plus is that you can select from a variety of kernels, and for TensorFlow 2.0 you can pick any of these, which already have all the required packages installed:
Python 3 (TensorFlow 2.1 Python 3.6 CPU Optimized)
Python 3 (TensorFlow 2.1 Python 3.6 GPU Optimized)
Python 3 (TensorFlow 2.3 Python 3.7 CPU Optimized)
Python 3 (TensorFlow 2.3 Python 3.7 GPU Optimized)
In Studio, you can select the kernel from the Kernel drop-down located in the top-right corner of your SageMaker Studio instance. The Studio UI lists the available kernels that are compatible with TensorFlow 2.0. Note that all kernels within SageMaker Studio are continuously tested and work seamlessly with all SageMaker services.
I have installed TensorFlow r1.14 and want to use TF-TRT. However, the following error occurs:
"ModuleNotFoundError: No module named 'tensorflow.contrib.tensorrt'"
when running the sample code. The same error occurs with TensorFlow r1.13. So my question is: do I need to install the tensorflow.contrib.tensorrt library separately? If so, how?
Additionally, I can run the TensorRT sample code, e.g. sampleINT8, successfully. Click here to see my successful sample run.
This leads me to believe that TensorRT is installed properly. However, the TF-TRT still doesn't work.
Any help would be greatly appreciated!
In TF 1.14, TF-TRT was moved to the core from contrib.
You need to import it like this:
from tensorflow.python.compiler.tensorrt import trt_convert as trt
https://github.com/tensorflow/tensorrt/blob/master/tftrt/examples/image-classification/image_classification.py#L22
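For reference, a minimal TF 1.14-style conversion sketch using that module (the SavedModel paths, precision mode, and batch size below are placeholders):
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert an existing SavedModel into a TensorRT-optimized SavedModel (TF 1.14 API)
converter = trt.TrtGraphConverter(
    input_saved_model_dir="/path/to/saved_model",  # placeholder input model
    precision_mode="FP16",
    max_batch_size=8,
)
converter.convert()
converter.save("/path/to/trt_saved_model")         # placeholder output directory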
This is the correct answer for Linux.
However, if you're using Windows: the TensorRT Python API (and therefore TF-TRT) is not supported for Windows at the moment, so the TensorFlow python packages aren't built with TensorRT.
In order to import tensorflow.contrib.tensorrt you need to have tensorflow-gpu version >= 1.7 installed on your system. Maybe you could try installing the tensorflow-gpu library with:
pip install tensorflow-gpu
Check out the Windows section of the GPU documentation as well. Also, I would try updating your TensorFlow version with:
pip install --upgrade tensorflow
to ensure you're up to date there as well. Check out this section of the TensorFlow documentation for additional support.
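A quick way to check what your installed build actually exposes (a minimal sketch; as the other answers note, the contrib path only exists in TF 1.13 and earlier):
import tensorflow as tf
print(tf.__version__)
print(tf.test.is_built_with_cuda())              # True only for a GPU-enabled build
from tensorflow.contrib import tensorrt as trt   # succeeds only on builds that ship contrib TensorRT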
Hopefully that helps!
Two possibilities:
1. Have you installed tensorflow-gpu instead of tensorflow?
2. From your screenshot it looks like you're using Windows. I had the same problem. There seems to be no tensorrt module under contrib in the TF Windows distribution, although Linux has it (I tried 1.13.1).
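To check which variant is actually installed, a quick look at the package list helps (findstr on Windows, grep on Linux):
pip list | findstr /i tensorflow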