I can't seem to find an exact match for the question I am about to ask here. I just started following a TensorFlow tutorial on YouTube and got stuck at the very beginning. I wrote the code below in my Spyder IDE:
import tensorflow as tf
a = tf.constant(2)
b = tf.constant(3)
x = tf.add(a,b)
#writer = tf.summary.FileWriter('./graphs', tf.get_default_graph())
with tf.Session() as sess:
    writer = tf.summary.FileWriter('./graphs', sess.graph)
    print(sess.run(x))
writer.close()
Then, via the Anaconda terminal, I activated my env (which I had newly created, with all required packages and Spyder installed), typed python tftuts.py, and got:
2018-10-05 11:50:49.431174: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
5
Then I typed tensorboard --logdir="./graphs" --port 6006, as suggested in the tutorial I am watching.
Now, when I go to http://localhost:6006/, the page shows:
I am on Win10, using Python 3.6.6 in an Anaconda env, with TensorFlow 1.10.0.
How can I solve this issue?
I tried
%tensorflow_version 1.15
This worked a couple of days ago, but it stopped working today. The output is:
ValueError Traceback (most recent call last)
<ipython-input-6-24c52e77c597> in <module>()
----> 1 get_ipython().magic('tensorflow_version 1.15')
/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py in magic(self, arg_s)
2158 magic_name, _, magic_arg_s = arg_s.partition(' ')
2159 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2160 return self.run_line_magic(magic_name, magic_arg_s)
2161
2162 #-------------------------------------------------------------------------
/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py in run_line_magic(self, magic_name, line)
2079 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
2080 with self.builtin_trap:
-> 2081 result = fn(*args,**kwargs)
2082 return result
2083
/usr/local/lib/python3.7/dist-packages/google/colab/_tensorflow_magics.py in _tensorflow_version(line)
39
40 Your notebook should be updated to use Tensorflow 2.
---> 41 See the guide at https://www.tensorflow.org/guide/migrate#migrate-from-tensorflow-1x-to-tensorflow-2."""
42 ))
43
ValueError: Tensorflow 1 is unsupported in Colab.
Your notebook should be updated to use Tensorflow 2.
See the guide at https://www.tensorflow.org/guide/migrate#migrate-from-tensorflow-1x-to-tensorflow-2.
Is there any way to fix this, or does it mean that Colab no longer supports TensorFlow 1.x?
Google Colab removed support for TensorFlow 1, and it is no longer possible to use the %tensorflow_version 1.x magic. You must remove this instruction from your code if you have it.
Also, as of this update to the answer, the default Python version is 3.8, which is not compatible with TensorFlow 1.x.
To make everything work you first have to downgrade Python; Python 3.6 should work. As suggested by @s-abbaasi, here's a guide on how to do so:
%%bash
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
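As a quick sanity check (my addition, not part of the original guide), you can confirm the downgrade from a new cell; the interpreter installed to /usr/local should now be the Miniconda one:
!python --version
The output should report Python 3.6.x.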
Then add to path:
import sys
sys.path.append("/usr/local/lib/python3.6/site-packages")
At this point you can manually uninstall and re-install tensorflow through pip:
!pip uninstall tensorflow
!pip install tensorflow-gpu==1.15
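After the reinstall, a quick check I'd add (assuming the steps above succeeded) is to restart the runtime and verify the version and GPU visibility:
import tensorflow as tf
print(tf.__version__)              # should print 1.15.x
print(tf.test.is_gpu_available())  # True only if the GPU runtime and CUDA libraries match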
Doing just this, I sometimes encounter errors due to the CUDA version. If this happens to you, you can execute the following:
!apt install --allow-change-held-packages libcudnn7=7.4.1.5-1+cuda10.0
The most appropriate versions of CUDA and libcudnn to use with the TensorFlow version you want to install can be found here.
The versions available of libcudnn can be found with the following command:
!apt list -a libcudnn7
This will list all libcudnn7 versions available.
I was having the same problems while trying to use StyleGAN2-ADA, which only supports TensorFlow 1.
I found out that, unfortunately, Google Colab removed support for TensorFlow 1 in their release of 2022-08-11:
'Removed support for TensorFlow 1'
You can find more information in their notebook Release-Notes: https://colab.research.google.com/notebooks/relnotes.ipynb
I have a tensor x of shape (4,64,5,5). How can I print (or just visualize) the content of a specific dimension?
What I'm trying to do is
with tf.Session() as sess: print(x[0,:,:,:].eval())
but I got the following error:
FailedPreconditionError: Error while reading resource variable dense_2/kernel from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/dense_2/kernel)
I'm using TensorFlow 1.14.
I tried the same in Colab with TensorFlow 1.14, using tf.ones(shape=(4,64,5,5)), and it works fine. Here's the code:
! pip install tensorflow==1.14
import tensorflow as tf
print(tf.__version__)
x=tf.ones(shape=(4,64,5,5))
with tf.Session() as sess: print(x[:,0,0,0].eval())
Output:
1.14.0
[1. 1. 1. 1.]
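Note that the FailedPreconditionError in the question points at an uninitialized variable (dense_2/kernel), which typically happens when the tensor comes from a Dense/Keras layer but is evaluated in a fresh session. A minimal sketch of that case (my addition, not part of the original answer; the layer here is just a stand-in):
import tensorflow as tf

inputs = tf.ones((4, 64))
x = tf.keras.layers.Dense(5)(inputs)   # x now depends on layer variables (kernel/bias)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # without this, eval raises FailedPreconditionError
    print(sess.run(x[0, :]))
Alternatively, if the variables were already initialized or trained elsewhere, evaluate in the session Keras is using (tf.keras.backend.get_session()) instead of opening a new one.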
Let us know if the issue still persists. Thanks!
There are some answers to this question in a Python environment, but the solutions did not work for my RStudio environment. Here is my code:
library(keras)
library(tensorflow)
use_condaenv("tf")
train_dir = "C:/training_images/"
train_datagen <- image_data_generator(rescale = 1/255)
validation_datagen <- image_data_generator(rescale = 1/255)
train_generator <- flow_images_from_directory(
  train_dir,
  train_datagen,
  target_size = c(150, 150),
  batch_size = 20,
  class_mode = "binary"
)
batch <- generator_next(train_generator)
The code works until the last "batch" line where it explodes like this:
Error in py_iter_next(it, completed) :
ImportError: Could not import PIL.Image. The use of `load_img` requires PIL.
Detailed traceback:
File "C:\Users\mory3\ANACON~1\envs\tf\lib\site-packages\keras_preprocessing\image\iterator.py", line 104, in __next__
return self.next(*args, **kwargs)
File "C:\Users\mory3\ANACON~1\envs\tf\lib\site-packages\keras_preprocessing\image\iterator.py", line 116, in next
return self._get_batches_of_transformed_samples(index_array)
File "C:\Users\mory3\ANACON~1\envs\tf\lib\site-packages\keras_preprocessing\image\iterator.py", line 230, in _get_batches_of_transformed_samples
interpolation=self.interpolation)
File "C:\Users\mory3\ANACON~1\envs\tf\lib\site-packages\keras_preprocessing\image\utils.py", line 108, in load_img
raise ImportError('Could not import PIL.Image. '
R version 3.6.1
Conda version 4.7
Python version 3.7
I had this same problem.
After a few hours of looking, I came up with a solution that worked for me.
I used this code to solve the PIL problem. I tried the Anaconda prompt first, but running this from R is what worked for me:
reticulate::py_install("pillow", envname = "tf")
I then ran into this error:
loaded runtime CuDNN library: 7.4.2 but source was compiled with: 7.6.0.
Make sure you have the correct cuDNN version installed. For me it was CUDA 10 with cuDNN 7.6.0; the error output will tell you which one to use.
Make sure you have cleaned out any leftover path entries in your environment variables from previous installations.
I'm using Windows 10
gpu = GeForce GTX 1060 with Max-Q Design
R = 3.6.1
tensorflow = 1.13
python = 3.7
anaconda = Anaconda3-2019.03-Windows-x86_64.exe
I ended up uninstalling Anaconda altogether, which made troubleshooting the remaining errors in the Python connection to R much simpler.
I had the same problem with the "Deep Learning with R" CNN example on Win7. I solved it like this:
I added the Anaconda3 paths to PATH. In my case it was Windows, so the paths were: C:\Anaconda3\Scripts;C:\Anaconda3\Library\bin. By default there were no conda paths in $PATH.
I installed pillow (it contains PIL) into that Python with:
pip install pillow
I configured r-reticulate.
The answer to "Could not import PIL.Image even if Pillow already installed?" helped me. I already had pillow, but the conda environment wasn't configured properly, so pillow wasn't visible.
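A quick way to confirm Pillow is actually visible (a check I'd add; "tf" is the env name assumed from the question) is to run a short import with that conda environment's Python:
# run with the Python interpreter of the "tf" conda env
import PIL
from PIL import Image
print(PIL.__version__)   # if this import fails, keras image loading will fail too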
Also install NVIDIA CUDA if you don't have it; you need it for TensorFlow as well.
Does anyone know if TensorFlow Lite has GPU support for Python? I've seen guides for Android and iOS, but I haven't come across anything about Python. If tensorflow-gpu is installed and tensorflow.lite.python.interpreter is imported, will the GPU be used automatically?
According to this thread, it is not.
One solution is to convert the tflite model to ONNX and use onnxruntime-gpu.
Convert to ONNX with https://github.com/onnx/tensorflow-onnx:
pip install tf2onnx
python3 -m tf2onnx.convert --opset 11 --tflite path/to/model.tflite --output path/to/model.onnx
Then pip install onnxruntime-gpu
and run it like this:
session = onnxruntime.InferenceSession('/path/to/model.onnx')
raw_output = session.run(['output_name'], {'input_name': img})
You can get the input and output names with:
for i in range(len(session.get_inputs())):
    print(session.get_inputs()[i].name)
and the same with 'get_inputs' replaced by 'get_outputs'.
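Pulling the pieces together, a minimal sketch (the model path, provider list, and dummy input are assumptions for illustration, not from the original answer):
import numpy as np
import onnxruntime

# newer onnxruntime versions want the execution providers listed explicitly
session = onnxruntime.InferenceSession('/path/to/model.onnx',
                                       providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])

for inp in session.get_inputs():
    print('input:', inp.name, inp.shape)
for out in session.get_outputs():
    print('output:', out.name, out.shape)

# dummy input just to exercise the session; replace with a real preprocessed image
shape = [d if isinstance(d, int) else 1 for d in session.get_inputs()[0].shape]
dummy = np.zeros(shape, dtype=np.float32)
raw_output = session.run(None, {session.get_inputs()[0].name: dummy})  # None = all outputs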
You can force the computation to take place on a GPU:
import numpy as np
import tensorflow as tf

with tf.device('/gpu:0'):
    for i in range(10):
        t = np.random.randint(len(x_test))
        ...
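For a self-contained illustration of the same idea, here is a sketch assuming TF 1.x graph mode, with log_device_placement turned on so you can see where each op lands:
import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.ones((1000, 1000))
    b = tf.ones((1000, 1000))
    c = tf.matmul(a, b)

# log_device_placement prints which device each op was assigned to
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c)[0, 0])   # 1000.0 if the matmul ran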
Hope this helps.
I installed tensorboard using pip install tensorboard, and all worked fine.
I ran my network and the writer also worked fine:
with tf.Session() as sess:
    writer = tf.summary.FileWriter("pathtofolder", sess.graph)
    print(sess.run(h))
writer.close()
Now I wanted to see in TensorBoard how it learned, so I inserted
import tensorflow, tensorboard
tensorboard --logdir /pathtofolder
and received the error message:
NameError: name 'logdir' is not defined
You should execute the command in a terminal/cmd, not inside your Python script.
Open your terminal/cmd and run:
tensorboard --logdir ./PATH_TO_THE_EVENT_FILES
If you want to launch TensorBoard from a Python file, see here:
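For reference, a minimal sketch of launching it programmatically with the tensorboard.program API (my own example, not the linked answer; the log directory is a placeholder):
from tensorboard import program

tb = program.TensorBoard()
tb.configure(argv=[None, '--logdir', './PATH_TO_THE_EVENT_FILES'])
url = tb.launch()   # starts TensorBoard in a background thread
print('TensorBoard is listening on', url)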