Unzipping tar.gz in Google Colab - google-colaboratory

I was working in Google Colab and wanted to unzip the detection checkpoint for TensorFlow 2.x,
ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
Here is the code:
!tar -xzvf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
And after running it I got this error:
/content/gdrive/My Drive/customTF2/data
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-35-5f662a73b4f6> in <module>
4
5 # !wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
----> 6 get_ipython().system('tar -xzvf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz')
4 frames
/usr/lib/python3.8/subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, start_new_session)
1702 if errno_num != 0:
1703 err_msg = os.strerror(errno_num)
-> 1704 raise child_exception_type(errno_num, err_msg, err_filename)
1705 raise child_exception_type(err_msg)
1706
FileNotFoundError: [Errno 2] No such file or directory: '/bin/bash'
What can I do? I tried everything, but nothing helped.
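For reference, an equivalent extraction that avoids spawning /bin/bash at all would be a pure-Python sketch like the one below; it assumes the archive sits in the current working directory.
import tarfile

# Sketch: extract the archive without going through the shell.
# The path is an assumption -- adjust it to wherever the file actually lives.
archive = "ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz"
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(path=".")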

Related

tensorflow.python.framework.errors_impl.ResourceExhaustedError

I'm using an object detection module for classifying images. My specs are as follows:
OS: Ubuntu 18.04 LTS
Python: 3.6.7
VirtualEnv: Version: 16.4.3
Pip3 version inside virtualenv: 19.0.3
TensorFlow Version: 1.13.1
Protoc Version: 3.0.0-9
I'm working in a virtualenv on Windows and in google-colab. This is the error message I get:
python3 legacy/train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config
INFO:tensorflow:global step 1: loss = 18.5013 (48.934 sec/step)
INFO:tensorflow:Finished training! Saving model to disk.
/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/summary/writer/writer.py:386: UserWarning: Attempting to use a closed FileWriter. The operation will be a noop unless the FileWriter is explicitly reopened.
warnings.warn("Attempting to use a closed FileWriter. "
Traceback (most recent call last):
File "legacy/train.py", line 184, in <module>
tf.app.run()
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "legacy/train.py", line 180, in main
graph_hook_fn=graph_rewriter_fn)
File "/home/priyank/venv/models-master/research/object_detection/legacy/trainer.py", line 416, in train
saver=saver)
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/contrib/slim/python/slim/learning.py", line 785, in train
ignore_live_threads=ignore_live_threads)
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/training/supervisor.py", line 832, in stop
ignore_live_threads=ignore_live_threads)
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/training/coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "/home/priyank/venv/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/training/queue_runner_impl.py", line 257, in _run
enqueue_callable()
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1257, in _single_operation_run
self._call_tf_sessionrun(None, {}, [], target_list, None)
File "/home/priyank/venv/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[15,1,1755,2777,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[{{node batch}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
You can try the following fixes:
1. Reduce the image dimensions in case you are using a very high input resolution
2. Try reducing the batch size
3. Check whether any other process is using up your memory
Both the input resolution and the batch size live in the pipeline config (see the sketch below). Could you also please share your config file?
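For illustration only (this is not your actual file), the relevant fields in an SSD-style pipeline config usually look something like the sketch below; the values are placeholders to tune down:
model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        height: 300   # lower these if your source images are very large
        width: 300
      }
    }
  }
}
train_config {
  batch_size: 8   # try a smaller value, e.g. 4 or 2, if you still hit OOM
}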

How to fix "Input shape axis 0 must equal 4, got shape [5]" in tensorflow?

I am running the tensorflow object_detection API in a Docker container with a TITAN Xp. Using the command python object_detection/model_main.py --pipeline_config_path object_detection/train_manhole/faster_rcnn_resnet101_coco.config --model_dir object_detection/train_manhole --alsologtostder, I receive an error.
Here is the error information:
root@a358c8644e9c:~/manhole/models/research# python object_detection/model_main.py --pipeline_config_path object_detection/train_manhole/faster_rcnn_resnet101_coco.config --model_dir object_detection/train_manhole --alsologtostder
/root/manhole/models/research/object_detection/utils/visualization_utils.py:26: UserWarning:
This call to matplotlib.use() has no effect because the backend has already
been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot,
or matplotlib.backends is imported for the first time.
The backend was *originally* set to 'TkAgg' by the following code:
File "object_detection/model_main.py", line 26, in <module>
from object_detection import model_lib
File "/root/manhole/models/research/object_detection/model_lib.py", line 27, in <module>
from object_detection import eval_util
File "/root/manhole/models/research/object_detection/eval_util.py", line 28, in <module>
from object_detection.metrics import coco_evaluation
File "/root/manhole/models/research/object_detection/metrics/coco_evaluation.py", line 20, in <module>
from object_detection.metrics import coco_tools
File "/root/manhole/models/research/object_detection/metrics/coco_tools.py", line 47, in <module>
from pycocotools import coco
File "/root/manhole/models/research/pycocotools/coco.py", line 49, in <module>
import matplotlib.pyplot as plt
File "/usr/local/lib/python3.5/dist-packages/matplotlib/pyplot.py", line 71, in <module>
from matplotlib.backends import pylab_setup
File "/usr/local/lib/python3.5/dist-packages/matplotlib/backends/__init__.py", line 16, in <module>
line for line in traceback.format_stack()
import matplotlib; matplotlib.use('Agg') # pylint: disable=multiple-statements
WARNING:tensorflow:Forced number of epochs for all eval validations to be 1.
WARNING:tensorflow:Expected number of evaluation epochs is 1, but instead encountered `eval_on_train_input_config.num_epochs` = 0. Overwriting `num_epochs` to 1.
WARNING:tensorflow:Estimator's model_fn (<function create_model_fn.<locals>.model_fn at 0x7f0a65a25e18>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From /root/manhole/models/research/object_detection/builders/dataset_builder.py:80: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.parallel_interleave(...)`.
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/sparse_ops.py:1165: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:From /root/manhole/models/research/object_detection/builders/dataset_builder.py:152: batch_and_drop_remainder (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.batch(..., drop_remainder=True)`.
WARNING:tensorflow:From /root/manhole/models/research/object_detection/predictors/heads/box_head.py:93: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /root/manhole/models/research/object_detection/core/losses.py:345: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See `tf.nn.softmax_cross_entropy_with_logits_v2`.
/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gradients_impl.py:112: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2019-03-29 02:38:24.469848: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-03-29 02:38:24.546573: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-03-29 02:38:24.547117: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 11.91GiB freeMemory: 11.41GiB
2019-03-29 02:38:24.547150: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-03-29 02:38:24.709319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-03-29 02:38:24.709370: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-03-29 02:38:24.709377: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-03-29 02:38:24.709670: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11036 MB memory) -> physical GPU (device: 0, name: TITAN Xp, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input shape axis 0 must equal 4, got shape [5]
[[{{node Preprocessor/ResizeToRange/cond/resize_images/unstack}} = Unpack[T=DT_INT32, axis=0, num=4, _device="/device:CPU:0"](Preprocessor/ResizeToRange/cond/resize_images/Shape)]]
[[{{node IteratorGetNext}} = IteratorGetNext[output_shapes=[[1], [1,?,?,3], [1,2], [1,3], [1,100], [1,100,4], [1,100,2], [1,100,2], [1,100], [1,100], [1,100], [1]], output_types=[DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32, DT_BOOL, DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](IteratorV2)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "object_detection/model_main.py", line 109, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "object_detection/model_main.py", line 105, in main
tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 471, in train_and_evaluate
return executor.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 610, in run
return self.run_local()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/training.py", line 711, in run_local
saving_listeners=saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 354, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1207, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1241, in _train_model_default
saving_listeners)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/estimator/estimator.py", line 1471, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 671, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1156, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1255, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python3.5/dist-packages/six.py", line 693, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1240, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1312, in run
run_metadata=run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/monitored_session.py", line 1076, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input shape axis 0 must equal 4, got shape [5]
[[{{node Preprocessor/ResizeToRange/cond/resize_images/unstack}} = Unpack[T=DT_INT32, axis=0, num=4, _device="/device:CPU:0"](Preprocessor/ResizeToRange/cond/resize_images/Shape)]]
[[node IteratorGetNext (defined at object_detection/model_main.py:105) = IteratorGetNext[output_shapes=[[1], [1,?,?,3], [1,2], [1,3], [1,100], [1,100,4], [1,100,2], [1,100,2], [1,100], [1,100], [1,100], [1]], output_types=[DT_INT32, DT_FLOAT, DT_INT32, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32, DT_BOOL, DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](IteratorV2)]]
My Docker environment is:
== cat /etc/issue ===============================================
Linux a358c8644e9c 4.15.0-46-generic #49~16.04.1-Ubuntu SMP Tue Feb 12 17:45:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
VERSION="16.04.6 LTS (Xenial Xerus)"
VERSION_ID="16.04"
VERSION_CODENAME=xenial
== are we in docker =============================================
Yes
== compiler =====================================================
c++ (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
== uname -a =====================================================
Linux a358c8644e9c 4.15.0-46-generic #49~16.04.1-Ubuntu SMP Tue Feb 12 17:45:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
== check pips ===================================================
numpy 1.14.2
protobuf 3.7.1
tensorflow-estimator 1.13.0
tensorflow-gpu 1.12.0
== check for virtualenv =========================================
False
== tensorflow import ============================================
tf.VERSION = 1.12.0
tf.GIT_VERSION = v1.12.0-0-ga6d8ffae09
tf.COMPILER_VERSION = 4.8.5
Sanity check: array([1], dtype=int32)
== env ==========================================================
LD_LIBRARY_PATH /usr/local/cuda-9.0/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
DYLD_LIBRARY_PATH is unset
== nvidia-smi ===================================================
Fri Mar 29 02:57:43 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78 Driver Version: 410.78 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN Xp Off | 00000000:01:00.0 On | N/A |
| 23% 36C P0 69W / 250W | 378MiB / 12192MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
== cuda libs ===================================================
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudart.so.9.0.176
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudart_static.a
Thanks for the help!
Thanks! I fixed it. The cause was that some pictures in the dataset were broken, and some had a channel depth greater than 3. I used a Python script to find the pictures that caused the error above. After removing them, training runs normally.
I faced the same issue; the problem was the dataset (broken images). Some images had incorrect shapes, e.g. 2D shapes where the correct shape should be 3D (height, width, channels).
Deleting the images that were not 3D, by checking each image's shape, solved it.
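A minimal sketch of that check, assuming Pillow and NumPy are available and the dataset is a flat folder of image files (the directory path is a placeholder, and the delete is left commented out):
import os
import numpy as np
from PIL import Image

DATASET_DIR = "path/to/dataset"  # placeholder: point this at your image folder

for name in os.listdir(DATASET_DIR):
    path = os.path.join(DATASET_DIR, name)
    try:
        img = np.array(Image.open(path))
    except Exception:
        print("broken image:", path)  # unreadable / corrupt file
        continue
    # keep only 3-D images with exactly 3 channels, i.e. shape (H, W, 3)
    if img.ndim != 3 or img.shape[2] != 3:
        print("bad shape", img.shape, "->", path)
        # os.remove(path)  # uncomment to actually delete offending files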

Tensorflow build error : Cannot find cudnn.h under ~

I am trying to build tensorflow r1.12 using bazel 0.15 on Redhat 7.5 ppc64le.
I am stuck with the following error.
[u0017649@sys-97184 tensorflow]$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
...
ERROR: error loading package 'tensorflow/tools/pip_package': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package
'@local_config_cuda//cuda': Traceback (most recent call last):
File
"/home/u0017649/files/tensorflow/third_party/gpus/cuda_configure.bzl", line 1447
_create_local_cuda_repository(repository_ctx)
File
"/home/u0017649/files/tensorflow/third_party/gpus/cuda_configure.bzl", line 1187, in _create_local_cuda_repository
_get_cuda_config(repository_ctx)
File
"/home/u0017649/files/tensorflow/third_party/gpus/cuda_configure.bzl", line 911, in _get_cuda_config
_cudnn_version(repository_ctx, cudnn_install_base..., ...)
File
"/home/u0017649/files/tensorflow/third_party/gpus/cuda_configure.bzl", line 582, in _cudnn_version
_find_cudnn_header_dir(repository_ctx, cudnn_install_base...)
File
"/home/u0017649/files/tensorflow/third_party/gpus/cuda_configure.bzl", line 869, in _find_cudnn_header_dir
auto_configure_fail(("Cannot find cudnn.h under %s" ...))
File
"/home/u0017649/files/tensorflow/third_party/gpus/cuda_configure.bzl", line 317, in auto_configure_fail
fail(("\n%sCuda Configuration Error:%...)))
Cuda Configuration Error: Cannot find cudnn.h under /usr/local/cuda-9.2/targets/ppc64le-linux/lib
I do have a soft link for cudnn.h under /usr/local/cuda-9.2/targets/ppc64le-linux/lib, as shown below.
[u0017649@sys-97184 tensorflow]$ ls -l /usr/local/cuda-9.2/targets/ppc64le-linux/lib/cudnn.h
lrwxrwxrwx. 1 root root 57 Feb 20 10:15 /usr/local/cuda-9.2/targets/ppc64le-linux/lib/cudnn.h -> /usr/local/cuda-9.2/targets/ppc64le-linux/include/cudnn.h
Any comments, please?
After reading tensorflow/third_party/gpus/cuda_configure.bzl, I was able to solve this with the following:
$ sudo ln -sf /usr/local/cuda-9.2/targets/ppc64le-linux/include/cudnn.h /usr/include/cudnn.h
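Alternatively, as a sketch (not verified on ppc64le), cuda_configure.bzl also reads the CUDNN_INSTALL_PATH environment variable, so pointing it at the directory that actually contains include/cudnn.h should avoid the extra symlink:
$ export CUDNN_INSTALL_PATH=/usr/local/cuda-9.2/targets/ppc64le-linux
$ ./configure   # give the same cuDNN path when prompted
$ bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package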

pycharm-selenium-python: Unable to start chromedriver service - [WinError 193] %1 is not a valid Win32 application

This question is not a duplicate of
error using selenium chromedriver on windows 7 64 bit as I have tried all solutions mentioned there.
In env\lib\site-packages\selenium\webdriver\common\service.py, consider the following code in the start function:
cmd = [self.path]
cmd.extend(self.command_line_args())
self.process = subprocess.Popen(cmd, env=self.env,
close_fds=platform.system() != 'Windows',
stdout=self.log_file, stderr=self.log_file)
The value for cmd is: <class 'list'>: ['chromedriver', '--port=58808']
Within ../AppData/Local/Programs/Python/Python35/Lib/subprocess.py, in the __init__ function:
self._execute_child(args, executable, preexec_fn, close_fds,
pass_fds, cwd, env,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite,
restore_signals, start_new_session)
args is the only argument passed, with value <class 'list'>: ['chromedriver', '--port=58999'].
But it raises an exception: [WinError 193] %1 is not a valid Win32 application.
This prevents the chromedriver service from starting.
So I changed the args to absolute_path_to_chrome_driver\\chromedriver:
self._execute_child(args, 'absolute_path_to_chrome_driver\\chromedriver', preexec_fn, close_fds,
pass_fds, cwd, env,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite,
restore_signals, start_new_session)
But it still raises the same exception: [WinError 193] %1 is not a valid Win32 application.
This prevents chromedriver from launching.
I even downloaded the latest version of chromedriver, ChromeDriver 2.43 (https://chromedriver.storage.googleapis.com/2.43/chromedriver_win32.zip), but the error persists.
Any clues on this one?
OK, so chromedriver.exe needs to be placed in the ..\env\Scripts folder for this to work; specifying it anywhere on the system PATH did not work here.
When I place something in that folder, I can access it directly by process name, but I cannot get it picked up from a path listed in the system environment variables (or maybe I can, but I am not aware how).
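As an aside, a less invasive route than editing service.py or subprocess.py is to hand Selenium the driver's absolute path directly; a sketch for Selenium 3.x, where the path is a placeholder:
from selenium import webdriver

# placeholder path -- point this at the real location of chromedriver.exe
driver = webdriver.Chrome(executable_path=r"C:\path\to\chromedriver.exe")
driver.get("https://www.example.com")
driver.quit()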

training MNIST with TPU generates errors

Following the Running MNIST on Cloud TPU tutorial:
I get the following error when I try to train:
python /usr/share/models/official/mnist/mnist_tpu.py \
--tpu=$TPU_NAME \
--DATA_DIR=${STORAGE_BUCKET}/data \
--MODEL_DIR=${STORAGE_BUCKET}/output \
--use_tpu=True \
--iterations=500 \
--train_steps=2000
=>
alexryan@alex-tpu:~/tpu$ ./train-mnist.sh
W1025 20:21:39.351166 139745816463104 __init__.py:44] file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery_cache/__init__.py", line 41, in autodetect
from . import file_cache
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery_cache/file_cache.py", line 41, in <module>
'file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth')
ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth
Traceback (most recent call last):
File "/usr/share/models/official/mnist/mnist_tpu.py", line 173, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/usr/share/models/official/mnist/mnist_tpu.py", line 152, in main
tpu_config=tf.contrib.tpu.TPUConfig(FLAGS.iterations, FLAGS.num_shards),
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_config.py", line 207, in __init__
self._master = cluster.master()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/cluster_resolver/python/training/tpu_cluster_resolver.py", line 223, in master
job_tasks = self.cluster_spec().job_tasks(self._job_name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/cluster_resolver/python/training/tpu_cluster_resolver.py", line 269, in cluster_spec
(compat.as_text(self._tpu), response['health']))
RuntimeError: TPU "alex-tpu" is unhealthy: "TIMEOUT"
alexryan@alex-tpu:~/tpu$
The only places where I varied from the instructions were:
Instead of running ctpu in the Cloud Shell, I ran it on my Mac.
>ctpu version
ctpu version: 1.7
The zone where the TPU resided was different than the default zone of my config, so I specified it as an option like so:
>cat ctpu-up.sh
ctpu up --zone us-central1-b --preemptible
I was able to copy the MNIST files to the GCS bucket from the VM without any problem:
alexryan@alex-tpu:~$ gsutil cp -r ./data ${STORAGE_BUCKET}
Copying file://./data/validation.tfrecords [Content-Type=application/octet-stream]...
Copying file://./data/train-images-idx3-ubyte.gz [Content-Type=application/octet-stream]...
I tried the (Optional) Set up TensorBoard >
Running cloud_tpu_profiler
Go to the Cloud Console > TPUs > and click on the TPU you created.
Locate the service account name for the Cloud TPU and copy it, for
example:
service-11111111118@cloud-tpu.iam.myserviceaccount.com
In the list of buckets, select the bucket you want to use, select Show
Info Panel, and then select Edit bucket permissions. Paste your
service account name into the add members field for that bucket and
select the following permissions:
"Cloud Console > TPUs" does not exist as an option
so I used the service account associate with the VM
"Cloud Console > Compute Engine > alex-tpu"
Since the last error message was "RuntimeError: TPU "alex-tpu" is unhealthy: "TIMEOUT"", I used ctpu to delete the VM, re-create it, and run it again.
This time I got more errors:
This one seems like it might be just a warning ...
ImportError: file_cache is unavailable when using oauth2client >=
4.0.0 or google-auth
Not sure about this one ...
ERROR:tensorflow:Operation of type Placeholder (reshape_input) is not supported on the TPU. Execution will fail if this op is used in the graph.
This one seemed to kill the training ...
INFO:tensorflow:Error recorded from training_loop: File system scheme '[local]' not implemented (file: '/tmp/tmpaiggRW/model.ckpt-0_temp_9216e11a1368405795d9b5282775f562') [[{{node save/SaveV2}} = SaveV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT64],
_device="/job:worker/replica:0/task:0/device:CPU:0"](save/ShardedFilename, save/SaveV2/tensor_names, save/SaveV2/shape_and_slices, conv2d/bias/Read/ReadVariableOp, conv2d/kernel/Read/ReadVariableOp, conv2d_1/bias/Read/ReadVariableOp, conv2d_1/kernel/Read/ReadVariableOp, dense/bias/Read/ReadVariableOp, dense/kernel/Read/ReadVariableOp, dense_1/bias/Read/ReadVariableOp, dense_1/kernel/Read/ReadVariableOp, global_step/Read/ReadVariableOp)]]
Caused by op u'save/SaveV2', defined at: File "/usr/share/models/official/mnist/mnist_tpu.py", line 173, in <module>
tf.app.run() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv)) File "/usr/share/models/official/mnist/mnist_tpu.py", line 163, in main
estimator.train(input_fn=train_input_fn, max_steps=FLAGS.train_steps) File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2394, in train
saving_listeners=saving_listeners File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 356, in train
loss = self._train_model(input_fn, hooks, saving_listeners) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1181, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1215, in _train_model_default
saving_listeners) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/estimator/estimator.py", line 1406, in _train_with_estimator_spec
log_step_count_steps=self._config.log_step_count_steps) as mon_sess: File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 504, in MonitoredTrainingSession
stop_grace_period_secs=stop_grace_period_secs) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 921, in __init__
stop_grace_period_secs=stop_grace_period_secs) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 643, in __init__
self._sess = _RecoverableSession(self._coordinated_creator) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1107, in __init__
_WrappedSession.__init__(self, self._create_session()) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 1112, in _create_session
return self._sess_creator.create_session() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 800, in create_session
self.tf_sess = self._session_creator.create_session() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 557, in create_session
self._scaffold.finalize() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 215, in finalize
self._saver.build() File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1106, in build
self._build(self._filename, build_save=True, build_restore=True) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1143, in _build
build_save=build_save, build_restore=build_restore) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 778, in _build_internal
save_tensor = self._AddShardedSaveOps(filename_tensor, per_device) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 369, in _AddShardedSaveOps
return self._AddShardedSaveOpsForV2(filename_tensor, per_device) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 343, in _AddShardedSaveOpsForV2
sharded_saves.append(self._AddSaveOps(sharded_filename, saveables)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 284, in _AddSaveOps
save = self.save_op(filename_tensor, saveables) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 202, in save_op
tensors) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 1690, in save_v2
shape_and_slices=shape_and_slices, tensors=tensors, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3272, in create_op
op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1768, in __init__
self._traceback = tf_stack.extract_stack()
UnimplementedError (see above for traceback): File system scheme '[local]' not implemented (file: '/tmp/tmpaiggRW/model.ckpt-0_temp_9216e11a1368405795d9b5282775f562')
UPDATE
I get this error ...
INFO:tensorflow:Error recorded from training_loop: File system scheme '[local]' not implemented
... even when --use_tpu=False
alexryan@alex-tpu:~/tpu$ cat train-mnist.sh
python /usr/share/models/official/mnist/mnist_tpu.py \
--tpu=$TPU_NAME \
--DATA_DIR=${STORAGE_BUCKET}/data \
--MODEL_DIR=${STORAGE_BUCKET}/output \
--use_tpu=False \
--iterations=500 \
--train_steps=2000
This Stack Overflow answer suggests that the TPU is trying to write to a non-existent local file system instead of the GCS bucket I specified. It is unclear to me why that would be happening.
In the first scenario, it seems the TPU you created is not in a healthy state, so deleting and recreating the TPU (or the entire VM) is the right way to resolve it.
I think the error in the second scenario (where you deleted the VM and re-created it) comes up because your ${STORAGE_BUCKET} is either undefined or not a proper GCS bucket. It must be a GCS bucket; a local path won't work and produces exactly the "File system scheme '[local]' not implemented" error shown above.
More information on creating a GCS bucket is in the section "Create a Cloud Storage bucket" at https://cloud.google.com/tpu/docs/tutorials/mnist
Hope this answers your question.
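If the bucket does not exist yet, a minimal sketch of setting one up looks like this (the bucket name and region are placeholders; pick the region where your TPU lives):
$ export STORAGE_BUCKET=gs://your-bucket-name   # must be a gs:// URL, not a local path
$ gsutil mb -l us-central1 ${STORAGE_BUCKET}    # create the bucket in the TPU's region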
Ran into the same problem and found that there was a typo in the tutorial. If you check mnist_tpu.py you'll find that the params need to be lowercase.
If you change that, it works fine.
python /usr/share/models/official/mnist/mnist_tpu.py \
--tpu=$TPU_NAME \
--data_dir=${STORAGE_BUCKET}/data \
--model_dir=${STORAGE_BUCKET}/output \
--use_tpu=True \
--iterations=500 \
--train_steps=2000