Continuous Machine Learning pipeline broken by TensorFlow requirements installation

I am doing Continuous Machine Learning (https://cml.dev/) on my own GitLab server. My goal is to test a basic Continuous Machine Learning pipeline with a Python script as an example.
My .gitlab-ci.yml file is a basic one:
stages:
  - cml_run

cml:
  stage: cml_run
  image: dvcorg/cml-py3:latest
  script:
    - pip3 install -r requirements.txt
    - python train.py
    - cat metrics.txt >> report.md
    - cml-publish confusion_matrix.png --md >> report.md
    - cml-send-comment report.md
pandas, sklearn, and Keras from requirements.txt install successfully, but the pipeline then breaks on the TensorFlow installation:
$ pip3 install -r requirements.txt
Collecting pandas
Downloading pandas-1.0.5-cp36-cp36m-manylinux1_x86_64.whl (10.1 MB)
Collecting sklearn
Downloading sklearn-0.0.tar.gz (1.1 kB)
Collecting keras
Downloading Keras-2.4.3-py2.py3-none-any.whl (36 kB)
Collecting tensorflow
Downloading tensorflow-2.2.0-cp36-cp36m-manylinux2010_x86_64.whl (516.2 MB)
ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 188, in _main
status = self.run(options, args)
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/req_command.py", line 185, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 333, in run
reqs, check_supported_wheels=not options.target_dir
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/resolution/legacy/resolver.py", line 179, in resolve
discovered_reqs.extend(self._resolve_one(requirement_set, req))
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/resolution/legacy/resolver.py", line 362, in _resolve_one
abstract_dist = self._get_abstract_dist_for(req_to_install)
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/resolution/legacy/resolver.py", line 314, in _get_abstract_dist_for
abstract_dist = self.preparer.prepare_linked_requirement(req)
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 469, in prepare_linked_requirement
hashes=hashes,
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 259, in unpack_url
hashes=hashes,
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 130, in get_http_url
link, downloader, temp_dir.path, hashes
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/prepare.py", line 281, in _download_http_url
for chunk in download.chunks:
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/progress_bars.py", line 166, in iter
for x in it:
File "/usr/local/lib/python3.6/dist-packages/pip/_internal/network/utils.py", line 39, in response_chunks
decode_content=False,
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 564, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/urllib3/response.py", line 507, in read
data = self._fp.read(amt) if not fp_closed else b""
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 65, in read
self._close()
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/cachecontrol/filewrapper.py", line 52, in _close
self.__callback(self.__buf.getvalue())
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/cachecontrol/controller.py", line 309, in cache_response
cache_url, self.serializer.dumps(request, response, body=body)
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/cachecontrol/serialize.py", line 72, in dumps
return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)])
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/msgpack/__init__.py", line 35, in packb
return Packer(**kwargs).pack(o)
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/msgpack/fallback.py", line 936, in pack
self._pack(obj)
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/msgpack/fallback.py", line 920, in _pack
len(obj), dict_iteritems(obj), nest_limit - 1
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/msgpack/fallback.py", line 1021, in _pack_map_pairs
self._pack(v, nest_limit - 1)
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/msgpack/fallback.py", line 920, in _pack
len(obj), dict_iteritems(obj), nest_limit - 1
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/msgpack/fallback.py", line 1021, in _pack_map_pairs
self._pack(v, nest_limit - 1)
File "/usr/local/lib/python3.6/dist-packages/pip/_vendor/msgpack/fallback.py", line 865, in _pack
return self._buffer.write(obj)
MemoryError
Any ideas on how to overcome this issue with the CML pipeline on GitLab?

It is difficult to diagnose without knowing more about your runner configuration, but based on the MemoryError message, it seems the runner does not have enough memory to install TensorFlow in addition to your other project dependencies.
You can try installing TF with the --no-cache-dir flag.
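For example, a minimal tweak to the job above (pip's --no-cache-dir skips the download cache that the traceback shows running out of memory while serializing the 516 MB TensorFlow wheel):
  script:
    - pip3 install --no-cache-dir -r requirements.txt
    - python train.py
    - cat metrics.txt >> report.md
    - cml-publish confusion_matrix.png --md >> report.md
    - cml-send-comment report.md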

Related

Cannot install petsc4py on Ubuntu 20.04

After successfully installing and testing PETSc I went ahead and tried to install petsc4py with:
$ sudo python3 -m pip install petsc4py
but got loads of errors. Here are the messages:
Collecting petsc4py
Using cached petsc4py-3.16.1.tar.gz (2.3 MB)
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-yg5szcfl/petsc4py/setup.py'"'"'; __file__='"'"'/tmp/pip-install-yg5szcfl/petsc4py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-yg5szcfl/petsc4py/pip-egg-info
cwd: /tmp/pip-install-yg5szcfl/petsc4py/
Complete output (54 lines):
running egg_info
creating /tmp/pip-install-yg5szcfl/petsc4py/pip-egg-info/petsc4py.egg-info
writing /tmp/pip-install-yg5szcfl/petsc4py/pip-egg-info/petsc4py.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-install-yg5szcfl/petsc4py/pip-egg-info/petsc4py.egg-info/dependency_links.txt
writing requirements to /tmp/pip-install-yg5szcfl/petsc4py/pip-egg-info/petsc4py.egg-info/requires.txt
writing top-level names to /tmp/pip-install-yg5szcfl/petsc4py/pip-egg-info/petsc4py.egg-info/top_level.txt
writing manifest file '/tmp/pip-install-yg5szcfl/petsc4py/pip-egg-info/petsc4py.egg-info/SOURCES.txt'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-yg5szcfl/petsc4py/setup.py", line 289, in <module>
main()
File "/tmp/pip-install-yg5szcfl/petsc4py/setup.py", line 286, in main
run_setup()
File "/tmp/pip-install-yg5szcfl/petsc4py/setup.py", line 135, in run_setup
setup(packages = ['petsc4py',
File "/usr/local/lib/python3.8/dist-packages/setuptools/__init__.py", line 155, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/dist-packages/setuptools/command/egg_info.py", line 298, in run
self.find_sources()
File "/usr/local/lib/python3.8/dist-packages/setuptools/command/egg_info.py", line 305, in find_sources
mm.run()
File "/usr/local/lib/python3.8/dist-packages/setuptools/command/egg_info.py", line 540, in run
self.add_defaults()
File "/usr/local/lib/python3.8/dist-packages/setuptools/command/egg_info.py", line 577, in add_defaults
sdist.add_defaults(self)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/command/sdist.py", line 226, in add_defaults
self._add_defaults_python()
File "/usr/local/lib/python3.8/dist-packages/setuptools/command/sdist.py", line 111, in _add_defaults_python
build_py = self.get_finalized_command('build_py')
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/cmd.py", line 299, in get_finalized_command
cmd_obj.ensure_finalized()
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/usr/local/lib/python3.8/dist-packages/setuptools/command/build_py.py", line 29, in finalize_options
orig.build_py.finalize_options(self)
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/command/build_py.py", line 43, in finalize_options
self.set_undefined_options('build',
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/cmd.py", line 287, in set_undefined_options
src_cmd_obj.ensure_finalized()
File "/usr/local/lib/python3.8/dist-packages/setuptools/_distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/tmp/pip-install-yg5szcfl/petsc4py/conf/baseconf.py", line 411, in finalize_options
self.petsc_dir = config.get_petsc_dir(self.petsc_dir)
File "/tmp/pip-install-yg5szcfl/petsc4py/conf/baseconf.py", line 349, in get_petsc_dir
petsc_dir = petsc.get_petsc_dir()
AttributeError: module 'petsc' has no attribute 'get_petsc_dir'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
There are so many error messages that I have no idea where I should start. Any help? The installation should be straightforward; I don't know what's going on. Trying to fix the package (or its setup.py) manually is futile. I'm using Python 3.8.10.
Any hints will be much appreciated.
The easiest solution I found for this issue in one of my Dockerfiles is simply to use apt-get install -y python3-petsc4py-complex.
(If you want/have the real PETSc build, use apt-get install -y python3-petsc4py-real instead.)
For those who are interested, the same idea can be applied for SLEPc with apt-get install -y python3-slepc4py-complex.
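For context, a minimal Dockerfile sketch of this approach (assuming a base image whose repositories ship these packages, e.g. Ubuntu 20.04 with universe enabled):
FROM ubuntu:20.04
# Install the complex build of petsc4py from the distribution packages instead of pip
RUN apt-get update && apt-get install -y python3-petsc4py-complex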
Feel free to tell me if it works! If not, I will try to solve this on a fresh Dockerfile.

How can I use a GPU in Google Colab to train a spaCy relation extraction model? [E002] Can't find factory for 'transformer' for language English (en)

I am running a spaCy relation extraction model on Google Colab. It works when I use !spacy project run all or !spacy project run train_cpu, but when I run !spacy project run train_gpu it returns the following error:
================================= train_gpu =================================
Running command: /usr/bin/python3 -m spacy train configs/rel_trf.cfg --output training --paths.train data/train.spacy --paths.dev data/dev.spacy -c ./scripts/custom_functions.py --gpu-id 0
ℹ Saving to output directory: training
ℹ Using GPU: 0
=========================== Initializing pipeline ===========================
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/spacy/__main__.py", line 4, in <module>
setup_cli()
File "/usr/local/lib/python3.7/dist-packages/spacy/cli/_util.py", line 71, in setup_cli
command(prog_name=COMMAND)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/dist-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/usr/local/lib/python3.7/dist-packages/spacy/cli/train.py", line 45, in train_cli
train(config_path, output_path, use_gpu=use_gpu, overrides=overrides)
File "/usr/local/lib/python3.7/dist-packages/spacy/cli/train.py", line 72, in train
nlp = init_nlp(config, use_gpu=use_gpu)
File "/usr/local/lib/python3.7/dist-packages/spacy/training/initialize.py", line 41, in init_nlp
nlp = load_model_from_config(raw_config, auto_fill=True)
File "/usr/local/lib/python3.7/dist-packages/spacy/util.py", line 531, in load_model_from_config
validate=validate,
File "/usr/local/lib/python3.7/dist-packages/spacy/language.py", line 1784, in from_config
raw_config=raw_config,
File "/usr/local/lib/python3.7/dist-packages/spacy/language.py", line 794, in add_pipe
validate=validate,
File "/usr/local/lib/python3.7/dist-packages/spacy/language.py", line 652, in create_pipe
raise ValueError(err)
ValueError: [E002] Can't find factory for 'transformer' for language English (en). This usually happens when spaCy calls `nlp.create_pipe` with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator `@Language.component` (for function components) or `@Language.factory` (for class components).
Available factories: attribute_ruler, tok2vec, merge_noun_chunks, merge_entities, merge_subtokens, token_splitter, doc_cleaner, parser, beam_parser, entity_linker, ner, beam_ner, entity_ruler, lemmatizer, tagger, morphologizer, senter, sentencizer, textcat, spancat, textcat_multilabel, relation_extractor, en.lemmatizer
I used both of the following installs (interchangeably) in case the GPU wasn't being picked up correctly:
!pip install -U spacy[cuda101]
#!pip install -U spacy-nightly --pre
You haven't installed spacy-transformers. The easiest way to fix this is probably to run spacy download en_core_web_trf.
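For example, a sketch of the Colab cells (assuming spaCy v3; the download command pulls in spacy-transformers as a dependency of the model, and the explicit pip line is an alternative if you only want the library):
!python -m spacy download en_core_web_trf
!pip install -U spacy-transformers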
I would recommend you check the install quickstart again - I don't think spacy-nightly has been updated since v3 was released almost a year ago. Also check the Discussions FAQ - it's been a while since we've heard reports of it, but a while ago you had to specifically not install cupy (that is, not use pip install spacy[cuda101]) in order to get GPU support on Colab.

Issues while using Snappy for TensorFlow preprocessing using Beam IO

While using Apache Beam IO to preprocess data, the snappy library was a nice-to-have module for compression, but the file transformation doesn't seem to work because the crc32c function cannot be found in the library. I'm using snappy 0.5.2.
The error looks like this:
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
ERROR:root:Exception at bundle <apache_beam.runners.direct.bundle_factory._Bundle object at 0x7f1dd1d60e50>, due to an exception.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/direct/executor.py", line 312, in call
side_input_values)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/direct/executor.py", line 347, in attempt_call
evaluator.process_element(value)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/direct/transform_evaluator.py", line 551, in process_element
self.runner.process(element)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/common.py", line 390, in process
self._reraise_augmented(exn)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/common.py", line 388, in process
self.do_fn_invoker.invoke_process(windowed_value)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/common.py", line 281, in invoke_process
self._invoke_per_window(windowed_value)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/runners/common.py", line 307, in _invoke_per_window
windowed_value, self.process_method(*args_for_process))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/typehints/typecheck.py", line 63, in process
return self.wrapper(self.dofn.process, args, kwargs)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/typehints/typecheck.py", line 81, in wrapper
result = method(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/iobase.py", line 965, in process
self.writer.write(element)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/filebasedsink.py", line 299, in write
self.sink.write_record(self.temp_handle, value)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/filebasedsink.py", line 129, in write_record
self.write_encoded_record(file_handle, self.coder.encode(value))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/tfrecordio.py", line 235, in write_encoded_record
_TFRecordUtil.write_record(file_handle, value)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/tfrecordio.py", line 97, in write_record
struct.pack('<I', cls._masked_crc32c(encoded_length)), #
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/tfrecordio.py", line 77, in _masked_crc32c
crc = crc32c_fn(value)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/tfrecordio.py", line 43, in _default_crc32c_fn
_default_crc32c_fn.fn = snappy._crc32c # pylint: disable=protected-access
AttributeError: 'module' object has no attribute '_crc32c' [while running 'WriteTrainData/Write/WriteImpl/WriteBundles']
Could anyone help me use snappy with TensorFlow correctly?
Thank you!
I just hit this issue; I think it is due to Beam being a little careless about versions of optional test-dependencies (in this case, tensorflow and python-snappy).
The problematic code:
import snappy
snappy._crc32c
works in python-snappy version 0.5.1 but not in 0.5.2 (the latest version).
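A quick way to check which behavior your installed python-snappy exhibits:
$ python -c "import snappy; print(snappy._crc32c)"
On 0.5.1 this prints a function reference; on 0.5.2 it raises the same AttributeError as in the traceback above.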
I got these Beam tests passing by installing python-snappy 0.5.1 via:
pip install \
--upgrade --ignore-installed \
python-snappy==0.5.1 \
--global-option=build_ext \
--global-option="-I/usr/local/include" \
--global-option="-L/usr/local/lib"
On OSX I need the three --global-option flags; otherwise it doesn't find my snappy headers (symptom: errors about #include <snappy-c.h>) and library files, which brew install snappy placed in /usr/local/include and /usr/local/lib, respectively.
The bits before that seem necessary to override pip's default of wanting to give me the latest version.

Error installing CNTK - MemoryError while installing .whl file

I have an Ubuntu 16.04 server environment and was trying to install CNTK on it. When I try to pip install the wheel inside an environment, I get the following error.
I successfully ran the two steps below:
$ conda create --name cntk-py34 python=3.4 numpy scipy h5py jupyter
$ activate cntk-py35
But when I try to install the cntk whl file I get an error:
$ pip install https://cntk.ai/PythonWheel/CPU-Only/cntk-2.0.beta15.0-cp35-cp35m-linux_x86_64.whl
========error==================
Exception:
Traceback (most recent call last):
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/commands/install.py", line 335, in run
wb.build(autobuilding=True)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/req/req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/req/req_set.py", line 620, in _prepare_file
session=self.session, hashes=hashes)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/download.py", line 821, in unpack_url
hashes=hashes
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/download.py", line 659, in unpack_http_url
hashes)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/download.py", line 853, in _download_http_url
stream=True,
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/_vendor/requests/sessions.py", line 488, in get
return self.request('GET', url, **kwargs)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/download.py", line 386, in request
return super(PipSession, self).request(method, url, *args, **kwargs)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/_vendor/requests/sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/_vendor/requests/sessions.py", line 596, in send
r = adapter.send(request, **kwargs)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/_vendor/cachecontrol/adapter.py", line 37, in send
cached_response = self.controller.cached_request(request)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/_vendor/cachecontrol/controller.py", line 111, in cached_request
resp = self.serializer.loads(request, cache_data)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/_vendor/cachecontrol/serialize.py", line 114, in loads
return getattr(self, "_loads_v{0}".format(ver))(request, data)
File "/home/ubuntu/.conda/envs/cntk-py35/lib/python3.5/site-packages/pip/_vendor/cachecontrol/serialize.py", line 176, in _loads_v2
cached = json.loads(zlib.decompress(data).decode("utf8"))
MemoryError
Any ideas???
Thanks in advance!
In your question, you mention two different Python versions (3.4 and 3.5). Also, the Anaconda environment activation needs to be sourced. Assuming you have already satisfied the OpenMPI dependency (cf. https://github.com/Microsoft/CNTK/wiki/Setup-Linux-Python), can you try one of these:
# For a Python 3.4 based setup
conda create --name cntk-py34 python=3.4 numpy scipy h5py jupyter
source activate cntk-py34
pip install https://cntk.ai/PythonWheel/CPU-Only/cntk-2.0rc1-cp34-cp34m-linux_x86_64.whl
# For a Python 3.5 based setup
conda create --name cntk-py35 python=3.5 numpy scipy h5py jupyter
source activate cntk-py35
pip install https://cntk.ai/PythonWheel/CPU-Only/cntk-2.0rc1-cp35-cp35m-linux_x86_64.whl

Redownloaded custom-built TensorFlow wheel won't install

I've built TensorFlow with custom SIMD extensions and created a wheel for it. If I simply do pip install /tmp/tensorflow_pkg/tensorflow-1.0.0-cp27-cp27mu-linux_x86_64.whl on the box that I built it on, it works. However, if I upload the whl file to cloud storage and do pip install https://storage.cloud.google.com/path/to/tensorflow-1.0.0-cp27-cp27mu-linux_x86_64.whl, I get this error:
Collecting tensorflow==1.0.0 from https://storage.cloud.google.com/path/to/tensorflow-1.0.0-cp27-cp27mu-linux_x86_64.whl
Downloading https://storage.cloud.google.com/path/to/tensorflow-1.0.0-cp27-cp27mu-linux_x86_64.whl
Exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/usr/local/lib/python2.7/site-packages/pip/commands/install.py", line 335, in run
wb.build(autobuilding=True)
File "/usr/local/lib/python2.7/site-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 620, in _prepare_file
session=self.session, hashes=hashes)
File "/usr/local/lib/python2.7/site-packages/pip/download.py", line 821, in unpack_url
hashes=hashes
File "/usr/local/lib/python2.7/site-packages/pip/download.py", line 663, in unpack_http_url
unpack_file(from_path, location, content_type, link)
File "/usr/local/lib/python2.7/site-packages/pip/utils/__init__.py", line 599, in unpack_file
flatten=not filename.endswith('.whl')
File "/usr/local/lib/python2.7/site-packages/pip/utils/__init__.py", line 484, in unzip_file
zip = zipfile.ZipFile(zipfp, allowZip64=True)
File "/usr/local/lib/python2.7/zipfile.py", line 770, in __init__
self._RealGetContents()
File "/usr/local/lib/python2.7/zipfile.py", line 811, in _RealGetContents
raise BadZipfile, "File is not a zip file"
BadZipfile: File is not a zip file
Do I need to configure my build differently somehow?
(Capturing the solution as an answer.)
The URL used for the download is not correct: the base URL needs to be storage.googleapis.com, not storage.cloud.google.com.
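The storage.cloud.google.com host is the browser-facing endpoint and typically returns an HTML (login/redirect) page rather than the raw object, which is why pip's unzip step fails with BadZipfile. With the corrected base URL (same elided path as in the question), something like:
pip install https://storage.googleapis.com/path/to/tensorflow-1.0.0-cp27-cp27mu-linux_x86_64.whl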