Anaconda Pandas breaks on reading HDF file on Python 3.6.x

I am using an Anaconda environment with Python 3.6.8, created with conda create -n temp pandas pytables h5py python=3.6.8. When I try to read a .h5 file like:
f = pd.read_hdf(filename, key)
I get a ValueError exception:
Traceback (most recent call last):
File "read_data.py", line 6, in <module>
f = pd.read_hdf(filename, key)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 394, in read_hdf
return store.select(key, auto_close=auto_close, **kwargs)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 741, in select
return it.get_result()
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 1483, in get_result
results = self.func(self.start, self.stop, where)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 734, in func
columns=columns)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 2928, in read
ax = self.read_index('axis%d' % i, start=_start, stop=_stop)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 2523, in read_index
_, index = self.read_index_node(getattr(self.group, key), **kwargs)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/pandas/io/pytables.py", line 2621, in read_index_node
data = node[start:stop]
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/tables/vlarray.py", line 685, in __getitem__
return self.read(start, stop, step)
File "/home/fauzanzaid/anaconda3/envs/temp/lib/python3.6/site-packages/tables/vlarray.py", line 821, in read
listarr = self._read_array(start, stop, step)
File "tables/hdf5extension.pyx", line 2155, in tables.hdf5extension.VLArray._read_array
ValueError: cannot set WRITEABLE flag to True of this array
This problem goes away if I use an environment with Python 3.7 or 3.5. However, I need to use Python 3.6.
How can I resolve this error?

I downgraded numpy to 1.14.3 with the command below, and it worked for me:
pip3 install numpy==1.14.3
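If you prefer to keep the pin managed by conda instead of pip, pinning numpy when creating the environment should work as well (a sketch, assuming numpy 1.14.3 is available for Python 3.6 from your channels):
conda create -n temp pandas pytables h5py python=3.6.8 numpy=1.14.3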

Related

The command fig, axs = plt.subplots(2, 2) shows the error UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte

I am running the following commands in Spyder:
import matplotlib.pyplot as plt
fig, axs = plt.subplots(2, 2)
Traceback (most recent call last):
File "/home/hh/.local/lib/python3.8/site-packages/matplotlib_inline/backend_inline.py", line 41, in show
display(
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/IPython/core/display.py", line 327, in display
publish_display_data(data=format_dict, metadata=md_dict, **kwargs)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/IPython/core/display.py", line 119, in publish_display_data
display_pub.publish(
File "/home/hh/.local/lib/python3.8/site-packages/ipykernel/zmqshell.py", line 138, in publish
self.session.send(
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/jupyter_client/session.py", line 830, in send
to_send = self.serialize(msg, ident)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/jupyter_client/session.py", line 704, in serialize
content = self.pack(content)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/jupyter_client/session.py", line 95, in json_packer
return jsonapi.dumps(
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/zmq/utils/jsonapi.py", line 40, in dumps
s = jsonmod.dumps(o, **kwargs)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/simplejson/init.py", line 398, in dumps
return cls(
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/simplejson/encoder.py", line 296, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/simplejson/encoder.py", line 378, in iterencode
return _iterencode(o, 0)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/simplejson/encoder.py", line 44, in encode_basestring
s = str(s, 'utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
I get the same error if I run the same code in JupyterLab. However, if I run the commands in a terminal, fig, ax = plt.subplots() works fine.
This only started happening recently; I didn't have this issue before. I checked online material but didn't find a solution. I'd appreciate any insights. Thanks.
I had the same problem running plain Jupyter and within VS Code.
Try updating the Jupyter installation, including its required libraries. In my case,
pip3 install --upgrade jupyter_client pyzmq
fixed the problem.
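To confirm the upgrade took effect in the environment your kernel actually uses, a quick check (the zmq module is provided by the pyzmq package):
python3 -c "import jupyter_client, zmq; print(jupyter_client.__version__, zmq.__version__)"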

I'm having trouble running tf_pose

I'm trying to use tf-pose with TensorFlow version 2.
!git clone https://github.com/gsethi2409/tf-pose-estimation.git > /dev/null
%cd tf-pose-estimation
!pip3 install -r requirements.txt
This is the repository I cloned from. But when I run the code below, it shows an error.
!python run.py --model=mobilenet_thin --resize=432x368 --image=./images/p1.jpg
Traceback (most recent call last):
File "run.py", line 39, in <module>
e = TfPoseEstimator(get_graph_path(args.model), target_size=(w, h))
File "/content/tf-pose-estimation/tf_pose/estimator.py", line 337, in __init__
self.tensor_image = self.graph.get_tensor_by_name('TfPoseEstimator/image:0')
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3902, in get_tensor_by_name
return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3726, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 3768, in _as_graph_element_locked
"graph." % (repr(name), repr(op_name)))
KeyError: "The name 'TfPoseEstimator/image:0' refers to a Tensor which does not exist. The operation, 'TfPoseEstimator/image', does not exist in the graph."
In tf_pose/estimator.py, under the line that imports tensorflow, add the following line:
tf.compat.v1.disable_eager_execution()
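For context, the top of tf_pose/estimator.py would then look roughly like this (a sketch; the surrounding imports vary by version):
import tensorflow as tf
# tf-pose builds a TF1-style graph and looks tensors up by name,
# which fails under TF2's eager mode, so disable eager execution first
tf.compat.v1.disable_eager_execution()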

TensorFlow on Raspberry Pi - MemoryError

I am trying to install TensorFlow on a Raspberry Pi 4 with the following command:
pip install tensorflow
The following error occurs:
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting tensorflow
Downloading https://www.piwheels.org/simple/tensorflow/tensorflow-1.14.0-cp37-none-linux_armv7l.whl (79.6MB)
100% |████████████████████████████████| 79.6MB 8.8MB/s
Exception:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pip/_internal/cli/base_command.py", line 143, in main
status = self.run(options, args)
File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 338, in run
resolver.resolve(requirement_set)
File "/usr/lib/python3/dist-packages/pip/_internal/resolve.py", line 102, in resolve
self._resolve_one(requirement_set, req)
File "/usr/lib/python3/dist-packages/pip/_internal/resolve.py", line 256, in _resolve_one
abstract_dist = self._get_abstract_dist_for(req_to_install)
File "/usr/lib/python3/dist-packages/pip/_internal/resolve.py", line 209, in _get_abstract_dist_for
self.require_hashes
File "/usr/lib/python3/dist-packages/pip/_internal/operations/prepare.py", line 283, in prepare_linked_requirement
progress_bar=self.progress_bar
File "/usr/lib/python3/dist-packages/pip/_internal/download.py", line 836, in unpack_url
progress_bar=progress_bar
File "/usr/lib/python3/dist-packages/pip/_internal/download.py", line 677, in unpack_http_url
unpack_file(from_path, location, content_type, link)
File "/usr/lib/python3/dist-packages/pip/_internal/utils/misc.py", line 600, in unpack_file
flatten=not filename.endswith('.whl')
File "/usr/lib/python3/dist-packages/pip/_internal/utils/misc.py", line 489, in unzip_file
data = zip.read(name)
File "/usr/lib/python3.7/zipfile.py", line 1429, in read
return fp.read()
File "/usr/lib/python3.7/zipfile.py", line 885, in read
buf += self._read1(self.MAX_N)
File "/usr/lib/python3.7/zipfile.py", line 975, in _read1
data = self._decompressor.decompress(data, n)
MemoryError
I have tried to install it with the following command, which I've seen claimed on the internet to fix the error, but it didn't:
pip install --no-cache-dir tensorflow
Any clue on what I could do?
Thanks in advance.
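Since the MemoryError is raised while pip decompresses the downloaded wheel in memory, one common workaround is to temporarily enlarge the swap file so the install can finish. A sketch, assuming a standard Raspberry Pi OS setup with dphys-swapfile:
sudo dphys-swapfile swapoff
sudo nano /etc/dphys-swapfile    # raise CONF_SWAPSIZE, e.g. to 1024
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
pip install --no-cache-dir tensorflow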

How to make tf.Transform (Apache Beam Preprocessing for TensorFlow) work?

I am trying to use the tf.Transform library for data preprocessing with TensorFlow via Apache Beam (Google Dataflow).
https://github.com/tensorflow/transform
Here is my setup:
conda create -n tftransform python=2.7
source activate tftransform
pip install tensorflow
pip install tensorflow-transform
pip install dill==0.2.6
git clone https://github.com/tensorflow/transform.git
cd transform/
python setup.py install # for good measure ...
I then try to execute simple_example (https://github.com/tensorflow/transform/blob/master/examples/simple_example.py):
python examples/simple_example.py
I get the following error:
AttributeError: 'DType' object has no attribute 'dtype'
(There is also a warning on import: No handlers could be found for logger "oauth2client.contrib.multistore_file".)
Here is the stack trace:
Traceback (most recent call last):
File "examples/simple_example.py", line 64, in <module>
preprocessing_fn, tempfile.mkdtemp()))
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/apache_beam/transforms/ptransform.py", line 439, in __ror__
result = p.apply(self, pvalueish, label)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/apache_beam/pipeline.py", line 249, in apply
pvalueish_result = self.runner.apply(transform, pvalueish)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/apache_beam/runners/runner.py", line 162, in apply
return m(transform, input)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/apache_beam/runners/runner.py", line 168, in apply_PTransform
return transform.expand(input)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/tensorflow_transform/beam/impl.py", line 597, in expand
self._output_dir)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/apache_beam/transforms/ptransform.py", line 439, in __ror__
result = p.apply(self, pvalueish, label)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/apache_beam/pipeline.py", line 249, in apply
pvalueish_result = self.runner.apply(transform, pvalueish)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/apache_beam/runners/runner.py", line 162, in apply
return m(transform, input)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/apache_beam/runners/runner.py", line 168, in apply_PTransform
return transform.expand(input)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/tensorflow_transform/beam/impl.py", line 328, in expand
self._preprocessing_fn, input_schema)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/tensorflow_transform/impl_helper.py", line 416, in run_preprocessing_fn
inputs = _make_input_columns(schema)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/tensorflow_transform/impl_helper.py", line 218, in _make_input_columns
placeholders = schema.as_batched_placeholders()
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/tensorflow_transform/tf_metadata/dataset_schema.py", line 87, in as_batched_placeholders
for key, column_schema in self.column_schemas.items()}
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/tensorflow_transform/tf_metadata/dataset_schema.py", line 87, in <dictcomp>
for key, column_schema in self.column_schemas.items()}
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/tensorflow_transform/tf_metadata/dataset_schema.py", line 133, in as_batched_placeholder
return self.representation.as_batched_placeholder(self)
File "/Users/XXX/anaconda/envs/tftransform/lib/python2.7/site-packages/tensorflow_transform/tf_metadata/dataset_schema.py", line 330, in as_batched_placeholder
return tf.placeholder(column.domain.dtype,
AttributeError: 'DType' object has no attribute 'dtype'
Is this lib production-ready?
How can I make this work?
I ran the following:
python setup.py bdist_wheel
pip install ./dist/tensorflow_transform-0.1.6.dev0-py2-none-any.whl
This uninstalls tensorflow-transform-0.1.5 and installs tensorflow-transform-0.1.6.dev0.
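To double-check which version ended up active, pip show tensorflow-transform prints the installed version.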
Running python examples/simple_example.py now works; I get the following result:
[{'s_integerized': 0,
'x_centered': -1.0,
'x_centered_times_y_normalized': -0.0,
'y_normalized': 0.0},
{'s_integerized': 1,
'x_centered': 0.0,
'x_centered_times_y_normalized': 0.0,
'y_normalized': 0.5},
{'s_integerized': 0,
'x_centered': 1.0,
'x_centered_times_y_normalized': 1.0,
'y_normalized': 1.0}]
Thanks to @elmer-garduno.

matplotlib pgf: OSError: No such file or directory in subprocess.py

I am trying to use matplotlib to create a pgf file for LaTeX:
from matplotlib.pyplot import subplots
from numpy import linspace
x = linspace(0, 100, 30)
fig, ax = subplots(figsize = (10, 6))
ax.scatter(x, x)
fig.tight_layout()
fig.savefig('/home/mark/dicp/python/figure.pgf')
But I get OSError: [Errno 2] No such file or directory:
Traceback (most recent call last):
File "visualize/latex_figs.py", line 32, in <module>
fig.savefig('/home/mark/dicp/python/figure.pgf')
File "/usr/local/lib/python2.7/dist-packages/matplotlib/figure.py", line 1421, in savefig
self.canvas.print_figure(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backend_bases.py", line 2220, in print_figure
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backend_bases.py", line 1957, in print_pgf
return pgf.print_pgf(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 818, in print_pgf
self._print_pgf_to_fh(fh, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 797, in _print_pgf_to_fh
RendererPgf(self.figure, fh),
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 409, in __init__
self.latexManager = LatexManagerFactory.get_latex_manager()
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 223, in get_latex_manager
new_inst = LatexManager()
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 305, in __init__
cwd=self.tmpdir)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
It also generates this part of the output file:
%% [whole bunch of comments]
\begingroup%
\makeatletter%
\begin{pgfpicture}%
\pgfpathrectangle{\pgfpointorigin}{\pgfqpoint{10.000000in}{6.000000in}}%
\pgfusepath{use as bounding box}%
I do not understand what OSError: No such file or directory in subprocess.py has to do with anything... The file I'm trying to save is writable. Am I misunderstanding something, or is this a bug I should report?
I also had this problem while trying to run the example scripts. The problem occurs where backend_pgf.py first tries to use the default LaTeX command. It seems that the PGF backend assumes that it should use xelatex by default. If the problem is the same for you as for me, then you have two options:
add the key "pgf.texsystem" : "pdflatex" (or lualatex, whatever) to your matplotlib.rcParams. For example, add the following snippet to the top of your script:
import matplotlib
pgf_with_rc_fonts = {"pgf.texsystem": "pdflatex"}
matplotlib.rcParams.update(pgf_with_rc_fonts)
Ensure that you have xelatex, that it is on your PATH, and use it as the default LaTeX command (i.e., assuming you're on a Mac or Linux system, running which xelatex should return a path).
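Putting the first option together with the original script, a minimal sketch (assuming pdflatex is installed and on your PATH; the output path is illustrative):
import matplotlib
matplotlib.rcParams.update({"pgf.texsystem": "pdflatex"})  # use pdflatex instead of the xelatex default
from matplotlib.pyplot import subplots
from numpy import linspace

x = linspace(0, 100, 30)
fig, ax = subplots(figsize=(10, 6))
ax.scatter(x, x)
fig.tight_layout()
fig.savefig('figure.pgf')  # the pgf backend now launches pdflatex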