Python 3.5: Trying to plot PCA with sklearn and matplotlib

The following code generates the error TypeError: float() argument must be a string or a number, not 'Pred', and I am struggling to figure out what is causing it.
self.features is a list of arrays, each made up of three floats, e.g. [1.1, 1.2, 1.3].
An example of self.features:
[array([-1.67191985, 0.1 , 9.69981494]), array([-0.68486623, 0.05 , 9.99085024]), array([ -1.36 , 0.1 , 10.44720459]), array([-2.46918915, 0. , 3.5483372 ]), array([-0.835 , 0.1 , 4.02740479])]
This is the method where the error is being thrown.
def pca(self):
    pca = PCA(n_components=2)
    x_np = np.asarray(self.features)
    pca.fit(x_np)
    X_reduced = pca.transform(x_np)
    plt.figure(figsize=(10, 8))
    plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap='RdBu')
    plt.xlabel('First component')
    plt.ylabel('Second component')
The full traceback is:
Traceback (most recent call last):
  File "/Users/user/PycharmProjects/Post-Translational-Modification-Prediction/pred.py", line 244, in <module>
    y.generate_pca()
  File "/Users/user/PycharmProjects/Post-Translational-Modification-Prediction/pred.py", line 222, in generate_pca
    plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap='RdBu')
  File "/usr/local/lib/python3.5/site-packages/matplotlib/pyplot.py", line 3435, in scatter
    edgecolors=edgecolors, data=data, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/matplotlib/__init__.py", line 1892, in inner
    return func(ax, *args, **kwargs)
  File "/usr/local/lib/python3.5/site-packages/matplotlib/axes/_axes.py", line 3976, in scatter
    c_array = np.asanyarray(c, dtype=float)
  File "/usr/local/lib/python3.5/site-packages/numpy/core/numeric.py", line 583, in asanyarray
    return array(a, dtype, copy=False, order=order, subok=True)
TypeError: float() argument must be a string or a number, not 'Pred'

The fix suggested by @WhoIsJack is to add y = np.arange(len(self.features)).
The working code, for anyone who runs into a similar issue, is:
def generate_pca(self):
    y = np.arange(len(self.features))
    pca = PCA(n_components=2)
    x_np = np.asarray(self.features)
    pca.fit(x_np)
    X_reduced = pca.transform(x_np)
    plt.figure(figsize=(10, 8))
    plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap='RdBu')
    plt.xlabel('First component')
    plt.ylabel('Second component')
    plt.show()
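The underlying problem is that the c= argument of plt.scatter must be a color specification or an array of numbers; in the original method, y was the surrounding class instance (the object on which y.generate_pca() is called in the traceback), hence the 'Pred' in the error. If actual class labels are available, they can be integer-encoded instead of using np.arange; a minimal sketch, assuming a hypothetical self.labels list holding one label per feature row:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def generate_pca(self):
    # Hypothetical: self.labels holds one class label (e.g. a string) per row of self.features.
    # np.unique(..., return_inverse=True) maps each label to an integer code,
    # which plt.scatter accepts for its c= argument.
    _, label_codes = np.unique(self.labels, return_inverse=True)
    x_np = np.asarray(self.features)
    X_reduced = PCA(n_components=2).fit_transform(x_np)
    plt.figure(figsize=(10, 8))
    plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=label_codes, cmap='RdBu')
    plt.xlabel('First component')
    plt.ylabel('Second component')
    plt.show()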

Related

model.predict(sample) returning TypeError: cannot perform reduce with flexible type

I keep running into this error when I try to predict based on the fitted model.
training, testing = train_test_split(gesture, test_size = 0.2, random_state = 0)
x = training.drop('CLASS', axis = 1) # remove the Class column from Training dataframe
y = testing.drop('CLASS', axis = 1) # remove the Class column from Testing dataframe
f_train = x.values.tolist()
l_train = training['CLASS'].values.tolist() # make a list of class identifiers from Training dataframe
f_test = y.values.tolist()
knn = KNeighborsRegressor(n_neighbors = 5)
knn.fit(f_train, l_train)
predictions = knn.predict(f_test)
The error occurs in the last line of the above code and the error message is given below:
Traceback (most recent call last):
File "C:\Users\Umair Khan\Dropbox\`Shift betweeen PCs\Work\EMG Hand Gesture\Codes\ML_on_CSV.py", line 39, in <module>
predictions = knn.predict(f_test)
File "C:\Users\Umair Khan\AppData\Local\Programs\Python\Python37-32\lib\site-packages\sklearn\neighbors\_regression.py", line 185, in predict
y_pred = np.mean(_y[neigh_ind], axis=1)
File "<__array_function__ internals>", line 6, in mean
File "C:\Users\Umair Khan\AppData\Local\Programs\Python\Python37-32\lib\site-packages\numpy\core\fromnumeric.py", line 3335, in mean
out=out, **kwargs)
File "C:\Users\Umair Khan\AppData\Local\Programs\Python\Python37-32\lib\site-packages\numpy\core\_methods.py", line 151, in _mean
ret = umr_sum(arr, axis, dtype, out, keepdims)
TypeError: cannot perform reduce with flexible type
f_test is a list of lists, like so: [[16, 30, 35, 250, -1, 0.5, 35, 0.03, 0.02], [16, 30, 35, 250, -1, 0.5, 35, 0.03, 0.02]]
I have also tried passing an array to predict(), but the issue still remains:
predictions = knn.predict(np.array(f_test).astype(np.float))
We need to see more of the error traceback, and info on the function inputs, particularly their shape and dtype.
I've seen this error message when working with structured arrays. But it's not obvious where those might arise in your code.
In [15]: np.ones((2,), dtype='i,i')
Out[15]: array([(1, 1), (1, 1)], dtype=[('f0', '<i4'), ('f1', '<i4')])
In [16]: np.sum(np.ones((2,), dtype='i,i'))
...
---> 87 return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
...
TypeError: cannot perform reduce with flexible type
Solved:
I changed the dtype of l_train from string to float and the error disappeared. f_train and f_test were already floats.
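For reference, a minimal sketch of that fix, assuming the CLASS column holds numeric values stored as strings:
from sklearn.neighbors import KNeighborsRegressor

# Cast the training labels from strings to floats before fitting;
# KNeighborsRegressor averages neighbour targets, which fails on string dtypes.
l_train = training['CLASS'].astype(float).values.tolist()

knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(f_train, l_train)
predictions = knn.predict(f_test)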

Matplotlib memory leak in Ubuntu, but not in Windows 10

I have code that reads images from folders and annotates them. I have about 10K images of size 3K x 3K. I apply cell detection and put annotations on the images. For simplicity, the snippet provided below generates a random image and random annotations with X and Y locations.
The code works fine on Windows 10 with Python 3.6, both with and without multiprocessing. However, when I submit it to the HPC server with Ubuntu 18.04 and Python 3.6, I get a memory error from Matplotlib. I have tried the following, but had no luck resolving the problem:
* plt.close(fig)
* plt.figure().clear()
* plt.clf()
* plt.close()
'''
import os
import matplotlib.pyplot as plt
import pandas as pd
import multiprocessing as mp
import numpy as np
import seaborn as sns

sns.set_palette('bright')


def mark_points(im, df, file_name):
    # this code puts a scatter plot on the image and saves the figure
    dpi = 100
    height, width, nbands = im.shape
    # What size does the figure need to be in inches to fit the image?
    fig_size = width / float(dpi), height / float(dpi)
    # Create a figure of the right size with one axes that takes up the full figure
    fig = plt.figure(figsize=fig_size)
    ax = fig.add_axes([0, 0, 1, 1])
    # Hide spines, ticks, etc.
    ax.axis('off')
    # Display the image.
    ax.imshow(im, interpolation='nearest')
    sns.scatterplot(x='X',
                    y='Y',
                    data=df,
                    hue='Class',
                    ax=ax,
                    s=2,
                    edgecolor=None,
                    legend=False)
    ax.set(xlim=[0, width], ylim=[height, 0], aspect=1)
    # save image with cell annotation
    fig.savefig(file_name, dpi=dpi, transparent=True)
    # plt.close(fig)
    plt.figure().clear()
    plt.clf()
    plt.close()


def annotate_images(output_dir, process_num):
    # create an image and random X and Y locations for annotation
    os.makedirs(output_dir, exist_ok=True)
    for j in range(10000):
        # this is approximately the size of the images I am processing
        im_height, im_width = 3000, 3000
        # print('{}'.format(process_num))
        file_name = 'my_file_name_{}.jpg'.format(process_num)
        output_file = os.path.join(output_dir, file_name)
        # generate dataframe with data to plot
        df = pd.DataFrame(columns=['X', 'Y', 'Class'])
        num_points = 20000
        df['X'] = np.random.randint(0, im_width, size=(num_points), dtype='int32').tolist()
        df['Y'] = np.random.randint(0, im_height, size=(num_points), dtype='int32').tolist()
        classes_ = ['A', 'B', 'C']
        df['Class'] = [classes_[np.random.randint(0, 3)] for _ in range(num_points)]
        # generate a random image
        im = np.random.randint(low=0, high=255, size=(im_height, im_width, 3), dtype=np.uint8)
        mark_points(im=im, df=df, file_name=output_file)


def run(output_dir, num_processes=1, multi_process=False):
    # number of folders
    n = 100
    for i in range(n):
        if multi_process is True:
            processes = [
                mp.Process(target=annotate_images, args=(output_dir, process_num))
                for process_num in range(num_processes)]
            print('{} processes created'.format(num_processes))
            # Run processes
            for p in processes:
                p.start()
            # Exit the completed processes
            for p in processes:
                p.join()
            print('All Processes finished!!!')
        else:
            annotate_images(output_dir=output_dir, process_num=0)


if __name__ == '__main__':
    num_processes = 4
    multi_process = True
    home_dir = r'Output'
    for i in range(3):
        params = dict(output_dir=home_dir,
                      num_processes=num_processes,
                      multi_process=multi_process)
        run(**params)
'''
Error message
image_annotate_cells.py:44: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
plt.figure().clear()
image_annotate_cells.py:22: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
fig = plt.figure(figsize=fig_size)
Process Process-1:
Traceback (most recent call last):
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "image_annotate_cells.py", line 74, in annotate_images
mark_points(im=im, df=df, file_name=output_file)
File "image_annotate_cells.py", line 42, in mark_points
fig.savefig(file_name, dpi=dpi, transparent=True)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/figure.py", line 2203, in savefig
self.canvas.print_figure(fname, **kwargs)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/backend_bases.py", line 2126, in print_figure
**kwargs)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py", line 358, in wrapper
return func(*args, **kwargs)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py", line 584, in print_jpg
FigureCanvasAgg.draw(self)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py", line 393, in draw
self.figure.draw(self.renderer)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/figure.py", line 1736, in draw
renderer, self, artists, self.suppressComposite)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/image.py", line 137, in _draw_list_compositing_images
a.draw(renderer)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/axes/_base.py", line 2630, in draw
mimage._draw_list_compositing_images(renderer, self, artists)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/image.py", line 137, in _draw_list_compositing_images
a.draw(renderer)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/collections.py", line 894, in draw
Collection.draw(self, renderer)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/artist.py", line 38, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/collections.py", line 369, in draw
self._offset_position)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/site-packages/matplotlib/path.py", line 197, in vertices
@property
KeyboardInterrupt
Traceback (most recent call last):
File "image_annotate_cells.py", line 124, in <module>
run(**params)
File "image_annotate_cells.py", line 97, in run
p.join()
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/multiprocessing/process.py", line 124, in join
res = self._popen.wait(timeout)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/multiprocessing/popen_fork.py", line 50, in wait
return self.poll(os.WNOHANG if timeout == 0.0 else 0)
File "/home/DMP/DUDMP/COPAINGE/yhagos/.conda/envs/tf2cpu/lib/python3.6/multiprocessing/popen_fork.py", line 28, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
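One observation, offered as a sketch rather than a confirmed fix: the "More than 20 figures have been opened" warning suggests figures are accumulating, since plt.figure().clear() creates and clears a brand-new figure on every call while the figure built in mark_points is never closed (plt.close(fig) is commented out). A possible workaround is to build the figure with the object-oriented API so it never enters pyplot's global figure registry; the names below mirror the original mark_points:
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg
import seaborn as sns

def mark_points(im, df, file_name, dpi=100):
    height, width, nbands = im.shape
    # Build the figure directly instead of via pyplot, so it is never registered
    # in pyplot's global figure list and can be garbage collected after the call.
    fig = Figure(figsize=(width / float(dpi), height / float(dpi)))
    FigureCanvasAgg(fig)  # attach an Agg canvas so fig.savefig can render
    ax = fig.add_axes([0, 0, 1, 1])
    ax.axis('off')
    ax.imshow(im, interpolation='nearest')
    sns.scatterplot(x='X', y='Y', data=df, hue='Class', ax=ax,
                    s=2, edgecolor=None, legend=False)
    ax.set(xlim=[0, width], ylim=[height, 0], aspect=1)
    fig.savefig(file_name, dpi=dpi, transparent=True)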

Conditioning pixelsnail on classes

I am trying to condition a pixelcnn model that I adapted, but some changes are needed to condition the model on classes (series). I am working with time series, so in fact I would like to know how I could condition the model on some series as well. The problem appears when I try to one-hot encode my "Y" labels (shape [batch,]), which I pass in as an array with the same batch length as "X" (shape [batch, sqrt(seq_len), sqrt(seq_len), channels]). To condition the model, I have the following code:
if args.class_conditional:
    # raise NotImplementedError
    num_labels = train_data.get_num_labels()
    y_init = tf.placeholder(tf.int32, shape=(args.init_batch_size,))
    h_init = tf.one_hot(y_init, num_labels)
    y_sample = np.split(
        np.mod(np.arange(args.batch_size), num_labels), args.nr_gpu)
    h_sample = [tf.one_hot(tf.Variable(y_sample[i], trainable=False), num_labels)
                for i in range(args.nr_gpu)]
    ys = [tf.placeholder(tf.int32, shape=(args.batch_size,))
          for i in range(args.nr_gpu)]
    hs = [tf.one_hot(ys[i], num_labels) for i in range(args.nr_gpu)]
else:
    h_init = None
    h_sample = [None] * args.nr_gpu
    hs = h_sample
This is the current value of "y_sample", which is where the error points:
[array([0. , 1. , 0.30521799, 1.30521799, 0.61043598,
1.61043598, 0.91565397, 0.22087195, 1.22087195, 0.52608994,
1.52608994, 0.83130793, 0.13652592, 1.13652592, 0.44174391,
1.44174391, 0.7469619 , 0.05217988, 1.05217988, 0.35739787,
1.35739787, 0.66261586, 1.66261586, 0.96783385, 0.27305184,
1.27305184, 0.57826983, 1.57826983, 0.88348781, 0.1887058 ,
1.1887058 , 0.49392379, 1.49392379, 0.79914178, 0.10435977,
1.10435977, 0.40957776, 1.40957776, 0.71479575, 0.02001373,
1.02001373, 0.32523172, 1.32523172, 0.63044971, 1.63044971,
0.9356677 , 0.24088569, 1.24088569, 0.54610368, 1.54610368])]
The error comes from h_sample, when the one_hot is computed:
Traceback (most recent call last):
File "train.py", line 398, in <module>
main(FLAGS)
File "train.py", line 111, in main
for i in range(args.nr_gpu)]
File "train.py", line 111, in <listcomp>
for i in range(args.nr_gpu)]
File "/home/proto/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 2364, in one_hot
name)
File "/home/proto/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2831, in _one_hot
off_value=off_value, axis=axis, name=name)
File "/home/proto/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 609, in _apply_op_helper
param_name=input_name)
File "/home/proto/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 60, in _SatisfiesTypeConstraint
", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'indices' has DataType float64 not in list of allowed values: uint8, int32, int64
I changed for i in range(args.nr_gpu) to be hard-coded to 1 to see whether that was the problem, but it keeps giving me errors.
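tf.one_hot only accepts integer indices (the traceback lists uint8, int32 and int64), and the printed y_sample contains float64 values, which also suggests num_labels itself is not an integer here. A minimal sketch of a possible fix, assuming num_labels is meant to be an integer class count:
# Force an integer class count; np.mod against a float produces float64 indices,
# which tf.one_hot rejects for its 'indices' argument.
num_labels = int(train_data.get_num_labels())
y_sample = np.split(
    np.mod(np.arange(args.batch_size), num_labels).astype(np.int32),
    args.nr_gpu)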

Subclass pandas DataFrame with required argument

I'm working on a new data structure that subclasses pandas DataFrame. I want to enforce that my new data structure has new_property, so that it can be processed safely later on.
However, I'm running into an error when using my new data structure, because the constructor gets called by some internal pandas functions without the required property.
Here is my new data structure.
import pandas as pd

class MyDataFrame(pd.DataFrame):
    @property
    def _constructor(self):
        return MyDataFrame

    _metadata = ['new_property']

    def __init__(self, data, new_property, index=None, columns=None, dtype=None, copy=True):
        super(MyDataFrame, self).__init__(data=data,
                                          index=index,
                                          columns=columns,
                                          dtype=dtype,
                                          copy=copy)
        self.new_property = new_property
Here is an example that causes the error:
data1 = {'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [15, 25, 30], 'd': [1, 1, 2]}
df1 = MyDataFrame(data1, new_property='value')
df1[['a', 'b']]
Here is the error message
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-33-b630fbf14234>", line 1, in <module>
    df1[['a', 'b']]
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2053, in __getitem__
    return self._getitem_array(key)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2098, in _getitem_array
    return self.take(indexer, axis=1, convert=True)
  File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py", line 1670, in take
    result = self._constructor(new_data).__finalize__(self)
TypeError: __init__() missing 1 required positional argument: 'new_property'
Is there a fix to this or an alternative way to design this to enforce my new data structure to have new_property?
Thanks in advance!
This question has been answered by a brilliant pandas developer. See this issue for more details. Pasting the answer here.
class MyDataFrame(pd.DataFrame):
    @property
    def _constructor(self):
        return MyDataFrame._internal_ctor

    _metadata = ['new_property']

    @classmethod
    def _internal_ctor(cls, *args, **kwargs):
        kwargs['new_property'] = None
        return cls(*args, **kwargs)

    def __init__(self, data, new_property, index=None, columns=None, dtype=None, copy=True):
        super(MyDataFrame, self).__init__(data=data,
                                          index=index,
                                          columns=columns,
                                          dtype=dtype,
                                          copy=copy)
        self.new_property = new_property
data1 = {'a': [1, 2, 3], 'b': [4, 5, 6], 'c': [15, 25, 30], 'd': [1, 1, 2]}
df1 = MyDataFrame(data1, new_property='value')
df1[['a', 'b']].new_property
Out[121]: 'value'
MyDataFrame(data1)
TypeError: __init__() missing 1 required positional argument: 'new_property'
I know this is an old issue, but I wanted to extend hlu's answer.
When implementing the answer described by hlu, I was getting the following error just from trying to print the subclassed DataFrame: AttributeError: 'internal_constructor' object has no attribute '_from_axes'
To fix this, I used an object instead of the function from hlu's answer, so that the _from_axes method can be implemented on the callable.
There is no classmethod-style decorator for the _internal_constructor class, so instead we instantiate it with the caller's class so that class can be used when _internal_constructor is called.
class MyDataFrame(pd.DataFrame):
    @property
    def _constructor(self):
        return MyDataFrame._internal_constructor(self.__class__)

    class _internal_constructor(object):
        def __init__(self, cls):
            self.cls = cls

        def __call__(self, *args, **kwargs):
            kwargs['my_required_argument'] = None
            return self.cls(*args, **kwargs)

        def _from_axes(self, *args, **kwargs):
            return self.cls._from_axes(*args, **kwargs)
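For completeness, a small usage sketch of this variant, assuming the same _metadata and __init__ as in the answer above but with my_required_argument in place of new_property:
data1 = {'a': [1, 2, 3], 'b': [4, 5, 6]}
df1 = MyDataFrame(data1, my_required_argument='value')

# Slicing now goes through _constructor -> _internal_constructor.__call__,
# which injects my_required_argument=None, so no TypeError is raised;
# __finalize__ should then copy the attribute from df1 because it is listed in _metadata.
sliced = df1[['a', 'b']]
print(sliced.my_required_argument)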

Shape must be rank 0 but is rank 1, parse_single_sequence_example

For the past few days I have been having an issue with serializing data to the tfrecord format and then deserializing it using parse_single_sequence_example. I am attempting to retrieve data for use with a fairly standard RNN model; however, this is my first attempt at using the tfrecords format and the associated pipeline that goes with it.
Here is a toy example to reproduce the issue I am having:
import tensorflow as tf
import tempfile
from IPython import embed

sequences = [[1, 2, 3], [4, 5, 1], [1, 2]]
label_sequences = [[0, 1, 0], [1, 0, 0], [1, 1]]

def make_example(sequence, labels):
    ex = tf.train.SequenceExample()
    sequence_length = len(sequence)
    ex.context.feature["length"].int64_list.value.append(sequence_length)
    fl_tokens = ex.feature_lists.feature_list["tokens"]
    fl_labels = ex.feature_lists.feature_list["labels"]
    for token, label in zip(sequence, labels):
        fl_tokens.feature.add().int64_list.value.append(token)
        fl_labels.feature.add().int64_list.value.append(label)
    return ex

writer = tf.python_io.TFRecordWriter('./test.tfrecords')
for sequence, label_sequence in zip(sequences, label_sequences):
    ex = make_example(sequence, label_sequence)
    writer.write(ex.SerializeToString())
writer.close()

tf.reset_default_graph()

file_name_queue = tf.train.string_input_producer(['./test.tfrecords'], num_epochs=None)
reader = tf.TFRecordReader()

context_features = {
    "length": tf.FixedLenFeature([], dtype=tf.int64)
}
sequence_features = {
    "tokens": tf.FixedLenSequenceFeature([], dtype=tf.int64),
    "labels": tf.FixedLenSequenceFeature([], dtype=tf.int64)
}

ex = reader.read(file_name_queue)

# Parse the example (returns a dictionary of tensors)
context_parsed, sequence_parsed = tf.parse_single_sequence_example(
    serialized=ex,
    context_features=context_features,
    sequence_features=sequence_features
)

context = tf.contrib.learn.run_n(context_parsed, n=1, feed_dict=None)
print(context[0])
sequence = tf.contrib.learn.run_n(sequence_parsed, n=1, feed_dict=None)
print(sequence[0])
The associated stack trace is:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 594, in call_cpp_shape_fn
status)
File "/usr/lib/python3.5/contextlib.py", line 66, in exit
next(self.gen)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: Shape must be rank 0 but is rank 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "my_test.py", line 51, in
sequence_features=sequence_features
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/parsing_ops.py", line 640, in parse_single_sequence_example
feature_list_dense_defaults, example_name, name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/parsing_ops.py", line 837, in _parse_single_sequence_example_raw
name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_parsing_ops.py", line 285, in _parse_single_sequence_example
name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2382, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1783, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/common_shapes.py", line 596, in call_cpp_shape_fn
raise ValueError(err.message)
ValueError: Shape must be rank 0 but is rank 1
I posted this as a potential issue over on GitHub, though it seems I may just be using it incorrectly: Tensorflow Github Issue
So with the background information out of the way, I'm just wondering if I am in fact making an error here? Any help in the right direction would be greatly appreciated; it's been a few days and my poking around hasn't panned out. Thanks all!
Got it, it was a bad assumption on my part: tf.TFRecordReader.read(queue, name=None) returns a (key, value) tuple, whereas I assumed it would return just the value, and I was passing the tuple directly into the example parser.
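A minimal sketch of the corresponding change, unpacking the tuple and passing only the serialized value to the parser:
# reader.read returns (key, value); only the serialized value goes to the parser
key, serialized_example = reader.read(file_name_queue)

context_parsed, sequence_parsed = tf.parse_single_sequence_example(
    serialized=serialized_example,
    context_features=context_features,
    sequence_features=sequence_features
)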