ValueError: could not broadcast input array from shape (16,18,3) into shape (16) - numpy

I was trying to run instance segmentation on my RGB images with the pixellib library, but I ran into this problem in the segmentImage function. From the stack trace, the issue is inside pixellib's __init__.py, and I have no idea why it needs to broadcast a 3D array into a 1D one. Twenty images from another folder that I tried earlier didn't hit this error.
P.S. This is my first question on Stack Overflow. If I've missed any necessary details, please let me know.
for file in os.listdir(test_path):
    abs_test_path = os.path.join(test_path, file)
    if file.endswith('.jpg'):
        filename = os.path.splitext(file)[0]
        if os.path.isfile(abs_test_path):
            out_path = out_seg_path + filename
            segment_image.segmentImage(abs_test_path, show_bboxes=True,
                                       save_extracted_objects=True,
                                       extract_segmented_objects=True)
            im_0 = cv2.imread('segmented_object_1.jpg')
            cv2.imwrite(out_path + '_1.jpg', im_0)
            im_1 = cv2.imread('segmented_object_2.jpg')
            cv2.imwrite(out_path + '_2.jpg', im_1)
This is the error I get:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-835299843033> in <module>
15
16 segment_image.segmentImage(abs_test_path, show_bboxes=True,
---> 17 save_extracted_objects=True, extract_segmented_objects=True)
18
19 # segment_image.segmentImage('segmented_object_1.jpg', show_bboxes=True, output_image_name=out_path + '_1.jpg',
~\anaconda3\envs\mask_rcnn\lib\site-packages\pixellib\instance\__init__.py in segmentImage(self, image_path, show_bboxes, extract_segmented_objects, save_extracted_objects, mask_points_values, output_image_name, text_thickness, text_size, box_thickness, verbose)
762 cv2.imwrite(save_path, extracted_objects)
763
--> 764 extracted_objects = np.array(ex, dtype=object)
765
766 if mask_points_values == True:
ValueError: could not broadcast input array from shape (16,18,3) into shape (16)

There isn't enough information to help you.
I don't know what segment_image.segmentImage is, or what it expects. And I don't have your jpg file to test.
I have an idea of why the problem line raises this error, but since it occurs in an unknown function I can't suggest any fixes.
extracted_objects = np.array(ex, dtype=object)
ex is probably a list of arrays that match in some dimensions but not others. np.array is trying to make an object-dtype array of those arrays, but due to the mix of shapes it raises an error.
A simple example that raises the same error:
In [151]: ex = [np.ones((3, 4, 3)), np.ones((3, 5, 3))]
In [152]: np.array(ex, object)
Traceback (most recent call last):
Input In [152] in <module>
np.array(ex, object)
ValueError: could not broadcast input array from shape (3,4,3) into shape (3,)
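For completeness, a common NumPy workaround for this failure (a sketch of the general pattern, not a fix you can apply without patching pixellib itself) is to preallocate the object array and fill it element by element:
out = np.empty(len(ex), dtype=object)  # preallocate a 1D object array
for i, arr in enumerate(ex):
    out[i] = arr  # element assignment stores each array as-is; no broadcasting happens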

Related

Market Basket Analysis Association_rules - ValueError: cannot call `vectorize` on size 0 inputs unless `otypes` is set

I am currently running a market basket analysis on my dataset.
When I run association_rules, I get an error.
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1)
rules.head()
ValueError Traceback (most recent call last)
<ipython-input-47-60252dc62442> in <module>
----> 1 rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1)
2 rules.head()
/usr/local/lib/python3.8/dist-packages/numpy/lib/function_base.py in _get_ufunc_and_otypes(self, func, args)
2195 args = [asarray(arg) for arg in args]
2196 if builtins.any(arg.size == 0 for arg in args):
-> 2197 raise ValueError('cannot call `vectorize` on size 0 inputs '
2198 'unless `otypes` is set')
2199
ValueError: cannot call `vectorize` on size 0 inputs unless `otypes` is set
There are a lot of 0s in my dataset; I am currently looking into whether this is affecting my results.
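A minimal sketch that reproduces the same error, assuming frequent_itemsets came out empty (for example, because the support threshold filtered everything out) so that a size-0 array reached np.vectorize:
import numpy as np

f = np.vectorize(lambda x: x + 1)  # no otypes given
f(np.array([]))  # ValueError: cannot call `vectorize` on size 0 inputs unless `otypes` is set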

Linear regression on Tensor flow Google Collab

I am trying to code a linear regression, but I am stuck on this cell: it returns an error and I don't understand how to correct it. I would appreciate some detailed feedback on how to change my code to avoid this.
Here is the cell that raises the error
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
After that I get:
/usr/local/lib/python3.7/dist-packages/matplotlib/axes/_axes.py:6630: RuntimeWarning: All-NaN slice encountered
xmin = min(xmin, np.nanmin(xi))
/usr/local/lib/python3.7/dist-packages/matplotlib/axes/_axes.py:6631: RuntimeWarning: All-NaN slice encountered
xmax = max(xmax, np.nanmax(xi))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-30-c4d487a4d6e3> in <module>()
1 error = test_predictions - test_labels
----> 2 plt.hist(error, bins = 25)
3 plt.xlabel("Prediction Error [MPG]")
4 _ = plt.ylabel("Count")
<__array_function__ internals> in histogram(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/numpy/lib/histograms.py in _get_outer_edges(a, range)
322 if not (np.isfinite(first_edge) and np.isfinite(last_edge)):
323 raise ValueError(
--> 324 "autodetected range of [{}, {}] is not finite".format(first_edge, last_edge))
325
326 # expand empty range to avoid divide by zero
ValueError: autodetected range of [nan, nan] is not finite
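A minimal sketch that reproduces the same failure, assuming every element of error is NaN (worth verifying with np.isnan(error).all() before plotting):
import numpy as np
import matplotlib.pyplot as plt

plt.hist(np.full(100, np.nan), bins=25)  # ValueError: autodetected range of [nan, nan] is not finite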

tensorflow - ValueError: The shape for decoder/while/Merge_12:0 is not an invariant for the loop

I use tf.contrib.seq2seq.dynamic_decode for decoder training
prediction, final_decoder_state, _ = dynamic_decode(
    custom_decoder
)
with a custom decoder
custom_decoder = CustomDecoder(decoder_cell, helper, decoder_init_state)
and a helper
helper = CustomTrainingHelper(batch_size, targets, stop_targets,
                              num_outs, outputs_per_step, 1.0, False)
And dynamic_decode raises this error:
Traceback (most recent call last):
File "E:/tasks/text_to_speech/tts/tf_seq2seq.py", line 95, in <module>
custom_decoder
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\decoder.py", line 304, in dynamic_decode
swap_memory=swap_memory)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3224, in while_loop
result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2956, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2930, in _BuildLoop
next_vars.append(_AddNextAndBackEdge(m, v))
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 688, in _AddNextAndBackEdge
_EnforceShapeInvariant(m, v)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 632, in _EnforceShapeInvariant
(merge_var.name, m_shape, n_shape))
ValueError: The shape for decoder/while/Merge_12:0 is not an invariant for the loop. It enters the loop with shape (10, 1), but has shape (?, 1) after one iteration. Provide shape invariants using either the `shape_invariants` argument of tf.while_loop or set_shape() on the loop variables.
batch_size is equal to 10. As I understand it, the issue is in tf.while_loop and batch_size. How can I fix this error? Thanks in advance.
You provided too little information to say anything specific. Please follow https://stackoverflow.com/help/mcve in the future.
In general, this error is telling you the following: by default, TensorFlow checks that the variables passed from one iteration of the while loop to the next don't change shape. In your case, the decoder/while/Merge_12:0 tensor originally had a shape of (10, 1), but after one iteration it became (?, 1), meaning that TensorFlow can no longer infer the size of the first dimension.
If you know that the first dimension is really 10, you can use Tensor.set_shape to tell this to TensorFlow.
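A hedged sketch of both options; the variable name here is illustrative, not taken from your code:
# Option 1: assert the static shape on the loop variable whose first
# dimension is known to stay 10.
loop_var.set_shape([10, 1])

# Option 2: declare the varying dimension explicitly, if you control the
# while loop yourself rather than going through dynamic_decode.
# tf.while_loop(cond, body, loop_vars,
#               shape_invariants=[tf.TensorShape([None, 1])])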

tf.contrib.learn yields error message "module has no attribute 'learn' "

Here is a snippet of my code taken directly from the tf.contrib.learn tutorial on tensorflow.org:
# Load Data Sets
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)
Here is the error message:
AttributeError Traceback (most recent call last)
<ipython-input-14-7122d1244c55> in <module>()
11
12 # Load Data Sets
---> 13 training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
14 filename = IRIS_TRAINING,
15 target_dtype = np.int,
AttributeError: 'module' object has no attribute 'learn'
Clearly the module has the attribute learn, since tensorflow.org has a whole section on tf.contrib.learn. What am I doing wrong? All guidance is appreciated.
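One quick diagnostic worth running (an assumption on my part, since the question doesn't show the installed version, and very old TensorFlow releases predate tf.contrib.learn):
import tensorflow as tf

print(tf.__version__)                # compare against the tutorial's requirements
print(hasattr(tf.contrib, 'learn'))  # False would explain the AttributeError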

Computing Edit Distance (feed_dict error)

I've written some code in TensorFlow to compute the edit distance between one string and a set of strings, but I can't figure out the error.
import tensorflow as tf

sess = tf.Session()

# Create input data
test_string = ['foo']
ref_strings = ['food', 'bar']

def create_sparse_vec(word_list):
    num_words = len(word_list)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_list) for yi, y in enumerate(x)]
    chars = list(''.join(word_list))
    return tf.SparseTensor(indices, chars, [num_words, 1, 1])

test_string_sparse = create_sparse_vec(test_string * len(ref_strings))
ref_string_sparse = create_sparse_vec(ref_strings)

sess.run(tf.edit_distance(test_string_sparse, ref_string_sparse, normalize=True))
This code works and when run, it produces the output:
array([[ 0.25],
       [ 1.  ]], dtype=float32)
But when I attempt to do this by feeding the sparse tensors in through sparse placeholders, I get an error.
test_input = tf.sparse_placeholder(dtype=tf.string)
ref_input = tf.sparse_placeholder(dtype=tf.string)
edit_distances = tf.edit_distance(test_input, ref_input, normalize=True)
feed_dict = {test_input: test_string_sparse,
             ref_input: ref_string_sparse}
sess.run(edit_distances, feed_dict=feed_dict)
Here is the error traceback:
Traceback (most recent call last):
File "<ipython-input-29-4e06de0b7af3>", line 1, in <module>
sess.run(edit_distances, feed_dict=feed_dict)
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 597, in _run
for subfeed, subfeed_val in _feed_fn(feed, feed_val):
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 558, in _feed_fn
return feed_fn(feed, feed_val)
File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 268, in <lambda>
[feed.indices, feed.values, feed.shape], feed_val)),
TypeError: zip argument #2 must support iteration
Any idea what is going on here?
TL;DR: For the return type of create_sparse_vec(), use tf.SparseTensorValue instead of tf.SparseTensor.
The problem here comes from the return type of create_sparse_vec(), which is a tf.SparseTensor and is not understood as a feed value in the call to sess.run().
When you feed a (dense) tf.Tensor, the expected value type is a NumPy array (or certain objects that can be converted to an array). When you feed a tf.SparseTensor, the expected value type is a tf.SparseTensorValue, which is similar to a tf.SparseTensor but whose indices, values, and shape properties are NumPy arrays (or certain objects that can be converted to arrays, like the lists in your example).
The following code should work:
def create_sparse_vec(word_list):
    num_words = len(word_list)
    indices = [[xi, 0, yi] for xi, x in enumerate(word_list) for yi, y in enumerate(x)]
    chars = list(''.join(word_list))
    return tf.SparseTensorValue(indices, chars, [num_words, 1, 1])
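With that change, the original feed should work unmodified; a sketch of the full call, which should print the same values as the non-placeholder version:
test_string_sparse = create_sparse_vec(test_string * len(ref_strings))
ref_string_sparse = create_sparse_vec(ref_strings)

feed_dict = {test_input: test_string_sparse,
             ref_input: ref_string_sparse}
sess.run(edit_distances, feed_dict=feed_dict)  # expect [[0.25], [1.0]]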