How can I solve this error?
AttributeError: 'tuple' object has no attribute 'flatten'
for i in indexes.flatten():
    x, y, w, h = boxes[i]
    label = str(classes[class_ids[i]])
    confidence = str(round(confidences[i], 2))
    color = colors[i]
    cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
    cv2.putText(img, label + " " + confidence, (x, y + 20), font, 2, (255, 255, 255), 2)
I don't know if you solved this already.
You have to check for zero-length indexes before the for loop, like this:
if len(indexes) > 0:
    for i in indexes.flatten():
        ....
That way it won't enter the for loop when the length of indexes is 0, which is what causes the error: when there are no detections, indexes comes back as a plain tuple (as the error message shows), and a tuple has no flatten method.
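Applied to your snippet, the guarded loop would look like this:

if len(indexes) > 0:
    for i in indexes.flatten():
        x, y, w, h = boxes[i]
        label = str(classes[class_ids[i]])
        confidence = str(round(confidences[i], 2))
        color = colors[i]
        cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
        cv2.putText(img, label + " " + confidence, (x, y + 20), font, 2, (255, 255, 255), 2)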
Regards!
I'm using the quadprog module to set up an SVM for speech recognition. I took a QP implementation from here: https://github.com/stephane-caron/qpsolvers/blob/master/qpsolvers/quadprog_.py
Here is their implementation:
def quadprog_solve_qp(P, q, G=None, h=None, A=None, b=None, initvals=None,
                      verbose=False):
    if initvals is not None:
        print("quadprog: note that warm-start values ignored by wrapper")
    qp_G = P
    qp_a = -q
    if A is not None:
        if G is None:
            qp_C = -A.T
            qp_b = -b
        else:
            qp_C = -vstack([A, G]).T
            qp_b = -np.insert(h, 0, 0, axis=0)
        meq = A.shape[0]
    else:  # no equality constraint
        qp_C = -G.T if G is not None else None
        qp_b = -h if h is not None else None
        meq = 0
    try:
        return solve_qp(qp_G, qp_a, qp_C, qp_b, meq)[0]
    except ValueError as e:
        if "matrix G is not positive definite" in str(e):
            # quadprog writes G the cost matrix that we write P in this package
            raise ValueError("matrix P is not positive definite")
        raise
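As I understand quadprog's convention, solve_qp(G, a, C, b, meq) solves min 0.5 * x^T G x - a^T x subject to C^T x >= b, with the first meq constraints treated as equalities; the sign flips and transposes above convert the wrapper's problem, min 0.5 * x^T P x + q^T x subject to G x <= h and A x = b, into that form.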
Shapes:
P: (127, 127)
h: (254, 1)
q: (127, 1)
A: (1, 127)
G: (254, 127)
In the original code, qp_b was assigned the hstack of an array arr = array([0]) with h, but arr's shape of (1,) prevented numpy from concatenating the two arrays. I fixed that error by using np.insert to prepend a 0 instead.
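To illustrate, with toy values standing in for h:

import numpy as np

arr = np.array([0])           # shape (1,)
h = np.ones((254, 1))         # shape (254, 1), as above
# np.hstack([arr, h])         # ValueError: mixing 1-D and 2-D arrays
np.insert(h, 0, 0, axis=0)    # shape (255, 1): a 0 row prepended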
When I try quadprog_solve_qp(P, q, G, h, A) I get:
File "----------------------------.py", line 95, in quadprog_solve_qp
return solve_qp(qp_G, qp_a, qp_C, qp_b, meq)[0]
File "quadprog/quadprog.pyx", line 12, in quadprog.solve_qp
ValueError: Buffer has wrong number of dimensions (expected 1, got 2)
I have no idea where it's coming from or what I can do about it. If anyone knows how the quadprog module works, or simply what I might be doing wrong, I would be pleased to hear it.
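Given that the traceback says quadprog expected a 1-D buffer but got a 2-D one, and that q and h above are (127, 1) and (254, 1) column vectors, one thing worth trying (a sketch, not verified) is flattening them to 1-D before the call:

x = quadprog_solve_qp(P, q.ravel(), G, h.ravel(), A)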
I'm trying to write a Keras layer that takes a flat tensor x (containing no zero values, with shape = (batch_size, units)) multiplied by a mask of the same shape, and sorts it so that the masked values are placed first in the output (the order of the element values doesn't matter). For clarity, here is an example (batch_size = 1, units = 8):
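x = [[5, 3, 7, 2, 6, 1, 9, 4]], mask = [[0, 1, 0, 1, 0, 1, 0, 0]] (illustrative values)
x * mask = [[0, 3, 0, 2, 0, 1, 0, 0]], desired output = [[3, 2, 1, 0, 0, 0, 0, 0]]; any ordering of the nonzero values is fine.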
It seems simple but the problem is that I can't find a good solution. Any code or idea is appreciated.
My current code is below; if you know a more efficient way, please let me know.
class Sort(keras.layers.Layer):
    def call(self, inputs):
        x = inputs.numpy()
        nonx, nony = x.nonzero()  # idxs of nonzero elements
        zero = [np.where(x == 0)[0][0], np.where(x == 0)[1][0]]  # idx of first zero
        x_shape = tf.shape(inputs)
        result = np.zeros((x_shape[0], x_shape[1], 2), dtype='int')  # mapping matrix
        result[:, :, 0] += zero[0]
        result[:, :, 1] += zero[1]
        p = np.zeros((x_shape[0]), dtype='int')
        for i, j in zip(nonx, nony):
            result[i, p[i]] = [i, j]
            p[i] += 1
        y = tf.gather_nd(inputs, result)
        return y
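If it helps, a possible vectorized alternative as a sketch (assuming TF >= 1.14 for tf.argsort and the batch_dims argument of tf.gather, and a 0/1 mask): sorting each row by the mask in descending order moves the masked entries to the front without a Python loop.

import tensorflow as tf

def sort_by_mask(x, mask):
    # Per-row indices that put mask = 1 positions first.
    order = tf.argsort(mask, axis=-1, direction='DESCENDING', stable=True)
    # Zero out the unmasked entries, then reorder each row.
    return tf.gather(x * mask, order, batch_dims=1)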
I'm working on an exercise.
I searched for this problem before, but didn't find anything matching my case.
This code seems to work with trainX, but not with trainY.
I have 1672 samples in trainY, stored as a 1-D array, for a single output neuron.
batch_dim = trainX.shape[0]  # a scalar with value 1672
input_dim = windowSize
hidden_dim = 6
output_dim = 1

X = trainX[index:index+batch_dim, :]
Y = trainY[index:index+batch_dim, :]
index = index + batch_dim
The problem seems to be the dimensions, so I tried reshaping:
Y = np.reshape(trainY[index:index+batch_dim, :], -1, 1)
but it doesn't solve anything. The output still works, but the error is still there. I just want the error to go away.
The variable sizes:
batch_dim = 1 (value = 1672)
index = 1 (value = 0)
X: (1672, 3)
Y: (1672,)
The failing line and error:
Y = trainY[index:index+batch_dim, :]
IndexError: too many indices for array
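Note that trainY has shape (1672,), i.e. it is 1-D, so it accepts only a single index; the error reproduces with any 1-D array:

import numpy as np

y = np.arange(5)   # 1-D, shape (5,), like trainY with shape (1672,)
y[0:2]             # OK: one index for a 1-D array
# y[0:2, :]        # IndexError: too many indices for array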
I modified the code of the wide & deep tutorial to read large input from a file using tf.contrib.learn.read_batch_examples. To speed up the training process, I set read_batch_size and got this error: ValueError: All shapes must be fully defined: [TensorShape([]), TensorShape([Dimension(None)])]
My code:
def input_fn_pre(batch_size, filename):
    examples_op = tf.contrib.learn.read_batch_examples(
        filename,
        batch_size=5000,
        reader=tf.TextLineReader,
        num_epochs=5,
        num_threads=5,
        read_batch_size=2500,
        parse_fn=lambda x: tf.decode_csv(x, [tf.constant(['0'], dtype=tf.string)] * len(COLUMNS) * 2500, use_quote_delim=False))
    examples_dict = {}
    for i, col in enumerate(COLUMNS):
        examples_dict[col] = examples_op[:, i]
    feature_cols = {k: tf.string_to_number(examples_dict[k], out_type=tf.float32) for k in CONTINUOUS_COLUMNS}
    feature_cols.update({k: dense_to_sparse(examples_dict[k]) for k in CATEGORICAL_COLUMNS})
    label = tf.string_to_number(examples_dict[LABEL_COLUMN], out_type=tf.int32)
    return feature_cols, label
while with the default parameter settings it works fine:
def input_fn_pre(batch_size, filename):
    examples_op = tf.contrib.learn.read_batch_examples(
        filename,
        batch_size=5000,
        reader=tf.TextLineReader,
        num_epochs=5,
        num_threads=5,
        parse_fn=lambda x: tf.decode_csv(x, [tf.constant(['0'], dtype=tf.string)] * len(COLUMNS), use_quote_delim=False))
    examples_dict = {}
    for i, col in enumerate(COLUMNS):
        examples_dict[col] = examples_op[:, i]
    feature_cols = {k: tf.string_to_number(examples_dict[k], out_type=tf.float32) for k in CONTINUOUS_COLUMNS}
    feature_cols.update({k: dense_to_sparse(examples_dict[k]) for k in CATEGORICAL_COLUMNS})
    label = tf.string_to_number(examples_dict[LABEL_COLUMN], out_type=tf.int32)
    return feature_cols, label
There is not enough explanation in the TensorFlow docs.
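The only differences between the two snippets are the read_batch_size=2500 argument and the record_defaults list being repeated 2500 times in parse_fn. For reference, a minimal sketch of what parse_fn does per line (TF 1.x; hypothetical column names and values):

import tensorflow as tf

COLUMNS = ['age', 'income', 'label']                     # hypothetical
record_defaults = [tf.constant(['0'], dtype=tf.string)] * len(COLUMNS)
line = tf.constant('39,77516,0')
cols = tf.decode_csv(line, record_defaults)              # one string tensor per column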
I did not see any difference between your two code snippets. Could you update your code?
I have the following code:
dotp = np.dot(X[i], w)
mult = -Y[i] * dotp
lhs = Y[i] * X[i]
rhs = logistic(mult)
s += lhs * rhs
And it throws me the following error (truncated for brevity):
File "/Users/leonsas/Projects/temp/learners/learners.py", line 26, in log_likelihood_grad
s += lhs * rhs
File "/usr/local/lib/python2.7/site-packages/numpy/matrixlib/defmatrix.py", line 341, in __mul__
return N.dot(self, asmatrix(other))
`ValueError: matrices are not aligned`
I was expecting lhs to be a column vector and rhs to be a scalar, so that operation should work.
To debug, I printed out the dimensions:
print "lhs", np.shape(lhs)
print "rhs", rhs, np.shape(rhs)
Which outputs:
lhs (1, 18209)
rhs [[ 0.5]] (1, 1)
So it seems that they are compatible for a multiplication. Any thoughts as to what I'm doing wrong?
EDIT: More information about what I'm trying to do.
This code implements a log-likelihood gradient to estimate coefficients: s = sum_i y_i * x_i * logistic(-y_i * z_i) - C * w, where z_i is the dot product of the weights with the x values.
My attempt at implementing this:
def log_likelihood_grad(X, Y, w, C=0.1):
    K = len(w)
    N = len(X)
    s = np.zeros(K)
    for i in range(N):
        dotp = np.dot(X[i], w)
        mult = -Y[i] * dotp
        lhs = Y[i] * X[i]
        rhs = logistic(mult)
        s += lhs * rhs
    s -= C * w
    return s
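(logistic isn't defined in the snippet; presumably it's the standard sigmoid, something like:)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))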
You have a matrix lhs of shape (1, 18209) and rhs of shape (1, 1) and you are trying to multiply them. Since they're of matrix type (as it seems from the stack trace), the * operator translates to dot. Matrix product is defined only for the cases where the number of columns in the first matrix and the number of rows in the second one are equal, and in your case they're not (18209 and 1). Hence the error.
How to fix it: check the maths behind the code and fix the formula. Perhaps you forgot to transpose the first matrix or something like that.
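A minimal illustration of the mismatch, with two element-wise workarounds (a toy width of 3 standing in for 18209):

import numpy as np

lhs = np.matrix(np.ones((1, 3)))  # stands in for the (1, 18209) row
rhs = np.matrix([[0.5]])          # the (1, 1) "scalar"

# lhs * rhs                       # ValueError: matrices are not aligned

s = np.multiply(lhs, rhs)         # element-wise with broadcasting: matrix([[0.5, 0.5, 0.5]])
s = lhs * rhs[0, 0]               # or pull the scalar out explicitly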
In numpy, vector shapes look like (3,). When you try to multiply two incompatible vectors with np.dot(a, b), you get a dimension error; np.outer(a, b) should be used at that point.
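For instance, a small sketch:

import numpy as np

a = np.array([1., 2., 3.])   # shape (3,)
b = np.array([4., 5.])       # shape (2,)
# np.dot(a, b)               # ValueError: shapes (3,) and (2,) not aligned
np.outer(a, b)               # (3, 2) outer product works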