I have a first 3D array of shape (50, 250, 250) whose data points take the values (1, 2, 3, 4, 5). I set up a threshold of 3: data points above it should become 1 and data points below it should become 0. The only exception is when a data point equals 3; in that case a second threshold (threshold1 = 50), based on a second 3D array of shape (50, 250, 250), has to be tested. My question is how to include the two thresholds in my code. In other words, the for loop should check every data point in array 1 and perform the first threshold test; if the data point equals 3, the loop should check the counterpart of that data point in the second array against the second threshold. I have tried the code below, but the results did not make sense.
res1 = []
f1 = numpy.ones((250, 250))
threshold = 3
threshold1 = 30
for i in array1:
    i = i.data
    ii = f1 * i
    ii[ii < threshold] = 0
    ii[ii > threshold] = 1
    res1.append(ii)
    if ii[ii == threshold]:
        for j in array2:
            j = j.data
            jj[jj < threshold1] = 0
            jj[jj > threshold1] = 1
            res1.append(jj)
Array1:
array([[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[3., 3., 3., ..., 0., 0., 0.],
[3., 3., 3., ..., 0., 0., 0.],
[3., 3., 3., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 1.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[3., 3., 3., ..., 0., 0., 0.],
[3., 3., 3., ..., 0., 0., 0.],
[3., 3., 3., ..., 0., 0., 0.]],
Array2:
[[ nan, nan, nan, ..., nan, 0.9839769, 1.7042577],
 [ nan, nan, nan, ..., nan, nan, nan],
 [ nan, nan, nan, ..., 3.2351596, 2.0924768, 1.7604152],
 ...,
 [ nan, nan, nan, ..., 158.48865, 158.48865, 125.888 ],
 [ nan, nan, nan, ..., 158.48865, 158.48865, 158.48865],
 [ nan, nan, nan, ..., 125.88556, 158.48865, 158.48865]],
The produced list (res1):
[array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[1., 1., 1., ..., 0., 0., 0.],
[1., 1., 1., ..., 0., 0., 0.],
[1., 1., 1., ..., 0., 0., 0.]]),
array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[1., 1., 1., ..., 0., 0., 0.],
[1., 1., 1., ..., 0., 0., 0.],
[1., 1., 1., ..., 0., 0., 0.]]),
array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
IIUC, with your second if condition you are trying to check whether there is at least one 3 in that 2D slice of array1, and if so, to pick the 2D slice of array2 at the same position. In that case, you should use the in operator.
for i in range(len(array1)):
    if threshold in array1[i]:
        array2[i][array2[i] < threshold1] = 0
        array2[i][array2[i] > threshold1] = 1
        res1.append(array2[i])
    else:
        array1[i][array1[i] < threshold] = 0
        array1[i][array1[i] > threshold] = 1
        res1.append(array1[i])
The above method is a bit lengthy for numpy. There's a numpy way to do this, too.
array1[array1 < threshold] = 0
array1[array1 > threshold] = 1
array2_condition = np.unique(np.argwhere(array1 == 3)[:,0]) # return the index of array1 if 3 in array1
chosen_array2 = array2[array2_condition]
chosen_array2[chosen_array2 < threshold1] = 0
chosen_array2[chosen_array2 > threshold1] = 1
array2[array2_condition] = chosen_array2 # if you still want array2 values to be changed
res1 = array1
res1[array2_condition] = chosen_array2 # Final result
Update
As the OP mentioned, every 2D slice has at least one 3 in it, so the array2_condition above is not applicable. Instead, we modify array2_condition and use a for loop to change the elements.
res1 = array1
res1[res1 < threshold] = 0
res1[res1 > threshold] = 1
array2_condition = np.argwhere(array1 == 3)
for data in array2_condition:
    if array2[tuple(data)] > threshold1:
        res1[tuple(data)] = 1
    elif array2[tuple(data)] < threshold1:
        res1[tuple(data)] = 0
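For completeness, the same logic can also be written without any Python-level loop. Below is a minimal vectorized sketch, assuming array1 and array2 are plain NumPy arrays of the same shape; the random data is only a stand-in so the snippet runs on its own.
import numpy as np

# Hypothetical stand-ins for the real data, just to make the sketch self-contained
rng = np.random.default_rng(0)
array1 = rng.integers(0, 6, size=(50, 250, 250)).astype(float)
array2 = rng.uniform(0, 200, size=(50, 250, 250))

threshold = 3
threshold1 = 30

# Where array1 equals the first threshold, fall back to testing array2
# against the second threshold; otherwise threshold array1 directly.
res1 = np.where(
    array1 == threshold,
    (array2 > threshold1).astype(float),
    (array1 > threshold).astype(float),
)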
I have a set of integers from a label column in a CSV file: [1, 2, 4, 3, 5, 2, ...]. The number of classes is 5, i.e. they lie in range(1, 6). I want to one-hot encode them using the code below.
y = df.iloc[:,10].values
y = tf.keras.utils.to_categorical(y, num_classes = 5)
y
But this code gives me an error
IndexError: index 5 is out of bounds for axis 1 with size 5
How can I fix this?
If you use tf.keras.utils.to_categorical to one-hot encode the label vector, the integers should start from 0 and go up to num_classes - 1 (source). In your case, you should do as follows:
import tensorflow as tf
import numpy as np
a = np.array([1,2,4,3,5,2,4,2,1])
y_tf = tf.keras.utils.to_categorical(a-1, num_classes = 5)
y_tf
array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 0., 1.],
[0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 1., 0., 0., 0.],
[1., 0., 0., 0., 0.]], dtype=float32)
Or, you can use pd.get_dummies:
import pandas as pd
import numpy as np
a = np.array([1,2,4,3,5,2,4,2,1])
a_pd = pd.get_dummies(a).astype('float32').values
a_pd
array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 0., 1.],
[0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 1., 0., 0., 0.],
[1., 0., 0., 0., 0.]], dtype=float32)
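If you later need to map the one-hot rows back to the original 1-based labels, here is a small sketch, assuming the shifted encoding used above:
import numpy as np

# argmax recovers the 0-based class index; adding 1 restores the original labels
labels = np.argmax(y_tf, axis=1) + 1
print(labels)  # [1 2 4 3 5 2 4 2 1]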
I am trying to figure out how to train my weight matrix so that I end up with a BandRNN.
A BandRNN is a diagonal RNN model with a different number of connections per neuron.
For example, C is the number of connections per neuron.
I found out that there is a way to turn off some of the gradients in a for loop, so that they are prevented from being trained, as follows:
for p in model.input.parameters():
    p.requires_grad = False
But I can't find a proper way to do this so that my matrix becomes a BandRNN.
Hopefully, someone will be able to help me with this issue.
As far as I know, you can only activate/deactivate requires_grad on a whole tensor, not on distinct components of that tensor. Instead, what you could do is zero out the values outside the band.
First create a mask for the band; you could use torch.ones with torch.diagflat:
>>> torch.diagflat(torch.ones(5), offset=1)
By setting the right dimension for torch.ones as well as the right offset you can generate offset diagonal matrices with consistent shapes.
>>> N = 5; i = -1
>>> torch.diagflat(torch.ones(N-abs(i)), offset=i)
tensor([[0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.]])
>>> N = 5; i = 0
>>> torch.diagflat(torch.ones(N-abs(i)), offset=i)
tensor([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 0., 0., 1.]])
>>> N = 5; i = 1
>>> torch.diagflat(torch.ones(N-abs(i)), offset=i)
tensor([[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 0., 0., 1.],
[0., 0., 0., 0., 0.]])
You get the point: summing these matrices element-wise gives us a mask:
>>> N = 5; b = 3
>>> mask = sum(torch.diagflat(torch.ones(N-abs(i)), i) for i in range(-b//2,b//2+1))
>>> mask
tensor([[1., 1., 0., 0., 0.],
[1., 1., 1., 0., 0.],
[1., 1., 1., 1., 0.],
[0., 1., 1., 1., 1.],
[0., 0., 1., 1., 1.]])
Then you can zero out the values outside the band on your nn.Linear:
>>> m = nn.Linear(N, N)
>>> m.weight.data = m.weight * mask
>>> m.weight
Parameter containing:
tensor([[-0.3321, -0.3377, -0.0000, -0.0000, -0.0000],
[-0.4197, 0.1729, 0.2101, 0.0000, 0.0000],
[ 0.3467, 0.2857, -0.3919, -0.0659, 0.0000],
[ 0.0000, -0.4060, 0.0908, 0.0729, -0.1318],
[ 0.0000, -0.0000, -0.4449, -0.0029, -0.1498]], requires_grad=True)
Note that you might need to perform this on each forward pass, as the parameters outside the band might get updated to non-zero values during training. Of course, you can initialize the mask once and keep it in memory.
It would be more convenient to wrap everything into a custom nn.Module; a sketch is shown below.
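As a minimal sketch of such a wrapper (the class name BandLinear and its arguments are my own, hypothetical choices), the mask is built once in __init__ and re-applied on every forward pass:
import torch
import torch.nn as nn
import torch.nn.functional as F

class BandLinear(nn.Module):
    # Square linear layer whose weight is re-masked to a band on each forward pass
    def __init__(self, n, b):
        super().__init__()
        self.linear = nn.Linear(n, n)
        # Same band mask construction as above
        mask = sum(torch.diagflat(torch.ones(n - abs(i)), i)
                   for i in range(-b // 2, b // 2 + 1))
        # A buffer follows the module across devices but is not a trainable parameter
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Weights outside the band never contribute to the output
        return F.linear(x, self.linear.weight * self.mask, self.linear.bias)

m = BandLinear(5, 3)
out = m(torch.randn(2, 5))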
How is the IoU metric calculated for multiple bounding box predictions in Tensorflow Object Detection API ?
Not sure exactly how TensorFlow does it, but here is one way that I recently got it to work, since I didn't find a good solution online. I used NumPy matrices to get the IoU and other metrics (TP, FP, TN, FN) for multi-object detection.
Let's say for this example that your image is 6x6.
import cv2
import numpy as np

empty = np.zeros(36).reshape([6, 6])
array([[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0.]])
And you have the ground truth for 2 objects, one in the bottom left of the image and one smaller one in the top right.
bbox_actual_obj1 = [[0, 3], [2, 5]] # top left coord & bottom right coord
bbox_actual_obj2 = [[4, 0], [5, 1]]
Using OpenCV, you can add these objects to a copy of the empty image array.
actual = empty.copy()
actual = cv2.rectangle(
    actual,
    bbox_actual_obj1[0],
    bbox_actual_obj1[1],
    1,
    -1
)
actual = cv2.rectangle(
    actual,
    bbox_actual_obj2[0],
    bbox_actual_obj2[1],
    1,
    -1
)
array([[0., 0., 0., 0., 1., 1.],
[0., 0., 0., 0., 1., 1.],
[0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 0., 0.],
[1., 1., 1., 0., 0., 0.],
[1., 1., 1., 0., 0., 0.]])
Now let's say that below are our predicted bounding boxes:
bbox_pred_obj1 = [[1, 3], [3, 5]] # top left coord & bottom right coord
bbox_pred_obj2 = [[3, 0], [5, 2]]
Now we do the same thing as above but change the value we assign within the array.
pred = empty.copy()
pred = cv2.rectangle(
    pred,
    bbox_pred_obj1[0],
    bbox_pred_obj1[1],
    2,
    -1
)
pred = cv2.rectangle(
    pred,
    bbox_pred_obj2[0],
    bbox_pred_obj2[1],
    2,
    -1
)
array([[0., 0., 0., 2., 2., 2.],
[0., 0., 0., 2., 2., 2.],
[0., 0., 0., 2., 2., 2.],
[0., 2., 2., 2., 0., 0.],
[0., 2., 2., 2., 0., 0.],
[0., 2., 2., 2., 0., 0.]])
If we convert these arrays to matrices and add them, we get the following result
actual_matrix = np.matrix(actual)
pred_matrix = np.matrix(pred)
combined = actual_matrix + pred_matrix
matrix([[0., 0., 0., 2., 3., 3.],
[0., 0., 0., 2., 3., 3.],
[0., 0., 0., 2., 2., 2.],
[1., 3., 3., 2., 0., 0.],
[1., 3., 3., 2., 0., 0.],
[1., 3., 3., 2., 0., 0.]])
Now all we need to do is count how many times each number appears in the combined matrix to get the TP, FP, TN, and FN counts.
combined = np.squeeze(
    np.asarray(
        pred_matrix + actual_matrix
    )
)
unique, counts = np.unique(combined, return_counts=True)
zipped = dict(zip(unique, counts))
{0.0: 15, 1.0: 3, 2.0: 8, 3.0: 10}
Legend:
True Negative: 0
False Negative: 1
False Positive: 2
True Positive/Intersection: 3
Union: 1 + 2 + 3
IoU: 10 / (3 + 8 + 10) ≈ 0.48
Precision: 10 / (10 + 8) ≈ 0.56
Recall: 10 / (10 + 3) ≈ 0.77
F1: 10 / (10 + 0.5 * (3 + 8)) ≈ 0.65
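As a sketch, the same numbers can be read straight out of the zipped dictionary computed above; the variable names below are just for illustration.
tn = zipped.get(0.0, 0)  # true negatives
fn = zipped.get(1.0, 0)  # false negatives
fp = zipped.get(2.0, 0)  # false positives
tp = zipped.get(3.0, 0)  # true positives / intersection

iou = tp / (tp + fp + fn)         # 10 / 21 ≈ 0.48
precision = tp / (tp + fp)        # 10 / 18 ≈ 0.56
recall = tp / (tp + fn)           # 10 / 13 ≈ 0.77
f1 = tp / (tp + 0.5 * (fp + fn))  # 10 / 15.5 ≈ 0.65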
Each predicted bounding box around an object has an IoU (intersection over union) with the ground-truth box of that object. It is calculated by dividing the overlap area between the predicted box and the ground-truth box by the area of their union (the combined area covered by the two boxes). After calculating all the IoUs for the boxes around an object, the one with the highest IoU is selected as the result. Here it is explained better.
Also you can print the IoU value after this line.
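For reference, here is a minimal sketch of the pairwise box IoU described above, using corner coordinates; it is an illustrative helper, not the TensorFlow Object Detection API implementation.
def box_iou(box_a, box_b):
    # IoU of two axis-aligned boxes given as (x1, y1, x2, y2)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 3, 3, 6), (1, 3, 4, 6)))  # 0.5 for two overlapping 3x3 boxes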
I would like to create a square numpy array such that it starts counting from the diagonal.
Do you know a one-liner for that?
Example with 5x5:
array([[ 1., 2., 3., 4., 5.],
[ 0., 1., 2., 3., 4.],
[ 0., 0., 1., 2., 3.],
[ 0., 0., 0., 1., 2.],
[ 0., 0., 0., 0., 1.]])
In [49]: np.identity(5).cumsum(axis=1).cumsum(axis=1)
Out[49]:
array([[ 1., 2., 3., 4., 5.],
[ 0., 1., 2., 3., 4.],
[ 0., 0., 1., 2., 3.],
[ 0., 0., 0., 1., 2.],
[ 0., 0., 0., 0., 1.]])
>>> mat = np.vstack([np.concatenate((np.zeros(i), np.arange(1, 5 - i + 1))) for i in range(0, 5)])
>>> mat
array([[1., 2., 3., 4., 5.],
[0., 1., 2., 3., 4.],
[0., 0., 1., 2., 3.],
[0., 0., 0., 1., 2.],
[0., 0., 0., 0., 1.]])
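Another possible one-liner, as a sketch based on broadcasting and clipping (generalized to any size n rather than the hard-coded 5):
import numpy as np

n = 5
# Entry (i, j) is (j + 1) - i, clipped at 0, so each row counts up from its diagonal.
mat = np.clip(np.arange(1, n + 1, dtype=float) - np.arange(n)[:, None], 0, None)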