Implementing BandRNN with pytorch and tensorflow

So I am trying to figure out how to train my matrix in a way that I will get a BandRNN.
BandRNN is a diagonal RNN model with a limited number of connections per neuron, where C is the number of connections per neuron.
I found out that there is a way to turn off some of the gradients in a for loop, in a way that prevents them from being trained as follows:
for p in model.input.parameters():
    p.requires_grad = False
But I can't find a proper way to do this such that my matrix becomes a BandRNN.
Hopefully, someone will be able to help me with this issue.

As far as I know, you can only activate/deactivate requires_grad on a whole tensor, not on distinct components of it. Instead, what you can do is zero out the values outside the band.
First create a mask for the band; you can use torch.ones with torch.diagflat:
>>> torch.diagflat(torch.ones(5), offset=1)
By setting the right dimension for torch.ones as well as the right offset you can generate offset diagonal matrices with consistent shapes.
>>> N = 5; i = -1
>>> torch.diagflat(torch.ones(N-abs(i)), offset=i)
tensor([[0., 0., 0., 0., 0.],
        [1., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0.],
        [0., 0., 1., 0., 0.],
        [0., 0., 0., 1., 0.]])
>>> N = 5; i = 0
>>> torch.diagflat(torch.ones(N-abs(i)), offset=i)
tensor([[1., 0., 0., 0., 0.],
        [0., 1., 0., 0., 0.],
        [0., 0., 1., 0., 0.],
        [0., 0., 0., 1., 0.],
        [0., 0., 0., 0., 1.]])
>>> N = 5; i = 1
>>> torch.diagflat(torch.ones(N-abs(i)), offset=i)
tensor([[0., 1., 0., 0., 0.],
        [0., 0., 1., 0., 0.],
        [0., 0., 0., 1., 0.],
        [0., 0., 0., 0., 1.],
        [0., 0., 0., 0., 0.]])
You get the point: summing these matrices element-wise allows us to get a mask:
>>> N = 5; b = 3
>>> mask = sum(torch.diagflat(torch.ones(N-abs(i)), i) for i in range(-b//2, b//2+1))
>>> mask
tensor([[1., 1., 0., 0., 0.],
        [1., 1., 1., 0., 0.],
        [1., 1., 1., 1., 0.],
        [0., 1., 1., 1., 1.],
        [0., 0., 1., 1., 1.]])
Then you can zero out the values outside the band on your nn.Linear:
>>> m = nn.Linear(N, N)
>>> m.weight.data = m.weight * mask
>>> m.weight
Parameter containing:
tensor([[-0.3321, -0.3377, -0.0000, -0.0000, -0.0000],
        [-0.4197,  0.1729,  0.2101,  0.0000,  0.0000],
        [ 0.3467,  0.2857, -0.3919, -0.0659,  0.0000],
        [ 0.0000, -0.4060,  0.0908,  0.0729, -0.1318],
        [ 0.0000, -0.0000, -0.4449, -0.0029, -0.1498]], requires_grad=True)
Note: you might need to do this on each forward pass, since the parameters outside the band can be updated to non-zero values during training. Of course, you can initialize mask once and keep it in memory.
It would be more convenient to wrap everything into a custom nn.Module.
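For instance, here is a minimal sketch of such a module; the BandLinear name and the band argument are my own, and the re-masking on each forward pass follows the note above:

import torch
import torch.nn as nn

class BandLinear(nn.Module):
    """nn.Linear whose weight is constrained to a band around the diagonal."""
    def __init__(self, n, band=3):
        super().__init__()
        self.linear = nn.Linear(n, n)
        # Build the band mask once and keep it as a non-trainable buffer.
        mask = sum(torch.diagflat(torch.ones(n - abs(i)), i)
                   for i in range(-band // 2, band // 2 + 1))
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Re-apply the mask on every forward pass, since optimizer steps
        # may have pushed out-of-band weights away from zero.
        self.linear.weight.data *= self.mask
        return self.linear(x)

layer = BandLinear(5, band=3)
out = layer(torch.randn(2, 5))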

Related

Defining a 2-d numpy array from values in 3-d numpy array

I have a 3-D numpy array representing a model domain of 39 layers, 279 rows, 153 columns. The values in the array are either 0 or 1 and signify if the cell in the domain is inactive or active, respectively. I am trying to create a 2-D array of shape 279 rows and 153 columns where the array values equal the layer number for the uppermost active layer in the grid. Essentially, at each row, col location I want to loop through the layers to find the first one that is a 1 and not a 0 and then put that layer number in the 2-D array at that row, col location. For example:
If a four layer (layers 0-3) array looks like this:
array([[[ 0., 1., 0., 0.],
        [ 1., 0., 0., 0.],
        [ 1., 0., 0., 0.]],
       [[ 0., 1., 1., 0.],
        [ 1., 1., 0., 0.],
        [ 1., 1., 0., 0.]],
       [[ 0., 0., 1., 1.],
        [ 0., 1., 1., 0.],
        [ 0., 1., 1., 0.]],
       [[ 0., 0., 1., 1.],
        [ 0., 1., 1., 1.],
        [ 0., 1., 1., 1.]]])
The 2-D array should look like this:
array([[ 0., 0., 1., 2.],
       [ 0., 1., 2., 3.],
       [ 0., 1., 2., 3.]])
If the row-col location is not active (not equal to 1) in any layer, the value in the resulting array should be 0 (like at 1,1), the same as if it were active in layer 0.
I have tried modifying a couple of solutions where the z-axis values are summed, or averaged, but can't seem to figure out how to get exactly what I am looking for.
You could try numpy.argmax:
import numpy as np

a = np.array([[[ 0., 1., 0., 0.],
               [ 1., 0., 0., 0.],
               [ 1., 0., 0., 0.]],
              [[ 0., 1., 1., 0.],
               [ 1., 1., 0., 0.],
               [ 1., 1., 0., 0.]],
              [[ 0., 0., 1., 1.],
               [ 0., 1., 1., 0.],
               [ 0., 1., 1., 0.]],
              [[ 0., 0., 1., 1.],
               [ 0., 1., 1., 1.],
               [ 0., 1., 1., 1.]]])
print(np.argmax(a, 0))
[[0 0 1 2]
 [0 1 2 3]
 [0 1 2 3]]
This works because argmax returns the index of the first occurrence of the maximum value along the given axis (in this case the 0th axis).
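This also covers the all-inactive case from the question: when a row-col location is 0 in every layer, the maximum along axis 0 is 0 and argmax returns the first index, i.e. layer 0. A quick check:

import numpy as np

col = np.zeros(4)      # a location that is inactive in all 4 layers
print(np.argmax(col))  # 0, the same value as if it were active in layer 0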

One-hot encode labels in keras

I have a set of integers from a label column in a CSV file - [1,2,4,3,5,2,..]. The number of classes is 5, i.e. the labels range from 1 to 5. I want to one-hot encode them using the below code.
y = df.iloc[:,10].values
y = tf.keras.utils.to_categorical(y, num_classes = 5)
y
But this code gives me an error
IndexError: index 5 is out of bounds for axis 1 with size 5
How can I fix this?
If you use tf.keras.utils.to_categorical to one-hot encode the label vector, the integers must lie in the range 0 to num_classes - 1 (see the documentation). In your case, you should do as follows:
import tensorflow as tf
import numpy as np
a = np.array([1,2,4,3,5,2,4,2,1])
y_tf = tf.keras.utils.to_categorical(a-1, num_classes = 5)
y_tf
array([[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 0., 1.],
       [0., 1., 0., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 1., 0., 0., 0.],
       [1., 0., 0., 0., 0.]], dtype=float32)
Or, you can use pd.get_dummies:
import pandas as pd
import numpy as np
a = np.array([1,2,4,3,5,2,4,2,1])
a_pd = pd.get_dummies(a).astype('float32').values
a_pd
array([[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 0., 1.],
       [0., 1., 0., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 1., 0., 0., 0.],
       [1., 0., 0., 0., 0.]], dtype=float32)
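Another option, if you are already working in TensorFlow, is tf.one_hot, which takes the shifted integer labels and a depth; this is simply an alternative to the two approaches above:

import tensorflow as tf
import numpy as np

a = np.array([1, 2, 4, 3, 5, 2, 4, 2, 1])
# Shift the labels to start at 0, then one-hot encode with depth 5.
y = tf.one_hot(a - 1, depth=5)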

how to fill particular indices of a dimension of a tensor with a constant value k in tensorflow?

I am trying to find a way to replace certain indices of a dimension in tensor with a constant value k. Something similar to index_fill_ in PyTorch.
I have checked tensor_scatter_nd_update, but that requires the entire tensor along with the indices and values to be replaced. It requires the indices to be given with respect to the whole tensor, not just one particular dimension, and it requires the values to be in the form of a tensor rather than just a single constant. I am looking for something simpler.
If anyone knows any of this, could you please provide a solution or a direction I should be looking in? Thank you.
You haven't given an example but you could slice and assign like this.
import tensorflow as tf

aa = tf.Variable(tf.zeros([10, 4]))
tensor = tf.constant(10, shape=(4, 3))
aa[0:4, 1:4].assign(tf.ones_like(tensor, dtype=tf.float32))
print(aa)
aa[0:4, 1:4].assign([[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]])
print(aa)
Both formats of assign print this.
array([[0., 1., 1., 1.],
       [0., 1., 1., 1.],
       [0., 1., 1., 1.],
       [0., 1., 1., 1.],
       [0., 0., 0., 0.],
       [0., 0., 0., 0.],
       [0., 0., 0., 0.],
       [0., 0., 0., 0.],
       [0., 0., 0., 0.],
       [0., 0., 0., 0.]], dtype=float32)>
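If you need an index_fill_-style operation on an ordinary (immutable) tensor rather than a tf.Variable, tf.tensor_scatter_nd_update can still be used per dimension. Here is a minimal sketch, assuming you want to fill whole rows along dimension 0 with a constant k:

import tensorflow as tf

x = tf.zeros([10, 4])
k = 7.0
rows = tf.constant([[2], [5]])  # indices along dimension 0
# One full-row update per index, each row filled with the constant k.
updates = tf.fill([2, 4], k)
x_filled = tf.tensor_scatter_nd_update(x, rows, updates)
print(x_filled)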

How is the IoU calculated for multiple bounding box predictions in Tensorflow Object Detection API?

How is the IoU metric calculated for multiple bounding box predictions in Tensorflow Object Detection API ?
Not sure exactly how TensorFlow does it, but here is one way that I recently got it to work, since I didn't find a good solution online. I used numpy matrices to get the IoU and other metrics (TP, FP, TN, FN) for multi-object detection.
Let's say for this example that your image is 6x6.
import numpy as np
import cv2

empty = np.zeros(36).reshape([6, 6])
array([[0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0.]])
And you have the ground truth for 2 objects, one in the bottom left of the image and one smaller one in the top right.
bbox_actual_obj1 = [(0, 3), (2, 5)]  # top left coord & bottom right coord
bbox_actual_obj2 = [(4, 0), (5, 1)]
Using OpenCV, you can add these objects to a copy of the empty image array.
actual = empty.copy()
actual = cv2.rectangle(
    actual,
    bbox_actual_obj1[0],
    bbox_actual_obj1[1],
    1,
    -1
)
actual = cv2.rectangle(
    actual,
    bbox_actual_obj2[0],
    bbox_actual_obj2[1],
    1,
    -1
)
array([[0., 0., 0., 0., 1., 1.],
       [0., 0., 0., 0., 1., 1.],
       [0., 0., 0., 0., 0., 0.],
       [1., 1., 1., 0., 0., 0.],
       [1., 1., 1., 0., 0., 0.],
       [1., 1., 1., 0., 0., 0.]])
Now let's say that below are our predicted bounding boxes:
bbox_pred_obj1 = [(1, 3), (3, 5)]  # top left coord & bottom right coord
bbox_pred_obj2 = [(3, 0), (5, 2)]
Now we do the same thing as above but change the value we assign within the array.
pred = empty.copy()
pred = cv2.rectangle(
    pred,
    bbox_pred_obj1[0],
    bbox_pred_obj1[1],
    2,
    -1
)
pred = cv2.rectangle(
    pred,
    bbox_pred_obj2[0],
    bbox_pred_obj2[1],
    2,
    -1
)
array([[0., 0., 0., 2., 2., 2.],
       [0., 0., 0., 2., 2., 2.],
       [0., 0., 0., 2., 2., 2.],
       [0., 2., 2., 2., 0., 0.],
       [0., 2., 2., 2., 0., 0.],
       [0., 2., 2., 2., 0., 0.]])
If we convert these arrays to matrices and add them, we get the following result
actual_matrix = np.matrix(actual)
pred_matrix = np.matrix(pred)
combined = actual_matrix + pred_matrix
matrix([[0., 0., 0., 2., 3., 3.],
        [0., 0., 0., 2., 3., 3.],
        [0., 0., 0., 2., 2., 2.],
        [1., 3., 3., 2., 0., 0.],
        [1., 3., 3., 2., 0., 0.],
        [1., 3., 3., 2., 0., 0.]])
Now all we need to do is count the occurrences of each number in the combined matrix to get the TP, FP, TN, FN rates.
combined = np.squeeze(
    np.asarray(
        pred_matrix + actual_matrix
    )
)
unique, counts = np.unique(combined, return_counts=True)
zipped = dict(zip(unique, counts))
{0.0: 15, 1.0: 3, 2.0: 8, 3.0: 10}
Legend:
True Negative: 0
False Negative: 1
False Positive: 2
True Positive/Intersection: 3
Union: 1 + 2 + 3
IoU: 10 / (3 + 8 + 10) ≈ 0.48
Precision: 10 / (10 + 8) ≈ 0.56
Recall: 10 / (10 + 3) ≈ 0.77
F1: 10 / (10 + 0.5 * (3 + 8)) ≈ 0.65
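In code, those metrics fall straight out of the zipped counts computed above:

tn, fn, fp, tp = zipped[0.0], zipped[1.0], zipped[2.0], zipped[3.0]
iou = tp / (fn + fp + tp)         # 10 / 21 ≈ 0.48
precision = tp / (tp + fp)        # 10 / 18 ≈ 0.56
recall = tp / (tp + fn)           # 10 / 13 ≈ 0.77
f1 = tp / (tp + 0.5 * (fn + fp))  # 10 / 15.5 ≈ 0.65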
Each bounding box around an object has an IoU (intersection over union) with the ground-truth box of that object. It is calculated by dividing the overlap area between the predicted bounding box and the ground-truth box by the area of their union (the sum of the two box areas minus the overlap). After calculating all the IoUs for the boxes around an object, the one with the highest IoU is selected as the result. Here it is explained better.
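As a concrete illustration of that formula, here is a minimal sketch of a pairwise IoU computed directly from corner coordinates; the helper name and the (x1, y1, x2, y2) convention are my own:

def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # The intersection is empty when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

print(iou((0, 3, 3, 6), (1, 3, 4, 6)))  # 0.5: overlap 6, union 9 + 9 - 6 = 12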

Logical addressing in numpy messes up other matrices

I have just found a problem and I don't know if it is meant to be this way or if I am just doing it wrong. When I use logical addressing in a numpy matrix to change all the values of the matrix that are, say, equal to 1, all other matrices that somehow have something to do with this matrix are also modified.
In [1]: import numpy as np
In [2]: from numpy import matrix as mtx
In [3]: A=mtx(np.eye(6))
In [4]: A
Out[4]:
matrix([[ 1.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  1.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  1.]])
In [5]: B=A
In [6]: C=B
In [7]: D=C
In [8]: A[A==1]=5
In [9]: A
Out[9]:
matrix([[ 5.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  5.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  5.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  5.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  5.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  5.]])
In [10]: B
Out[10]:
matrix([[ 5.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  5.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  5.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  5.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  5.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  5.]])
In [11]: C
Out[11]:
matrix([[ 5.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  5.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  5.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  5.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  5.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  5.]])
In [12]: D
Out[12]:
matrix([[ 5.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  5.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  5.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  5.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  5.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  5.]])
Can anyone tell me what I am doing wrong? Is this a bug?
This is not a bug. Writing B = A in Python means that both B and A point to the same object. You need to copy the matrix.
>>> import numpy as np
>>> from numpy import matrix as mtx
>>> A = mtx(np.eye(6))
>>> B = A.copy()
>>> C = A
#Check memory locations.
>>> id(A)
19608352
>>> id(C)
19608352  # Same object as A
>>> id(B)
19607992  # Different object from A
>>> A[A==1] = 5
>>> B  # B is a different object from A
matrix([[ 1.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  1.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  1.]])
>>> C  # C is the same object as A
matrix([[ 5.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  5.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  5.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  5.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  5.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  5.]])
The same issue can be seen with a Python list:
>>> A = [5,3]
>>> B = A
>>> B[0] = 10
>>> A
[10, 3]
Note that this is different from returning a numpy view, as in this case:
>>> A = mtx(np.eye(6))
>>> B = A[0] #B is a view and now points to the first row of A
>>> id(A)
28088720
>>> id(B) #Different objects!
28087568
#B still points to the memory location of A's first row, but through numpy trickery
>>> B
matrix([[ 1., 0., 0., 0., 0., 0.]])
>>> B *= 5 #In place multiplication, updates B which is the same as A's first row
>>> A
matrix([[ 5.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  1.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  1.]])
As the view B points to the first row of A, A is changed. Now let's force a copy.
>>> B = B*10 #Assigns B*10 to a different chunk of memory
>>> A
matrix([[ 5.,  0.,  0.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  1.,  0.],
        [ 0.,  0.,  0.,  0.,  0.,  1.]])
>>> B
matrix([[ 50., 0., 0., 0., 0., 0.]])
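As a side note, a direct way to check for this kind of aliasing (including views, which an id() comparison cannot detect) is np.shares_memory; a quick sketch:

import numpy as np

A = np.matrix(np.eye(6))
B = A            # same object
C = A.copy()     # independent copy
D = A[0]         # view into A's first row
print(np.shares_memory(A, B))  # True
print(np.shares_memory(A, C))  # False
print(np.shares_memory(A, D))  # True: a view aliases its parent's memory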