How to reconfigure the YOLO2 output layer for a specific volume?

Hi DL4J stackers, is it possible to get a YOLO variant with parameters other than the 5 default ones [x, y, w, h, p]? Here's what I have from the default dl4j-examples repo on GitHub, for an image divided into a 13 x 13 grid:
...
graphBuilder.addLayer("convolution2d_23",
        new ConvolutionLayer.Builder(1, 1)
                .nIn(1024)
                .nOut(nBoxes * (5 + nClasses)) // don't want the 5 default params always
                .weightInit(WeightInit.XAVIER)
                .stride(1, 1)
                .convolutionMode(ConvolutionMode.Same)
                .weightInit(WeightInit.RELU)
                .activation(Activation.IDENTITY)
                .cudnnAlgoMode(cudnnAlgoMode)
                .build(),
        "activation_22")
    .addLayer("outputs",
        new Yolo2OutputLayer.Builder()
                .boundingBoxPriors(priors)
                .build(),
        "convolution2d_23")
    .setOutputs("outputs");
graphBuilder.build();
...
I need a reconfiguration of the Yolo2OutputLayer, or a custom Yolo2OutputLayer class, for an output layer capable of outputting an arbitrary volume. Currently I need to output a 13 x 13 x 80 volume in which each unit slice is 1 x 1 x 80 = 1 x 1 x 2 × [x, y, w, h, p, c, a0, a1, ..., a31], where 2 is the number of bounding boxes per cell:
x - bounding box x coordinate (1 value)
y - bounding box y coordinate (1 value)
w - bounding box width (1 value)
h - bounding box height (1 value)
p - bounding box object confidence (1 value)
c - bounding box class, 3 classes (3 values)
a0...a31 - my custom parameters (32 values)
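For reference, here is a minimal sketch of the depth arithmetic behind that volume (plain Python with illustrative names, not DL4J API), i.e. the value that would replace nBoxes * (5 + nClasses) in the convolution's nOut:
# Per-cell depth for the requested custom YOLO output volume.
# All names are illustrative; this is not DL4J code.
n_boxes = 2        # bounding boxes per grid cell
n_box_params = 5   # x, y, w, h, p
n_classes = 3      # c
n_custom = 32      # a0..a31

depth_per_cell = n_boxes * (n_box_params + n_classes + n_custom)
print(depth_per_cell)  # 80, so the full output volume is 13 x 13 x 80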

Related

Percentage weighting given two variables to equal a target

I have a target of target = 11.82 with two variables
x = 9
y = 15
How do I find the percentage weighting that would blend x and y to equal my target, e.g. 55% of x and 45% of y? What function is the most efficient way to calculate a weighting that hits my target?
Looking at it again, what I think you want is really two equations:
9x + 15y = 11.82
x + y = 1
Solving that system of equations is pretty fast on pen and paper (just do linear combination). Or you could use sympy to solve the system of linear equations:
>>> from sympy import *
>>> from sympy.solvers.solveset import linsolve
>>> x, y = symbols('x, y')
>>> linsolve([x + y - 1, 9 * x + 15 * y - 11.82], (x, y)) # make 0 on right by subtraction
FiniteSet((0.53, 0.47))
We can confirm this by substitution:
>>> 9 * 0.53 + 15 * 0.47
11.82
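Equivalently, because the weights must sum to 1, the system has a closed-form solution you can compute without sympy. A minimal sketch in plain Python (here wx and wy are the weights, while a and b are the two values from the question):
# Blend weights for target = wx * a + wy * b with wx + wy = 1
a, b, target = 9, 15, 11.82
wy = (target - a) / (b - a)  # weight on the larger value
wx = 1 - wy                  # weight on the smaller value
print(round(wx, 2), round(wy, 2))  # 0.53 0.47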

Fast R-CNN and problems with spatialScale in BrainScript

I have the following code:
model (features, rois) = {
    convOut = convLayers (features)
    roiOut = ROIPooling (convOut, rois, (9:9), spatialScale=64.0/196.0)
    z = fcLayers (roiOut)
}.z
Original taken from: cntk\Examples\Image\Detection\FastRCNN\BrainScript
What is spatialScale in ROIPooling and how do I calculate it?
I have found this in the output from cntk.exe:
Validating --> z.convOut.z.rn3.r.r = RectifiedLinear (z.convOut.z.rn3.r.r._) : [49 x 49 x 64 x *] -> [49 x 49 x 64 x *]
Validating --> rois = InputValue() : -> [4 x 1000 x *]
Validating --> z.roiOut = ROIPooling (z.convOut.z.rn3.r.r, rois) : [49 x 49 x 64 x *], [4 x 1000 x *] -> [9 x 9 x 64 x 1000 x *]
Spatial scale is the ratio of the spatial resolution of the input to the ROI pooling layer to the spatial resolution of the input image to the network. 1/16.0 is the value used in the original Fast and Faster R-CNN implementations; the value depends on the network.
Pretty much, spatial scale is the scale of the ROI pooling input relative to the original image.
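As a minimal numeric sketch of that ratio (plain Python; the 49 x 49 comes from the validation log above, while treating the network input as 196 x 196 is an assumption based on the 196.0 in the snippet):
# spatialScale = spatial resolution of the ROIPooling input / spatial resolution of the network input
feature_map_size = 49.0    # convOut is 49 x 49 per the validation output
input_image_size = 196.0   # assumed network input width/height

spatial_scale = feature_map_size / input_image_size
print(spatial_scale)       # 0.25, i.e. the feature map is 1/4 of the input resolution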
Thanks,
Emad

How can I find (generate) data points from a shape in 2D in MATLAB? For example, the letters A, B, and C.

How can I find or generate data points from a shape in 2D in MATLAB? For example, the letters A, B, and C.
You can use fill().
An example for an octagon (see https://www.mathworks.com/help/matlab/ref/fill.html):
% Generate the points required for the fill.
t = (1/16:1/8:1)'*2*pi; % using 1/8 steps we get an 8 sided object.
x = cos(t);
y = sin(t);
% fill the data
fill(x,y,'r')
axis square % prevent skewing the result.
An example of generating the x y coordinates of a rectangle with an offset of (5,5):
x=[5 5 25 25 5]
y=[5 15 15 5 5]
You have 5 points because you need to include the final point to complete the path (I believe). Follow the path when collecting the x coordinates and the y coordinates: you can see we start at (5,5), then move to (5,15), so the first part of the path is
x=[5 5 ...
y=[5 15 ...
If you want to generate the coordinates automatically, you could use a program like Inkscape (a vector graphics program) to help you convert a character to paths; here is a simple example drawn with the pen tool:
The points are given by
m 0,1052.3622 5,-10 5,0 5,10 z
The 1052.3622 is very large, but only because I placed my shape at the bottom of the page; if we set this to 0,0 the shape would go to the top of the page.
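As a minimal sketch (plain Python, not Inkscape or MATLAB) of how that relative path expands into absolute (x, y) points that could then be passed to fill():
# Expand the relative path "m 0,1052.3622 5,-10 5,0 5,10 z" into absolute points.
start = (0.0, 1052.3622)               # "m x,y" sets the starting point
deltas = [(5, -10), (5, 0), (5, 10)]   # following pairs are relative moves; "z" closes the shape

points = [start]
for dx, dy in deltas:
    px, py = points[-1]
    points.append((px + dx, py + dy))
print(points)
# -> (0, 1052.3622), (5, 1042.3622), (10, 1042.3622), (15, 1052.3622)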

Transform a vector to another frame of reference

I have a green vehicle which will shortly collide with a blue object (which is 200 away from the cube)
It has a Kinect depth camera D at [-100,0,200] which sees the corner of the cube (grey sphere)
The measured depth is 464 at 6.34° in the X plane and 12.53° in the Y plane.
I want to calculate the position of the corner as it would appear if there was a camera F at [150,0,0], which would see this:
In other words, transform the red vector into the yellow vector. I know that this is achieved with a transformation matrix, but I can't find out how to compute the matrix from the D-F vector [250,0,-200] or how to use it; my high-school maths dates back 40 years.
math.se has a similar question but it doesn't cover my problem, and I can't find anything on robotics.se either.
I realise that I should show some code that I've tried, but I don't know where to start. I would be very grateful if somebody could help me to solve this.
ROS provides the tf library which allows you to transform between frames. You can simply set a static transform between the pose of your camera and the pose of your desired location. Then, you can get the pose of any point detected by your camera in the reference frame of your desired point on your robot. ROS tf will do everything you need and everything I explain below.
The longer answer is that you need to construct a transformation tree. First, compute the static transformation between your two poses. A pose is a 7-dimensional transformation including a translation and orientation. This is best represented as a quaternion and a 3D vector.
Now, for all poses in the reference frame of your kinect, you must transform them to your desired reference frame. Let's call this frame base_link and your camera frame camera_link.
I'm going to go ahead and decide that base_link is the parent of camera_link. Technically these transformations are bidirectional, but because you may need a transformation tree, and because ROS cares about this, you'll want to decide who is the parent.
To convert rotation from camera_link to base_link, you need to compute the rotational difference. This can be done by multiplying the quaternion of base_link's orientation by the conjugate of camera_link's orientation. Here's a super quick Python example:
def rotDiff(self, q1: Quaternion, q2: Quaternion) -> Quaternion:
    """Finds the quaternion that, when applied to q1, will rotate an element to q2."""
    conjugate = Quaternion(q2.qx * -1, q2.qy * -1, q2.qz * -1, q2.qw)
    return self.rotAdd(q1, conjugate)

def rotAdd(self, q1: Quaternion, q2: Quaternion) -> Quaternion:
    """Finds the quaternion that is the equivalent to the rotation caused by both input quaternions applied sequentially."""
    w1 = q1.qw
    w2 = q2.qw
    x1 = q1.qx
    x2 = q2.qx
    y1 = q1.qy
    y2 = q2.qy
    z1 = q1.qz
    z2 = q2.qz
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
    z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
    return Quaternion(x, y, z, w)
Next, you need to add the vectors. The naive approach is to simply add them, but you need to account for rotation when doing so; what you really need is a coordinate transformation. The position of camera_link relative to base_link is some 3D vector; based on your drawing, this is [-250, 0, 200]. Next, we need to reproject the vectors to your points of interest into the rotational frame of base_link. That is, all the points your camera sees at 12.53 degrees, which appear to lie on the z = 0 plane to your camera, are actually on a 12.53 degree plane relative to base_link, and you need to find out what their coordinates are relative to your camera as if your camera were in the same orientation as base_link.
For details on the ensuing math, read this PDF (particularly starting at page 9).
To accomplish this, we need to find your vector's components in base_link's reference frame. I find that it's easiest to read if you convert the quaternion to a rotation matrix, but there is an equivalent direct approach.
To convert a quaternion to a rotation matrix:
def Quat2Mat(self, q: Quaternion) -> list:
    """Convert a quaternion to a 3x3 rotation matrix (a list of three row lists)."""
    m00 = 1 - 2 * q.qy**2 - 2 * q.qz**2
    m01 = 2 * q.qx * q.qy - 2 * q.qz * q.qw
    m02 = 2 * q.qx * q.qz + 2 * q.qy * q.qw
    m10 = 2 * q.qx * q.qy + 2 * q.qz * q.qw
    m11 = 1 - 2 * q.qx**2 - 2 * q.qz**2
    m12 = 2 * q.qy * q.qz - 2 * q.qx * q.qw
    m20 = 2 * q.qx * q.qz - 2 * q.qy * q.qw
    m21 = 2 * q.qy * q.qz + 2 * q.qx * q.qw
    m22 = 1 - 2 * q.qx**2 - 2 * q.qy**2
    result = [[m00, m01, m02], [m10, m11, m12], [m20, m21, m22]]
    return result
Now that your rotation is represented as a rotation matrix, it's time to do the final calculation.
Following the MIT lecture notes from my link above, I'll arbitrarily name the vector to your point of interest from the camera A.
Find the rotation matrix that corresponds with the quaternion that represents the rotation between base_link and camera_link and simply perform a matrix multiplication. If you're in Python, you can use numpy to do this, but in the interest of being explicit, here is the long form of the multiplication:
def coordTransform(self, M: list, A: list) -> list:
    """
    M is my rotation matrix that represents the rotation between my frames.
    A is the vector of interest in the frame I'm rotating from.
    APrime is A, but in the frame I'm rotating to.
    """
    APrime = []
    for i in range(3):
        # Each output component is the dot product of row i of M with A.
        APrime.append(A[0] * M[i][0] + A[1] * M[i][1] + A[2] * M[i][2])
    return APrime
Now, the vectors from camera_link are represented as if camera_link and base_link share an orientation.
Now you may simply add the static translation between camera_link and base_link (or subtract base_link -> camera_link) and the resulting vector will be your point's new translation.
Putting it all together, you can now gather the translation and orientation of every point your camera detects relative to any arbitrary reference frame to gather pose data relevant to your application.
You can put all of this together into a function simply called tf() and stack these transformations up and down a complex transformation tree. Simply add all the transformations up to a common ancestor and subtract all the transformations down to your target node in order to find the transformation of your data between any two arbitrary related frames.
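To make the flow concrete, here is a minimal end-to-end sketch using the helper sketches from this answer (treated as module-level functions, i.e. without the self parameter, and with made-up numbers rather than the asker's exact geometry):
# Illustrative only: carry one camera measurement into base_link.
relative_rotation = Quaternion(0, 0, 0, 1)   # rotation from camera_link to base_link (identity here)
camera_translation = [-250, 0, 200]          # camera_link position expressed in base_link

point_in_camera = [464, 0, 0]                # a point measured by the camera (made up)

R = Quat2Mat(relative_rotation)              # quaternion -> rotation matrix
point_rotated = coordTransform(R, point_in_camera)  # express the vector in base_link's orientation
point_in_base = [p + t for p, t in zip(point_rotated, camera_translation)]  # add the static translation
print(point_in_base)                         # [214, 0, 200] with this identity rotation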
Edit: Hendy pointed out that it's unclear what Quaternion() class I refer to here.
For the purposes of this answer, this is all that's necessary:
class Quaternion():
    def __init__(self, qx: float, qy: float, qz: float, qw: float):
        self.qx = qx
        self.qy = qy
        self.qz = qz
        self.qw = qw
But if you want to make this class super handy, you can define __mul__(self, other: Quaternion) and __rmul__(self, other: Quaternion) to perform quaternion multiplication (order matters, so make sure to do both!). conjugate(self), toEuler(self), toRotMat(self), normalize(self) may also be handy additions.
Note that due to quirks in Python's typing, the above other: Quaternion is only for clarity. You'll need a longer-form if type(other) != Quaternion: raise TypeError('You can only multiply quaternions with other quaternions') error-handling block to make that into valid Python :)
The following definitions are not necessary for this answer, but they may prove useful to the reader.
import numpy as np
def __mul__(self, other):
    if type(other) != Quaternion:
        print("Quaternion multiplication only works with other quats")
        raise TypeError
    r1 = self.qw
    r2 = other.qw
    v1 = [self.qx, self.qy, self.qz]
    v2 = [other.qx, other.qy, other.qz]
    rPrime = r1 * r2 - np.dot(v1, v2)
    vPrimeA = np.multiply(r1, v2)
    vPrimeB = np.multiply(r2, v1)
    vPrimeC = np.cross(v1, v2)
    vPrimeD = np.add(vPrimeA, vPrimeB)
    vPrime = np.add(vPrimeD, vPrimeC)
    x = vPrime[0]
    y = vPrime[1]
    z = vPrime[2]
    w = rPrime
    return Quaternion(x, y, z, w)

def __rmul__(self, other):
    if type(other) != Quaternion:
        print("Quaternion multiplication only works with other quats")
        raise TypeError
    r1 = other.qw
    r2 = self.qw
    v1 = [other.qx, other.qy, other.qz]
    v2 = [self.qx, self.qy, self.qz]
    rPrime = r1 * r2 - np.dot(v1, v2)
    vPrimeA = np.multiply(r1, v2)
    vPrimeB = np.multiply(r2, v1)
    vPrimeC = np.cross(v1, v2)
    vPrimeD = np.add(vPrimeA, vPrimeB)
    vPrime = np.add(vPrimeD, vPrimeC)
    x = vPrime[0]
    y = vPrime[1]
    z = vPrime[2]
    w = rPrime
    return Quaternion(x, y, z, w)

def conjugate(self):
    return Quaternion(self.qx * -1, self.qy * -1, self.qz * -1, self.qw)

How to determine if a point is within a rectangle, given latitude/longitude coordinates?

If given x/y coordinates for all 4 corners of a rectangle, and then another x/y point, it's easy to determine whether the point is within the rectangle when the top left is 0,0.
But what if the coordinates are latitude/longitude, where they can be negative (please see attached)? Is there a formula that can work in this case?
Mathematically, you could use inequalities to determine that.
Edit: when doing the example, I noticed you put the coordinates in the inverse format (y,x) instead of (x,y). In my example I use the (x,y) format, so I just inverted the order to ease my explanation.
Let's say
A = (-130,10)
B = (-100,20)
C = (-125,-5)
D = (-100,5)
You build an inequality from each of your rectangle's edges:
if( (x,y) < AB && (x,y) > AC && (x,y) > CD && (x,y) < BD) then
(x,y) belongs to rectangle ABCD
end if
If all the inequalities are true, then your point belongs to the rectangle.
Concrete example:
AB represents the segment, and it can be described by a line formula: y = ax + b
To determine a (the slope of the line, not the point A), you take the quotient
(Ay - By) / (Ax - Bx)
where Ay means the Y component of point A, which is 10 in this case.
That formula gives us
(10 - 20) / (-130 - -100) = -10 / -30 = 1/3
Now we have
y = x/3 + b
We now determine b. We know that both points A and B belong to that line, so we can take either one and substitute its x and y values into the formula. Let's take point B:
20 = -100/3 + b
Isolating b gives us:
b = 20 + 100/3 = 160/3 ≈ 53.33
We now have
y = x/3 + 160/3
The rectangle lies below the edge AB (points C and D have smaller y values), so if we want to determine whether a point Z (10, 15) satisfies the condition for this edge, we first check whether
y < x/3 + 160/3
Then, in the case of Z(10, 15):
15 < 10/3 + 160/3
15 < 170/3
15 < 56.67 is true
So the condition is met for this edge. You need to apply this same logic to each edge.
Note that to determine whether to use > or <, you need to check on which side of the edge (bigger or smaller y at a given x) the rest of the rectangle lies.
You can use < and > if you want a point to be strictly inside the rectangle; <= and >= if a point on the rectangle's edge belongs to the rectangle too. You decide.
I hope that my explanation is clear. Feel free to ask more if some points are unclear.
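For completeness, here is a minimal Python sketch of the same side-of-each-edge idea. It uses the general (cross-product) form of each edge's line rather than y = ax + b, so that vertical edges such as BD in this example do not cause a division by zero; the point is inside when it falls on the same side of every edge as the opposite corner:
def side(p, q, z):
    # Sign of where z falls relative to the line through p and q
    # (general form of the edge line; no slope/intercept needed).
    return (q[0] - p[0]) * (z[1] - p[1]) - (q[1] - p[1]) * (z[0] - p[0])

def in_rectangle(z, A, B, C, D):
    # z belongs to ABCD when, for every edge, z lies on the same side
    # as the corner opposite that edge (>= 0 keeps points on an edge inside).
    edges = [(A, B, C), (C, D, A), (A, C, B), (B, D, A)]
    return all(side(p, q, z) * side(p, q, opp) >= 0 for p, q, opp in edges)

A, B, C, D = (-130, 10), (-100, 20), (-125, -5), (-100, 5)
print(in_rectangle((-115, 7), A, B, C, D))  # True: roughly in the middle of ABCD
print(in_rectangle((10, 15), A, B, C, D))   # False: the point Z from the example above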