I have detected blob keypoints in OpenCV C++. The centroid displays fine. How do I then draw a bounding box around the detected blob if I only have the blob center coordinates? I can't work backwards from the center because of too many unknowns (or so I believe).
threshold(imageUndistorted, binary_image, 30, 255, THRESH_BINARY);
Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
// Detect blob
detector->detect(binary_image, binary_keypoints);
drawKeypoints(binary_image, binary_keypoints, bin_image_keypoints, Scalar(0, 0, 255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
//draw BBox ?
What am I overlooking to draw the bounding box around the single blob?
In the question I said:
"I can't work backwards from the center because of too many unknowns (or so I believe)."
In fact the information is not that limited if the blob size is used: keypoints.size returns the diameter of the blob in question. There might be some inaccurate results with highly asymmetric or lopsided targets, but this worked well for me because I used spheroid objects. Moments are probably the better approach for asymmetrical targets.
keypoints.size should not be confused with keypoints.size(). The latter counts the objects in the vector; the former, in my case, is the blob diameter. I use both.
Using the diameter I can then calculate the rest with no problem:
float ctr_x = binary_keypoints[0].pt.x; // blob center (single blob here)
float ctr_y = binary_keypoints[0].pt.y;
float r = binary_keypoints[0].size / 2; // .size is the blob diameter
float TLx = (ctr_x - r);
float TLy = (ctr_y - r);
float BRx = (ctr_x + r);
float BRy = (ctr_y + r);
Point TLp(TLx - 10, TLy - 10); // works fine without, but more visible with a 10px margin
Point BRp(BRx + 10, BRy + 10); // same here
std::cout << "Top Left: " << TLp << std::endl << "Bottom Right: " << BRp << std::endl;
cv::rectangle(bin_with_keypoints, TLp, BRp, cv::Scalar(0, 255, 0));
imshow("With Green Bounding Box:", bin_with_keypoints);
TLp = top left point, with a 10px adjustment to make the box bigger.
BRp = bottom right point, adjusted likewise.
TLx, TLy are calculated from the blob center coordinates, as are BRx, BRy. If you are going to use multiple targets I would suggest the contours approach (with moments; see the sketch at the end of this answer). I only have 1-2 blobs to keep track of, which is a lot easier and keeps resource usage down.
The rectangle drawing function can also work with a Rect (width = height = the keypoint's diameter, keypoint.size):
Rect bbox(TLp, BRp); // or equivalently Rect bbox(TLx, TLy, diameter, diameter), i.e. (x, y, width, height)
cv::rectangle(bin_with_keypoints, bbox, cv::Scalar(0, 255, 0));
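For the multiple-target case mentioned above, a minimal sketch of the contours/moments approach could look like this (my own variable names; binary_image is assumed to be the thresholded image from the question):

std::vector<std::vector<cv::Point>> contours;
cv::findContours(binary_image, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto &contour : contours)
{
    cv::Rect box = cv::boundingRect(contour);   // tight axis-aligned bounding box
    cv::Moments m = cv::moments(contour);       // centroid from image moments
    if (m.m00 == 0) continue;                   // skip degenerate contours
    cv::Point center(int(m.m10 / m.m00), int(m.m01 / m.m00));
    cv::rectangle(bin_with_keypoints, box, cv::Scalar(0, 255, 0));
    cv::circle(bin_with_keypoints, center, 2, cv::Scalar(0, 0, 255), -1);
}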
I want to show an image's RGB colour histogram in a Cocoa application. Please suggest a possible way to do it with Objective-C, or any third-party library available to achieve this.
Well, this is a problem, as RGB colors form a 3D space, so their histogram would lead to a 4D plot, which is something we cannot really visualize.
So the solution is to convert the 4D plot to a 3D plot somehow. This can be done by sorting the colors by something that has some meaning. I will not speculate; I will just describe what I am using. I use the HSV color space and ignore the V value. This way I lose a lot of color-shade info, but it is still enough to describe colors for my purposes. This is how it looks:
You can also use more plots with different V to cover more colors. For more info see:
HSV histogram
Anyway, you can use any gradient sorting or any shape of plot; that is completely up to you.
If you want pure RGB then you could adapt this and use the RGB cube surface, or map it onto a sphere and ignore the distance from (0,0,0) (use unit vectors), something like this:
So if your R,G,B are in <0,1> you convert that to <-1,+1>, then compute the spherical coordinates (ignoring the radius), and you have 2 variables instead of 3, which you can use as a plot (either as a 2D globe base or a 3D sphere ...).
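As a minimal sketch of just that conversion (my own function name; channels assumed to be 8-bit), it could look like this:

#include <math.h>

// Map an RGB color (channels in <0,255>) to two spherical angles,
// ignoring the radius (the distance from the gray center).
void rgb_to_angles(int R, int G, int B, double *lon, double *lat)
{
    double x = (R - 128) / 128.0; // <0,255> -> roughly <-1,+1>
    double y = (G - 128) / 128.0;
    double z = (B - 128) / 128.0;
    double l = sqrt(x*x + y*y + z*z);
    if (l == 0.0) { *lon = 0.0; *lat = 0.0; return; } // gray: direction undefined
    *lon = atan2(y, x);   // angle around the gray axis, <-pi,+pi>
    *lat = asin(z / l);   // elevation, <-pi/2,+pi/2>
}

Bin the two angles into a 2D table and you have the histogram.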
Here is the full C++ code showing how to do this (adapted from the HSV histogram):
picture pic0,pic1,pic2,zed;

const int na=360,nb=180,nb2=nb>>1;          // size of histogram table
int his[na][nb];
DWORD w;
int a,b,r,g,x,y,z,l,i,n;
double aa,bb,da,db,dx,dy,dz,rr;
color c;

pic2=pic0;                                  // copy input image pic0 to pic2

for (a=0;a<na;a++)                          // clear histogram
 for (b=0;b<nb;b++)
  his[a][b]=0;

for (y=0;y<pic2.ys;y++)                     // compute it
 for (x=0;x<pic2.xs;x++)
    {
    c=pic2.p[y][x];
    r=c.db[picture::_r]-128;
    g=c.db[picture::_g]-128;
    b=c.db[picture::_b]-128;
    l=sqrt(r*r+g*g+b*b);                    // convert RGB -> spherical a,b angles
    if (!l) { a=0; b=0; }
    else{
        a=double(double(na)*acos(double(b)/double(l))/(2.0*M_PI));
        // note: b is reused as the azimuth bin index from here on
        if (!r) b=0; else b=double(double(nb)*atan(double(g)/double(r))/(M_PI)); b+=nb2;
        while (a<0) a+=na; while (a>=na) a-=na;  // wrap a into <0,na)
        if (b<0) b=0; if (b>=nb) b=nb-1;         // clamp b into <0,nb)
        }
    his[a][b]++;                            // update color usage count ...
    }

for (n=0,a=0;a<na;a++)                      // max probability
 for (b=0;b<nb;b++)
  if (n<his[a][b]) n=his[a][b];

// draw the colored RGB sphere and histogram
zed=pic1; zed.clear(9999);                  // Z-buffer for 3D (copies pic1's size)
pic1.clear(0);                              // image of the histogram
da=2.0*M_PI/double(na);
db=M_PI/double(nb);
for (aa=0.0,a=0;a<na;a++,aa+=da)
 for (bb=-M_PI,b=0;b<nb;b++,bb+=db)
    {
    // normal
    dx=cos(bb)*cos(aa);
    dy=cos(bb)*sin(aa);
    dz=sin(bb);
    // color of surface (darker)
    rr=75.0;
    c.db[picture::_r]=double(rr*dx)+128;
    c.db[picture::_g]=double(rr*dy)+128;
    c.db[picture::_b]=double(rr*dz)+128;
    c.db[picture::_a]=0;
    // histogram center
    x=pic1.xs>>1;
    y=pic1.ys>>1;
    // surface position
    rr=64.0;
    z=rr;
    x+=double(rr*dx);
    y+=double(rr*dy);
    z+=double(rr*dz);
    if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; }
    // ignore lines if zero color count
    if (!his[a][b]) continue;
    // color of lines (bright)
    rr=125.0;
    c.db[picture::_r]=double(rr*dx)+128;
    c.db[picture::_g]=double(rr*dy)+128;
    c.db[picture::_b]=double(rr*dz)+128;
    c.db[picture::_a]=0;
    // line length
    l=(pic1.xs*his[a][b])/(n*3);
    for (double xx=x,yy=y,zz=z;l>=0;l--)
        {
        if (zed.p[y][x].dd>=z){ pic1.p[y][x]=c; zed.p[y][x].dd=z; }
        xx+=dx; yy+=dy; zz+=dz; x=xx; y=yy; z=zz;
        if (x<0) break; if (x>=pic1.xs) break;
        if (y<0) break; if (y>=pic1.ys) break;
        }
    }
The input image is pic0, the output image is pic1 (the histogram graph).
pic2 is a copy of pic0 (a remnant of old code).
zed is the Z-buffer for the 3D display, avoiding the need for Z sorting ...
I use my own picture class for images, so some of its members are:
xs,ys size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
clear(color) - clears entire image
resize(xs,ys) - resizes image to new resolution
As the sphere is a 3D object you should add rotation to it so all of the surface becomes visible over time (or rotate it with the mouse, or whatever) ...
I want to convert a GPS location (latitude, longitude) into x,y coordinates.
I found many links about this topic and applied them, but they don't give me the correct answer!
I am following these steps to test the answer:
(1) First, I take two positions and calculate the distance between them using maps.
(2) Then I convert the two positions into x,y coordinates.
(3) Then I calculate the distance again between the two points in x,y coordinates
and see whether it gives me the same result as in step (1) or not.
One of the solutions I found is the following, but it doesn't give me the correct answer!
latitude = Math.PI * latitude / 180;
longitude = Math.PI * longitude / 180;
// adjust position by radians
latitude -= 1.570795765134; // subtract 90 degrees (in radians)
// and switch z and y
xPos = (app.radius) * Math.sin(latitude) * Math.cos(longitude);
zPos = (app.radius) * Math.sin(latitude) * Math.sin(longitude);
yPos = (app.radius) * Math.cos(latitude);
I also tried this link, but it still doesn't work well for me!
Any help with how to convert from (latitude, longitude) to (x,y)?
Thanks,
No exact solution exists
There is no isometric map from the sphere to the plane. When you convert lat/lon coordinates from the sphere to x/y coordinates in the plane, you cannot hope that all lengths will be preserved by this operation. You have to accept some kind of deformation. Many different map projections exist, which achieve different compromises between preservation of lengths, angles and areas. For smallish parts of the earth's surface, transverse Mercator is quite common. You might have heard about UTM. But there are many more.
The formulas you quote compute x/y/z, i.e. a point in 3D space. But even there you would not get correct distances automatically. The shortest distance between two points on the surface of a sphere goes through the interior of that sphere, whereas distances on the earth are mostly geodesic lengths following the surface. So the geodesic distances will be longer.
Approximation for small areas
If the part of the surface of the earth which you want to draw is relatively small, then you can use a very simple approximation. You can simply use the horizontal axis x to denote longitude λ, the vertical axis y to denote latitude φ. The ratio between these should not be 1:1, though. Instead you should use cos(φ0) as the aspect ratio, where φ0 denotes a latitude close to the center of your map. Furthermore, to convert from angles (measured in radians) to lengths, you multiply by the radius of the earth (which in this model is assumed to be a sphere).
x = r λ cos(φ0)
y = r φ
This is the simple equirectangular projection. In most cases, you'll be able to compute cos(φ0) only once, which makes subsequent computation of large numbers of points really cheap.
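As a quick sanity check of this approximation (a sketch with my own function names; angles in degrees, a spherical earth assumed), you can project two nearby points and compare the planar distance with the known distance from maps, exactly as in the question's test procedure:

#include <math.h>
#include <stdio.h>

const double R_EARTH = 6371000.0;       // mean earth radius in meters
const double DEG = M_PI / 180.0;        // degrees -> radians

// Equirectangular projection around a reference latitude lat0 (degrees).
void project(double lat, double lon, double lat0, double *x, double *y)
{
    *x = R_EARTH * (lon * DEG) * cos(lat0 * DEG);
    *y = R_EARTH * (lat * DEG);
}

int main(void)
{
    double lat0 = 52.5258;              // latitude near the center of the map
    double x1, y1, x2, y2;
    project(52.5265, 13.4030, lat0, &x1, &y1);
    project(52.5250, 13.4060, lat0, &x2, &y2);
    // For small areas this should agree well with the measured distance.
    double d = sqrt((x2-x1)*(x2-x1) + (y2-y1)*(y2-y1));
    printf("planar distance: %.1f m\n", d);
    return 0;
}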
I want to share how I managed the problem. I used the equirectangular projection just like @MvG said, but this method gives you X and Y positions relative to the globe (or the entire map), which means you get global positions. In my case, I wanted to convert coordinates in a small area (about 500m square), so I related the projected point to two other reference points, converting the global positions into local (on-screen) positions, like this:
First, I choose 2 points (top-left and bottom-right) around the area where I want to project, just like this picture:
Once I have the global reference area in lat and lng, I do the same for screen positions. The objects containing this data are shown below.
//top-left reference point
var p0 = {
    scrX: 23.69,      // Minimum X position on screen
    scrY: -0.5,       // Minimum Y position on screen
    lat: -22.814895,  // Latitude
    lng: -47.072892   // Longitude
}
//bottom-right reference point
var p1 = {
    scrX: 276,        // Maximum X position on screen
    scrY: 178.9,      // Maximum Y position on screen
    lat: -22.816419,  // Latitude
    lng: -47.070563   // Longitude
}
var radius = 6371; //Earth Radius in Km
var degToRad = Math.PI/180; //lat and lng come in degrees, the math needs radians
//## Now I can calculate the global X and Y for each reference point ##\\
// This function converts lat and lng coordinates to GLOBAL X and Y positions
function latlngToGlobalXY(lat, lng){
    //Calculates x based on cos of average of the latitudes
    let x = radius*(lng*degToRad)*Math.cos(degToRad*(p0.lat + p1.lat)/2);
    //Calculates y based on latitude
    let y = radius*(lat*degToRad);
    return {x: x, y: y}
}
// Calculate global X and Y for top-left reference point
p0.pos = latlngToGlobalXY(p0.lat, p0.lng);
// Calculate global X and Y for bottom-right reference point
p1.pos = latlngToGlobalXY(p1.lat, p1.lng);
/*
* This gives me the X and Y in relation to map for the 2 reference points.
* Now we have the global AND screen areas and then we can relate both for the projection point.
*/
// This function converts lat and lng coordinates to SCREEN X and Y positions
function latlngToScreenXY(lat, lng){
    //Calculate global X and Y for projection point
    let pos = latlngToGlobalXY(lat, lng);
    //Calculate the percentage of global X position in relation to total global width
    pos.perX = ((pos.x-p0.pos.x)/(p1.pos.x - p0.pos.x));
    //Calculate the percentage of global Y position in relation to total global height
    pos.perY = ((pos.y-p0.pos.y)/(p1.pos.y - p0.pos.y));
    //Returns the screen position based on reference points
    return {
        x: p0.scrX + (p1.scrX - p0.scrX)*pos.perX,
        y: p0.scrY + (p1.scrY - p0.scrY)*pos.perY
    }
}
//# The usage is like this #\\
var pos = latlngToScreenXY(-22.815319, -47.071718);
$point = $("#point-to-project");
$point.css("left", pos.x+"em");
$point.css("top", pos.y+"em");
As you can see, I made this in JavaScript, but the calculations can be translated to any language.
P.S. I'm applying the converted positions to an HTML element whose id is "point-to-project". To use this piece of code in your project, you should create this element (styled as position: absolute) or change the "usage" block.
Since this page shows up at the top of Google when I searched for this same problem, I would like to provide a more practical answer. The answer by @MvG is correct, but rather theoretical.
I have made a track-plotting app for the Fitbit Ionic in JavaScript. The code below is how I tackled the problem.
//LOCATION PROVIDER
index.js
var gpsFix = false;
var circumferenceAtLat = 0;
function locationSuccess(pos){
    if(!gpsFix){
        gpsFix = true;
        // meters per degree of longitude at this latitude (0.01745... = pi/180)
        circumferenceAtLat = Math.cos(pos.coords.latitude*0.01745329251)*111305;
    }
    pos.x = Math.round(pos.coords.longitude*circumferenceAtLat);
    pos.y = Math.round(pos.coords.latitude*110919); // ~meters per degree of latitude
    plotTrack(pos);
}
plotting.js
function plotTrack(position){
    // (i indexes the current track segment)
    let x = Math.round((this.segments[i].start.x - this.bounds.minX)*this.scale);
    let y = Math.round((this.bounds.maxY - this.segments[i].start.y)*this.scale); //height needs to be inverted
    //redraw?
    let redraw = false;
    //x or y out of bounds?
    if(position.x>this.bounds.maxX){
        this.bounds.maxX = (position.x-this.bounds.minX)*1.1+this.bounds.minX; //increase by 10%
        redraw = true;
    }
    if(position.x<this.bounds.minX){
        this.bounds.minX = this.bounds.maxX-(this.bounds.maxX-position.x)*1.1;
        redraw = true;
    }
    if(position.y>this.bounds.maxY){
        this.bounds.maxY = (position.y-this.bounds.minY)*1.1+this.bounds.minY; //increase by 10%
        redraw = true;
    }
    if(position.y<this.bounds.minY){
        this.bounds.minY = this.bounds.maxY-(this.bounds.maxY-position.y)*1.1;
        redraw = true;
    }
    if(redraw){
        reDraw();
    }
}
function reDraw(){
    let xScale = device.screen.width / (this.bounds.maxX-this.bounds.minX);
    let yScale = device.screen.height / (this.bounds.maxY-this.bounds.minY);
    if(xScale<yScale) this.scale = xScale;
    else this.scale = yScale;
    //Loop through your objects to redraw all of them
}
For completeness I'd like to add my Python adaptation of @allexrm's code, which worked really well. Thanks again!
import math

radius = 6371 # Earth Radius in km
class referencePoint:
    def __init__(self, scrX, scrY, lat, lng):
        self.scrX = scrX
        self.scrY = scrY
        self.lat = lat
        self.lng = lng
# Top-left reference point
p0 = referencePoint(0, 0, 52.526470, 13.403215)
# Bottom-right reference point
p1 = referencePoint(2244, 2060, 52.525035, 13.405809)
# This function converts lat and lng coordinates to GLOBAL X and Y positions
def latlngToGlobalXY(lat, lng):
    # Calculates x based on cos of average of the latitudes
    x = radius*math.radians(lng)*math.cos(math.radians((p0.lat + p1.lat)/2))
    # Calculates y based on latitude
    y = radius*math.radians(lat)
    return {'x': x, 'y': y}

# Calculate global X and Y for the reference points (used below)
p0.pos = latlngToGlobalXY(p0.lat, p0.lng)
p1.pos = latlngToGlobalXY(p1.lat, p1.lng)
# This function converts lat and lng coordinates to SCREEN X and Y positions
def latlngToScreenXY(lat, lng):
    # Calculate global X and Y for the projection point
    pos = latlngToGlobalXY(lat, lng)
    # Calculate the percentage of global X position in relation to total global width
    perX = ((pos['x']-p0.pos['x'])/(p1.pos['x'] - p0.pos['x']))
    # Calculate the percentage of global Y position in relation to total global height
    perY = ((pos['y']-p0.pos['y'])/(p1.pos['y'] - p0.pos['y']))
    # Returns the screen position based on reference points
    return {
        'x': p0.scrX + (p1.scrX - p0.scrX)*perX,
        'y': p0.scrY + (p1.scrY - p0.scrY)*perY
    }
pos = latlngToScreenXY(52.525607, 13.404572)
pos['x'] and pos['y'] contain the translated x & y coordinates of the lat & lng (52.525607, 13.404572).
I hope this is helpful for anyone looking like me for the proper solution to the problem of translating lat lng into a local reference coordinate system.
Best
It's better to convert to UTM coordinates and treat those as x and y.
import utm
u = utm.from_latlon(12.917091, 77.573586)
The result will be (779260.623156606, 1429369.8665238516, 43, 'P')
The first two values can be treated as x,y coordinates; the 43 P is the UTM zone, which can be ignored for small areas (width up to 668 km).
I've got a little Objective-C utility program that renders a convex hull. (This is to troubleshoot a bug in another program that calculates the convex hull in preparation for spatial statistical analysis.) I'm trying to render a set of triangles, each with an outward-pointing vector. I can get the triangles without problems, but the vectors are driving me crazy.
I'd like the vectors to be simple cylinders. The problem is that I can't just declare coordinates for where the top and bottom of the cylinders belong in 3D (e.g., like I can for the triangles). I have to make them and then rotate and translate them from their default position along the z-axis. I've read a ton about Euler angles, and angle-axis rotations, and quaternions, most of which is relevant, but not directed at what I need: most people have a set of objects and then need to rotate the object in response to some input. I need to place the object correctly in the 3D "scene".
I'm using the Cocoa3DTutorial classes to help me out, and they work great as far as I can tell, but the rotation bit is killing me.
Here is my current effort. It gives me cylinders that are located correctly, but all point along the z-axis (as in this image; we are looking in the -z direction). The triangle poking out behind is not part of the hull; it is for testing/debugging. The orthogonal cylinders are coordinate axes, more or less, and the spheres are there to make sure the axes are located correctly, since I have to use rotation to place those cylinders. (And BTW, when I use that algorithm, the out-vectors fail as well, although in a different way: they come out normal to the planes, but all pointing in +z instead of some in -z.)
from Render3DDocument.m:
// Make the out-pointing vector
C3DTCylinder *outVectTube;
C3DTEntity *outVectEntity;
Point3DFloat *sideCtr = [thisSide centerOfMass];
outVectTube = [C3DTCylinder cylinderWithBase: tubeRadius top: tubeRadius height: tubeRadius*10 slices: 16 stacks: 16];
outVectEntity = [C3DTEntity entityWithStyle:triColor
geometry:outVectTube];
Point3DFloat *outVect = [[thisSide inVect] opposite];
Point3DFloat *unitZ = [Point3DFloat pointWithX:0 Y:0 Z:1.0f];
Point3DFloat *rotAxis = [outVect crossWith:unitZ];
double rotAngle = [outVect angleWith:unitZ];
[outVectEntity setRotationX: rotAxis.x
Y: rotAxis.y
Z: rotAxis.z
W: rotAngle];
[outVectEntity setTranslationX:sideCtr.x - ctrX
Y:sideCtr.y - ctrY
Z:sideCtr.z - ctrZ];
[aScene addChild:outVectEntity];
(Note that Point3DFloat is basically a vector class, and that a Side (like thisSide) is a set of four Point3DFloats: one for each of the three vertices, and one for a vector that points towards the center of the hull.)
from C3DTEntity.m:
if (_hasTransform) {
    glPushMatrix();

    // Translation
    if ((_translation.x != 0.0) || (_translation.y != 0.0) || (_translation.z != 0.0)) {
        glTranslatef(_translation.x, _translation.y, _translation.z);
    }

    // Scaling
    if ((_scaling.x != 1.0) || (_scaling.y != 1.0) || (_scaling.z != 1.0)) {
        glScalef(_scaling.x, _scaling.y, _scaling.z);
    }

    // Rotation
    glTranslatef(-_rotationCenter.x, -_rotationCenter.y, -_rotationCenter.z);
    if (_rotation.w != 0.0) {
        glRotatef(_rotation.w, _rotation.x, _rotation.y, _rotation.z);
    } else {
        if (_rotation.x != 0.0)
            glRotatef(_rotation.x, 1.0f, 0.0f, 0.0f);
        if (_rotation.y != 0.0)
            glRotatef(_rotation.y, 0.0f, 1.0f, 0.0f);
        if (_rotation.z != 0.0)
            glRotatef(_rotation.z, 0.0f, 0.0f, 1.0f);
    }
    glTranslatef(_rotationCenter.x, _rotationCenter.y, _rotationCenter.z);
}
I added the bit in the above code that uses a single rotation around an axis (the "if (_rotation.w != 0.0)" bit), rather than a set of three rotations. My code is likely the problem, but I can't see how.
If your outvects don't all point in the correct direction, you might have to check your triangles' winding: are they all oriented the same way?
Additionally, it might be helpful to draw a line for each outvect: use the average of the three vertices of your triangle as the origin, and draw a line of a few units' length (depending on your scene's scale) in the direction of the outvect. This way, you can be sure that all your vectors are oriented correctly; a sketch follows below.
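A minimal sketch of such a debug line in immediate-mode OpenGL, which the rest of this code already uses (my own helper function; the length and color are arbitrary choices), might be:

// Draw a debug line from a triangle's centroid along its out-vector.
// len should be a few units, depending on the scene's scale.
static void drawOutVect(float cx, float cy, float cz,
                        float dx, float dy, float dz, float len)
{
    glDisable(GL_LIGHTING);   // plain, unlit colored line
    glColor3f(1.0f, 1.0f, 0.0f);
    glBegin(GL_LINES);
    glVertex3f(cx, cy, cz);
    glVertex3f(cx + dx*len, cy + dy*len, cz + dz*len);
    glEnd();
    glEnable(GL_LIGHTING);
}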
How do you calculate your outvects?
The problem appears to be that glRotatef() expects degrees and I was giving it radians. In addition, clockwise rotation is taken to be positive, so the sign of the rotation was wrong. This is the corrected code:
double rotAngle = -[outVect angleWith:unitZ]; // radians
[outVectEntity setRotationX: rotAxis.x
Y: rotAxis.y
Z: rotAxis.z
W: rotAngle * 180.0 / M_PI ];
I can now see that my other program has the inVects wrong (the outVects are poking through the hull instead of pointing out from each face), and I can now track down that bug in the other program... tomorrow.
uniform sampler2D sampler0;
uniform vec2 tc_offset[9];

void blur()
{
    vec4 sample[9];
    for(int i = 0; i < 9; ++i)
        sample[i] = texture2D(sampler0, gl_TexCoord[0].st + tc_offset[i]);
    gl_FragColor = (sample[0] + (2.0 * sample[1]) + sample[2] +
                    (2.0 * sample[3]) + sample[4] + (2.0 * sample[5]) +
                    sample[6] + (2.0 * sample[7]) + sample[8]) / 13.0;
}
How does the sample[i] = texture2D(sampler0, ...) line work?
It seems like to blur an image, I have to first generate the image, yet here I'm somehow trying to query the very image I'm generating. How does this work?
It applies a blur kernel to the image. tc_offset needs to be properly initialized by the application to form a 3x3 grid of sampling points around the actual texture coordinate:
0 0 0
0 x 0
0 0 0
(assuming x is the original coordinate). The offset for the upper-left sampling point would be -1/width,-1/height. The offset for the center point needs to be carefully aligned to texel center (the off-by-0.5 problem). Also, the hardware bilinear filter can be used to cheaply increase the amount of blur (by sampling between texels).
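For illustration, the application side could fill tc_offset along these lines (a sketch; program, width and height are assumed to exist, and the texel-center alignment just mentioned is left out):

// Fill the 3x3 grid of texture-coordinate offsets, row by row.
// Offsets are in texture space, so one texel is 1/width by 1/height.
float tcOffset[9][2];
int idx = 0;
for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx, ++idx)
    {
        tcOffset[idx][0] = (float)dx / (float)width;
        tcOffset[idx][1] = (float)dy / (float)height;
    }
glUniform2fv(glGetUniformLocation(program, "tc_offset"), 9, &tcOffset[0][0]);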
The rest of the shader scales the samples by their distance. Usually, this is precomputed as well:
for(int i = 0; i < NUM_SAMPLES; ++i) {
    result += texture2D(sampler, texcoord + offsetscaling[i].xy) * offsetscaling[i].z;
}
One way is to render your original image to a texture instead of to the screen.
Then you draw a full-screen quad using this shader, with that texture as its input, to post-process the image.
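A minimal sketch of that render-to-texture setup (plain OpenGL with framebuffer objects; width, height and the commented-out draw helpers are assumptions, and error checking is omitted) could be:

// 1) Create a texture that the original image will be rendered into.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// 2) Attach it to a framebuffer object.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// 3) First pass: render the scene into the texture.
// drawScene();

// 4) Second pass: switch back to the default framebuffer, bind the
//    texture as sampler0 and draw a full-screen quad with the blur shader.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex);
// useBlurProgram(); drawFullScreenQuad();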
As you note, in order to make a blurred image, you first need to make an image, and then blur it. This shader does (just) the second step, taking an image that was generated previously and blurring it. There needs to be additional code elsewhere to generate the original non-blurred image.