This is my click listener on the map:
@Override
public boolean onSingleTap(MotionEvent point) {
    Point mapPoint = view.toMapPoint(point.getX(), point.getY());
    SpatialReference sp = SpatialReference.create(SpatialReference.WKID_WGS84);
    Point p1Done = (Point) GeometryEngine.project(mapPoint, view.getSpatialReference(), sp);
    UtilsMapa.addPoint(p1Done, view, "Marker", view.getGraphicLayer());
    return super.onSingleTap(point);
}
The addPoint method:
public static void addPoint(Point point, MapViewExt mView, String textMarker, GraphicsLayer gLayer) {
    SimpleMarkerSymbol simpleMarker = new SimpleMarkerSymbol(Color.RED, 10, SimpleMarkerSymbol.STYLE.CIRCLE);
    TextSymbol bassRockTextSymbol = new TextSymbol(10, textMarker, Color.BLUE,
            TextSymbol.HorizontalAlignment.LEFT, TextSymbol.VerticalAlignment.BOTTOM);
    // When reaching this line, point has 0=-3.0246720297389555 1=40.83564363734672, which is where I tapped the screen
    Graphic graphic = new Graphic(point, simpleMarker);
    gLayer.addGraphic(graphic);
}
The graphic IS added (a red spot), but it is drawn at coordinates (0, 0)... Why is that? How can I get the point drawn in the proper place?
Thank you.
Answer
Don't project the point into WGS 1984. Use the point that MapView.toMapPoint(double, double) returns to you.
Detailed explanation
In ArcGIS Runtime 10.2.x for Android, a Point object doesn't keep track of its spatial reference. It's really just an x-value and a y-value. If you give a Point to the MapView, the MapView will assume that the Point is in the same spatial reference as the map, which by default is usually in Web Mercator.
Here's what your code is doing:
Point mapPoint=view.toMapPoint(point.getX(), point.getY());
mapPoint is now a point in MapView's spatial reference, which is probably Web Mercator. The point is probably some number of millions of meters from (0, 0), but since the Point class is blissfully unaware of spatial references, it just contains an x-value in the millions and a y-value in the millions, with no units.
SpatialReference sp = SpatialReference.create(SpatialReference.WKID_WGS84);
Point p1Done=(Point) GeometryEngine.project(mapPoint, view.getSpatialReference(), sp);
p1Done is now a point in WGS 1984, which is some number of degrees from (0, 0). You have a comment that says the x-value is -3.0246720297389555 and the y-value is 40.83564363734672, or maybe the other way around, but the important point is that the x-value is between -180 and 180 and the y-value is between -90 and 90, with no units.
UtilsMapa.addPoint(p1Done, view, "Marker",view.getGraphicLayer());
Your helper method is called with p1Done, which is called point in your helper method.
Graphic graphic=new Graphic(point,simpleMarker);
A graphic is created with a point whose x-value is -3.0246720297389555 and y-value is 40.83564363734672.
gLayer.addGraphic(graphic);
The graphic is added to a graphics layer, which has the spatial reference of the map that contains it, which is probably Web Mercator. The map correctly displays the point -3.024672029738955 meters west of (0, 0) and 40.83564363734672 meters north of (0, 0).
Side note: upgrade to ArcGIS Runtime 100.x
If you had used Runtime 100.x, you might have avoided this problem, since Runtime 100.x geometries, including Point, do keep track of their spatial reference.
Related
Recently I've been struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. Then I use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the vectors are the right results?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector, and the world 3D coordinates of a certain point to obtain the coordinates of that point in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y, and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (the sign then being determined by which side of the camera the point is on)?
The figure below is drawn in world space, while the axes depicted are those of the camera system.
======== How the camera and the point are placed in world space ========
   |
   |
Camera--------------------------> Z axis
   |                |} Xw?
   |                P(Xw, Yw, Zw)
   |
   v x-axis
My rvec and tvec results seem both right and wrong. For a given point the z value seems reasonable: if the point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, according to the location of the point, I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image looks like this:
But using the point coordinates calculated in the camera system and the camera's intrinsic parameters, I get an image like this:
The target keeps its pattern. But it moved from bottom right to top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which seems to be right according to this post.
In camera pose estimation, "the values seem reasonable" can be quantified as the backprojection error: a measure of how well your resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);
// Use the computed solution to project the 3D pattern back onto the image
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);
// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints.at(i).x - projectedPattern.at(i).x;
    float dy = pixelPoints.at(i).y - projectedPattern.at(i).y;
    // Euclidean distance between the projected and the measured pixel
    float err = std::sqrt(dx*dx + dy*dy);
    errors.push_back(err);
}
// Here, compute the max or average of your "errors"
An average backprojection error for a calibrated camera is typically in the range of 0–2 pixels. Judging from your two pictures, yours would be far more than that. To me it looks like a scaling problem. If I'm right, you are computing the projection yourself; maybe try cv::projectPoints() once and compare.
When it comes to transformations, I've learned not to trust my imagination :) The first thing I usually do with the returned rVec and tVec is build a 4x4 rigid transformation matrix out of them (I once posted code for this here). This makes things even less intuitive, but it is compact and handy.
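For reference, a minimal sketch of that conversion, assuming rVec and tVec are the 3x1 CV_64F Mats that solvePnP returns by default:
cv::Mat R;
cv::Rodrigues(rVec, R);                  // 3x3 rotation matrix from the rotation vector
cv::Mat T = cv::Mat::eye(4, 4, CV_64F);  // start from the identity
R.copyTo(T(cv::Rect(0, 0, 3, 3)));       // top-left 3x3 block = rotation
tVec.copyTo(T(cv::Rect(3, 0, 1, 3)));    // last column = translation
// T now maps homogeneous world coordinates into the camera system.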
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates in the camera system are calculated as (xc, yc, zc). Then xc is the distance between the camera and the point in the real world along the camera's x direction.
Next, how to determine whether the output matrices are right?
1. As @eidelen points out, the backprojection error is one indicative measure.
2. Calculate the coordinates of the points in the camera system from their world coordinates and the returned matrices, then check whether those coordinates are plausible.
So why did I get a wrong result (the pattern remained but moved to a different region of the image)?
The parameter cameraMatrix in solvePnP() is the matrix supplying the camera's intrinsic parameters. In the camera matrix, you should use width/2 and height/2 for cx and cy, while I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything worked fine.
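For illustration, a sketch of that intrinsic matrix layout; fx, fy, imageWidth, and imageHeight here are hypothetical values (the focal lengths come from your calibration):
double fx = 1000.0, fy = 1000.0;         // focal lengths in pixels (hypothetical)
int imageWidth = 640, imageHeight = 480; // hypothetical image size
cv::Mat camIntrinsics = (cv::Mat_<double>(3, 3) <<
    fx,  0.0, imageWidth  / 2.0,   // cx: principal point near the image centre
    0.0, fy,  imageHeight / 2.0,   // cy
    0.0, 0.0, 1.0);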
I'm a libgdx noob - apologies if this is an obvious question.
Using libgdx I have set up a Perspective camera, looking at origin (camera near = 1f, far = 300f). Viewport extends across the entire screen.
Basically, I would like to know how to convert a 2D screen coordinate (x,y) to the 3D world coordinate (x, y, z) where the Z value is clamped to the camera's near plane.
I think I should use the camera.getPickRay method to get a picking ray for the screen coordinate. I should then get the intersection point of this ray and the camera's near plane to get the world coordinate of the point on the near plane.
I thought that the resulting Ray object's origin property was the near-plane intersection point I was after, but this doesn't seem to be the case.
Am I on the right track?
Camera#unproject() returns, depending on the z-value you pass in, a point between the near plane (z = 0) and the far plane (z = 1); see the javadocs. Camera#getPickRay() sets the ray's origin member to the unprojected value at z = 0, thus on the near plane; see the code. If you don't need the ray itself (including its direction), then you don't have to calculate the pick ray; you can call the unproject method directly.
Vector3 pointOnNearPlane = camera.unproject(new Vector3(touchX, touchY, 0f));
Likewise, for the point on the far plane:
Vector3 pointOnFarPlane = camera.unproject(new Vector3(touchX, touchY, 1f));
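A minimal usage sketch, assuming a hypothetical InputProcessor whose touchDown receives the screen coordinates that unproject() expects:
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    // unproject() modifies and returns the vector, converting screen to world coordinates
    Vector3 pointOnNearPlane = camera.unproject(new Vector3(screenX, screenY, 0f));
    return true;
}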
camera.rotation.y pans left or right in a predictable manner.
camera.rotation.x looks up or down predictably when camera.rotation.y is at 180 degrees.
But when I change the value of camera.rotation.y to some new value, and then I change the value of camera.rotation.x, the horizon rotates.
I've looked for an algorithm to adjust for horizon rotation after camera.rotation.x is changed, but haven't found one.
In three.js, an object's orientation can be specified by its Euler rotation vector object.rotation. The three components of the rotation vector represent the rotation in radians around the object's internal x-axis, y-axis, and z-axis respectively.
The order in which the rotations are performed is specified by object.rotation.order. The default order is "XYZ" -- rotation around the x-axis occurs first, then the y-axis, then the z-axis.
Rotations are performed with respect to the object's internal coordinate system -- not the world coordinate system. This is important. So, for example, after the x-rotation occurs, the object's y- and z- axes will generally no longer be aligned with the world axes. Rotations specified in this way are not unique.
So, for example, if in code you specify,
camera.rotation.y = y_radians; // Y first
camera.rotation.x = x_radians; // X second
camera.rotation.z = 0;
the rotations are applied in the object's rotation.order, not in the order you specified them.
In your case, you may find it more intuitive to set rotation.order to "YXZ", which is equivalent to "heading, pitch, and roll".
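A minimal sketch of that, reusing the y_radians and x_radians values from above:
camera.rotation.order = "YXZ"; // heading, then pitch, then roll
camera.rotation.y = y_radians; // heading (pan left/right)
camera.rotation.x = x_radians; // pitch (look up/down) - the horizon stays level
camera.rotation.z = 0;         // roll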
For more information about Euler angles, see the Wikipedia article. Three.js follows the Tait–Bryan convention, as explained in the article.
three.js r.61
I've been looking for the same info for a few days now, and the trick is: use the regular rotateX to look up/down, but use rotateOnWorldAxis(new THREE.Vector3(0.0, 1.0, 0.0), angle) for the horizontal turn (https://discourse.threejs.org/t/vertical-camera-rotation/15334).
I've used Nick Vellios' tutorial to create radial gravity with a Box2D object. I am aware of Make a Vortex here on SO, but I couldn't figure out how to implement it in my project.
I have made a vortex object, which is a Box2D circleShape sensor that rotates with a constant angular velocity. When other Box2D objects contact this vortex object, I want them to rotate at the same angular velocity as the vortex, gradually getting closer to the vortex's centre. At the moment an object is attracted to the vortex's centre, but it heads straight for the centre rather than spinning slowly around it like I want. It can also end up travelling against the vortex's rotation as well as with it.
Given a vortex and a Box2D body, how can I make the body rotate with the vortex as it gets 'sucked in'?
I set the rotation of the vortex when I create it like this:
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.angle = 2.0f;
bodyDef.angularVelocity = 2.0f;
Here is how I'm applying the radial gravity, as per Nick Vellios' sample code.
-(void)applyVortexForcesOnSprite:(CCSpriteSubclass*)sprite spriteBody:(b2Body*)spriteBody withVortex:(Vortex*)vortex VortexBody:(b2Body*)vortexBody vortexCircleShape:(b2CircleShape*)vortexCircleShape{
//From RadialGravity.xcodeproj
b2Body* ground = vortexBody;
b2CircleShape* circle = vortexCircleShape;
// Get position of our "Planet" - Nick
b2Vec2 center = ground->GetWorldPoint(circle->m_p);
// Get position of our current body in the iteration - Nick
b2Vec2 position = spriteBody->GetPosition();
// Get the distance between the two objects. - Nick
b2Vec2 d = center - position;
// The further away the objects are, the weaker the gravitational force is - Nick
float force = 1 / d.LengthSquared(); // the numerator can be changed to adjust the amount of force - Nick
d.Normalize();
b2Vec2 F = force * d;
// Finally apply a force on the body in the direction of the "Planet" - Nick
spriteBody->ApplyForce(F, position);
//end radialGravity.xcodeproj
}
Update: I think iForce2d has given me enough info to get on my way; now it's just tweaking. This is what I'm doing at the moment, in addition to the above code. What's happening is that the body gains enough velocity to exit the vortex's gravity well, so somewhere I'll need to check that the velocity stays below that threshold. I'm also a little concerned that I'm not taking the object's mass into account at the moment.
b2Vec2 vortexVelocity = vortexBody->GetLinearVelocityFromWorldPoint(spriteBody->GetPosition() );
b2Vec2 vortexVelNormal = vortexVelocity;
vortexVelNormal.Normalize();
b2Vec2 bodyVelocity = b2Dot( vortexVelNormal, spriteBody->GetLinearVelocity() ) * vortexVelNormal;
//Using a force
b2Vec2 vel = bodyVelocity;
float forceCircleX = .6 * bodyVelocity.x;
float forceCircleY = .6 * bodyVelocity.y;
spriteBody->ApplyForce( b2Vec2(forceCircleX,forceCircleY), spriteBody->GetWorldCenter() );
It sounds like you just need to apply another force according to the direction of the vortex at the current point of the body. You can use b2Body::GetLinearVelocityFromWorldPoint to find the velocity of the vortex at any point in the world. From Box2D source:
/// Get the world linear velocity of a world point attached to this body.
/// @param a point in world coordinates.
/// @return the world velocity of a point.
b2Vec2 GetLinearVelocityFromWorldPoint(const b2Vec2& worldPoint) const;
So that would be:
b2Vec2 vortexVelocity = vortexBody->GetLinearVelocityFromWorldPoint( suckedInBody->GetPosition() );
Once you know the velocity you're aiming for, you can calculate how much force is needed to go from the current velocity, to the desired velocity. This might be helpful: http://www.iforce2d.net/b2dtut/constant-speed
The topic in that link only discusses a 1-dimensional situation. For your case it is also essentially 1-dimensional, if you project the current velocity of the sucked-in body onto the vortexVelocity vector:
b2Vec2 vortexVelNormal = vortexVelocity;
vortexVelNormal.Normalize();
b2Vec2 bodyVelocity = b2Dot( vortexVelNormal, suckedInBody->GetLinearVelocity() ) * vortexVelNormal;
Now bodyVelocity and vortexVelocity will be in the same direction and you can calculate how much force to apply. However, if you simply apply enough force to match the vortex velocity exactly, the sucked in body will probably go into orbit around the vortex and never actually get sucked in. I think you would want to make the force quite a bit less than that, and I would scale it down according to the gravity strength as well, otherwise the sucked-in body will be flung away sideways as soon as it contacts the outer edge of the vortex. It could take a lot of tweaking to get the effect you want.
EDIT:
The force you apply should be based on the difference between the current velocity (bodyVelocity) and the desired velocity (vortexVelocity); i.e. if the body is already moving with the vortex, you don't need to apply any force. Take a look at the last code block in the sub-section titled 'Using forces' in the link I gave above. The last three lines there do pretty much what you need if you replace 'vel' and 'desiredVel' with the magnitudes of your bodyVelocity and vortexVelocity vectors:
float desiredVel = vortexVelocity.Length();
float currentVel = bodyVelocity.Length();
float velChange = desiredVel - currentVel;
float force = body->GetMass() * velChange / (1/60.0); //for a 1/60 sec timestep
body->ApplyForce( force * vortexVelNormal, body->GetWorldCenter() ); //apply along the vortex flow direction
But remember this would probably put the body into orbit, so somewhere along the way you would want to reduce the size of the force you apply, e.g. reduce 'desiredVel' by some percentage, reduce 'force' by some percentage, etc. It would probably look better if you could also scale the force down so that it is zero at the outer edge of the vortex.
I had a project where I had asteroids swirling around a central point (there are things jumping between them...which is a different point).
They are connected to the "center" body via b2DistanceJoints.
You can control the joint length to make them slowly spiral inward (or outward). This gives you fine-grained control, instead of having to balance forces, which may be difficult.
You also apply tangential force to make them circle the center.
By applying different (or randomly changing) tangential forces, you can make them crash into each other, etc.
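A minimal sketch of that setup, with hypothetical centerBody/asteroid body pointers and an arbitrary force magnitude, assuming a Box2D 2.x world:
b2DistanceJointDef jd;
jd.Initialize(centerBody, asteroid, centerBody->GetWorldCenter(), asteroid->GetWorldCenter());
b2DistanceJoint* tether = (b2DistanceJoint*)world->CreateJoint(&jd);

// Each simulation step: shorten the tether slightly and push tangentially.
tether->SetLength(tether->GetLength() * 0.999f); // slow inward spiral
b2Vec2 radial = asteroid->GetPosition() - centerBody->GetPosition();
b2Vec2 tangent(-radial.y, radial.x); // perpendicular to the radius
tangent.Normalize();
asteroid->ApplyForce(5.0f * tangent, asteroid->GetWorldCenter());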
I posted a more complete answer to this question here.
Let's say I have a circle bouncing around inside a rectangular area. At some point the circle will collide with one of the surfaces of the rectangle and reflect back. The usual way I'd do this is to let the circle overlap the boundary and then reflect the velocity vector. The fact that the circle actually overlaps the boundary isn't usually a problem, nor really noticeable at low velocity. At high velocity, it becomes quite clear that the circle is doing something it shouldn't.
What I'd like to do is programmatically take reflection into account and place the circle at its proper position before displaying it on the screen. This means that I have to calculate the point where it hits the boundary between its current position and its future position, rather than calculating its new position and then checking whether it has hit the boundary.
This is a little more complicated than the usual circle/rectangle collision problem. I have a vague idea of how I should do it: basically create a bounding rectangle between the current position and the new position, which brings up a slew of problems of its own (since the rectangle is rotated according to the direction of the circle's velocity). However, I'm thinking that this is a common problem and that a common solution already exists.
Is there a common solution to this kind of problem? Perhaps some basic theories which I should look into?
Since you just have a circle and a rectangle, it's actually pretty simple. A circle of radius r bouncing around inside a rectangle of dimensions w, h can be treated the same as a point p at the circle's center, constrained to the rectangle whose edges are inset by r (i.e. x in [r, w-r] and y in [r, h-r]).
Now position update becomes simple. Given your point at position x, y and a per-frame velocity of dx, dy, the updated position is x+dx, y+dy - except when you cross a boundary. If, say, you end up with x+dx > W (letting W = w-r), then you do the following:
crossover = (x+dx) - W // this is how far "past" the edge your ball went
x = W - crossover // so you bring it back the same amount on the correct side
dx = -dx // and flip the velocity to the opposite direction
And similarly for y. You'll have to set up a similar (reflected) check for the opposite boundaries in each dimension.
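A minimal C++ sketch of one axis of that update, assuming the centre is confined to [r, W] with W = w - r (the y axis is identical, with h in place of w):
void stepAxis(float& x, float& dx, float r, float W) {
    x += dx;
    if (x > W) {              // crossed the far boundary
        x = W - (x - W);      // bring it back by the overshoot
        dx = -dx;             // flip the velocity
    } else if (x < r) {       // crossed the near boundary
        x = r + (r - x);
        dx = -dx;
    }
}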
At each step, you can calculate the projected/expected position of the circle for the next frame.
If this lies outside the rectangle, you can use the distance from the old circle position to the rectangle's edge, together with the amount "past" the edge that the next position lies (the interpenetration), to linearly interpolate and determine the precise time when the circle "hits" the rectangle edge.
For example, if the circle is 10 pixels away from the rectangle's edge and is predicted to move to 5 pixels beyond it, you know that for 2/3 of the timestep (10/15ths) it moves on its original path, then is reflected and continues on its new path for the remaining 1/3 of the timestep (5/15ths). By calculating these two parts of the motion and "adding" the translations together, you can find the correct new position.
(Of course, it gets more complicated if you hit near a corner, as there may be several collisions during the timestep, off different edges. And if you have more than one circle moving, things get a lot more complex. But that's where you can start for the case you've asked about)
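A worked C++ sketch of that interpolation, using hypothetical numbers matching the example above:
float x = 90.0f, vx = 15.0f;   // hypothetical position and velocity per step
float edge = 100.0f;           // boundary about to be hit
float dt = 1.0f;               // one timestep

float distToEdge  = edge - x;            // 10 pixels to the edge
float penetration = (x + vx*dt) - edge;  // 5 pixels past it
float t = distToEdge / (distToEdge + penetration); // hit at 2/3 of the step

x += vx * t * dt;          // travel to the edge on the original path
vx = -vx;                  // reflect the velocity
x += vx * (1.0f - t) * dt; // finish the step on the new path; x ends at 95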
Reflection across a rectangular boundary is incredibly simple. Just take the amount that the object passed the boundary and subtract it from the boundary position. If the position without reflecting would be (-0.8,-0.2) for example and the upper left corner is at (0,0), the reflected position would be (0.8,0.2).
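In other words, reflecting a coordinate across a boundary at position b is just p' = 2*b - p; a quick sketch of the example above:
// Reflect a coordinate across a boundary at position b.
float reflect(float p, float b) { return 2.0f * b - p; }
// reflect(-0.8f, 0.0f) == 0.8f and reflect(-0.2f, 0.0f) == 0.2f,
// matching the example above.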