Draw a scatterplot matrix using GLUT and OpenGL

I am new to GLUT and OpenGL. I need to draw a scatterplot matrix for an n-dimensional array.
I have loaded the data from a CSV file into a vector of vectors, where each inner vector corresponds to a row. So far I have plotted just one scatterplot, and I used GL_LINES to draw the grid. My question:
1. How do I draw points inside a particular grid cell? Using GL_POINTS I can only draw points across the entire window.
Please let me know if you need any further info to answer this question.
Thanks

What you need to do is transform your data's (x, y) coordinates into screen coordinates. The most straightforward way to do this does not rely on OpenGL or GLUT at all; it only takes a little math. Decide where on the screen the datapoint (0, 0) should appear, and how many pixels one data-unit increment should span. Then take your original data points, scale them, apply the offset, and pass the resulting screen coordinates to glVertex2f() (or whatever function you are using to specify points in your API).
For instance, you might decide you want point (0, 0) in your data to be at location (200, 0) on your screen, and the distance between 0 and 1 in your data to be 30 pixels on the screen. That operation looks like this:
int x = 0, y = 0;               // Original data point
int scaleX = 30, scaleY = 30;   // Pixels per data unit in each direction
int offsetX = 200, offsetY = 0; // Screen position of your graph's origin
// Apply the scaling values and offsets:
int screenX = x * scaleX + offsetX;
int screenY = y * scaleY + offsetY;
// Call your drawing functions using screenX and screenY as your coordinates
You will have to determine values that make sense for the scaling and offsets. You can also use different values for different sets of data, so you can display multiple graphs on the same screen. But this is a simple way to do it.
There are also other ways to go about this. OpenGL has very powerful coordinate transformation and matrix math capabilities, which become more useful as your programs grow more elaborate. They are most useful when you are moving things around the screen in real time or operating on very large data sets, because they let your graphics hardware perform these calculations much faster than the CPU. However, for simple calculations like these, done once or infrequently on limited sets of data, the CPU is easily fast enough on today's computers.
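To answer the grid-cell question directly: give each cell of the matrix its own offset and scale. Below is a minimal C++ sketch with illustrative names (cellSize, margin, and the data layout are assumptions, not from the question), which presumes an orthographic projection set up in window pixels, e.g. via gluOrtho2D(0, width, 0, height):
#include <GL/gl.h>
#include <utility>
#include <vector>

// Draw the scatterplot cell at (row, col) of the matrix.
void drawCell(int row, int col,
              const std::vector<std::pair<float, float>>& points,
              float dataMin, float dataMax)
{
    const float cellSize = 150.0f, margin = 10.0f;
    float originX = col * cellSize + margin;  // screen offset of this cell
    float originY = row * cellSize + margin;
    float scale = (cellSize - 2.0f * margin) / (dataMax - dataMin);

    glBegin(GL_POINTS);
    for (const auto& p : points)
        glVertex2f(originX + (p.first  - dataMin) * scale,
                   originY + (p.second - dataMin) * scale);
    glEnd();
}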


Moving player on Y axis in Godot 2D

I'm new to Godot.
I'm trying to make my player move vertically just like it moves horizontally.
I've tried a couple of ideas, but unfortunately I couldn't get him to move the way I want.
I want to code my vertical movement in a similar way to my horizontal movement code below, if possible:
var direction := Vector2(
    Input.get_action_strength("move_right") - Input.get_action_strength("move_left"),
    0.0
)
velocity = speed * direction
velocity = move_and_slide(velocity)
And if it's not possible, how can I code it?
Once upon a time there were vectors. I'm not in the mood to make yet another Introduction to Vector Algebra or to explain How to Work With Arbitrarily Oriented Vectors. Perhaps you might be interested in Math for Game Devs.
In this case, what you need to know is that 2D vectors have a horizontal and a vertical component (usually called x and y respectively). And you are leaving your vertical component at zero, here:
var direction := Vector2(
    Input.get_action_strength("move_right") - Input.get_action_strength("move_left"),
    0.0
)
So… Er… Don't do that. You say you want it to be like the horizontal, so something like this:
var direction := Vector2(
    Input.get_action_strength("move_right") - Input.get_action_strength("move_left"),
    Input.get_action_strength("move_down") - Input.get_action_strength("move_up")
)
In computer graphics the vertical component in 2D often goes downwards, due to historical reasons. There are different conventions for 3D, but that is not the issue at hand, pun intended.
The other lines you have already work with arbitrary vectors. You don't need to change them, nor repeat them.

Pose estimation: determine whether the rotation and translation matrices are right

Recently I'm struggling with a pose estimation problem with a single camera. I have some 3D points and the corresponding 2D points on the image. Then I use solvePnP to get the rotation and translation vectors. The problem is, how can I determine whether the vectors are right results?
Now I use an indirect way to check this:
I use the rotation matrix, the translation vector, and the world 3D coordinates of a certain point to obtain that point's coordinates in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point along the camera's x-axis direction (with the sign determined by which side of the camera the point lies on)?
The figure below is drawn in world space, while the axes depicted are those of the camera system.
======== How the camera and the point are placed in world space ========
|
|
Camera--------------------------> Z axis
| |} Xw?
| P(Xw, Yw, Zw)
|
v x-axis
My rvec and tvec results seem partly right and partly wrong. For a given point, the z value seems reasonable: if the point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, based on the location of the point, I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image looks like this:
But using the point coordinates calculated in the camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern, but it has moved from bottom right to top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which also agrees with this post.
In camera pose estimation, your "value seems reasonable" check can be formalized as the backprojection error: a measure of how well the resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so you have to compute it yourself:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);

// Use the computed solution to project the 3D pattern into the image
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);

// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints.at(i).x - projectedPattern.at(i).x;
    float dy = pixelPoints.at(i).y - projectedPattern.at(i).y;
    // Euclidean distance between the projected and the measured pixel
    float err = std::sqrt(dx * dx + dy * dy);
    errors.push_back(err);
}
// Here, compute the max or average of your "errors"
An average backprojection error for a calibrated camera is typically in the range of 0 to 2 pixels. Judging by your two pictures, yours would be far more than that. To me it looks like a scaling problem. If I am right, you compute the projection yourself; maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I've learned not to follow my imagination :) The first thing I usually do with the returned rVec and tVec is to create a 4x4 rigid transformation matrix out of them (I once posted code for this here). That makes things even less intuitive, but it is compact and handy.
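For reference, a minimal sketch of that construction (my assumption of what the linked code does): cv::Rodrigues() converts the 3x1 rotation vector into a 3x3 rotation matrix, which is then combined with the translation:
cv::Mat R;
cv::Rodrigues(rVec, R);                   // 3x1 rotation vector -> 3x3 rotation matrix

cv::Mat T = cv::Mat::eye(4, 4, CV_64F);   // 4x4 rigid transformation (world -> camera)
R.copyTo(T(cv::Rect(0, 0, 3, 3)));        // top-left 3x3 block holds the rotation
tVec.copyTo(T(cv::Rect(3, 0, 1, 3)));     // last column holds the translation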
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates of the point in the camera system are (xc, yc, zc). Then xc is the (signed) distance from the camera to the point in the real world along the camera's x direction.
Next, how do you determine whether the output matrices are right?
1. As #eidelen points out, the backprojection error is one indicative measure.
2. Calculate the points' coordinates in the camera system from their world coordinates and the matrices, then check whether they are reasonable.
So why did I get a wrong result (the pattern remained, but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic parameters. In the camera matrix, you should use width/2 and height/2 for cx and cy, whereas I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
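For reference, a sketch of how such an intrinsic matrix is typically assembled; fx, fy, width and height are illustrative names for the calibrated focal lengths (in pixels) and the image size:
cv::Mat camIntrinsics = (cv::Mat_<double>(3, 3) <<
    fx,  0.0, width / 2.0,    // cx: principal point x, approximately the image center
    0.0, fy,  height / 2.0,   // cy: principal point y
    0.0, 0.0, 1.0);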

What is the right way to resize using NVIDIA NPP to exact destination dimensions?

I'm trying to use NVIDIA NPP to experiment with some image resizing routines. I want to resize to exact dimensions. I've been looking at image resizing using NVIDIA NPP, but all of its resize functions take scale factors for the X and Y dimensions, and I could not see any API that takes the destination dimensions directly.
As an example, this is one API:
NppStatus nppiResizeSqrPixel_8u_C1R(const Npp8u * pSrc, NppiSize oSrcSize, int nSrcStep, NppiRect oSrcROI,
                                    Npp8u * pDst, int nDstStep, NppiRect oDstROI,
                                    double nXFactor, double nYFactor, double nXShift, double nYShift,
                                    int eInterpolation);
I realize one way could be to find the appropriate scale factor for the destination dimension, but we don't know exactly how the API decides the destination ROI based on the scale factor (since it is floating-point math). We could reverse the calculation in the jpegNPP sample to find the scale factor, but the API itself does not make any guarantees, so I'm not sure how safe that is. Any ideas what the possibilities are?
As a side question, the API also takes two params, nXShift and nYShift, but the documentation just says "Source pixel shift in x-direction". I'm not exactly clear what shift is being talked about here. Do you have an idea?
If I wanted to map the whole SRC image to a smaller rectangle in the DST image, as shown in the image below, I would use xFactor = yFactor = 0.5, xShift = 0.5*DST.width, and yShift = 0.
Mapping src to half size destination image
In other words, the pixel at (x,y) in the SRC is mapped to the pixel (x',y') in the DST as
x' = xFactor * x + xShift
y' = yFactor * y + yShift
In this case, both the source and dest ROI could be the entire support of the respective images.
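If that linear mapping holds (the API does not formally guarantee it, so treat this as a sketch), you can hit exact destination dimensions by deriving the factors from the two sizes and using the full images as ROIs; pSrc, pDst, the steps, and the width/height variables below are illustrative names:
NppiSize srcSize = { srcWidth, srcHeight };
NppiRect srcROI  = { 0, 0, srcWidth, srcHeight };
NppiRect dstROI  = { 0, 0, dstWidth, dstHeight };

// Scale factors that map the source extent exactly onto the destination extent
double xFactor = (double)dstWidth  / (double)srcWidth;
double yFactor = (double)dstHeight / (double)srcHeight;

NppStatus status = nppiResizeSqrPixel_8u_C1R(
    pSrc, srcSize, srcStep, srcROI,
    pDst, dstStep, dstROI,
    xFactor, yFactor,
    0.0, 0.0,              // no shift: the source origin maps to the destination origin
    NPPI_INTER_LINEAR);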

Detect if a quad is actually visible 2D in OpenGL

I currently have 16 tiles, each with its own image, that make up one big map. I pan by applying a translation right at the beginning, before any actual drawing, with this:
GL.Translate(G_.Pan(0), G_.Pan(1), 0)
Then I zoom by doing this:
GL.Ortho(-G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, G_.Size * 1.5 ^ G_.ZoomFactor, -G_.Size * 1.5 ^ G_.ZoomFactor, -1, 1)
G_.Size is a constant that varies only at startup, depending on parameters; the zoom factor ranges from -1 to -13.
What I want is to check whether each of the 16 tiles is within the visible area, so I can stop drawing the ones that are off screen. I found some quite complex methods for doing this, but they were for 3D and seemed like a lot of work for something that should be simple. I would have thought it would be something like just checking whether a point is within the bounds of the visible area, but I have no idea how to get the visible area.
Andon M Coleman already suggested that you implement projection volume culling (a generalized form of frustum culling). This is, however, outside the scope of OpenGL. You must understand that OpenGL is not a "magical" scene graph that does scene management and the like. It's a mere drawing API: what it does is put shaded, textured points, lines or triangles on the screen, and that's it. The rest is up to you, or to the libraries you choose to implement it.
In projection volume culling you test whether a given piece of geometry intersects the volume bounded by the planes that form the borders of the view volume. Your projection matrix defines those planes; specifically, it transforms view-space vertex positions into the [-1;1]×[-1;1]×[-1;1] cube of perspective-divided clip space. So by inverting the projection matrix and unprojecting the corners of that cube through it, you determine the limiting planes of the projection volume in view space.
You then use that information to intersect your quads with the volume to see if they cross it, i.e. are in any way visible.
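In your 2D orthographic case this collapses to a rectangle overlap test: undo the pan to get the visible world-space rectangle, then test each tile's bounding box against it. A minimal C++ sketch, assuming the translate is your only model transform and using illustrative names (size stands for G_.Size * 1.5 ^ ZoomFactor from your GL.Ortho call):
// A tile occupying the world-space rectangle [x, x+w] x [y, y+h] is visible
// iff it overlaps the view rectangle, which is the ortho box shifted by the pan:
// -size <= v + pan <= size  =>  -size - pan <= v <= size - pan
bool tileVisible(float x, float y, float w, float h,
                 float size, float panX, float panY)
{
    float minX = -size - panX, maxX = size - panX;
    float minY = -size - panY, maxY = size - panY;
    return x < maxX && x + w > minX &&
           y < maxY && y + h > minY;
}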

Gravitational Pull

Does anyone know of a tutorial that deals with the gravitational pull of two objects? E.g. a satellite being drawn toward the moon (and possibly slingshotting past it).
I have a small Java game that I am working on, and I would like to implement this feature in it.
I have the formula for gravitational attraction between two bodies, but when I try to use it in my game, nothing happens.
There are two objects on the screen; one will always be stationary, while the other moves in a straight line at a constant speed until it comes within the detection range of the stationary object, at which point it should be drawn toward it.
First I calculate the distance between the two objects, and depending on their masses and this distance, I update the x and y coordinates.
But like I said, nothing happens. Am I not implementing the formula correctly?
I have included some code to show what I have so far.
This is the moment when the particle enters the gate's detection range and should start being pulled toward it:
for (int i = 0; i < particle.length; i++)
{
    // Getting the instant when a Particle enters a Gate's detection range
    if (getDistanceBetweenObjects(gate.getX(), particle[i].getX(), gate.getY(), particle[i].getY()) <=
        sumOfRadii(particle[i].getRadius(), barrier.getRadius()))
    {
        particle[i].calcGravPull(particle[i].getMass(), barrier.getMass(),
                getDistanceBetweenObjects(gate.getX(), particle[i].getX(), gate.getY(), particle[i].getY()));
    }
}
And the method in my Particle class that does the movement:
// Calculate the gravitational pull between objects
public void calcGravPull(int mass1, int mass2, double distBetweenObjects)
{
    double gravityPull;
    gravityPull = GRAV_CONSTANT * ((mass1 * mass2) / (distBetweenObjects * distBetweenObjects));
    x += gravityPull;
    y += gravityPull;
}
Your formula has problems. You're calculating the gravitational force and then applying it as if it were an acceleration. Acceleration is force divided by mass, so you need to divide the force by the moving object's mass; GRAV_CONSTANT * (mass1 / (distBetweenObjects * distBetweenObjects)) is therefore the acceleration of mass2.
Then you're using it as if it were a positional adjustment, not a velocity adjustment (which is what an acceleration is). Keep track of the moving mass's velocity, use that to adjust its position, and use the acceleration to change that velocity.
Finally, you're using the acceleration as a scalar when it's really a vector. Calculate the angle from the moving mass to the stationary mass; if you represent it as the angle from the positive x-axis, multiply the acceleration by the cosine of that angle for the x component and by the sine of that angle for the y component.
That will give you a correct representation of gravity.
If it does nothing, check the coordinates to see what is happening. Make sure the stationary mass is large enough to have an effect; gravity is a very weak force, and you'll see no significant effect from anything much smaller than a planetary mass.
Also, make sure you're using the correct gravitational constant for the units you're using. The constant you find in books is for the MKS system (meters, kilograms, seconds). If you're using other units, such as kilometers for length, you need to adjust the constant accordingly, or alternately convert the lengths to meters before plugging them into the formula.
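Putting those corrections together, here is a minimal sketch of the per-frame update, with illustrative names (dt is the frame time step; the normalized direction vector replaces the explicit cosine/sine, which is equivalent). It is written in C++-style code but ports directly to the Java Particle class:
// Vector from the particle p to the stationary gate
double dx = gateX - p.x, dy = gateY - p.y;
double dist = std::sqrt(dx * dx + dy * dy);

// Acceleration of the particle: force / particleMass = G * gateMass / r^2
double accel = GRAV_CONSTANT * gateMass / (dist * dist);

// Apply the acceleration to the velocity, decomposed along the direction to the gate
p.vx += accel * (dx / dist) * dt;
p.vy += accel * (dy / dist) * dt;

// The velocity, not the force, is what moves the particle
p.x += p.vx * dt;
p.y += p.vy * dt;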
Your algorithm is correct. Probably the gravitational pull you compute is too small to be seen. I'd remove GRAV_CONSTANT and try again.
BTW, you can gain a bit of speed by moving the result of getDistanceBetweenObjects() into a temporary variable.