I want my program to draw a polygon when the mouse clicks a button I have created on my screen. Where should I put the draw-polygon commands? I understand that I can't put them in my mouse function, because their effect would be lost the next time my display callback runs, so they have to go in my display function. But can I set an if condition in my display function?
Hope this helps you like it helped me. Once you translate the mouse coordinates into something your 3D object can use (see the code below) and store them somewhere, you can draw your object in the display loop at the stored coordinates using a transform. I initialized my shape, color, etc. in the init function of my OpenGL program, but only draw the shape once I have selected a number on the keyboard and then clicked somewhere in my viewport. Repeating these steps moves/transforms that object to the new coordinates.
void MouseButton(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON)
    {
        leftmousebutton_down = (state == GLUT_DOWN) ? TRUE : FALSE;
        if (leftmousebutton_down)
        {
            cout << "LEFT BUTTON DOWN" << endl;
            //pTransInfo[0].vTranslate = Vector3( 0.5f, 0.5f, 0.5f ); // center of scene

            GLint    viewport[4];     // var to hold the viewport info
            GLdouble modelview[16];   // var to hold the modelview info
            GLdouble projection[16];  // var to hold the projection matrix info
            GLfloat  winX, winY, winZ;        // variables to hold screen x,y,z coordinates
            GLdouble worldX, worldY, worldZ;  // variables to hold world x,y,z coordinates

            glGetDoublev( GL_MODELVIEW_MATRIX, modelview );   // get the modelview info
            glGetDoublev( GL_PROJECTION_MATRIX, projection ); // get the projection matrix info
            glGetIntegerv( GL_VIEWPORT, viewport );           // get the viewport info

            winX = (float)x;
            winY = (float)viewport[3] - (float)y;  // flip y: GLUT reports y from the top, OpenGL from the bottom
            winZ = 0;

            // get the world coordinates from the screen coordinates
            gluUnProject( winX, winY, winZ, modelview, projection, viewport, &worldX, &worldY, &worldZ );

            // THIS IS WHAT YOU WANT - store it in an array or other structure
            cout << "coordinates: worldX = " << worldX << " worldY = " << worldY << " worldZ = " << worldZ << endl;
        }
    }
}
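To answer the "if condition in the display function" part directly: yes, that is the usual pattern. Here is a rough sketch of what the display callback could look like (my own illustration, not part of the answer above; drawPolygon and the stored worldX/worldY/worldZ are assumed to be globals that MouseButton fills in):

bool drawPolygon = false;  // set to true in MouseButton once worldX/worldY/worldZ are stored

void Display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // ... draw the rest of the scene ...

    if (drawPolygon)  // only draw once a click has stored coordinates
    {
        glPushMatrix();
        glTranslated(worldX, worldY, worldZ);  // move to the stored click position
        glBegin(GL_POLYGON);                   // a small quad as a placeholder polygon
            glVertex2f(-0.1f, -0.1f);
            glVertex2f( 0.1f, -0.1f);
            glVertex2f( 0.1f,  0.1f);
            glVertex2f(-0.1f,  0.1f);
        glEnd();
        glPopMatrix();
    }

    glutSwapBuffers();
}

In MouseButton you would set drawPolygon = true after storing the coordinates and call glutPostRedisplay() so the scene gets redrawn.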
There are other posts on Stack Overflow about pinch zooming, but I haven't found any helpful ones for OpenGL that do what I'm looking for. I am currently using the orthoM function to change the camera position and to do scaling in OpenGL. I have gotten the camera to move around, and have gotten pinch zooming to work, but the zooming always zooms into the center of the OpenGL surface view coordinate system at 0,0. After trying different things, I haven't yet found a way that allows the camera to move around while also allowing pinch zooming toward the user's touch point (as an example, the touch controls in Clash of Clans are similar to what I am trying to make).
(The method I'm currently using to get the scale value is based on this post.)
My first attempt:
// mX and mY are the movement offsets based on the user's touch movements,
// and can be positive or negative
Matrix.orthoM(mProjectionMatrix, 0, ((-WIDTH/2f)+mX)*scale, ((WIDTH/2f)+mX)*scale,
((-HEIGHT/2f)+mY)*scale, ((HEIGHT/2f)+mY)*scale, 1f, 2f);
In the above code, I realize that the camera moves towards the coordinate 0,0 because, as scale gets smaller, the values for the camera edges also shrink towards 0. So the zoom pulls towards the coordinate system center, although the camera movement itself works at the right speed at any scale level.
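A quick numeric check (made-up numbers): with WIDTH = 1000, mX = 200 and scale = 0.5, this gives left = ((-500) + 200) * 0.5 = -150 and right = (500 + 200) * 0.5 = 350, so the view is centered on x = 100 = mX * scale instead of x = 200; the smaller scale gets, the closer that center slides toward 0.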
So, I then edited the code to this:
Matrix.orthoM(mProjectionMatrix, 0, (-WIDTH/2f)*scale+mX, (WIDTH/2f)*scale+mX,
(-HEIGHT/2f)*scale+mY, (HEIGHT/2f)*scale+mY, 1f, 2f);
The edited code now makes the zoom go toward the center of the screen no matter where in the surface view coordinate system the camera is (although that isn't the full goal), but the camera movement is off, as the offset isn't adjusted for the different scale levels.
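With the same made-up numbers, this version gives left = (-500) * 0.5 + 200 = -50 and right = 500 * 0.5 + 200 = 450, so the center stays at mX = 200 at every scale. The downside is that a given change in mX now moves the camera by the same world distance regardless of zoom, so panning feels too fast when zoomed in and too slow when zoomed out.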
I'm still working to find a solution myself, but if anyone has any advice or ideas on how this could be implemented, I would be glad to hear them.
Note, I don't think it matters, but I'm doing this in Android and using Java.
EDIT:
Since I first posted this question, I have made some changes to my code. I found this post, which explains the logic of how to pan the camera to the correct position based on the scale, so that the zoompoint remains in the same position.
My updated attempt:
// Only do the following if-block if two fingers are on the screen
if (zooming) {
    // midPoint is a PointF object that stores the coordinate of the midpoint between
    // the two fingers
    float scaleChange = scale - prevScale; // scale is the same as in my previous code
    float offsetX = -(midPoint.x * scaleChange);
    float offsetY = -(midPoint.y * scaleChange);
    cameraPos.x += offsetX;
    cameraPos.y += offsetY;
}

// cameraPos is a PointF object that stores the coordinate at the center of the screen,
// and replaces the previous values mX and mY
left   = cameraPos.x - (WIDTH/2f)  * scale;
right  = cameraPos.x + (WIDTH/2f)  * scale;
bottom = cameraPos.y - (HEIGHT/2f) * scale;
top    = cameraPos.y + (HEIGHT/2f) * scale;

Matrix.orthoM(mProjectionMatrix, 0, left, right, bottom, top, 1f, 2f);
The code works quite a bit better now, but it still isn't completely accurate. I tested it with panning disabled, and the zooming worked somewhat better. However, with panning enabled, the zooming doesn't focus in on the zoom point at all.
I finally found a solution while working on another project, so I'll post (in simplest form possible) what worked for me in case this could help anyone by chance.
final float currentPointersDistance = this.calculateDistance(pointer1CurrentX, pointer1CurrentY, pointer2CurrentX, pointer2CurrentY);
final float zoomFactorMultiplier = currentPointersDistance/initialPointerDistance; //> Get an initial distance between two pointers before calling this
final float newZoomFactor = previousZoomFactor*zoomFactorMultiplier;
final float zoomFactorChange = newZoomFactor-previousZoomFactor; //> previousZoomFactor is the current value of the zoom
//> The x and y values of the variables are in scene coordinate form (not surface)
final float distanceFromCenterToMidpointX = camera.getCenterX()-currentPointersMidpointX;
final float distanceFromCenterToMidpointY = camera.getCenterY()-currentPointersMidpointY;
final float offsetX = -(distanceFromCenterToMidpointX*zoomFactorChange/newZoomFactor);
final float offsetY = -(distanceFromCenterToMidpointY*zoomFactorChange/newZoomFactor);
camera.setZoomFactor(newZoomFactor);
camera.translate(offsetX, offsetY);
initialPointerDistance = currentPointersDistance; //> Make sure to do this
Method used to calculate the distance between two pointers:
public float calculateDistance(float pX1, float pY1, float pX2, float pY2) {
    float x = pX2 - pX1;
    float y = pY2 - pY1;
    return (float) Math.sqrt((x * x) + (y * y));
}
Camera class methods used above:
public float getXMin() {
    return centerX - ((centerX - xMin) / zoomFactor);
}

public float getYMin() {
    return centerY - ((centerY - yMin) / zoomFactor);
}

public float getXMax() {
    return centerX + ((xMax - centerX) / zoomFactor);
}

public float getYMax() {
    return centerY + ((yMax - centerY) / zoomFactor);
}

public void setZoomFactor(float pZoomFactor) {
    zoomFactor = pZoomFactor;
}

public void translate(float pX, float pY) {
    xMin += pX;
    yMin += pY;
    xMax += pX;
    yMax += pY;
}
The orthoM() function is called like the following:
Matrix.orthoM(projectionMatrix, 0, camera.getXMin(), camera.getXMax(), camera.getYMin(), camera.getYMax(), near, far);
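For anyone wondering why that offset keeps the zoom point fixed, here is a quick sanity check (my own working, not from the linked post). Write c for the camera center and m for the pointer midpoint, both in scene coordinates, and z0/z1 for the previous and new zoom factors. With the Camera getters above, the visible half-extent is (half-width)/z, so m stays at the same normalized screen position across the zoom step only if

(m - c) * z0 = (m - (c + delta)) * z1, which gives delta = (m - c) * (z1 - z0) / z1

and that is exactly offsetX/offsetY above, since distanceFromCenterToMidpoint = c - m, zoomFactorChange = z1 - z0 and newZoomFactor = z1.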
The rect should change to alpha = 0.1 once the circle touches it, but the if statement is not working: the rect becomes 0.1 opacity without any hit.
var circle = new lib.mycircle();
stage.addChild(circle);

var rect = new lib.myrect();
stage.addChild(rect);
rect.x = 200;
rect.y = 300;

circle.addEventListener('mousedown', downF);

function downF(e) {
    stage.addEventListener('stagemousemove', moveF);
    stage.addEventListener('stagemouseup', upF);
}

function upF(e) {
    stage.removeAllEventListeners();
}

function moveF(e) {
    circle.x = stage.mouseX;
    circle.y = stage.mouseY;
}

if (circle.hitTest(rect)) {
    rect.alpha = 0.1;
}

stage.update();
The way you have used hitTest is incorrect. The hitTest method does not check object to object. It takes an x and y coordinate, and determines if that point in its own coordinate system has a filled pixel.
I modified your example to make it more correct, though it doesn't actually do what you are expecting:
circle.addEventListener('pressmove', moveF);

function moveF(e) {
    circle.x = stage.mouseX;
    circle.y = stage.mouseY;
    if (rect.hitTest(circle.x, circle.y)) {
        rect.alpha = 0.1;
    } else {
        rect.alpha = 1;
    }
    stage.update();
}
Key points:
Reintroduced the pressmove. It works fine.
Moved the circle update above the hitTest check; otherwise you are checking where the circle was last time.
Moved the stage update to last. It should be the last thing you update. Note however that you can remove it completely, because you have a Ticker listener on the stage in your HTML file, which constantly updates the stage.
Added the else statement to turn the alpha back to 1 if the hitTest fails.
Then, the most important point is that I changed the hitTest to be on the rectangle instead. This essentially says: "Is there a filled pixel at the supplied x and y position inside the rectangle?" Since the rectangle bounds are -49.4, -37.9, 99, 76, this will be true when the circle's coordinates are within those ranges - which is just when it is at the top left of the canvas. If you replace your code with mine, you can see this behaviour.
So, to get it working more like you want, you can do a few things.
Transform your coordinates. Use localToGlobal, or just cheat and use localToLocal. This takes [0,0] in the circle, and converts that coordinate to the rectangle's coordinate space.
Example:
var p = circle.localToLocal(0, 0, rect); // the circle's origin expressed in the rectangle's coordinate space
if (rect.hitTest(p.x, p.y)) {
    rect.alpha = 0.1;
} else {
    rect.alpha = 1;
}
Don't use hitTest. Use getObjectsUnderPoint, pass the circle's x/y coordinate, and check if the rectangle is in the returned list.
Hope that helps. As I mentioned in a comment above, you can not do full shape collision, just point collision (a single point on an object).
How do I make it so that positions adapt when I resize my window in SDL2 while using SDL_RenderSetLogicalSize?
I want to be able to hover over a text item and make it change color, but whenever I resize the window the hover region stays at the same window coordinates. Is there a way to adapt the mouse coordinates?
void MainMenu::CheckHover()
{
    for (std::list<MenuItem>::iterator it = menuItems.begin(); it != menuItems.end(); it++)
    {
        Text* text = (*it).text;

        float Left   = text->GetX();
        float Right  = text->GetX() + text->GetWidth();
        float Top    = text->GetY();
        float Bottom = text->GetY() + text->GetHeight();

        if (mouseX < Left ||
            mouseX > Right ||
            mouseY < Top ||
            mouseY > Bottom)
        {
            // hover = false
            text->SetTextColor(255, 255, 255);
        }
        else
        {
            // hover = true
            text->SetTextColor(100, 100, 100);
        }
    }
}
I had a similar problem some time ago, and it was due to multiple updates of my mouse position in one SDL event loop. I wanted to move an SDL_Texture around by dragging it with the mouse, but it failed after resizing because the mouse coordinates got messed up.
What I did was rearrange my code so that only one event handler updates the mouse position. Also, I'm not using any calls to SDL_SetWindowSize(). When the user resizes the window, the renderer is resized appropriately thanks to SDL_RenderSetLogicalSize().
The relevant code parts look like this - some parts are adapted to your case. I would also suggest using an SDL_Rect to detect whether the mouse is inside your text area, because the SDL_Rects will be resized internally if the window/renderer changes size (a hover-check sketch based on this idea follows after the code).
// Declarations
// ...
SDL_Point mousePosRunning;
// Picture-in-picture texture I wanted to move
SDL_Rect pipRect;

// Init resizable SDL window
window = SDL_CreateWindow(
    "Window",
    SDL_WINDOWPOS_CENTERED_DISPLAY(displayIndex),
    SDL_WINDOWPOS_CENTERED_DISPLAY(displayIndex),
    defaultW, defaultH,
    SDL_WINDOW_SHOWN | SDL_WINDOW_RESIZABLE );

renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear"); // This one is optional
SDL_RenderSetLogicalSize(renderer, defaultW, defaultH);

// SDL event loop
while (SDL_PollEvent(&event) && running)
{
    switch (event.type)
    {
        // Some event handling here
        // ...

        // Handle mouse motion event
        case SDL_MOUSEMOTION:
            // Update mouse pos (use the motion member of the event union)
            mousePosRunning.x = event.motion.x;
            mousePosRunning.y = event.motion.y;

            // Check if mouse is inside the pip region
            if (SDL_EnclosePoints(&mousePosRunning, 1, &pipRect, NULL))
            {
                // Mouse is inside the pipRect
                // do some stuff... i.e. change color
            }
            else
            {
                // Mouse left the rectangle
            }
            break;
    }
}
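Applied to your CheckHover(), the SDL_Rect idea could look roughly like this (just a sketch based on the getters in your code; SDL_PointInRect needs SDL 2.0.4 or newer, and mouseX/mouseY are assumed to be the values you update from the SDL_MOUSEMOTION event):

void MainMenu::CheckHover()
{
    SDL_Point mousePos = { (int)mouseX, (int)mouseY };

    for (std::list<MenuItem>::iterator it = menuItems.begin(); it != menuItems.end(); ++it)
    {
        Text* text = (*it).text;

        // Build a rect from the text bounds
        SDL_Rect bounds = {
            (int)text->GetX(),
            (int)text->GetY(),
            (int)text->GetWidth(),
            (int)text->GetHeight()
        };

        if (SDL_PointInRect(&mousePos, &bounds))
            text->SetTextColor(100, 100, 100);  // hovered
        else
            text->SetTextColor(255, 255, 255);  // not hovered
    }
}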
I've put together a custom top-down camera logic script based on Unity3D's ThirdPersonCamera.js script. Everything appears to be working properly: the camera follows the target player on the XZ plane and even moves along the Y-axis as appropriate when the player jumps.
Only the camera isn't looking at the player. So I tried using Transform.LookAt() on the cameraTransform to have the camera looking directly down on the player. This does cause the camera to correctly look directly down on the player, but then movement via WASD no longer works. The player just sits there. Using Spacebar for jumping does still work though.
This doesn't make sense to me, how should the orientation of the camera's transform be affecting the movement of the player object?
The code for my script is below:
// The transform of the camera that we're manipulating
var cameraTransform : Transform;
// The position that the camera is currently focused on
var focusPosition = Vector3.zero;
// The idle height we are aiming to be above the target when the target isn't moving
var idleHeight = 7.0;
// How long should it take the camera to focus on the target on the XZ plane
var xzSmoothLag = 0.3;
// How long should it take the camera to focus on the target vertically
var heightSmoothLag = 0.3;
private var _target : Transform;
private var _controller : ThirdPersonController;
private var _centerOffset = Vector3.zero;
private var _headOffset = Vector3.zero;
private var _footOffset = Vector3.zero;
private var _xzVelocity = 0.0;
private var _yVelocity = 0.0;
private var _cameraHeightVelocity = 0.0;
// ===== UTILITY FUNCTIONS =====
// Apply the camera logic to the camera with respect to the target
function process()
{
    // Early out if we don't have a target
    if ( !_controller )
        return;

    var targetCenter = _target.position + _centerOffset;
    var targetHead = _target.position + _headOffset;
    var targetFoot = _target.position + _footOffset;

    // Determine the XZ offset of the focus position from the target foot
    var xzOffset = Vector2(focusPosition.x, focusPosition.z) - Vector2(targetFoot.x, targetFoot.z);
    // Determine the distance of the XZ offset
    var xzDistance = xzOffset.magnitude;
    // Determine the Y distance of the focus position from the target foot
    var yDistance = focusPosition.y - targetFoot.y;

    // Damp the XZ distance
    xzDistance = Mathf.SmoothDamp(xzDistance, 0.0, _xzVelocity, xzSmoothLag);
    // Damp the XZ offset
    xzOffset *= xzDistance;
    // Damp the Y distance
    yDistance = Mathf.SmoothDamp(yDistance, 0.0, _yVelocity, heightSmoothLag);

    // Reposition the focus position by the dampened distances
    focusPosition.x = targetFoot.x + xzOffset.x;
    focusPosition.y = targetFoot.y + yDistance;
    focusPosition.z = targetFoot.z + xzOffset.y;

    var minCameraHeight = targetHead.y;
    var targetCameraHeight = minCameraHeight + idleHeight;

    // Determine the current camera height with respect to the minimum camera height
    var currentCameraHeight = Mathf.Max(cameraTransform.position.y, minCameraHeight);
    // Damp the camera height
    currentCameraHeight = Mathf.SmoothDamp( currentCameraHeight, targetCameraHeight, _cameraHeightVelocity, heightSmoothLag );

    // Position the camera over the focus position
    cameraTransform.position = focusPosition;
    cameraTransform.position.y = currentCameraHeight;

    // PROBLEM CODE - BEGIN
    // Have the camera look at the focus position
    cameraTransform.LookAt(focusPosition, Vector3.forward);
    // PROBLEM CODE - END

    Debug.Log("Camera Focus Position: " + focusPosition);
    Debug.Log("Camera Transform Position: " + cameraTransform.position);
}
// ===== END UTILITY FUNCTIONS =====
// ===== UNITY FUNCTIONS =====
// Initialize the script
function Awake( )
{
    // If the camera transform is unassigned and we have a main camera,
    // set the camera transform to the main camera's transform
    if ( !cameraTransform && Camera.main )
        cameraTransform = Camera.main.transform;

    // If we don't have a camera transform, report an error
    if ( !cameraTransform )
    {
        Debug.Log( "Please assign a camera to the TopDownThirdPersonCamera script." );
        enabled = false;
    }

    // Set the target to the game object transform
    _target = transform;

    // If we have a target, set the controller to the target's third person controller
    if ( _target )
    {
        _controller = _target.GetComponent( ThirdPersonController );
    }

    // If we have a controller, calculate the center offset and head offset
    if ( _controller )
    {
        var characterController : CharacterController = _target.collider;
        _centerOffset = characterController.bounds.center - _target.position;
        _headOffset = _centerOffset;
        _headOffset.y = characterController.bounds.max.y - _target.position.y;
        _footOffset = _centerOffset;
        _footOffset.y = characterController.bounds.min.y - _target.position.y;
    }
    // If we don't have a controller, report an error
    else
        Debug.Log( "Please assign a target to the camera that has a ThirdPersonController script attached." );

    // Apply the camera logic to the camera
    process();
}
function LateUpdate( )
{
    // Apply the camera logic to the camera
    process();
}
// ===== END UNITY FUNCTIONS =====
I've marked the problem code section with PROBLEM CODE comments. If the problem code is removed, it allows WASD movement to work again, but then the camera is no longer looking at the target.
Any insight into this issue is very much appreciated.
I figured it out; the issue was with the ThirdPersonController.js script that I was using. In the function UpdateSmoothedMovementDirection(), the ThirdPersonController uses the cameraTransform to determine the forward direction along the XZ plane based on where the camera is looking. In doing so, it zeroes out the Y-axis and normalizes what's left.
The cameraTransform.LookAt() call I perform in my custom TopDownCamera.js script has the camera looking directly down the Y-axis. So when the ThirdPersonController gets hold of it and zeroes out the Y-axis, I end up with a zero forward direction, which causes the XZ movement to go nowhere.
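Concretely (my own numbers, not from the controller script): with the camera looking straight down, cameraTransform.TransformDirection(Vector3.forward) comes out as roughly (0, -1, 0). Zeroing the Y component leaves (0, 0, 0), and normalizing a (near-)zero vector in Unity just returns a zero vector, so the smoothed movement direction ends up zero and WASD does nothing.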
Copying ThirdPersonController.js and altering the code so that:
var forward = cameraTransform.TransformDirection(Vector3.forward);
forward.y = 0;
forward = forward.normalized;
becomes:
forward = Vector3.forward;
fixed the issue.
I'm searching for a program which detects the border of an image;
for example, I have a square and the program should detect its X/Y coordinates.
Example:
(Example image: http://img709.imageshack.us/img709/1341/22444641.png)
This is a very simple edge detector, suitable for binary images. It just calculates the differences between neighbouring pixels horizontally, like image.pos[1,1] = image.pos[1,1] - image.pos[1,2], and the same for vertical differences. Bear in mind that you also need to normalize the result to the range 0..255.
But! if you just need a program, use Adobe Photoshop.
Code written in C#.
public void SimpleEdgeDetection()
{
    // Only 8bpp indexed (grayscale) images are supported; check this before locking the bits
    if (image.PixelFormat != PixelFormat.Format8bppIndexed)
        return;

    BitmapData data = Util.SetImageToProcess(image);

    unsafe
    {
        byte* ptr1 = (byte*)data.Scan0;
        byte* ptr2;
        int offset = data.Stride - data.Width;
        int height = data.Height - 1;
        int px;

        for (int y = 0; y < height; y++)
        {
            ptr2 = (byte*)ptr1 + data.Stride;

            for (int x = 0; x < data.Width; x++, ptr1++, ptr2++)
            {
                // Horizontal plus vertical difference of neighbouring pixels
                px = Math.Abs(ptr1[0] - ptr1[1]) + Math.Abs(ptr1[0] - ptr2[0]);
                if (px > Util.MaxGrayLevel) px = Util.MaxGrayLevel;
                ptr1[0] = (byte)px;
            }

            ptr1 += offset;
        }
    }

    image.UnlockBits(data);
}
Method from Util Class
static public BitmapData SetImageToProcess(Bitmap image)
{
    if (image != null)
        return image.LockBits(
            new Rectangle(0, 0, image.Width, image.Height),
            ImageLockMode.ReadWrite,
            image.PixelFormat);

    return null;
}
If you need more explanation or a different algorithm, just ask, but please include more specific information rather than such a general description.
It depends on what you want to do with the border. If you just want the values along the edges of the region, use a connected-components (region labeling) algorithm; you must know the value of the region before running it. It will walk around the border and collect the outside of the region. If you are trying to detect just the outside lines, take the gradient of the image and it will reveal where the lines are; to do this, convolve the image with an edge detection filter such as Prewitt or Sobel.
You can use any image processing library, such as OpenCV, which is available for C++ and Python.
You should look for edge detection functions such as Canny edge detection.
Of course, this would require some diving into image processing.
The example image you gave should be straightforward to detect; how noisy/varied are the images going to be?
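To make the OpenCV suggestion concrete, here is a minimal C++ sketch (my own illustration, assuming OpenCV is installed; the file name and thresholds are placeholders to tune):

#include <opencv2/opencv.hpp>

int main()
{
    // Load the image as grayscale (placeholder file name)
    cv::Mat gray = cv::imread("square.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return 1;

    // Canny edge detection; 50/150 are typical low/high thresholds
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    cv::imwrite("edges.png", edges);
    return 0;
}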
A shape recognition algorithm might help you out, provided the shape has a solid border of some kind and the background colour is a solid one.
From the sounds of it, you just want a blob extraction algorithm. After that, the lowest/highest values for x/y will give you the coordinates of the corners.
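As a rough sketch of that idea (again using OpenCV only as an example library; the threshold value and file name are placeholders): threshold the image so the shape becomes a blob, extract the blobs as contours, and read the lowest/highest x/y from each bounding box.

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Binarize so the shape is a white blob on a black background
    cv::Mat gray = cv::imread("square.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return 1;
    cv::Mat binary;
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);

    // Each external contour corresponds to one blob
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // The bounding box gives the min/max x and y, i.e. the corner coordinates
    for (const std::vector<cv::Point>& contour : contours)
    {
        cv::Rect box = cv::boundingRect(contour);
        std::cout << "corners: (" << box.x << ", " << box.y << ") to ("
                  << (box.x + box.width) << ", " << (box.y + box.height) << ")" << std::endl;
    }
    return 0;
}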