Generating relative mouse events on macOS using Core Graphics - objective-c

I'm using the Core Graphics framework to generate mouse events. I want to be able to move the mouse around normally and also move the mouse inside of an FPS game. The issue I'm having is that these two situations seem to require different events.
I can use the following to move the mouse outside of a game.
// Event source was created with this:
// CGEventSourceCreate(kCGEventSourceStateCombinedSessionState)
CGPoint mouse_location(CGEventSourceRef source) {
    CGEventRef event = CGEventCreate(source);
    CGPoint loc = CGEventGetLocation(event);
    CFRelease(event);
    return loc;
}
void mouse_move_relative(CGEventSourceRef source, int x, int y) {
    CGPoint pos = mouse_location(source);
    pos.x += x;
    pos.y += y;
    CGEventRef event = CGEventCreateMouseEvent(source, kCGEventMouseMoved, pos, kCGMouseButtonLeft);
    CGEventSetIntegerValueField(event, kCGMouseEventDeltaX, x);
    CGEventSetIntegerValueField(event, kCGMouseEventDeltaY, y);
    CGEventPost(kCGHIDEventTap, event);
    CFRelease(event);
}
This works outside of a game but inside a game, the mouse seems to hit the edge of the screen and stop. So I tried using deltas without setting the position.
void mouse_move_relative(CGEventSourceRef source, int x, int y) {
    CGEventRef event = CGEventCreate(source);
    CGEventSetType(event, kCGEventMouseMoved);
    CGEventSetIntegerValueField(event, kCGMouseEventButtonNumber, kCGMouseButtonLeft);
    CGEventSetIntegerValueField(event, kCGMouseEventDeltaX, x);
    CGEventSetIntegerValueField(event, kCGMouseEventDeltaY, y);
    CGEventPost(kCGHIDEventTap, event);
    CFRelease(event);
}
This works inside a game, and the mouse no longer stops at the edge of the screen, but the mouse doesn't move at all outside of the game. If I'm inside a game and open a menu screen, I can't move the mouse anymore. It seems I need two separate code paths, or two separate modes, for these two situations. I'd like to avoid switching between modes and am wondering whether that's at all possible.
Can I generate a mouse event that works in both situations?
Maybe there's some way that I tell when the system has gone into relative mouse mode and change the emitted events accordingly? I'm not sure if Core Graphics can do that (or if that even makes sense). Also, I don't fully understand what kCGHIDEventTap and kCGEventSourceStateCombinedSessionState mean. The docs are talking about taps and tables and I don't really get it. Maybe all I need to do is change one of those...?
Note that the above snippets might not be exactly right because I'm actually using a wrapper library in Rust, but that's beside the point.

It turned out to be quite simple. All I needed to do was clamp the mouse coordinates to the display bounds.
CGDirectDisplayID display = CGMainDisplayID();
size_t width = CGDisplayPixelsWide(display);
size_t height = CGDisplayPixelsHigh(display);
pos.x = min(max(pos.x, 0), width - 1);
pos.y = min(max(pos.y, 0), height - 1);

Related

XNA 2D object collision (without Tiles/Grid)

First time posting here. Tried to look for topics previously to help.
I'm using Visual Basic, but so far I've been able to follow C# and just translate into VB.
I would like collision without tiles. Smooth movement without any sort of snapping. I already have the movement down, and my sprites stop at the edges of the screen.
I've read I could use Bounds and Intersects, which I have tried. When I apply an IF statement to the arrow keys each time they are pressed, using Bounds and Intersects (I just prevent sprite movement if it is intersecting), it works for ONE key. I move left into an object, and I stop. If I apply the IF to all keys, it will work the first time. Say I move left into an object, the IF statement checks if the Intersects is true or not and acts accordingly.
Now I want to move right, away from the object. I can't since my sprite is ALREADY colliding with the object, since each arrow key is programmed to NOT move if there is Intersection. I see perfectly why this happens.
The code I currently have: (Each arrow key has the same code, altered to it)
If Keyboard.GetState(PlayerIndex.One).IsKeyDown(Keys.Right) And rBlockBounds.X <=
        graphics.GraphicsDevice.Viewport.Width - rBlockBounds.Width = True Then
    If rBlockBoundBoxBounds.Intersects(rObstructBounds) Then
        rBlockBounds.X += 0
        rBlockBoundBoxBounds.X = rBlockBounds.X - 1
    Else
        rBlockBounds.X += 1
        rBlockBoundBoxBounds.X = rBlockBounds.X - 1
    End If
End If
rBlockBounds is my sprite As Rectangle
rBlockBoundBoxBounds is another Rectangle (1 pixel bigger than rBlockBounds) used more or less as a hit box; it moves with rBlockBounds and is the thing doing the collision checking.
rObstructBounds is the stationary object that I'm moving my Sprite into.
Anyone have suggestions on how I can make this work?
Since I program in C# rather than VB, I can't write your solution directly, but I can explain a better way of approaching it.
What you want to do is prevent the two rectangles from ever intersecting. To do this, add a move method to your code that checks whether the two rectangles are colliding. Here is a C# example:
public bool MoveX(float distance) // Move player horizontally in this example
{
    rBlockBounds.X += distance;
    if (rBlockBoundBoxBounds.Intersects(rObstructBounds))
    {
        rBlockBounds.X -= distance;
        return false;
    }
    return true;
}
Which essentially means that if you run into an object you will be pushed back out of it. Since it happens within a single tick, you won't get any jittery back-and-forth animation.
And that should do what you want. You can test this out and then implement it for y-coordinates as well.
Also, you might notice I've made the function return a bool. This is optional but allows you to check if your player has moved or not.
Note that if you teleport an object into another one it will cause problems, so remember to use this method every time you move anything.
But that should do what you want.
Edit
Note that since your objects are not on a tiled grid, you will need to move many times in very small steps.
Something like this:
public bool MoveX(float distance) // Move player horizontally in this example
{
    rBlockBounds.X += distance;
    if (rBlockBoundBoxBounds.Intersects(rObstructBounds))
    {
        rBlockBounds.X -= distance;
        return false;
    }
    return true;
}
public bool MoveX(float distance, int repeat)
{
    for (int i = 0; i < repeat; i++)
    {
        rBlockBounds.X += distance;
        if (rBlockBoundBoxBounds.Intersects(rObstructBounds))
        {
            rBlockBounds.X -= distance;
            return false;
        }
    }
    return true;
}
Where the second one will take multiple steps. Here is why you would use it:
MoveX(500);    // Will move 500 right. Could easily skip over objects!
MoveX(5, 100); // Will move 5 right one hundred times
               // ^ This takes more time but will not skip over objects
Similarly, for your case you could do this:
MoveX(3);       // On contact, the object will be at most 3 pixels away
MoveX(1, 3);    // On contact, the object will be at most 1 pixel away
MoveX(0.5f, 6); // On contact, the object will be at most 0.5 pixels away
Now, I'm guessing all your x, y positions are integers. If so, you could get away with the second call and end up exactly next to the other object. If not, you would use the third call.
Hope this helped.

How to determine if an object is clicked?

Suppose I have an array of Ball objects floating around in the canvas, and when an object is clicked, it disappears. I am having a hard time working out how to know whether an object was clicked. Should I use a for loop to check whether the mouse position is within the area of each object? I am afraid that will slow things down. What is a plausible algorithm for this?
Keep track of the centre point and radius of each Ball, and whenever a mouse click happens, calculate the distance from the mouse coordinates to each ball's centre. If any distance comes out within the radius of a particular ball, that ball was clicked.
public class Ball {
    private Point centre;
    private int radius;

    public boolean isInVicinityOf(int x, int y)
    {
        // There are faster ways to write the following condition,
        // but it drives home the point I'm making.
        if (Math.hypot(centre.getX() - x, centre.getY() - y) < radius)
            return true;
        return false;
    }

    // ... other stuff
}
Here's code for checking whether a mouse click happened on any ball:
// Returns the very first ball that was clicked,
// or null if none was clicked.
public Ball getBallClicked(Ball[] balls, MouseEvent event)
{
    for (Ball ball : balls)
    {
        if (ball.isInVicinityOf(event.getX(), event.getY()))
        {
            return ball;
        }
    }
    return null;
}
There are many other ways to implement the same thing, for example by using the Observer pattern, but the above is one such approach.
Hope it helps.
Use void mouseClicked() and then check the click's coordinates on the screen. You can specify what you want to do with the object in an if-statement inside that method.

Resizing desktop application with libgdx issues

I'm trying to make an app using libgdx. When I first launch it on my desktop, my map renders fine. But if I resize the window, the map no longer fits completely in the screen.
However, if I set the size at the beginning, the map renders fine, so there might be an issue with my resizing function.
In my GameScreen class I've got :
public void resize(int width, int height)
{
    renderer.setSize(width, height);
}
And my renderer has this:
public void setSize(int w, int h)
{
    ppuX = (float)w / (float)world.getWidth();
    ppuY = (float)h / (float)world.getHeight();
}
For example, if I shrink my window, the map is too small for the window. If I enlarge it, the map doesn't fit in it.
If you have a fixed viewport, which is the case most of the time, you don't need to do anything in your resize method. The camera will fill the window and draw everything even if you resize it at runtime.
I myself never put anything in resize.

How can you implement non-standard Leap Motion gestures?

The Leap Motion API only supports four standard gestures: circle, swipe, key tap and screen tap. In my application I need other gestures, but I do not know how to add them, or whether it is even possible to add more gestures. I read the API documentation and it was no help.
In my application I want to allow the user to hold an object and drag it. Is that possible using the Leap Motion API? If so, how can I do this?
You will need to create your own methods to recognize the starting point, the process, and the ending point of a gesture.
Starting point:
How will your program recognize you are trying to hold something? A simple gesture I can think of is 2 fingers linked with 1 palm. So in the frame, if you see 2 fingers linked with 1 palm and the fingers are maybe 10-20 mm apart, you can recognize it as a gesture to hold something. When these conditions are fulfilled, the program will recognize the gesture and you can write some code inside these conditions.
For a very ugly example in C#:
Starting point:
Boolean gesture_detected = false;
int fingerA_x = 0, fingerB_x = 0;
Frame frame = controller.Frame();
HandList hands = frame.Hands;
if (hands.Count == 1)
{
    foreach (Hand hand in hands)
    {
        if (hand.Fingers.Count == 2)
        {
            foreach (Finger finger in hand.Fingers)
            {
                if (fingerA_x == 0)
                {
                    fingerA_x = (int)finger.TipPosition.x;
                }
                else
                {
                    fingerB_x = (int)finger.TipPosition.x;
                }
            }
        }
    }
    if ((fingerA_x - fingerB_x) < 20)
    {
        // Gesture is detected. Do something...
        gesture_detected = true;
    }
}
Process:
What is your gesture trying to do? If you want to move around, you will have to call a mouse method to do a drag. Search for the method mouse_event() in C++ under PInvoke using the event MOUSEEVENTF_LEFTDOWN.
Ending point:
After you finish dragging, you need to send a mouse event like MOUSEEVENTF_LEFTUP to finish the simulated drag. But how will your program detect when the drag should stop? The most logical way is when the gesture is no longer detected in the frame, so write an else condition to handle that scenario.
if (!gesture_detected)
{
// Do something
}
I did wonder myself at one point whether you need to derive from some Leap Motion class(es) to define custom gestures, as with other APIs; it turns out you do not. Below is a short example in C++, the Leap Motion native language, of a custom gesture definition: the Lift gesture, which lifts all kinematic objects in your world into the air.
The gesture definition calls for 2 visible hands, both open flat with palms up and moving up more slowly than the pre-defined (currently deprecated) Leap Motion Swipe gesture, so there is no confusion between the two gestures in case the Swipe gesture was enabled during start-up in the Leap Motion callback void GestureListener::onConnect(const Leap::Controller& controller).
As seen in the example, the gesture definition imposes constraints on both hands' normals and velocities, so it won't be detected at random, but not so many constraints that it can't still be executed with reasonable effort.
// Lift gesture: 2 hands open flat, both palms up and moving up slowly
void LeapMotion::liftGesture(int numberHands, std::vector<glm::vec3> palmNormals, std::vector<glm::vec3> palmVelocities) {
    if ((numberHands == 2) &&
        (palmNormals[0].x < 0.4f) && (palmNormals[1].x < 0.4f) &&
        (palmNormals[0].y > 0.9f) && (palmNormals[1].y > 0.9f) &&
        (palmNormals[0].z < 0.4f) && (palmNormals[1].z < 0.4f) &&
        (palmVelocities[0].z > 50.0f) && (palmVelocities[1].z > 50.0f) &&
        (palmVelocities[0].z < 300.0f) && (palmVelocities[1].z < 300.0f)) {
        m_gesture = LIFT;
        logFileStderr(VERBOSE, "\nLift gesture...\n");
    }
}

(C++/CLI) Testing Mouse Position in a Rectangle Relative to Parent

I've been messing around with the Graphics class to draw some things on a panel. So far, to draw, I've just been using the Rectangle structure. On a panel, clicking a button creates a rectangle in a random place and adds it to an array of other rectangles (they're actually instances of a class called UIElement, which contains a Rectangle member). When this panel is clicked, it runs a test over all the elements to see if the mouse is inside any of them, like this:
void GUIDisplay::checkCollision()
{
    Point cursorLoc = Cursor::Position;
    for (int a = 0; a < MAX_CONTROLS; a++)
    {
        if (elementList[a] != nullptr)
        {
            if (elementList[a]->bounds.Contains(cursorLoc))
            {
                elementList[a]->Select();
                //MessageBox::Show("Click!", "Event");
                continue;
            }
            elementList[a]->Deselect();
        }
    }
    m_pDisplay->Refresh();
}
The problem is, when I click the rectangle, nothing happens.
The UIElement class draws its rectangles in the following bit of code. I've modified it a bit: in this example it uses the DrawReversibleFrame method to do the actual drawing, whereas I had been using the Graphics::FillRectangle method. When I changed it, I noticed DrawReversibleFrame drew in a different place than FillRectangle. I believe this is because DrawReversibleFrame draws with its positions relative to the window, while FillRectangle draws relative to whatever control's Paint event it's in (mine is in a panel's Paint method). So let me just show the code:
void UIElement::render(Graphics^ g)
{
    if (selected)
    {
        Pen^ line = gcnew Pen(Color::Black, 3);
        //g->FillRectangle(gcnew SolidBrush(Color::Red), bounds);
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::Highlight, FrameStyle::Thick);
        g->FillRectangle(gcnew SolidBrush(Color::Black), bounds);
        //g->DrawLine(line, bounds.X, bounds.Y, bounds.Size.Width, bounds.Size.Height);
    }
    else
    {
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::ControlDarkDark, FrameStyle::Thick);
        //g->FillRectangle(gcnew SolidBrush(SystemColors::ControlDarkDark), bounds);
    }
}
I added in both DrawReversibleFrame and FillRectangle so that I could see the difference. This is what it looked like when I clicked the frame drawn by DrawReversibleFrame:
The orange frame is where I clicked; the black is where it's rendering. This shows me that the Rectangle's Contains() method is checking the rectangle relative to the window, and not the panel. That's what I need fixed :)
I'm wondering if this is happening because the collision is tested outside of the panels Paint method. But I don't see how I could implement this collision testing inside the Paint method.
UPDATE:
Ok, so I just discovered that what DrawReversibleFrame and FillRectangle draw are always a certain distance apart. I don't quite understand why, but someone else might.
Both Cursor::Position and DrawReversibleFrame operate in screen coordinates, that is, coordinates for the entire screen (everything on your monitor), not just your window. FillRectangle, on the other hand, operates in window coordinates, that is, positions within your window.
If you take your example where you were drawing with both and the two boxes are always the same distance apart, and move your window on the screen then click again, you will see that the difference between the two boxes changes. It will be the difference between the top left corner of your window and the top left corner of the screen.
This is also why when you check to see what rectangle you clicked isn't hitting anything. You are testing the cursor position in screen coordinates against the rectangle coordinates in window space. It is possible that it would hit one of the rectangles, but it probably won't be the one you actually clicked on.
You always have to know what coordinate systems your variables are in. This is related to the original intention of Hungarian notation, which Joel Spolsky discusses in his article Making Wrong Code Look Wrong.
Update:
PointToScreen and PointToClient should be used to convert coordinates between screen and window coordinates.