Godot: How would I get InputEventMouseMotion in the _process function?

I was working on a game in Godot and want to get this:
func _input(event):
    if event is InputEventMouseMotion:
        pass
in the _process function (without using _input, _unhandled_input, or anything related, and without defining a new function).
Is there a way to do this, and if so, how?

You could use
Input.get_last_mouse_speed()
But this looks tricky to get right. From the official documentation:
Returns the mouse speed for the last time the cursor was moved, and this until the next frame where the mouse moves. This means that even if the mouse is not moving, this function will still return the value of the last motion.
Using the _input function is a better solution. If you want to handle mouse movement in _process, you can use _input to store the movement in a variable, which is then read in _process.
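A minimal sketch of that approach (the variable name mouse_motion is just illustrative; event.relative holds the motion since the previous mouse event):
var mouse_motion = Vector2.ZERO

func _input(event):
    if event is InputEventMouseMotion:
        # Accumulate all motion received since the last processed frame.
        mouse_motion += event.relative

func _process(_delta):
    if mouse_motion != Vector2.ZERO:
        # React to this frame's accumulated motion here, then reset it.
        mouse_motion = Vector2.ZERO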
Note that this is only a problem for the motion. You can easily get the state of the mouse buttons from Input (get_mouse_button_mask).


Can you activate a method on an Area2D node with the player's raycast2D in Godot?

I created a sort of old-style JRPG movement system with a three-character party. The first character (the player) uses a raycast for checking surrounding tiles and for movement.
I want the player to come close to a shop tile and hit a button to open a dialog.
The player is in the middle, and the shop is directly above him.
How would I activate a custom function on the shop tile?
I tried using raycasts and signals. The shop is an Area2D tile.
Is it even possible to trigger an Area2D's on_entered signal with a raycast?
The default signals seem to be wrong for this use, but so do custom signals.
There is probably something very simple I'm missing.
Thanks for any help.
You could check what the raycast is colliding with. Since it looks like you are placing an Area2D on top of a TileMap, why not make the shop a StaticBody2D instead of a tile, and attach the shop script to that body? It is typical for game objects other than walls not to live in the TileMap anyway.
Somewhere in your player's movement code:
if Input.is_action_just_pressed("interact"):
    var collider = raycast.get_collider()
    if collider != null:
        if collider.has_method("open_shop"):
            collider.call_deferred("open_shop")
I am assuming here that the raycast is already properly updated in your movement code, either by the raycast's enabled property being true or by a call to force_raycast_update().
Here, raycast should be replaced with whatever reference you have to the player's RayCast2D, and "open_shop" should be replaced with the name (still as a string literal) of the function from the shop script you want to call.
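For completeness, a minimal sketch of the shop-side script (the dialog logic is just a placeholder):
extends StaticBody2D

func open_shop():
    # Hypothetical: kick off your shop dialog here.
    print("Shop opened")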

PyDirectInput Mouse Movement Is Very Sensitive

I had been playing around with pyautogui before switching to pydirectinput in order to automate things in Minecraft. I'm making a mining bot and I'm running into some issues involving automated mouse movement in the game. I'm using the moveRel() function (I have also tried move() and moveTo(), which produced the same result) to move the player's head up and down. However, even when I set the offsets to a really low amount like 1, the player's head rotates through its full range of motion. To help you visualize this, in Minecraft, picture your character staring off into the horizon. Now imagine what would happen if you suddenly jerked the mouse back. The player would face down, right? Well, every time I try moving the mouse a little bit using pydirectinput, the player always ends up facing down. What is causing the player to look down, as if its camera were anchored, when I use the mouse-moving function in pydirectinput?
I solved my problem. It turns out that I needed to turn on Raw Input so that the mouse input wouldn't be accelerated so much. Raw Input uses the raw mouse movement from your computer, meaning it does not accelerate or decelerate the mouse input to match the game sensitivity. I think that's how it works. By the way, Raw Input is in the mouse control settings in Minecraft. Anyway, because of the acceleration of my mouse input, the simulated mouse movement from my pydirectinput script was too sensitive for the game, which is why the player always looked downwards no matter what numbers I put into the moveRel() function.
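For reference, a minimal sketch of the call in question (newer versions of pydirectinput also accept a relative flag on moveRel(), which sends a relative-motion event instead of computing an absolute target first; if your version lacks it, plain moveRel(0, 10) is the form used above):
import pydirectinput

# Nudge the view 10 pixels down from the current position.
pydirectinput.moveRel(0, 10, relative=True)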

Libgdx - How to spawn particles only when I hold mouse button?

So I slowly got to know how to manipulate the particle system and emitter in-game through code, but there is one simple task I can't figure out... How can I spawn particles ONLY while I hold the mouse button? I tried a workaround by setting the maxCount of the emitter to 0 when it's not pressed, but then it either doesn't emit particles at all, or just makes the existing ones disappear immediately, which looks very unnatural and I don't want it. Is there a way to emit them "manually" in the render method?
You probably want to set the Emission scaled value on the particle emitter. You can leave the max count at whatever maximum particle number you want.
To turn off the creation of particles:
emitter.getEmission().setLow(0);
emitter.getEmission().setHigh(0);
To turn it back on:
emitter.getEmission().setLow(10);
emitter.getEmission().setHigh(10);
Try using a Pool combined with your listeners:
GitHub link
OK, this is what I did to make it work. blowing is basically a boolean that is true while the mouse button is held and false when it is not:
if (blowing) {
    effectEmitter.start();
} else {
    effectEmitter.allowCompletion();
}
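One simple way to keep that boolean up to date is to poll the mouse each frame (a sketch; blowing is assumed to be a field visible to your render loop):
// At the top of render(), before the emitter logic above.
// Imports assumed: com.badlogic.gdx.Gdx and com.badlogic.gdx.Input.
blowing = Gdx.input.isButtonPressed(Input.Buttons.LEFT);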

How would I go about creating levels in a game that scroll automatically?

I'm working on a 2D shmup, and the idea is that the level continuously scrolls automatically, and your character can move around the screen.
Now, I'm having trouble figuring out how I would implement this, and Google hasn't been any help. Right now I have a scrolling background (the background position is simply decremented each frame) and the player can move around freely in the window, but how would I go about creating the objects in the level? Would I just use a timer to trigger objects and enemies, or is there a way to do it based on the position/width of the background? (I'd prefer the second method... but I have no clue how that would be done.)
Since this is a general question and doesn't really pertain to any of my code that I've already written as far as I know, I don't think I need to include any of it...But I'll be happy to provide any part of it if needed.
I'd recommend either:
Use physical triggers
Use a list of timed events
Physical triggers
Simply place a box on your level. When it scrolls partially or completely onto the screen (whichever makes more sense - maybe use both in different cases?), you trigger the event associated with that trigger.
This would be simpler to support in a level editor because the physical nature is inherently very easy to visualize.
Timed events
You basically create a timer object at the beginning of the level, and an ordered queue of events. In your game update loop, peek at the head of the queue. If the trigger time of the item at the head of the queue is less than the current elapsed time, pop the item off the queue and trigger the event.
Timed events would be more generically useful because they would also support non-scrolling levels, or non-scrolling portions of levels.
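A minimal sketch of that event queue in C++ (TimedEvent and the trigger callback are placeholders for whatever your game actually uses):
#include <queue>
#include <functional>

struct TimedEvent {
    float triggerTime;              // seconds since the level started
    std::function<void()> trigger;  // action to run when the time is reached
};

// Filled in ascending triggerTime order when the level loads.
std::queue<TimedEvent> events;

void updateEvents(float elapsedSeconds) {
    // The queue is sorted, so stop at the first event still in the future.
    while (!events.empty() && events.front().triggerTime <= elapsedSeconds) {
        events.front().trigger();
        events.pop();
    }
}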
Combination of both
You could also do some sort of combination of these to get the benefits of both styles: Easier visualization/level editing, and supporting non-scrolling sections or time-based events.
Each physical trigger will have its own script queue. When the trigger is hit, a timer is started and an event queue is created. That timer and queue are added to a list of currently running timers and queues.
In your update function, you check all items on the list, and trigger events the same way you did with the timed event queue above. Once a queue is emptied, you remove it from the list of timers/queues.
How to detect that a trigger is on-screen
You should implement scrolling first.
Once you have scrolling, calculate the rectangle that matches where the screen is located in your pixel/world coordinate system. This will give you the "bounding box" of the screen.
From here, do an intersection test between your event trigger's "bounding box" and the screen.
Here is a test to see if there is any overlap between two rectangles. It isn't order-specific:
rect1.left < rect2.right && rect1.right > rect2.left
&& rect1.top < rect2.bottom && rect1.bottom > rect2.top
If the rects are touching at all, it will return true.
Here is a test to make sure rect1 contains rect2. Order is important:
rect1.left <= rect2.left && rect1.right >= rect2.right
&& rect1.top <= rect2.top && rect1.bottom >= rect2.bottom
If rect2 is completely contained by rect1 (it is completely on-screen), it will return true.
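Wrapped up as C++ helpers (assuming a simple Rect struct with left/right/top/bottom fields and y growing downward):
struct Rect { float left, right, top, bottom; };

// True if the rectangles overlap at all; argument order does not matter.
bool overlaps(const Rect& r1, const Rect& r2) {
    return r1.left < r2.right && r1.right > r2.left
        && r1.top < r2.bottom && r1.bottom > r2.top;
}

// True if r1 completely contains r2 (e.g. the trigger is fully on-screen).
bool contains(const Rect& r1, const Rect& r2) {
    return r1.left <= r2.left && r1.right >= r2.right
        && r1.top <= r2.top && r1.bottom >= r2.bottom;
}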
How to implement simple timers
Simply get some sort of clock value (SDL_GetTicks, for example) and store it when the timer starts.
To see how much time has elapsed since then, call the function again and subtract the stored value, then compare the difference against the target time.
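For example, with SDL (a minimal sketch):
#include <SDL.h>

// True once targetMs milliseconds have passed since startTicks was taken.
bool timerExpired(Uint32 startTicks, Uint32 targetMs) {
    return SDL_GetTicks() - startTicks > targetMs;
}

// Usage: Uint32 start = SDL_GetTicks();
//        ... each frame: if (timerExpired(start, 5000)) { /* fire event */ }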
Unfortunately, this is where you should use pointers. Something along the lines of:
vector<BadGuy*> listOfBaddies;
// Place the enemy just off screen.
newYposition = SCREEN_HEIGHT + 20;
// An (almost) infinite number of bad guys can be created with this code:
listOfBaddies.push_back(new BadGuy(newXposition,
                                   newYposition,
                                   EnemyType,
                                   blahblah));
Which means that the bad guy will need a constructor like:
BadGuy::BadGuy(float newX, float newY, string Type, whateverelseyouwant) {
    actualSpritePartOfBadGuyClass.setPosition(newX, newY);
}
Does that make sense? I'll elaborate if you ask :D
I'm making a game now that uses something similar :)

I want to animate the movement of a foreign OS X app's window

Background: I recently got two monitors and want a way to move the focused window to the other screen and vice versa. I've achieved this by using the Accessibility API. (Specifically, I get an AXUIElementRef that holds the AXUIElement associated with the focused window, then I set the NSAccessibilityPositionAttribute value to move the window.)
I have this working almost exactly the way I want it to, except I want to animate the movement of windows. I thought that if I could get the NSWindow somehow, I could get its layer and use CoreAnimation to animate the window movement.
Unfortunately, I found out that this isn't possible. (Correct me if I'm wrong, though -- if there's a way to do it this way, that'd be great!) So I'm asking you all for help. How should I go about animating the movement of the focused window, if I have access to the AXUIElementRef?
-R
EDIT:
I was able to get a crude animation going by creating a while loop that moves the window's position by a small amount on each iteration. However, the results are pretty sub-par. As you can guess, it takes a lot of unnecessary processing power, and it's still very choppy. There must be a better way.
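For reference, a sketch of that stepped approach in Swift (all names here are illustrative: window is the focused window's AXUIElement, and the ease-out curve just softens the visible choppiness somewhat):
import ApplicationServices
import Foundation

// Set the window's accessibility position attribute.
func setWindowPosition(_ window: AXUIElement, _ target: CGPoint) {
    var point = target
    if let value = AXValueCreate(.cgPoint, &point) {
        AXUIElementSetAttributeValue(window, kAXPositionAttribute as CFString, value)
    }
}

// Step the window toward `end` over `duration` seconds with a cubic ease-out.
func animateMove(_ window: AXUIElement, from start: CGPoint, to end: CGPoint,
                 duration: TimeInterval = 0.25, steps: Int = 30) {
    for i in 1...steps {
        let t = Double(i) / Double(steps)
        let eased = 1 - pow(1 - t, 3)
        let p = CGPoint(x: start.x + (end.x - start.x) * eased,
                        y: start.y + (end.y - start.y) * eased)
        DispatchQueue.main.asyncAfter(deadline: .now() + duration * t) {
            setWindowPosition(window, p)
        }
    }
}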
The best possible way I can imagine would be to perform some hacky property comparison between the AXUIElement info values for the window and the info returned from the CGWindow api. Once you're able to ascertain what windows in the CGWindow API match AXUIElementRefs, you could grab bitmaps of the current window contents, overlay the screen with your own custom animation draw of the faux windows, then as you drop the overlay set the real AXUIElementRef's to the desired-end-animation positions.
Hacky, though.