How to create a custom mouse cursor in Matplotlib

I am interested in creating a custom mouse cursor, so that during drag and pick events on certain lines or points, the mouse changes from an arrow to a hand (or other symbol).
What is the best method of doing this?
I assume this is possible since the mouse cursor changes to a small crosshair during zoom operations. If possible, a solution using the PyQt/PySide backend would be preferable.

What you need is an mpl_canvas. Follow this tutorial to set one up.
With an mpl_canvas, you can then set up events that get triggered.
fig = matplotlib.figure.Figure()
cid = fig.canvas.mpl_connect('button_press_event', your_method)
There are several kinds of events you can connect to (listed under Events).
With your signal set up, your_method gets called with an event parameter. So do something like:
def your_method(event):
    print('Your x and y mouse positions are ', event.xdata, event.ydata)
Click on the corresponding Class and description links to see exactly what is in event for a specific mpl_canvas Event.
In your specific case, to change how the mouse cursor looks, your_method should look something like:
def your_method(event):
    # changes cursor to +
    QtGui.QApplication.setOverrideCursor(QtGui.QCursor(QtCore.Qt.CrossCursor))
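For a fuller picture, here is a minimal sketch, assuming the Qt5Agg backend with PyQt5 (the example line and the hand cursor are just for illustration): it connects a motion event and sets the cursor on the canvas widget itself, so nothing application-wide needs to be overridden.

import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure

app = QtWidgets.QApplication(sys.argv)

fig = Figure()
canvas = FigureCanvas(fig)
ax = fig.add_subplot(111)
line, = ax.plot([0, 1, 2], [0, 1, 0], picker=5)  # picker enables pick events on the line

def on_motion(event):
    # line.contains(event) reports whether the mouse is currently over the line
    over, _ = line.contains(event)
    if over:
        canvas.setCursor(QtGui.QCursor(QtCore.Qt.OpenHandCursor))
    else:
        canvas.unsetCursor()  # back to the default arrow

canvas.mpl_connect('motion_notify_event', on_motion)
canvas.show()
sys.exit(app.exec_())

If you stay with setOverrideCursor as in the snippet above, remember to call the matching restoreOverrideCursor() when the drag or pick ends, otherwise the override cursor stays active for the whole application.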

Related

Godot - Input.is_action_just_pressed() runs twice

So I have my Q and E set to control a Camera that is fixed in 8 directions. The problem is that when I call Input.is_action_just_pressed() it returns true twice, so its body runs twice.
This is what it does with the counter:
0 0 0 0 1 1 2 2 2 2
How can I fix this?
if Input.is_action_just_pressed("camera_right", true):
    if cardinal_count < cardinal_index.size() - 1:
        cardinal_count += 1
    else:
        cardinal_count = 0
    emit_signal("cardinal_count_changed", cardinal_count)
On _process or _physics_process
Your code should work correctly - without reporting twice - if it is running in _process or _physics_process.
This is because is_action_just_pressed returns true only if the action was pressed in the current frame. By default that means the graphics frame, but the method actually detects whether it is being called in the physics frame or the graphics frame, as you can see in its source code. And by design you only get one call of _process per graphics frame, and one call of _physics_process per physics frame.
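For example, a minimal sketch of the polling approach (assuming the "camera_right" action and the cardinal_count / cardinal_index variables and signal from the question); here the check fires exactly once per key press:

func _process(_delta):
    if Input.is_action_just_pressed("camera_right"):
        if cardinal_count < cardinal_index.size() - 1:
            cardinal_count += 1
        else:
            cardinal_count = 0
        emit_signal("cardinal_count_changed", cardinal_count)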
On _input
However, if you are running the code in _input, remember that you will get a call of _input for every input event, and there can be multiple in a single frame. Thus, there can be multiple calls of _input where is_action_just_pressed returns true, because they all happen in the same frame (the graphics frame, which is the default).
Now, let us look at the proposed solution (from comments):
if event is InputEventKey:
    if Input.is_action_just_pressed("camera_right", true) and not event.is_echo():
        # whatever
        pass
It is testing whether the "camera_right" action was pressed in the current graphics frame. But that could have been caused by a different input event than the one currently being processed (event).
Thus, you can fool this code. Press the key configured to "camera_right" and something else at the same time (or fast enough to be in the same frame), and the execution will enter there twice. Which is what we are trying to avoid.
To avoid it correctly, you need to check that the current event is the action you are interested in. Which you can do with event.is_action("camera_right"). Now, you have a choice. You can do this:
if event.is_action("camera_right") and event.is_pressed() and not event.is_echo():
    # whatever
    pass
Which is what I would suggest. It checks that it is the correct action, that it is a press (not a release) event, and that it is not an echo (echoes come from keyboard key repetition).
Or you could do this:
if event.is_action("camera_right") and Input.is_action_just_pressed("camera_right", true) and not event.is_echo():
    # whatever
    pass
Which I'm not suggesting because, first, it is longer, and second, is_action_just_pressed is really not meant to be used in _input, since it is tied to the concept of a frame. The design of is_action_just_pressed is intended to work with _process or _physics_process, NOT _input.
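For context, a sketch of the suggested _input variant, under the same assumptions about the question's variables and signal:

func _input(event):
    if event.is_action("camera_right") and event.is_pressed() and not event.is_echo():
        if cardinal_count < cardinal_index.size() - 1:
            cardinal_count += 1
        else:
            cardinal_count = 0
        emit_signal("cardinal_count_changed", cardinal_count)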
So, apparently there's a built-in method for echo detection:
is_echo()
I'm closing this.
I've encountered the same issue and in my case it was down to the fact that my scene (the one containing the Input.is_action_just_pressed check) was placed in the scene tree, and was also autoloaded, which meant that the input was picked up from both locations and executed twice.
I took it out as an autoload and Input.is_action_just_pressed is now triggered once per input.

Player doesn't spawn correctly in procedural generated map

I've followed "Procedural Generation in Godot: Dungeon Generation" by KidsCanCode (https://www.youtube.com/watch?v=o3fwlk1NI-w) and find myself unable to debug the current problem.
This specific commit has the code, but I'll try to explain in more detail below.
My main scene has a Camera2D node, a generic Node2D called Rooms, and a TileMap; everything is empty.
When the script starts, it runs a
func make_room(_pos, _size):
    position = _pos
    size = _size
    var s = RectangleShape2D.new()
    s.custom_solver_bias = 0.5
    s.extents = size
    $CollisionShape2D.shape = s
a few times, and it fills $Rooms using .add_child(r), where r is an instance of the node that has the make_room() function. It then iterates over $Rooms.get_children() a few times to create an AStar node to link all the rooms:
The magic comes when make_map() is called afterwards: it fills the map with non-walkable blocks and then carves out the empty spaces, which works fine too:
There is a find_start_room() that is called to find the initial room; it also sets a global variable in the Main script, start_room, which is used to write 'Start' on the map using draw_string(font, start_room.position - Vector2(125,0),"start",Color(3,4,8))
When I hit 'esc' it runs this simple code to instance the player:
player = Player.instance()
add_child(player)
player.position = start_room.position + Vector2(start_room.size.x/2, start_room.size.y/2)
play_mode = true
The problem comes when spawning the player. I tried doing some 'blind' fixing, such as adding or subtracting a Vector2(start_room.size.x/2, start_room.size.y/2) to player.position to see if I could make it fall within the room, to no avail.
Turning to the debugger didn't help, as the positions expressed by the variable inspectors don't seem to mean anything.
I tried implementing a simple 'mouse click print location':
print("Mouse Click/Unclick at: ", event.position)
print("Node thing",get_node("/root/Main/TileMap").world_to_map(event.position))
And also a 'start_room' print location:
print(get_node("/root/Main/TileMap").world_to_map(start_room.position))
And a when player moves print location, written directly into the Character script:
print(get_node("/root/Main/TileMap").world_to_map(self.position))
Getting results like the ones below:
Mouse Click/Unclick at: (518, 293)
Node thing(16, 9)
(-142, 0)
(-147, -3)
So, the player doesn't spawn on the same position as the start_room and the mouse position information is not the same as anything else.
Why is the player not spawning correctly? How can I debug this situation?
EDIT 1: User Theraot mentioned that the RigidBody2D rooms are causing some unwanted collisions, and from what I understood, changing their collision behavior should fix the whole thing.
There's a section of the code that, after generating the random rooms, removes some of them like this:
for room in $Rooms.get_children():
    if randf() < cull:
        room.queue_free()
    else:
        room.mode = RigidBody2D.MODE_STATIC
        room_positions.append(Vector3(room.position.x, room.position.y, 0))
From what I understand, if the room is randomly selected it will be deleted using queue_free(), OR it will be appended to room_positions for further processing. This means that if I shift all the rooms to a different collision layer, the player/character instance would be alone with the TileMap on the same collision layer.
So I just added a simple room.collision_layer = 3 changing this section of the code to
for room in $Rooms.get_children():
    if randf() < cull:
        room.queue_free()
    else:
        room.mode = RigidBody2D.MODE_STATIC
        room.collision_layer = 3
        room_positions.append(Vector3(room.position.x, room.position.y, 0))
It doesn't seem to have changed anything, the player still spawns outside the room.
Do you see the rooms spread outwards?
You didn't write code to move the rooms. Sure, the code gives them a random position. But even if you set their position to Vector2.ZERO they move outwards, avoiding overlaps.
Why? Because these rooms are RigidBody2D, and they do push other physics objects. Such as other rooms or the player character.
That's the problem: These rooms are RigidBody2D, and you put your KinematicBody2D player character on top of one of them. The RigidBody2D pushes it out.
The tutorial you followed is exploiting this behavior of RigidBody2Ds to spread the rooms. However you don't need these RigidBody2D after you are done populating your TileMap.
Instead, you can store the start position in a variable for later placing the player character (you don't need offsets - by the way - the position of the room is the center of the room), and then remove the RigidBody2Ds. If you want to keep the code that writes the text, you would also have to modify it, so it does not fail when the room no longer exists.
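For instance, a rough sketch of that first approach (it assumes the start_room variable and the $Rooms node from the question, and that the TileMap has already been populated):

# Remember where to spawn the player, then drop the RigidBody2D rooms.
var spawn_position = start_room.position  # the room's position is its center
for room in $Rooms.get_children():
    room.queue_free()
# Later, when instancing the player:
player = Player.instance()
add_child(player)
player.position = spawn_position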
Alternatively, you can edit their collision layer and mask so they don't collide with the player character (or anything for that matter, but why would you want these RigidBody2Ds that collide with nothing?).
Addendum post edit: Collision layers and masks don't work as you expect.
First of all, the collision layer and mask are bit flags. The values of the layers are powers of two (1, 2, 4, 8...). So when you set it to 3, that is layer 1 plus layer 2, and it still collides with a collision mask of 1.
And second, even if you changed the collision layer of the rooms to 2 (so it does not match the collision mask of 1 that the player character has), the player character still has layer 1, which matches the collision mask of the rooms.
See also the proposal Make physics layers and masks logic simple and consistent.
Thus, you would need to change both the layer and the mask, in such a way that they don't collide. For example, you can set the layer and mask to 0 (which disables all collisions). The algorithm that populates the TileMap does not use the layer and mask.
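As a minimal sketch of that alternative, reusing the culling loop from the question, with layer and mask both set to 0 so the rooms collide with nothing once the TileMap is populated:

for room in $Rooms.get_children():
    if randf() < cull:
        room.queue_free()
    else:
        room.mode = RigidBody2D.MODE_STATIC
        room.collision_layer = 0  # belongs to no layer
        room.collision_mask = 0   # scans no layer, so it collides with nothing
        room_positions.append(Vector3(room.position.x, room.position.y, 0))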

Nested list (tree) - how to nest elements by dragging to the right?

What would be the approach with Vue.Draggable/SortableJS in order to achieve the functionality as shown in this animated gif?
The default behavior of Sortable for nesting is to drag the element up to the element above until the mouse reaches emptyInsertThreshold pixels from that element's drop zone, but I would like to be able to nest elements by dragging them to the right. Same for un-nesting.
I have set emptyInsertThreshold to 0 to disable the default behavior and now when I drag the element to the right I get the following events fired: clone and start (in that order).
But how do I:
Get notified when the mouse has traveled the pre-defined distance to the right?
Inform Vue.Draggable that the ghost element should be nested as a child of the element under which I am doing the horizontal movement?
You can get the mouse position in the start event under event.originalEvent, and in the onMove event via the originalEvent argument (the 2nd argument). I would track how far the mouse has traveled across the list item above the dragged item, as a fraction: (clientX - aboveItemStart) / aboveItemWidth. Then when it reaches 10% or so, change the DOM directly using event.dragged. Not sure how you would modify it to work with Vue.Draggable; you might need to edit the Vue.Draggable source.
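As a rough, hypothetical sketch with plain SortableJS (the listEl element and the 10% threshold are made up for illustration; in onMove, evt.related is the item the drag is currently over):

new Sortable(listEl, {
  emptyInsertThreshold: 0,
  onMove(evt, originalEvent) {
    // Fraction of the hovered item's width that the pointer has traveled across
    const rect = evt.related.getBoundingClientRect();
    const fraction = (originalEvent.clientX - rect.left) / rect.width;
    if (fraction > 0.1) {
      // Nest here: e.g. move evt.dragged into a child list of evt.related
    }
  }
});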

Detect object at mouse click location

I have a large number of different image stimuli presented in various locations on screen.
When the participant clicks on a stimulus, I need the name of that stimulus item available for use in the rest of the script.
For example, you can achieve this in E-Prime with the SlideState.HitTest(x, y) method, where x and y are mouse coordinates.
The only similar thing I've been able to see in PsychoPy is the mouse.isPressedIn(shape) method. However, because you must supply a specific object as an argument, it seems you would need an if clause for each of the stimuli, which seems messy (especially for larger numbers of items).
Is there a better way of doing this? I'm still learning so I might be missing something.
Thanks!
No, I think not. However, if you just add all objects to a list and loop over them, the code will be neat enough.
# List of (pointers to) stimulus objects
shapes = [shape1, shape2, shape3, shape4, shape5]
# Loop over them and check if the mouse is pressed in each one of them
for shape in shapes:
    if mouse.isPressedIn(shape):
        # Set this variable to point to the latest pressed shape
        pressed_shape = shape

# Now you can do stuff with this object
pressed_shape.fillColor = 'red'
pressed_shape.draw()
print(pressed_shape.name)
Note that if the mouse clicks at a location where two objects overlap, pressed_shape will be whichever comes last in the list with this solution.
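A small variation on the same idea (a sketch, assuming the shapes list above and that shapes later in the list are drawn on top): stop at the first hit, prefer the topmost shape, and guard against the case where nothing was clicked, so pressed_shape is never undefined.

pressed_shape = None
for shape in reversed(shapes):  # reversed, so shapes drawn later (on top) win
    if mouse.isPressedIn(shape):
        pressed_shape = shape
        break

if pressed_shape is not None:
    pressed_shape.fillColor = 'red'
    pressed_shape.draw()
    print(pressed_shape.name)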

How to add arrow annotations?

I need to add a batch of arrow annotations to an image; I know all the start and end points of the arrows.
I've put them into an image (2 columns, many rows) which I use as a data sheet. How do I realize this in a script?
I noticed that in the DM help manual that the line annotation has the attributes-- start point and end point.
But the function to create an arrow annotation just looks like this:
Component NewArrowAnnotation( Number top, Number left, Number bottom, Number right )
Does that mean that top and left define the start point, and bottom and right the end point?
I also need to change the color of the annotations, and add some text next to them (either side is OK, but please show me how to control it).
What isn't always clear from the current documentation is that annotations belong to the component object. You therefore find all required commands documented in the component section of the help documentation.
Note that imageDisplays are themselves a subclass of the component object, so you can add "annotations" (components) to "imageDisplays" (components) using the ComponentAddChild... commands.
A script to add a simple arrow-annotation (pointing at 200/200) can therefore look like the following:
image test := RealImage( "Test", 4, 512, 512 )
test.ShowImage()
ImageDisplay disp = test.ImageGetImageDisplay(0)
component arrow = NewArrowAnnotation(100,100,200,200)
arrow.ComponentSetForegroundColor( 0, 1, 1 )
disp.ComponentAddChildAtEnd( arrow )
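To add a text label next to the arrow, the same component pattern applies. The snippet below is only a sketch: NewTextAnnotation and its argument order are assumed here, so check the component section of the help documentation if your GMS version differs.

// Add a text label near the arrow tip and colour it like the arrow
component label = NewTextAnnotation( 205, 205, "my arrow", 16 )
label.ComponentSetForegroundColor( 0, 1, 1 )
disp.ComponentAddChildAtEnd( label )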