React to events generated by Chaco tools: how to get values out of a Chaco tool when an event is fired? - traits

Actually this should be a pretty simple question, but I am experiencing the rather steep learning curve of Chaco and Traits...
I am currently writing an application to plot a medical image using Chaco and Traits, and I simply want to pick a pixel location from the image and use this pixel location to do evaluations on an image stack. So I started to write my own Chaco tool that reacts to mouse clicks on an image plot.
This works fine so far. When I click on the image plot I can see the mouse coordinates WITHIN the tool (a custom-made PixelPickerTool). However, as I want to use this coordinate value outside the tool, my question is: how can I hand the coordinates over to another object or variable OUTSIDE the tool when an event is fired?
To illustrate what I want to do, I attached the main structure of the two classes I am writing:
from enable.api import BaseTool

class PixelPickerTool(BaseTool):
    '''Pick a pixel coordinate from an image.'''
    ImageCoordinates = [0, 0]

    def normal_left_down(self, event):
        print "Mouse:", event.x, event.y
        click_x, click_y = self.component.map_data((event.x, event.y))
        img_x = int(click_x)
        img_y = int(click_y)
        coord = [img_x, img_y]
        # ImageSizeX / ImageSizeY are assumed to be set elsewhere
        if (img_x > self.ImageSizeX) or (img_x < 0):
            coord = [0, 0]
        if (img_y > self.ImageSizeY) or (img_y < 0):
            coord = [0, 0]
        print coord
        # this print gives the coordinates of the pixel that was clicked - this works fine...
        # so inside the picker tool I can get the coordinates,
        # but how can I use the coordinates outside this tool?
import numpy as np
from traits.api import HasTraits, Instance, String
from traitsui.api import View, Item
from enable.component_editor import ComponentEditor
from chaco.api import ArrayPlotData, Plot
from chaco.default_colormaps import gray

class ImagePlot(HasTraits):
    # create a simple Chaco plot of a 2D numpy image array,
    # with a simple interactor (PixelPickerTool)
    plot = Instance(Plot)
    string = String("hallo")
    picker = Instance(PixelPickerTool)

    traits_view = View(
        Item('plot', editor=ComponentEditor(), show_label=False,
             width=500, height=500, resizable=False),
        Item('string', show_label=False, springy=True,
             width=300, height=20, resizable=False),
        title="")

    def __init__(self, numpyImage):
        super(ImagePlot, self).__init__()
        npImage = np.flipud(np.transpose(numpyImage))
        plotdata = ArrayPlotData(imagedata=npImage)
        plot = Plot(plotdata)
        plot.img_plot("imagedata", colormap=gray)
        self.plot = plot
        # image origin is at the top left!
        self.plot.default_origin = 'top left'
        pixelPicker = PixelPickerTool(plot)
        self.picker = pixelPicker
        plot.tools.append(pixelPicker)
I want to use the coordinates measured by the PixelPickerTool somewhere in this ImagePlot class, e.g. by handing them over to another object, like MyImageSeries.setCoordinate(xy_coordinateFromPickerTool).
So how can I hand the pixel coordinates over from the picker tool to some member variable in this class when an event is fired?
Maybe something like self.PixelCoordinates = picker.getPixelCoordinates() could work?
But how do I know when the normal_left_down function was executed in the picker?
In the end I want to hand the coordinates over to another class which holds more images, in order to process them and do a fit at the pixel position determined in the ImagePlot.
I tried to use something like "_picker_changed" in my ImagePlot class to detect whether an event had been fired in the picker tool, but it didn't detect the event firing. So maybe I am doing something wrong...
Can anybody tell me how to get events and the associated variables out of this picker tool?
Cheers,
Andre

"But how do I know then, when the on_normal_left_down function was executed in the picker?"
There are several ways you could probably do this, but one way would be to simply do exactly what you are asking and fire an event that you define explicitly.
for instance:
from traits.api import Event

class PickerTool(BaseTool):

    last_coords = SomeTrait
    i_fired = Event

    def normal_left_down(self, event):
        # do whatever necessary processing
        self.last_coords = do_some_stuff(event.some_attribute)
        # now notify your parent
        self.i_fired = True
and then listen to plot.picker.i_fired from wherever you want to display, and look in plot.picker.last_coords for the saved state.
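For example, here is a minimal sketch of how the listening side could look, using an extended trait name to subscribe to the tool's event. (This is also why a plain _picker_changed handler never triggers: it only fires when the picker trait itself is reassigned, not when the tool fires i_fired.)

from traits.api import HasTraits, Instance, on_trait_change

class ImagePlot(HasTraits):

    picker = Instance(PickerTool)

    @on_trait_change('picker.i_fired')
    def _on_pick(self):
        # the tool saved its result in last_coords before firing
        print "Picked:", self.picker.last_coords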
Another thing you can do, which may be simpler if what you want to do with these coordinates is very straightforward, is to pass in on initialization the data structures the picker needs to interact with (or get them with a chain of calls to self.parent) and do your work directly inside the picker.
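A minimal sketch of that variant, where the target object and its setCoordinate method are hypothetical names standing in for something like MyImageSeries:

class PixelPickerTool(BaseTool):

    def __init__(self, component, target=None, **kwargs):
        # target is whatever object should receive the picked coordinates
        super(PixelPickerTool, self).__init__(component=component, **kwargs)
        self.target = target

    def normal_left_down(self, event):
        click_x, click_y = self.component.map_data((event.x, event.y))
        if self.target is not None:
            self.target.setCoordinate((int(click_x), int(click_y)))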

Related

QLayout.replace not replacing

I have the following code to replace a widget (self.lbl) each time I click on a button (self.btn):
import sys
from PySide2.QtCore import Slot
from PySide2.QtWidgets import QApplication, QLabel, QVBoxLayout, QWidget, \
    QPushButton

class Workshop(QWidget):
    def __init__(self):
        super().__init__()
        self.n = 0
        self.btn = QPushButton('Push me')
        self.lbl = QLabel(str(self.n))
        self.main_layout = QVBoxLayout()
        self.sub_layout = QVBoxLayout()
        self.sub_layout.addWidget(self.lbl)
        self.sub_layout.addWidget(self.btn)
        self.main_layout.addLayout(self.sub_layout)
        self.btn.clicked.connect(self.change_label)
        self.setLayout(self.main_layout)
        self.show()

    @Slot()
    def change_label(self):
        new_label = QLabel(str(self.n + 1))
        self.main_layout.replaceWidget(self.lbl, new_label)
        self.n += 1
        self.lbl = new_label

if __name__ == '__main__':
    app = QApplication()
    w = Workshop()
    sys.exit(app.exec_())
Right after its initialization, the object w looks like this:
When I click on the "Push me" button (self.btn), the number is incremented as wanted, but the initial "0" remains in the background:
But the other numbers do not remain in the background; only the initial "0" does. For example, here is "22" (the result after I clicked "Push me" 22 times):
Note: I know that I could achieve the result I want with the setText method, but this code is just a snippet that I will adapt for a class in which I will not have a method like setText.
Thank you!
When you replace the widget in the layout, the previous one still remains there.
From replaceWidget():
The parent of widget from is left unchanged.
The problem is that when a widget is removed from a layout, it still keeps its parent (in your case, the Workshop instance), so you can still see it. This becomes clearer if you set the alignment to AlignCenter for each new QLabel you create: you'll see that if you add a new label and resize the window, the previous one keeps its old position:
class Workshop(QWidget):
    def __init__(self):
        # ...
        self.lbl = QLabel(str(self.n), alignment=QtCore.Qt.AlignCenter)
        # ...

    def change_label(self):
        new_label = QLabel(str(self.n + 1), alignment=QtCore.Qt.AlignCenter)
        # ...
You have two possibilities, which are actually very similar:
set the parent of the "removed" widget to None; the widget will then be garbage collected as soon as you overwrite self.lbl:
    self.lbl.setParent(None)
remove the widget by calling deleteLater(), which is effectively what happens anyway when a widget is reparented to None and, having no other persisting references, gets garbage collected:
    self.lbl.deleteLater()
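Put together, the slot from the question could look like this (a sketch of the deleteLater() variant, which I'd recommend; see below for why):

    @Slot()
    def change_label(self):
        new_label = QLabel(str(self.n + 1))
        self.main_layout.replaceWidget(self.lbl, new_label)
        self.lbl.deleteLater()  # let Qt delete the old label on the next event-loop pass
        self.n += 1
        self.lbl = new_label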
For your purposes, I'd suggest going with deleteLater(): calling setParent() (which is a reimplementation of QObject's setParent) actually does a lot of other things (most importantly, it checks the focus chain and resets the widget's window flags), and since the widget is going to be deleted anyway, all of that is unnecessary; QObject's implementation of setParent(None) would be called anyway.
The graphic "glitch" you are facing might depend on the underlying low-level painting function, which has some (known) unexpected behaviors on macOS in certain cases.

Moving a QGraphicsProxyWidget with ItemIgnoresTransformations after changing QGraphicsView scale

I have a QGraphicsScene that contains multiple custom QGraphicsItems. Each item contains a QGraphicsProxyWidget which itself contains whatever widgets are needed by the business logic. The proxy has a Qt::Window flag applied to it, so that it has a title bar to move it around. This is all working well, except when moving a proxy widget when the view has been scaled.
The user can move around the scene à la Google Maps, i.e. by zooming out and then zooming back in a little farther away. This is done with calls to QGraphicsView::scale. Items should always be visible no matter the zoom level, so they have the QGraphicsItem::ItemIgnoresTransformations flag set.
What happens when moving a proxy widget while the view has been scaled is that on the first move event the widget jumps to some other location before being dragged properly.
I had this issue with Qt 5.7.1, and could reproduce it with PyQt5, as that is simpler to reproduce and hack around; please see the snippet below.
Steps to reproduce:
move the widget around, and notice nothing unusual
use the mouse wheel to zoom in or out; the higher the absolute scale, the stronger the effect on the issue
click on the widget, and notice how it jumps on the first movement of the mouse
Snippet:
import sys
import PyQt5
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QWidget, QVBoxLayout, QPushButton
from PyQt5.QtWidgets import QGraphicsScene, QGraphicsView, QGraphicsProxyWidget, QGraphicsWidget, QGraphicsObject

global view
global scaleLabel

def scaleScene(event):
    delta = 1.0015**event.angleDelta().y()
    view.scale(delta, delta)
    scaleLabel.setPlainText("scale: %.2f" % view.transform().m11())
    view.update()

if __name__ == '__main__':
    app = QApplication(sys.argv)

    # create main widget
    w = QWidget()
    w.resize(800, 600)
    layout = QVBoxLayout()
    w.setLayout(layout)
    w.setWindowTitle('Example')
    w.show()

    # rescale view on mouse wheel; notice how when view.transform().m11() is not 1,
    # dragging the subwindow is not smooth on the first mouse move event
    w.wheelEvent = scaleScene

    # create scene and view
    scene = QGraphicsScene()
    scaleLabel = scene.addText("scale: 1")
    view = QGraphicsView(scene)
    layout.addWidget(view)
    view.show()

    # create item in which the proxy lives
    item = QGraphicsWidget()
    scene.addItem(item)
    item.setFlag(PyQt5.QtWidgets.QGraphicsItem.ItemIgnoresTransformations)
    item.setAcceptHoverEvents(True)

    # create proxy with window and dummy content
    proxy = QGraphicsProxyWidget(item, Qt.Window)
    button = QPushButton('dummy')
    proxy.setWidget(button)

    # start app
    sys.exit(app.exec_())
The jump distance is:
proportional to the scaling of the view, and to the distance of the mouse from the scene origin
directed from scene position (0,0) towards the mouse position (I think)
possibly caused by the proxy widget not reporting the mouse press/move properly. I'm hinted at this diagnosis after looking at QGraphicsProxyWidgetPrivate::mapToReceiver in qgraphicsproxywidget.cpp (sample source), which does not seem to take scene scaling into account.
I am looking for either:
confirmation that this is an issue with Qt and that I did not misconfigure the proxy
an explanation of how to fix the mouse location given by the proxy to its child widgets (after installing an eventFilter)
any other workaround
Thanks
Almost 2 years later I got back to this issue, and finally found a solution, or rather a workaround, but a simple one at least. It turns out I can easily avoid getting into the local/scene/ignored-transform issue in the first place.
Instead of parenting the QGraphicsProxyWidget to a QGraphicsWidget and explicitly setting the QWidget as the proxy target, I get the proxy directly from the QGraphicsScene, letting it set the window flag on the wrapper, and I set the ItemIgnoresTransformations flag on the proxy itself. Then (and here's the workaround) I install an event filter on the proxy, intercept the GraphicsSceneMouseMove event, and force the proxy position to currentPos + mouseDelta (both in scene coordinates).
Here's the code sample from above, patched with that solution:
import sys
import PyQt5
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import *

global view
global scaleLabel

def scaleScene(event):
    delta = 1.0015**event.angleDelta().y()
    view.scale(delta, delta)
    scaleLabel.setPlainText("scale: %.2f" % view.transform().m11())
    view.update()

class ItemFilter(PyQt5.QtWidgets.QGraphicsItem):
    def __init__(self, target):
        super(ItemFilter, self).__init__()
        self.target = target

    def boundingRect(self):
        return self.target.boundingRect()

    def paint(self, *args, **kwargs):
        pass

    def sceneEventFilter(self, watched, event):
        if watched != self.target:
            return False
        if event.type() == PyQt5.QtCore.QEvent.GraphicsSceneMouseMove:
            self.target.setPos(self.target.pos() + event.scenePos() - event.lastScenePos())
            event.setAccepted(True)
            return True
        return super(ItemFilter, self).sceneEventFilter(watched, event)

if __name__ == '__main__':
    app = QApplication(sys.argv)

    # create main widget
    w = QWidget()
    w.resize(800, 600)
    layout = QVBoxLayout()
    w.setLayout(layout)
    w.setWindowTitle('Example')
    w.show()

    # rescale view on mouse wheel; notice how when view.transform().m11() is not 1,
    # dragging the subwindow is not smooth on the first mouse move event
    w.wheelEvent = scaleScene

    # create scene and view
    scene = QGraphicsScene()
    scaleLabel = scene.addText("scale: 1")
    view = QGraphicsView(scene)
    layout.addWidget(view)
    view.show()

    button = QPushButton('dummy')
    proxy = scene.addWidget(button, Qt.Window)
    proxy.setFlag(PyQt5.QtWidgets.QGraphicsItem.ItemIgnoresTransformations)

    itemFilter = ItemFilter(proxy)
    scene.addItem(itemFilter)
    proxy.installSceneEventFilter(itemFilter)

    # start app
    sys.exit(app.exec_())
Hoping this may help someone who's ended up in the same dead end I was in :)

How to make large 2d tilemap easier to load in Unity

I am creating a small game in the Unity game engine, and the map for the game is generated from a 2D tilemap. The tilemap contains so many tiles, though, that it is very hard for a device like a phone to render them all, so the frame rate drops. The map is completely static, in that the only moving things in the game are a main character sprite and the camera following it. The map itself has no moving objects and is very simple, so there must be a way to render only the needed sections of it, or perhaps to render the whole map just once. All I have discovered from researching the topic is that a good way to do it might be to use the Unity Mesh class to turn the tilemap into a mesh. I could not figure out how to do this with a 2D tilemap, and I could not see how it would benefit the render time anyway, but if anyone could point me in the right direction for rendering large 2D tilemaps, that would be fantastic. Thanks.
Tile system:
To make the tile map work I put every individual tile as a prefab in my prefab folder, with the attributes changed for 2D box colliders and scaled size. I assign each individual tile prefab to a certain color on the RGB scale, and then import a PNG file that has the corresponding colors of the prefabs where I want them, like this:
I then wrote a script which places each prefab where its associated color is. It would look like this for one tile:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Map : MonoBehaviour {

    private int levelWidth;
    private int levelHeight;

    public Transform block13;
    private Color[] tileColors;
    public Color block13Color;
    public Texture2D levelTexture;
    public PlayerMobility playerMobility;

    // Use this for initialization
    void Start () {
        levelWidth = levelTexture.width;
        levelHeight = levelTexture.height;
        loadLevel ();
    }

    // Update is called once per frame
    void Update () {
    }

    void loadLevel() {
        tileColors = new Color[levelWidth * levelHeight];
        tileColors = levelTexture.GetPixels ();
        for (int y = 0; y < levelHeight; y++) {
            for (int x = 0; x < levelWidth; x++) {
                // if (tileColors [x + y * levelWidth] == block13Color) {
                //     Instantiate(block13, new Vector3(x, y), Quaternion.identity);
                // }
            }
        }
    }
}
This results in a map that looks like this when used with all the code (I took out the code for the other prefabs to save space):
You can instantiate tiles that are in range of the camera and destroy tiles that are not. There are several ways to do this. But first, make sure that what's consuming your resources is in fact the large number of tiles, and not something else.
One way is to create an empty parent gameObject for every tile (right-click in the Hierarchy > Create Empty), then attach a script to this parent. This script has a reference to the camera (tell me if you need help with that), calculates the distance between its tile and the camera, and instantiates the tile if the distance is less than some threshold, otherwise destroys the instance (if it's there).
It has to do this in the Update function so the distances are checked every frame, or you can use coroutines to do fewer checks (more efficient).
Another way is to attach a script to the camera that holds an array of instances of all tiles and checks their distances from the camera in the same way. You can do this if you have exactly one large tilemap, because it would be hard to reuse this script if you have more than one.
You can also calculate the distance between the tile and the character sprite instead of the camera; pick whichever is more convenient.
If, after doing the above, you still get frame drops, you can zoom in the camera to include fewer tiles in its range, though you'd then have to recalculate the distances.

THREE.js rotating camera around an object using orbit path

I am struggling to solve this problem.
In my scene, I have a camera which looks at the center of mass of an object. I have some buttons that set the camera position to a particular view (front view, back view, ...) along an invisible sphere that surrounds the object (constant radius).
When I click on a button, I would like the camera to move from its start position to the end position along the sphere's surface. While the camera moves, I would like it to keep pointing at the center of mass of the object.
Does anyone have a clue on how to achieve this?
Thanks for the help!
If you are happy with (or prefer) basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;
var orbitRange = 100;
var orbitSpeed = 2 * Math.PI/180;
var desiredAngle = 90 * Math.PI/180;
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
if (Math.abs(desiredAngle - cameraAngle) < orbitSpeed) { orbitSpeed = 0; } // close enough: stop (exact float equality would likely never be hit)
else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position); // keep the object centred while orbiting
}
Of course, your buttons would modify the desiredAngle (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as the camera moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening', and you could search for 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support but I have never looked into it.
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.

Glut glLoadMatrixf camera equivalent

In my GLUT application I'm simulating a plane with the camera. When the plane's speed is low, I intend to have the nose start to point towards the ground as the camera falls. My first instinct was to just change the pitch until it pointed downwards at -90 degrees. However, I can't just change the pitch, because if the plane is tilted on its side or upside down it would not be changing direction towards the ground.
Now I'm trying to do a rough simulation of this by shifting the lookAt.y value downwards. To do this I am trying to get all the current camera coordinates that I use to set the camera
(eye.x, eye.y, eye.z, look.x, look.y, look.z, up.x, up.y, up.z), and then call set again with the new, modified values.
I've been working with Camera.cpp and Camera.h to control my camera functions. They can be found here.
After adding methods to get all the values, I found that only the eye values are actually updated when various camera motions are made. I guess my question is: how do I retrieve these values?
The glLoadMatrixf call is in this function:
void Camera::setModelViewMatrix(void)
{   // load the model-view matrix with the existing camera values
    float m[16];
    Vector3 eVec(eye.x, eye.y, eye.z);
    m[0] = u.x;  m[4] = u.y;  m[8]  = u.z;  m[12] = -eVec.dot(u);
    m[1] = v.x;  m[5] = v.y;  m[9]  = v.z;  m[13] = -eVec.dot(v);
    m[2] = n.x;  m[6] = n.y;  m[10] = n.z;  m[14] = -eVec.dot(n);
    m[3] = 0;    m[7] = 0;    m[11] = 0;    m[15] = 1.0;
    look.x = u.y; look.y = v.y; look.z = n.y;
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m);
}
Is there a way to get the eye, lookAt, and up values from the matrix here? Or should I do something else to get these values?
Thanks in advance for your help.
The camera class you link to is not an actual OpenGL class, but it should be simple enough to work with.
The function quoted just takes the current values of the camera object and sends them to OpenGL. If you look at the camera's set function, you can see how the program calculates the values it actually stores.
The eye value is stored directly. The lookAt value is just (eye - n), by vector math. The up value is the hardest, but if I remember my vector math correctly, up = (n cross u).
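As a sanity check, here is that vector math in a short Python sketch; the numbers are made-up example values for the camera's stored basis vectors, assuming the usual convention that n points from the look point back toward the eye:

import numpy as np

# made-up example values for the camera's stored vectors
eye = np.array([0.0, 0.0, 5.0])
n = np.array([0.0, 0.0, 1.0])   # unit vector from the look point toward the eye
u = np.array([1.0, 0.0, 0.0])   # camera's "right" vector

look = eye - n        # a point one unit in front of the camera
up = np.cross(n, u)   # this is v, the camera's up vector

print(look)  # [0. 0. 4.]
print(up)    # [0. 1. 0.]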