Interference of Sliders in Kivy

I'm learning Kivy and am currently trying to understand the Slider class. I created two sliders. Slider one is supposed to react to on_touch_move only, while slider two should react to on_touch_up and on_touch_down. When I implement this as in the example below, the two sliders interfere, i.e. both react to all three event dispatchers. I tried to understand why that is and how to solve the issue, but I can't. Thank you for helping me out.
The sliders.kv file:
#: kivy 1.9.0

SliderScreen:

<SliderScreen>:
    Slider:
        min: 0
        max: 1
        value: 0.75
        step: 0.01
        on_touch_move: root.test_a()
    Slider:
        min: 0
        max: 1
        value: 0.25
        step: 0.01
        on_touch_up: root.test_b()
        on_touch_down: root.test_c()
and main.py:
import kivy
kivy.require('1.9.0')

from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.slider import Slider


class SliderScreen(BoxLayout):
    def test_a(self):
        print("test_a accessed")

    def test_b(self):
        print("test_b accessed")

    def test_c(self):
        print("test_c accessed")


class SlidersApp(App):
    pass


if __name__ == '__main__':
    SlidersApp().run()

The on_touch_move, on_touch_up and on_touch_down events are captured by the SliderScreen class and then propagated to all of its child widgets. According to the documentation:
By default, touch events are dispatched to all currently displayed widgets. This means widgets receive the touch event whether it occurs within their physical area or not.
This can be counter intuitive if you have experience with other GUI toolkits. These typically divide the screen into geometric areas and only dispatch touch or mouse events to the widget if the coordinate lies within the widget's area.
This requirement becomes very restrictive when working with touch input. Swipes, pinches and long presses may well originate from outside of the widget that wants to know about them and react to them.
In order to provide the maximum flexibility, Kivy dispatches the events to all the widgets and lets them decide how to react to them. If you only want to respond to touch events inside the widget, you simply check:
def on_touch_down(self, touch):
    if self.collide_point(*touch.pos):
        # The touch has occurred inside the widgets area. Do stuff!
        pass
Therefore, in your code you should use:
Builder.load_string("""
<SliderScreen>:
    Slider:
        min: 0
        max: 1
        value: 0.75
        step: 0.01
        on_touch_move: if self.collide_point(*args[1].pos): root.test_a()
    Slider:
        min: 0
        max: 1
        value: 0.25
        step: 0.01
        on_touch_up: if self.collide_point(*args[1].pos): root.test_b()
        on_touch_down: if self.collide_point(*args[1].pos): root.test_c()
""")
class SliderScreen(BoxLayout):
    def test_a(self):
        print("test_a accessed")

    def test_b(self):
        print("test_b accessed")

    def test_c(self):
        print("test_c accessed")
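If you'd rather keep the filtering out of the kv file entirely, the same collide_point check can live in Python by overriding the touch handlers on a Slider subclass. A minimal sketch; the class name FilteredSlider is invented for illustration:

from kivy.uix.slider import Slider


class FilteredSlider(Slider):
    def on_touch_move(self, touch):
        # only react when the touch actually falls inside this slider
        if self.collide_point(*touch.pos):
            print("touch moved inside this slider")
        # keep the base class's normal slider drag behavior
        return super(FilteredSlider, self).on_touch_move(touch)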

Related

max_height doesn't work for MDTextField in KivyMD

I was trying to use max_height to limit the number of lines that a multiline=True MDTextField can expand to. In the KivyMD documentation for the TextField class (https://kivymd.readthedocs.io/en/latest/components/text-field/#module-kivymd.uix.textfield), there is sample code accompanied by a gif of what running it should look like. But when I copy/pasted it into a Python file by itself and ran it in PyCharm, the MDTextField didn't stop growing like it does in the gif, or at all.
The example code given:
from kivy.lang import Builder

from kivymd.app import MDApp

KV = '''
MDScreen:

    MDTextField:
        size_hint_x: .5
        hint_text: "multiline=True"
        max_height: "200dp"
        mode: "fill"
        fill_color: 0, 0, 0, .4
        multiline: True
        pos_hint: {"center_x": .5, "center_y": .5}
'''


class Example(MDApp):
    def build(self):
        return Builder.load_string(KV)


Example().run()
[gif of what it should do]
Is this some kind of bug or is there something I can do about it? I'm trying to implement this in my project, but not even the example code is working for me.
max_height only limits the height that the text box can reach, not the number of lines the user can input, as stated here: https://kivymd.readthedocs.io/en/1.1.1/components/textfield/#kivymd.uix.textfield.textfield.MDTextField.max_height
There is no built-in way to limit the number of lines the user can input; you'll have to do it manually, for example by counting the lines of input yourself, as sketched below.
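A minimal sketch of that manual approach, assuming a plain Kivy text binding is acceptable; MAX_LINES and limit_lines are names invented here:

MAX_LINES = 5  # assumed limit; pick whatever fits your layout

def limit_lines(field, text):
    # trim the text back down whenever the user exceeds MAX_LINES;
    # re-assigning field.text triggers this callback once more, but the
    # trimmed text then passes the check, so there is no loop
    lines = text.split("\n")
    if len(lines) > MAX_LINES:
        field.text = "\n".join(lines[:MAX_LINES])

# field is an MDTextField instance with multiline=True
field.bind(text=limit_lines)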

What is the proper way to set the Kivy ScrollView effect_cls property?

I want to stop the user from over-scrolling. The Kivy docs say that the effect_cls property will change this behavior, but I have not found a way to make it work.
Although you have solved your problem, I will provide an example for future users.
You can change which effect is being used by setting effect_cls to any effect class. If you want to disable overscroll and its bouncing, ScrollEffect solves the problem.
Example using the Kivy language:
from kivy.app import App
from kivy.uix.scrollview import ScrollView
from kivy.lang import Builder

Builder.load_string('''
#:import ScrollEffect kivy.effects.scroll.ScrollEffect
#:import Button kivy.uix.button.Button

<RootWidget>:
    effect_cls: ScrollEffect
    GridLayout:
        size_hint_y: None
        height: self.minimum_height
        cols: 1
        on_parent:
            for i in range(10): self.add_widget(Button(text=str(i), size_hint_y=None))
''')
class RootWidget(ScrollView):
    pass


class MainApp(App):
    def build(self):
        root = RootWidget()
        return root


if __name__ == '__main__':
    MainApp().run()
So I was trying to use effect_cls: ScrollEffect when it should have been effect_cls: 'ScrollEffect'. Since I had not imported ScrollEffect in the kv file, I had to pass it as a string.
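For completeness, the same effect can be set without kv at all. A small sketch assuming a pure-Python setup, where the class is assigned directly and no #:import or string lookup is needed:

from kivy.effects.scroll import ScrollEffect
from kivy.uix.scrollview import ScrollView

# passing the effect class directly disables the overscroll bounce
scroll = ScrollView(effect_cls=ScrollEffect)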

Moving a QGraphicsProxyWidget with ItemIgnoresTransformations after changing QGraphicsView scale

I have a QGraphicsScene that contains multiple custom QGraphicsItems. Each item contains a QGraphicsProxyWidget, which itself contains whatever widgets the business logic needs. The proxy has a Qt::Window flag applied to it, so that it has a title bar to move it around. This is all working well, except for moving a proxy widget after the view has been scaled.
The user can move around the scene à la Google Maps, i.e. by zooming out and then zooming back in a little farther away. This is done with calls to QGraphicsView::scale. Items should always be visible no matter the zoom value, so they have the QGraphicsItem::ItemIgnoresTransformations flag set.
What happens when moving a proxy widget while the view has been scaled is that on the first move event the widget jumps to some other location before being dragged properly.
I had this issue with Qt 5.7.1 and could reproduce it with PyQt5, which is simpler to experiment and hack around with; please see the snippet below.
Steps to reproduce:
Move the widget around; notice nothing unusual.
Use the mouse wheel to zoom in or out. The higher the absolute scale, the stronger the effect on the issue.
Click on the widget, and notice how it jumps on the first movement of the mouse.
Snippet:
import sys

import PyQt5
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QWidget, QVBoxLayout, QPushButton
from PyQt5.QtWidgets import QGraphicsScene, QGraphicsView, QGraphicsProxyWidget, QGraphicsWidget, QGraphicsObject

global view
global scaleLabel

def scaleScene(event):
    delta = 1.0015 ** event.angleDelta().y()
    view.scale(delta, delta)
    scaleLabel.setPlainText("scale: %.2f" % view.transform().m11())
    view.update()

if __name__ == '__main__':
    app = QApplication(sys.argv)

    # create main widget
    w = QWidget()
    w.resize(800, 600)
    layout = QVBoxLayout()
    w.setLayout(layout)
    w.setWindowTitle('Example')
    w.show()

    # rescale view on mouse wheel; notice how when view.transform().m11() is not 1,
    # dragging the subwindow is not smooth on the first mouse move event
    w.wheelEvent = scaleScene

    # create scene and view
    scene = QGraphicsScene()
    scaleLabel = scene.addText("scale: 1")
    view = QGraphicsView(scene)
    layout.addWidget(view)
    view.show()

    # create the item in which the proxy lives
    item = QGraphicsWidget()
    scene.addItem(item)
    item.setFlag(PyQt5.QtWidgets.QGraphicsItem.ItemIgnoresTransformations)
    item.setAcceptHoverEvents(True)

    # create the proxy with a window flag and dummy content
    proxy = QGraphicsProxyWidget(item, Qt.Window)
    button = QPushButton('dummy')
    proxy.setWidget(button)

    # start app
    sys.exit(app.exec_())
The jump distance is proportional to the scaling of the view and to the distance of the mouse from the scene origin. The jump goes from scene position (0,0) towards the mouse position (I think), and might be caused by the proxy widget not reporting the mouse press/move properly. I'm hinted at this diagnosis after looking at QGraphicsProxyWidgetPrivate::mapToReceiver in qgraphicsproxywidget.cpp (sample source), which does not seem to take scene scaling into account.
I am looking for either:
confirmation that this is an issue with Qt and that I did not misconfigure the proxy,
an explanation of how to fix the mouse location the proxy reports to its child widgets (after installing an eventFilter),
or any other workaround.
Thanks
Almost 2 years later I got back to this issue, and finally found a solution. Or rather a workaround, but a simple one at least. It turns out I can easily avoid getting into the issue with local/scene/ignored transforms in the first place.
Instead of parenting the QGraphicsProxyWidget to a QGraphicsWidget and explicitly setting the QWidget as the proxy target, I get the proxy directly from the QGraphicsScene, let it set the window flag on the wrapper, and set the ItemIgnoresTransformations flag on the proxy itself. Then (and here's the workaround) I install an event filter on the proxy, intercept the GraphicsSceneMouseMove event, and force the proxy position to currentPos + mouseDelta (both in scene coordinates).
Here's the code sample from above, patched with that solution:
import sys

import PyQt5
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import *

global view
global scaleLabel

def scaleScene(event):
    delta = 1.0015 ** event.angleDelta().y()
    view.scale(delta, delta)
    scaleLabel.setPlainText("scale: %.2f" % view.transform().m11())
    view.update()

class ItemFilter(PyQt5.QtWidgets.QGraphicsItem):
    def __init__(self, target):
        super(ItemFilter, self).__init__()
        self.target = target

    def boundingRect(self):
        return self.target.boundingRect()

    def paint(self, *args, **kwargs):
        pass

    def sceneEventFilter(self, watched, event):
        if watched != self.target:
            return False
        if event.type() == PyQt5.QtCore.QEvent.GraphicsSceneMouseMove:
            # force the proxy position to currentPos + mouseDelta, in scene coordinates
            self.target.setPos(self.target.pos() + event.scenePos() - event.lastScenePos())
            event.setAccepted(True)
            return True
        return super(ItemFilter, self).sceneEventFilter(watched, event)

if __name__ == '__main__':
    app = QApplication(sys.argv)

    # create main widget
    w = QWidget()
    w.resize(800, 600)
    layout = QVBoxLayout()
    w.setLayout(layout)
    w.setWindowTitle('Example')
    w.show()

    # rescale view on mouse wheel
    w.wheelEvent = scaleScene

    # create scene and view
    scene = QGraphicsScene()
    scaleLabel = scene.addText("scale: 1")
    view = QGraphicsView(scene)
    layout.addWidget(view)
    view.show()

    # let the scene wrap the widget in a proxy with a window flag,
    # and make the proxy itself ignore transformations
    button = QPushButton('dummy')
    proxy = scene.addWidget(button, Qt.Window)
    proxy.setFlag(PyQt5.QtWidgets.QGraphicsItem.ItemIgnoresTransformations)

    # install the scene event filter that fixes the drag
    itemFilter = ItemFilter(proxy)
    scene.addItem(itemFilter)
    proxy.installSceneEventFilter(itemFilter)

    # start app
    sys.exit(app.exec_())
Hoping this may help someone who's ended up in the same dead end I was in :)

Qt5 QML, why are onHeightChanged and onWidthChanged called without change?

import QtQuick 2.5
import QtQuick.Controls 1.4
import QtQuick.Controls.Styles 1.4
import QtQuick.Layouts 1.2

ApplicationWindow
{
    visible: true
    width: 640
    height: 480

    property int lastW: 0
    property int lastH: 0

    function doSomething()
    {
        if (lastW == width && lastH == height)
            console.log("width & height same as last time")
        lastW = width;
        lastH = height;
    }

    onHeightChanged: doSomething();
    onWidthChanged: doSomething();
}
Why is doSomething called with no change in width and height (except for once at the start)? When I resize the window, I get the console log message.
Running Windows 8.1.
doSomething runs every time the width or height of the ApplicationWindow changes. The window may change size in both dimensions simultaneously: if in one moment the size changes from 100x100 to 101x101, then both signals, widthChanged and heightChanged, will be emitted with width=101 and height=101 already set. That is why console.log("width & height same as last time") is executed despite the fact that at first glance this should never happen.
To comment on doSomething being run at the start: for me, doSomething never fires unless I resize the window. If for you it does fire when the application starts, it may be because for a short moment the ApplicationWindow has some initial size (for example 0x0) and just after that it changes to 640x480, so doSomething runs.
In some rare cases what I have written above may not hold. You can try to resize the ApplicationWindow in one dimension only, and still the changed signal will sometimes occur twice for the same value. My guess is that in those cases the value changed so fast that, while the changed signal was triggered twice, QML reads only the second value.
I suspect it works like this: width=100, then the value quickly changes to 101 and fires changed, then changes to 102 and changed is fired again. Only after that are the QML signal handlers executed. You then receive two changed signals, but in both you read the value 102.

Reacting to events generated by Chaco tools: how to get values out of a Chaco tool when an event is fired?

Actually this should be a pretty simple question, but I am experiencing the quite steep learning curve of Chaco and Traits...
I am currently writing an application to plot a medical image using Chaco and Traits, and I simply want to pick a pixel location from the image and use it to do evaluations on an image stack. So I started to write my own Chaco tool that reacts to mouse clicks on an image plot.
This works fine so far. When I click on the image plot I can see the mouse coordinates WITHIN the tool (a custom-made PixelPickerTool). However, as I want to use this coordinate value outside the tool, my question is: how can I hand the coordinates over to another object or variable OUTSIDE the tool when an event is fired?
To illustrate what I want to do, I attached the main structure of the two classes I am writing:
# imports assumed; they were not shown in the original excerpt
import numpy as np
from chaco.api import ArrayPlotData, Plot
from chaco.default_colormaps import gray
from enable.api import BaseTool, ComponentEditor
from traits.api import HasTraits, Instance, String
from traitsui.api import Item, View


class PixelPickerTool(BaseTool):
    '''Pick a pixel coordinate from an image'''
    ImageCoordinates = [0, 0]

    def normal_left_down(self, event):
        print "Mouse:", event.x, event.y,
        click_x, click_y = self.component.map_data((event.x, event.y))
        img_x = int(click_x)
        img_y = int(click_y)
        coord = [img_x, img_y]
        if (img_x > self.ImageSizeX) or (img_x < 0):
            coord = [0, 0]
        if (img_y > self.ImageSizeY) or (img_y < 0):
            coord = [0, 0]
        print coord
        # this print gives out the coordinates of the pixel that was clicked;
        # so inside the picker tool I can get the coordinates,
        # but how can I use the coordinates outside this tool?


class ImagePlot(HasTraits):
    # create a simple chaco plot of a 2D numpy image array,
    # with a simple interactor (PixelPickerTool)
    plot = Instance(Plot)
    string = String("hallo")
    picker = Instance(PixelPickerTool)

    traits_view = View(
        Item('plot', editor=ComponentEditor(), show_label=False,
             width=500, height=500, resizable=False),
        Item('string', show_label=False, springy=True,
             width=300, height=20, resizable=False),
        title="")

    def __init__(self, numpyImage):
        super(ImagePlot, self).__init__()
        npImage = np.flipud(np.transpose(numpyImage))
        plotdata = ArrayPlotData(imagedata=npImage)
        plot = Plot(plotdata)
        plot.img_plot("imagedata", colormap=gray)
        self.plot = plot
        # image origin is the top left!
        self.plot.default_origin = 'top left'
        pixelPicker = PixelPickerTool(plot)
        self.picker = pixelPicker
        plot.tools.append(pixelPicker)
I want to use the coordinates measured by the PixelPickerTool somewhere in this ImagePlot class, e.g. by handing them over to another object like MyImageSeries.setCoordinate(xy_coordinateFromPickerTool).
So how can I hand over the pixel coordinates from the picker tool to some member variable in this class when an event is fired?
Maybe something like self.PixelCoordinates = picker.getPixelCoordinates() could work?
But how do I know when the normal_left_down function was executed in the picker?
In the end I want to hand the coordinates over to another class which holds more images, to process them and do a fit at the pixel position determined in the ImagePlot.
I tried to use something like _picker_changed in my ImagePlot class to detect whether an event had been fired in the picker tool, but this didn't detect the event firing. So maybe I am doing something wrong...
Can anybody tell me how to get events and the associated variables out of this picker tool?
Cheers,
Andre
"But how do I know then, when the on_normal_left_down function was executed in the picker?"
There are several ways you could probably do this, but one way would be to simply do exactly what you are asking and fire an event that you define explicitly.
for instance:
from traits.api import Event

class PickerTool(BaseTool):
    last_coords = SomeTrait
    i_fired = Event

    def normal_left_down(self, event):
        # do whatever necessary processing
        self.last_coords = do_some_stuff(event.some_attribute)
        # now notify your parent
        self.i_fired = True
and then listen to plot.picker.i_fired from wherever you want to display, and look in plot.picker.last_coords for the saved state; a sketch of that wiring follows below.
Another thing you can do, which may be simpler if what you want to do with these coordinates is very straightforward, is to pass on initialization the data structures the picker needs to interact with (or get them with a chain of calls to self.parent) and do your work directly inside the picker.
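A minimal sketch of the listening side, assuming the PickerTool definition above; the extended trait name 'picker.i_fired' is standard Traits notification syntax, while the surrounding class layout is invented for illustration:

from traits.api import HasTraits, Instance, on_trait_change

class ImagePlot(HasTraits):
    picker = Instance(PickerTool)

    # runs every time the tool fires i_fired
    @on_trait_change('picker.i_fired')
    def _picker_fired(self):
        print("picked coordinates:", self.picker.last_coords)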