Qt5 QML, why are onHeightChanged and onWidthChanged called without change?

import QtQuick 2.5
import QtQuick.Controls 1.4
import QtQuick.Controls.Styles 1.4
import QtQuick.Layouts 1.2
ApplicationWindow
{
    visible: true
    width: 640
    height: 480

    property int lastW: 0
    property int lastH: 0

    function doSomething()
    {
        if (lastW == width && lastH == height)
            console.log("width & height same as last time")
        lastW = width;
        lastH = height;
    }

    onHeightChanged: doSomething();
    onWidthChanged: doSomething();
}
Why is doSomething called when width and height haven't changed (apart from the single call at startup)? Whenever I resize the window, I get the console log message.
Running Windows 8.1.

doSomething runs every time the width or height of the ApplicationWindow changes, and the window may change size in both dimensions at once. If the size jumps from 100x100 to 101x101 in a single step, both widthChanged and heightChanged are emitted after the new geometry is already in place, so both handlers see width=101 and height=101. The second handler therefore finds lastW == width && lastH == height, which is why console.log("width & height same as last time") executes even though at first glance that should never happen.
Regarding doSomething running at startup: for me it never fires unless I resize the window. If it fires for you when the application starts, it is probably because the ApplicationWindow briefly has some initial size (for example 0x0) and changes to 640x480 right afterwards, so doSomething runs.
In some rare cases the above may not hold. If you resize the ApplicationWindow in one dimension only, the changed signal can still occasionally occur twice for the same value. My guess is that in those cases the value changed so quickly that by the time the two queued changed signals are handled in QML, you read only the final value both times.
I suspect it works like this: width is 100, then quickly changes to 101 (firing changed) and then to 102 (firing changed again). Only afterwards are the QML handlers executed, so you receive two changed signals but read the value 102 in both.
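If you want doSomething to run only once per resize, with the final geometry, one option is to coalesce the two notifications. A minimal sketch, assuming Qt 5.8 or later for Qt.callLater (the question targets QtQuick 2.5, so this would need a newer Qt); repeated calls scheduled for the same function are collapsed into a single invocation:
ApplicationWindow
{
    visible: true
    width: 640
    height: 480

    property int lastW: 0
    property int lastH: 0

    function doSomething()
    {
        if (lastW == width && lastH == height)
            console.log("width & height same as last time")
        lastW = width;
        lastH = height;
    }

    // Qt.callLater collapses the two notifications from a simultaneous
    // width+height change into one call made with the final geometry.
    onHeightChanged: Qt.callLater(doSomething)
    onWidthChanged: Qt.callLater(doSomething)
}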

Related

QML (KDE Plasmoid): Can't get seamless scrolling text to loop accordingly

I'm working on a plasmoid for KDE Plasma and can't figure out how to implement a scrolling header bar, similar to ones you see at the bottom of the screen on news channels. I want to have a line of text which horizontally scrolls inside a colored box at a fixed speed, looping forever and seamlessly starting over once it reaches the end. I figured out the basic part of the looping transition, allowing the text to move outside of its box then come out the other end:
Rectangle {
    width: myItem.width
    height: myItem.height
    color: myColor
    Text {
        width: myItem.width
        wrapMode: "NoWrap"
        maximumLineCount: 1
        color: theme.textColor
        text: "Whatever goes here."
        SequentialAnimation on x {
            running: true
            loops: Animation.Infinite
            NumberAnimation { from: -myItem.width; to: myItem.width; duration: 1000; easing.type: Easing.InOutQuad }
            PauseAnimation { duration: 250 }
        }
    }
}
But this doesn't take the length of the text string into account to adjust the start/end position and duration to its real width. It also requires moving the text completely out of bounds and then back in, leaving the box empty for one frame; I wonder if there's a way to make it connect seamlessly. It also doesn't adapt the animation range when I resize the plasmoid: only the size detected at the start is taken into account. How do you suggest redoing that definition to work around scale issues and get consistent results with any box size and text length?
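One possible direction, as a minimal sketch rather than a tested plasmoid (the box and scroller ids are illustrative; myItem, myColor and theme.textColor are taken from the question): bind the animation range to the text's implicit width and the box width, and derive the duration from the travelled distance so the scrolling speed is the same for any text length. Because from, to and duration are bindings, a resize should be picked up when the animation restarts on the next loop:
Rectangle {
    id: box
    width: myItem.width
    height: myItem.height
    color: myColor
    clip: true    // hide the text while it is outside the box

    Text {
        id: scroller
        wrapMode: Text.NoWrap
        maximumLineCount: 1
        color: theme.textColor
        text: "Whatever goes here."

        SequentialAnimation on x {
            running: true
            loops: Animation.Infinite
            NumberAnimation {
                from: box.width                    // enter from the right edge
                to: -scroller.implicitWidth        // leave once the last glyph is out
                // distance * 10 ms per pixel, i.e. roughly 100 px/s for any length
                duration: (box.width + scroller.implicitWidth) * 10
            }
            PauseAnimation { duration: 250 }
        }
    }
}
For a truly seamless loop, where the text re-enters before it has fully left, one common trick is a second Text item showing the same string, offset by one scroll period; the sketch above only keeps the speed and range consistent.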

Qt QML Settings.hasTouchScreen returns false

I am trying to find out why flicking does not work with a TreeView example on my Raspberry Pi 3 with a touch screen.
Looking at the QML code of TreeView.qml, e.g.
https://github.com/RSATom/Qt/blob/master/qtquickcontrols/src/controls/TreeView.qml:
BasicTableView {
    ...
    __mouseArea: MouseArea {
        id: mouseArea
        parent: __listView
        width: __listView.width
        height: __listView.height
        z: -1
        propagateComposedEvents: true
        focus: true
        // If there is not a touchscreen, keep the flickable from eating our mouse drags.
        // If there is a touchscreen, flicking is possible, but selection can be done only by tapping, not by dragging.
        preventStealing: !Settings.hasTouchScreen
        ...
    }
}
Looking similarly at the QML code of BasicTableView.qml, it seems that this behavior is controlled by Settings.hasTouchScreen.
According to:
https://code.woboq.org/qt5/qtquickcontrols/src/controls/Private/qquickcontrolsettings.cpp.html
it corresponds to the following method:
bool QQuickControlSettings1::hasTouchScreen() const
{
    const auto devices = QTouchDevice::devices();
    for (const QTouchDevice *dev : devices)
        if (dev->type() == QTouchDevice::TouchScreen)
            return true;
    return false;
}
However, in my case, Settings.hasTouchScreen returns false; i.e. the touch screen (although working otherwise) is not correctly detected by the QML environment, which probably explains why the flicking does not work.
According to https://doc.qt.io/qt-5/qtouchdevice.html, my touch device should have been registered by the private QWindowSystemInterface::registerTouchDevice() method, but it wasn't.
How can I get this to work?
Thanks!
It seems not to work correctly with tslib, but it works when using the evdevtouch plugin, which is enabled by adding the following command line argument when launching the program:
-plugin evdevtouch:/dev/input/eventX
where eventX is the event node assigned to the touch input.
With that, QTouchDevice::devices() is no longer empty, and the flicking of the TreeView works.
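A quick way to verify the detection from QML, as a minimal sketch (it assumes the private QtQuick.Controls.Private 1.0 import is acceptable, which is the same import TreeView.qml itself uses for Settings):
import QtQuick 2.5
import QtQuick.Controls 1.4
import QtQuick.Controls.Private 1.0

ApplicationWindow {
    visible: true
    // Logs true once the touch device has been registered with the platform.
    Component.onCompleted: console.log("hasTouchScreen:", Settings.hasTouchScreen)
}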

Querying global mouse position in QML

I'm programming a small PoC in QML. In a couple of places in my code I need to bind to/query the global mouse position (say, the mouse position in a scene or game window), even in cases where the mouse is outside of the MouseAreas I've defined so far.
Looking around, the only way to do it seems to be covering the whole screen with another MouseArea, most likely with hovering enabled. Then I also need to deal with semi-manually propagating (hover) events to the underlying MouseAreas.
Am I missing something here? This seems like a pretty common case - is there a simpler/more elegant way to achieve it?
EDIT:
The most problematic case seems to be dragging outside a MouseArea. Below is a minimal example (it uses V-Play components and a mouse event spy from derM's answer). When I click the image and drag outside the MouseArea, mouse events stop arriving, so the position cannot be updated unless there is a DropArea below.
The MouseEventSpy is taken from here in response to one of the answers. It is only modified to include the position as parameters to the signal.
import VPlay 2.0
import QtQuick 2.0
import MouseEventSpy 1.0

GameWindow {
    id: gameWindow
    activeScene: scene
    screenWidth: 960
    screenHeight: 640

    Scene {
        id: scene
        anchors.fill: parent

        Connections {
            target: MouseEventSpy
            onMouseEventDetected: {
                console.log(x)
                console.log(y)
            }
        }

        Image {
            id: tile
            x: 118
            y: 190
            width: 200
            height: 200
            source: "../assets/vplay-logo.png"
            anchors.centerIn: parent
            Drag.active: mausA.drag.active
            Drag.dragType: Drag.Automatic

            MouseArea {
                id: mausA
                anchors.fill: parent
                drag.target: parent
            }
        }
    }
}
You can install an event filter on the QGuiApplication, through which all mouse events will pass.
How to do this is described here
In the linked solution, I drop the information about the mouse position when emitting the signal. You can, however, easily retrieve it by casting the QEvent that is passed to the eventFilter(...) method to a QMouseEvent and adding the coordinates as parameters to the signal.
In the linked answer I register it as a singleton available in QML and C++, so you can connect to the signal wherever needed.
As provided in the linked answer, the MouseEventSpy will only handle QMouseEvents of various types. Once you start dragging something, there won't be QMouseEvents but QDragMoveEvents etc. Therefore you need to extend the filter method to also handle those.
bool MouseEventSpy::eventFilter(QObject* watched, QEvent* event)
{
    QEvent::Type t = event->type();
    if (t == QEvent::MouseButtonDblClick
            || t == QEvent::MouseButtonPress
            || t == QEvent::MouseButtonRelease
            || t == QEvent::MouseMove) {
        QMouseEvent* e = static_cast<QMouseEvent*>(event);
        emit mouseEventDetected(e->x(), e->y());
    }
    if (t == QEvent::DragMove) {
        QDragMoveEvent* e = static_cast<QDragMoveEvent*>(event);
        emit mouseEventDetected(e->pos().x(), e->pos().y());
    }
    return QObject::eventFilter(watched, event);
}
You can then translate the coordinates to whatever coordinate system you need (screen, window, ...).
As you have only a couple of places where you need to query the global mouse position, I would suggest using the mapToGlobal or mapToItem methods.
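A minimal sketch of those calls (the ids are illustrative; Item.mapToGlobal requires Qt 5.7 or later):
import QtQuick 2.7

Item {
    id: someItem

    MouseArea {
        id: probe
        anchors.fill: parent
        onClicked: {
            // click position in the coordinate system of the root item (the scene)
            var inScene = probe.mapToItem(null, mouse.x, mouse.y)
            // click position in global screen coordinates
            var onScreen = probe.mapToGlobal(mouse.x, mouse.y)
            console.log(inScene.x, inScene.y, onScreen.x, onScreen.y)
        }
    }
}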
I believe you can get the cursor's coordinates from the C++ side. Take a look at the answer to this question. The question isn't related to your problem, but the solution works here as well.
On my side I managed to get global coordinates by calling mousePosProvider.cursorPos() directly, without any MouseArea.

Issue in QML slider implementation. Slider jumping backwards when dragged forward

I have created a slider for a media player application in QML. The value property of the slider is updated by a thread in the background. My issue is that when the slider is dragged to do a seek, even though the seek has been performed and the slider's new value has been set by dragging, the handle jumps back to its previous position when the mouse is released before settling at the new position. I created the slider using the code below:
Slider {
    id: sliderHorizontal
    x: 145
    y: 478
    visible: false
    activeFocusOnPress: true
    maximumValue: 100.0
    minimumValue: 0.0
    updateValueWhileDragging: true
    value: Media.SliderValue
    onValueChanged: {
        // seek to the new value calling a C++ function
    }
}
Can anyone guess why the slider jumps back even though the background thread that updates the value is also updating it to the new position?
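One common pattern, shown here only as a minimal sketch (Media.SliderValue is the backend position from the question; the seek call is left as a comment): ignore backend updates while the handle is pressed and perform the seek only when the drag ends, so a stale backend value cannot snap the handle back.
Slider {
    id: sliderHorizontal
    minimumValue: 0.0
    maximumValue: 100.0
    updateValueWhileDragging: true

    // Only let the backend drive the handle while the user is not dragging.
    Binding {
        target: sliderHorizontal
        property: "value"
        value: Media.SliderValue
        when: !sliderHorizontal.pressed
    }

    onPressedChanged: {
        if (!pressed) {
            // Drag finished: seek once with the final value,
            // e.g. call the C++ seek function with sliderHorizontal.value here.
        }
    }
}
Depending on how quickly the backend reflects the seek, the binding may also need to stay disabled until the reported position has caught up with the seek target.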

Interference of Sliders in Kivy

I'm learning Kivy and am currently trying to understand the Slider class. I created two sliders. Slider one is supposed to react to on_touch_move only, while slider two should react to on_touch_up and on_touch_down. If I implement this as in the example below, both sliders interfere, i.e. they react to all three events. I tried to understand why that is and how to solve the issue, but I can't. Thank you for helping me out.
The sliders.kv file:
#: kivy 1.9.0

SliderScreen:

<SliderScreen>:
    Slider:
        min: 0
        max: 1
        value: 0.75
        step: 0.01
        on_touch_move: root.test_a()
    Slider:
        min: 0
        max: 1
        value: 0.25
        step: 0.01
        on_touch_up: root.test_b()
        on_touch_down: root.test_c()
and main.py:
import kivy
kivy.require('1.9.0')

from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.slider import Slider


class SliderScreen(BoxLayout):

    def test_a(self):
        print("test_a accessed")

    def test_b(self):
        print("test_b accessed")

    def test_c(self):
        print("test_c accessed")


class SlidersApp(App):
    pass


if __name__ == '__main__':
    SlidersApp().run()
The on_touch_move, on_touch_up and on_touch_down events are captured by the SliderScreen class and then propagated to all of its widgets. According to the documentation:
By default, touch events are dispatched to all currently displayed widgets. This means widgets receive the touch event whether it occurs within their physical area or not.
This can be counter intuitive if you have experience with other GUI toolkits. These typically divide the screen into geometric areas and only dispatch touch or mouse events to the widget if the coordinate lies within the widget's area.
This requirement becomes very restrictive when working with touch input. Swipes, pinches and long presses may well originate from outside of the widget that wants to know about them and react to them.
In order to provide the maximum flexibility, Kivy dispatches the events to all the widgets and lets them decide how to react to them.
If you only want to respond to touch events inside the widget, you simply check:
def on_touch_down(self, touch):
    if self.collide_point(*touch.pos):
        # The touch has occurred inside the widgets area. Do stuff!
        pass
Therefore you should use in your code:
Builder.load_string("""
<SliderScreen>:
    Slider:
        min: 0
        max: 1
        value: 0.75
        step: 0.01
        on_touch_move: if self.collide_point(*args[1].pos): root.test_a()
    Slider:
        min: 0
        max: 1
        value: 0.25
        step: 0.01
        on_touch_up: if self.collide_point(*args[1].pos): root.test_b()
        on_touch_down: if self.collide_point(*args[1].pos): root.test_c()
""")
class SliderScreen(BoxLayout):

    def test_a(self):
        print("test_a accessed")

    def test_b(self):
        print("test_b accessed")

    def test_c(self):
        print("test_c accessed")