Qt QML Settings.hasTouchScreen returns false - qml

I am trying to find out why flicking is not working with a TreeView example on my Raspberry Pi 3 with a touch screen.
Looking at the QML code of TreeView.qml, e.g.
https://github.com/RSATom/Qt/blob/master/qtquickcontrols/src/controls/TreeView.qml:
BasicTableView {
    ...
    __mouseArea: MouseArea {
        id: mouseArea
        parent: __listView
        width: __listView.width
        height: __listView.height
        z: -1
        propagateComposedEvents: true
        focus: true
        // If there is not a touchscreen, keep the flickable from eating our mouse drags.
        // If there is a touchscreen, flicking is possible, but selection can be done only by tapping, not by dragging.
        preventStealing: !Settings.hasTouchScreen
        ...
    }
}
Looking similarly at the QML code for BasicTableView.qml, it seems that this behaviour is controlled by Settings.hasTouchScreen.
According to:
https://code.woboq.org/qt5/qtquickcontrols/src/controls/Private/qquickcontrolsettings.cpp.html
it corresponds to the following method:
bool QQuickControlSettings1::hasTouchScreen() const
{
    const auto devices = QTouchDevice::devices();
    for (const QTouchDevice *dev : devices)
        if (dev->type() == QTouchDevice::TouchScreen)
            return true;
    return false;
}
However, in my case Settings.hasTouchScreen returns false; i.e. the touch screen, although working otherwise, is not correctly detected by the QML environment, which probably explains why the flicking does not work.
According to https://doc.qt.io/qt-5/qtouchdevice.html, my touch device should have been registered somehow by the private QWindowSystemInterface::registerTouchDevice() method, but wasn't.
How can I get this to work?
Thanks!

It seems not to work correctly with tslib, but it works when using the evdevtouch plugin, which is enabled by adding the following command-line argument when launching the program:
-plugin evdevtouch:/dev/input/eventX
where eventX is the event device assigned to the touch input.
With that, QTouchDevice::devices() is no longer empty, and the flicking of the TreeView works.
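If you want to verify from C++ that the plugin argument actually registered a device, a minimal check (a sketch only, assuming Qt 5 with the QtGui module) is to dump QTouchDevice::devices() at startup, which is exactly the list that hasTouchScreen() above iterates:

#include <QGuiApplication>
#include <QTouchDevice>
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // An empty list here means Settings.hasTouchScreen will also report false.
    const auto devices = QTouchDevice::devices();
    qDebug() << "Registered touch devices:" << devices.size();
    for (const QTouchDevice *dev : devices)
        qDebug() << "  name:" << dev->name()
                 << "is touchscreen:" << (dev->type() == QTouchDevice::TouchScreen);

    return 0;   // in a real program, set up the QML engine here and return app.exec()
}

When launched with -plugin evdevtouch:/dev/input/eventX, the list should contain one device of type QTouchDevice::TouchScreen.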

Related

NavigationLink button focusable override issue

I have been stuck on an issue for days. I am creating a tvOS application which requires a custom NavigationLink (button): when I move the focus to the navigation item, it should scale up a little, and I also need to change the parent view's background. It is pretty simple, but it seems that the focusable modifier overrides my custom button style. Testing shows that the background image is changed, but without any scale effect when the navigation button gets focused. Any suggestions?
NavigationLink(destination: Text("myview")) {
    Text("Test")
}
.buttonStyle(MyButtonStyle())
.focusable(true) { (focus) in
    // the code to change the background image
}

// MyButtonStyle definition
struct MyButtonStyle: ButtonStyle {
    func makeBody(configuration: Configuration) -> some View {
        return AppButton(configuration: configuration)
    }
}

struct AppButton: View {
    @Environment(\.isFocused) var focused: Bool
    let configuration: ButtonStyle.Configuration
    var body: some View {
        configuration.label
            .scaleEffect(focused ? 1.1 : 1.0)
            .focusable(true)
    }
}
The code to change the background image is always called when the item gets focused, as expected, but the scale effect is gone. If I remove the following lines, the scale effect comes back:
// .focusable(true) { (focus) in
//     the code to change the background image
// }
It looks like this modifier overrides my custom style for the navigation button. Any ideas? Appreciate any help!
Ah, finally I found the trick, though there is very little documentation about this. Focusable should not be used in your code to change the focus engine; doing so causes the NavigationLink's tap message to go uncaptured, so your NavigationLink to the other view will not work.
Use the .onChange() modifier to handle any focus change event instead of Focusable.

MouseArea in qml(onEntered, onExited, onReleased)

I have one button. I want to change the states of the button, e.g.:
The default image is black.
On onEntered I want the image to be blue, on onExited I want it black (equal to the default state), and on onReleased I want it blue (equal to the onEntered state).
Note:
onReleased should only be active inside the button; outside the button onReleased shouldn't fire.
How can this be achieved?
The MouseArea looks like this:
MouseArea
{
    anchors.fill: firstImage
    onEntered:
    {
        firstImage.source = "blue.img"
    }
    onExited:
    {
        firstImage.source = "black.img"
    }
    onReleased:
    {
        firstImage.source = "blue.img"
    }
}
The problem I am facing is:
onReleased fires even outside the button.
I want onReleased to fire only when the press is released inside the button.
You can leverage the fact that you can give every object in QML an extra custom property.
I came up with the following, which seems to be what you ask for. However, I see a flaw: when you are 'entered' and press, the button is already in the entered state, so there is no difference in the 'released' state, and after leaving the MouseArea it will again go to the 'exited' state.
Note that I did not copy the firstImage.source handling, but you can easily tailor this example to your situation:
import QtQuick.Controls 2.4

Button {
    hoverEnabled: true
    property bool touched: false
    onHoveredChanged: touched = hovered
    onReleased: touched = true
    text: touched ? "touched" : "not touched"
}
Note that hoverEnabled needs to be set, otherwise the hover (entered/exited) handling does not work.

ObjectiveC Accessibility API: UnMaximize Window

I'm not sure if I am referring to this correctly, but when I use the word "UnMaximize", I'm referring to the following:
When you click the green button (the third one at the top left of a Chrome window), it Maximizes the window. When I use the word "UnMaximize" above, I'm referring to the behaviour of clicking that button again so that the window is no longer in full screen.
(By the way, what is the correct word for this in macOS terminology?)
I enjoy using the Easy Move+Resize app. While it can move windows around, unfortunately it has no effect on windows that are Maximized. Fortunately, the code is available on GitHub.
I'm curious if anyone can point me to how to UnMaximize a window using the Accessibility API.
Does anyone know what the UnMaximize equivalent of kAXCloseButtonAttribute is?
I'm using macOS 10.12, if that helps.
I'm grateful to @Willeke for pointing me in the correct direction.
As mentioned in my question, I was looking at the code of the Easy Move+Resize app on GitHub. The problem with this code/app is that it does not work for windows that are currently Maximized, i.e. it tries to move these windows but cannot, because they are fixed. (Note: this is only relevant in a multi-monitor setup.) The app works correctly for windows that are not Maximized.
Here, I am adding code that UnMaximizes a window in order to move it, and then Maximizes it again after it has been moved. Obviously, the code below is in the context of this app, but I'm sure it would be useful to users in other contexts.
I first added a wasMaximized property to EMRMoveResize.h:
// EMRMoveResize.h
@property bool wasMaximized;
Next, I moved to EMRAppDelegate.m where the actual event callback code is. It should be noted that we are only concerned with moving, i.e. only with the left mouse button. (This app uses the right mouse button for resizing, which is not relevant when the window has been Maximized.) So we are only concerned with kCGEventLeftMouseDown, kCGEventLeftMouseDragged and finally kCGEventLeftMouseUp. In pseudocode, I have done something like:
If (LeftMouseDown) {
    Find out if the Window is Maximized
    If (Window is Maximized) {
        Set the wasMaximized property
        Click the FullScreen Button to UnMaximize the Window in order to move it
    }
}
The window, now UnMaximized, moves like any other window in the LeftMouseDragged event, which I have not made any changes to. Finally:
If (LeftMouseUp) {
    If (wasMaximized was set) {
        Click the FullScreen Button again to Maximize the Window (since it started out as Maximized)
        Reset the wasMaximized property
    }
}
Now for the snippets of code changes to EMRAppDelegate.m:
if (type == kCGEventLeftMouseDown
    || type == kCGEventRightMouseDown) {
    //..
    //Skipped Unchanged Code
    //..

    //Find out if the Window is Maximized
    CFTypeRef TypeRef = nil;
    if (AXUIElementCopyAttributeValue((AXUIElementRef)_clickedWindow, CFSTR("AXFullScreen"), &TypeRef)) {
        if (Debug) NSLog(@"Could not get wasMaximized Value");
    } else {
        [moveResize setWasMaximized: CFBooleanGetValue(TypeRef)];
        if (Debug) NSLog(CFBooleanGetValue(TypeRef) ? @"Updated Maximized to True" : @"Updated Maximized to False");
    }

    //Click the FullScreen Button to UnMaximize the Window in order to move it
    if ([moveResize wasMaximized]) {
        AXUIElementRef buttonRef = nil;
        AXUIElementCopyAttributeValue(_clickedWindow, kAXFullScreenButtonAttribute, (CFTypeRef*)&buttonRef);
        if (Debug) NSLog(@"buttonRef: %p", buttonRef);
        AXUIElementPerformAction(buttonRef, kAXPressAction);
        CFRelease(buttonRef);
    }

    //..
    //Skipped Unchanged Code
    //..
}
if (type == kCGEventLeftMouseUp
    || type == kCGEventRightMouseUp) {
    //..
    //Skipped Unchanged Code
    //..

    //Click the FullScreen Button again to Maximize the Window (since it started out as Maximized)
    AXUIElementRef _clickedWindow = [moveResize window];
    if ([moveResize wasMaximized]) {
        AXUIElementRef buttonRef = nil;
        AXUIElementCopyAttributeValue(_clickedWindow, kAXFullScreenButtonAttribute, (CFTypeRef*)&buttonRef);
        if (Debug) NSLog(@"buttonRef: %p", buttonRef);
        AXUIElementPerformAction(buttonRef, kAXPressAction);
        CFRelease(buttonRef);
        [moveResize setWasMaximized: false];
    }

    //..
    //Skipped Unchanged Code
    //..
}
This worked for me. But I'm not an expert in Objective-C or macOS, so if you feel something can be improved, feel free to edit this post.

Querying global mouse position in QML

I'm programming a small PoC in QML. In a couple of places in my code I need to bind to/query the global mouse position (say, the mouse position in a scene or game window), even in cases where the mouse is outside the MouseAreas I've defined so far.
Looking around, the only way to do it seems to be covering the whole screen with another MouseArea, most likely with hovering enabled. Then I also need to deal with semi-manually propagating (hover) events to the underlying MouseAreas.
Am I missing something here? This seems like a pretty common case - is there a simpler/more elegant way to achieve it?
EDIT:
The most problematic case seems to be dragging outside a MouseArea. Below is a minimalistic example (it uses V-Play components and a mouse event spy from derM's answer). When I click the image and drag outside the MouseArea, mouse events no longer arrive, so the position cannot be updated unless there is a DropArea below.
The MouseEventSpy is taken from here in response to one of the answers. It is only modified to include the position as parameters to the signal.
import VPlay 2.0
import QtQuick 2.0
import MouseEventSpy 1.0

GameWindow {
    id: gameWindow
    activeScene: scene
    screenWidth: 960
    screenHeight: 640

    Scene {
        id: scene
        anchors.fill: parent

        Connections {
            target: MouseEventSpy
            onMouseEventDetected: {
                console.log(x)
                console.log(y)
            }
        }

        Image {
            id: tile
            x: 118
            y: 190
            width: 200
            height: 200
            source: "../assets/vplay-logo.png"
            anchors.centerIn: parent
            Drag.active: mausA.drag.active
            Drag.dragType: Drag.Automatic

            MouseArea {
                id: mausA
                anchors.fill: parent
                drag.target: parent
            }
        }
    }
}
You can install an event filter on the QGuiApplication, which all mouse events pass through.
How to do this is described here.
In the linked solution, I drop the information about the mouse position when emitting the signal. You can, however, easily retrieve it by casting the QEvent that is passed to the eventFilter(...) method to a QMouseEvent and adding the coordinates as parameters to the signal.
In the linked answer I register the spy as a singleton, available in QML and C++, so you can connect to the signal wherever needed.
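For reference, a minimal sketch of that registration side (assuming a MouseEventSpy class like the one in the linked answer, with a mouseEventDetected(int, int) signal, and assuming the class does not already install itself as a filter in its constructor):

// main.cpp -- sketch only; the URI and type name match the "import MouseEventSpy 1.0" used above.
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQml>             // qmlRegisterSingletonType
#include <QUrl>
#include "mouseeventspy.h"   // hypothetical header containing the MouseEventSpy class

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Expose the spy as the QML singleton and install it as an application-wide
    // event filter, so every mouse/drag event passes through eventFilter() below.
    qmlRegisterSingletonType<MouseEventSpy>(
        "MouseEventSpy", 1, 0, "MouseEventSpy",
        [](QQmlEngine *, QJSEngine *) -> QObject * {
            MouseEventSpy *spy = new MouseEventSpy;
            QCoreApplication::instance()->installEventFilter(spy);
            return spy;
        });

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));   // adjust to your project
    return app.exec();
}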
As provided in the linked answer, the MouseEventSpy only handles QMouseEvents of various types. Once you start dragging something, there won't be QMouseEvents but QDragMoveEvents etc. Therefore you need to extend the filter method to also handle those:
bool MouseEventSpy::eventFilter(QObject* watched, QEvent* event)
{
    QEvent::Type t = event->type();

    if (t == QEvent::MouseButtonDblClick
            || t == QEvent::MouseButtonPress
            || t == QEvent::MouseButtonRelease
            || t == QEvent::MouseMove) {
        QMouseEvent* e = static_cast<QMouseEvent*>(event);
        emit mouseEventDetected(e->x(), e->y());
    }

    if (t == QEvent::DragMove) {
        QDragMoveEvent* e = static_cast<QDragMoveEvent*>(event);
        emit mouseEventDetected(e->pos().x(), e->pos().y());
    }

    return QObject::eventFilter(watched, event);
}
You can then translate the coordinates to whatever you need (screen, window, ...).
As you have only a couple of places where you need to query the global mouse position, I would suggest using the mapToGlobal or mapToItem methods.
I believe you can get the cursor's coordinates from the C++ side. Take a look at the answer to this question. That question isn't related to your problem, but the solution works here as well.
On my side I managed to get global coordinates by directly calling mousePosProvider.cursorPos(), without any MouseArea.
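A minimal sketch of what such a provider can look like (the mousePosProvider name and the cursorPos() method follow the linked answer; the implementation here, which simply wraps QCursor::pos(), is my own assumption):

// mousepositionprovider.h -- sketch; exposes the global cursor position to QML.
#include <QObject>
#include <QPointF>
#include <QCursor>

class MousePositionProvider : public QObject
{
    Q_OBJECT
public:
    explicit MousePositionProvider(QObject *parent = nullptr) : QObject(parent) {}

    // Callable from QML as mousePosProvider.cursorPos()
    Q_INVOKABLE QPointF cursorPos() const { return QCursor::pos(); }
};

// In main(), expose one instance to QML, e.g. via a context property:
//     engine.rootContext()->setContextProperty("mousePosProvider", new MousePositionProvider(&engine));

A Timer in QML can then poll mousePosProvider.cursorPos() at whatever rate the PoC needs.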

Zoom Flickable's content on wheel event

By default, Flickable scrolls on mouse wheel events: horizontally if the Shift modifier is pressed, and vertically if no modifier is active. I'd like to override this behaviour to make it zoom in/out instead.
I tried to use a MouseArea as a child of the Flickable. By defining the desired behaviour inside an onWheel handler I get what I want, but it breaks the flicking feature. The motion of two touch points is recognised as a wheel event on my Mac, which prevents the Flickable from stealing this event (MouseArea.preventStealing is false by default). So I get a somewhat mixed zooming feature that respects the velocity/acceleration behaviour of the Flickable.
Add an onWheel handler inside your MouseArea:
MouseArea
{
    onWheel: {
        if (wheel.modifiers & Qt.ControlModifier) {
            if (wheel.angleDelta.y > 0)
            {
                zoomin()
            }
            else
            {
                zoomout()
            }
        }
        else {
            wheel.accepted = false
        }
    }
}