PyQt5 - Get the pixel color inside a QWidget

I made a QWidget, and inside it I made some other items like QLabels which display images.
Considering what is inside that parent widget, I was trying to get the color of the pixel where I click.
While searching I found this thread, but it is a bit old and I am not able to translate it to Python.
thread:
https://www.qtcentre.org/threads/49693-How-to-get-color-of-pixel-or-point
code:
QPixmap qPix = QPixmap::grabWidget(ui->myWidget);
QImage image(qPix.toImage());
QColor color(image.pixel(0, 1));
How would this translate to PyQt5, if it is the correct approach?

QPixmap.grabWidget() is considered obsolete, and you should use QWidget.grab() instead.
pixmap = self.someWidget.grab()
img = pixmap.toImage()
color = img.pixelColor(0, 1)
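For a complete picture, here is a minimal runnable sketch (assuming PyQt5; the widget and names are illustrative, not from the question) that prints the color under the mouse whenever the widget is clicked:
import sys
from PyQt5.QtWidgets import QApplication, QLabel

class ColorPickLabel(QLabel):
    def mousePressEvent(self, event):
        # grab() renders this widget, including its children, into a QPixmap
        img = self.grab().toImage()
        # event.pos() is in widget coordinates, which match the image
        # on a normal-DPI screen
        print(img.pixelColor(event.pos()).name())
        super().mousePressEvent(event)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    label = ColorPickLabel('click me')
    label.resize(200, 100)
    label.show()
    sys.exit(app.exec_())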


Is there a way to apply an alpha mask to a FlxCamera?

I'm trying to implement this camera, but one of the obstacles I'm facing right now is the merging of two cameras (what he describes here).
At first I tried to make a non-rectangular camera, but I don't think that's possible without changing a lot of things in the way HaxeFlixel renders.
Then I found the alphaMask() function in the FlxSpriteUtil package, and I think it would be a better solution.
Not only would it solve my problem, it would actually permit all kinds of funky-shaped cameras - you just have to create the right mask!
But the new problem is that I don't know how to apply it to the camera (and again, whether that's even possible without changing FlxCamera a bit).
Internally, FlxCamera might use a FlxSprite, but only in blit render mode. I am in tiles render mode (I haven't found out how to change that, and it wouldn't be a good enough solution in my opinion anyway), which uses a Flash Sprite instead, and I don't know what to do with that.
So, in short: do you have an idea how to apply an AlphaMask to a FlxCamera? Or another way to achieve what I'm trying to do?
PS: If you want to have a look at the (ugly and French-commented) code, it's over here!
You can render the contents of a FlxCamera to a FlxSprite (though it does require conditional code based on the render mode). The TurnBasedRPG tutorial game uses this for the wave effect in the combat screen, see CombatHUD.hx:
if (FlxG.renderBlit)
    screenPixels.copyPixels(FlxG.camera.buffer, FlxG.camera.buffer.rect, new Point());
else
    screenPixels.draw(FlxG.camera.canvas, new Matrix(1, 0, 0, 1, 0, 0));
Here's a code example that uses this to create a HaxeFlixel-shaped camera:
package;

import flixel.tweens.FlxTween;
import flash.geom.Matrix;
import flixel.FlxCamera;
import flixel.FlxG;
import flixel.FlxSprite;
import flixel.FlxState;
import flixel.graphics.FlxGraphic;
import flixel.system.FlxAssets;
import flixel.util.FlxColor;
import openfl.geom.Point;

using flixel.util.FlxSpriteUtil;

class PlayState extends FlxState
{
    static inline var CAMERA_SIZE = 100;

    var maskedCamera:FlxCamera;
    var cameraSprite:FlxSprite;
    var mask:FlxSprite;

    override public function create():Void
    {
        super.create();

        maskedCamera = new FlxCamera(0, 0, CAMERA_SIZE, CAMERA_SIZE);
        maskedCamera.bgColor = FlxColor.WHITE;
        maskedCamera.scroll.x = 50;
        FlxG.cameras.add(maskedCamera);

        // this is a bit of a hack - we need this camera to be rendered so we can copy the content
        // onto the sprite, but we don't want to actually *see* it, so just move it off-screen
        maskedCamera.x = FlxG.width;

        cameraSprite = new FlxSprite();
        cameraSprite.makeGraphic(CAMERA_SIZE, CAMERA_SIZE, FlxColor.WHITE, true);
        cameraSprite.x = 50;
        cameraSprite.y = 100;
        cameraSprite.cameras = [FlxG.camera];
        add(cameraSprite);

        mask = new FlxSprite(FlxGraphic.fromClass(GraphicLogo));

        var redSquare = new FlxSprite(0, 25);
        redSquare.makeGraphic(50, 50, FlxColor.RED);
        add(redSquare);
        FlxTween.tween(redSquare, {x: 150}, 1, {type: FlxTween.PINGPONG});
    }

    override public function update(elapsed:Float):Void
    {
        super.update(elapsed);

        var pixels = cameraSprite.pixels;
        if (FlxG.renderBlit)
            pixels.copyPixels(maskedCamera.buffer, maskedCamera.buffer.rect, new Point());
        else
            pixels.draw(maskedCamera.canvas);

        cameraSprite.alphaMaskFlxSprite(mask, cameraSprite);
    }
}

What is the proper way to set the Kivy ScrollView effect_cls property?

I want to stop the user from over-scrolling. The Kivy docs say that the effect_cls property will change this behavior, but I have not found a way to make it work.
Although you have solved your problem, I will provide an example for future users.
You can change which effect is being used by setting effect_cls to any effect class. If you want to disable the overscroll effect and prevent the scroll bouncing, the ScrollEffect class solves the problem.
Example using the Kivy language:
from kivy.app import App
from kivy.uix.scrollview import ScrollView
from kivy.lang import Builder

Builder.load_string('''
#:import ScrollEffect kivy.effects.scroll.ScrollEffect
#:import Button kivy.uix.button.Button

<RootWidget>
    effect_cls: ScrollEffect
    GridLayout:
        size_hint_y: None
        height: self.minimum_height
        cols: 1
        on_parent:
            for i in range(10): self.add_widget(Button(text=str(i), size_hint_y=None))
''')

class RootWidget(ScrollView):
    pass

class MainApp(App):
    def build(self):
        root = RootWidget()
        return root

if __name__ == '__main__':
    MainApp().run()
So I was trying to use effect_cls: ScrollEffect when it should be effect_cls: 'ScrollEffect' - you have to pass it as a string.
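For completeness, the same thing can also be done without the Kivy language; here is a minimal sketch in plain Python (same widgets as above, with effect_cls passed as the class itself):
from kivy.app import App
from kivy.effects.scroll import ScrollEffect
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout
from kivy.uix.scrollview import ScrollView

class MainApp(App):
    def build(self):
        # ScrollEffect scrolls without the overscroll bounce
        root = ScrollView(effect_cls=ScrollEffect)
        grid = GridLayout(cols=1, size_hint_y=None)
        # let the grid grow with its children so there is something to scroll
        grid.bind(minimum_height=grid.setter('height'))
        for i in range(10):
            grid.add_widget(Button(text=str(i), size_hint_y=None))
        root.add_widget(grid)
        return root

if __name__ == '__main__':
    MainApp().run()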

How to use mask with transparency on QWidget?

I am trying to use a mask on my QWidget. I want to overlay an existing widget with a row of buttons, similar to Skype.
Notice that these buttons don't have jagged edges - they are nicely antialiased, and the widget below them is still visible.
I tried to accomplish that using Qt style sheets, but the pixels that should be "masked out" were just black - it was a round button on a black, rectangular background.
Then I tried to do it using QWidget::setMask(). I used the following code:
QImage alpha_mask(QSize(50, 50), QImage::Format_ARGB32);
alpha_mask.fill(Qt::transparent);
QPainter painter(&alpha_mask);
painter.setBrush(Qt::black);
painter.setRenderHint(QPainter::Antialiasing);
painter.drawEllipse(QPoint(25,25), 24, 24);
QPixmap mask = QPixmap::fromImage(alpha_mask);
widget.setMask(mask.mask());
Sadly, it results in the following effect:
The "edges" are jagged where they should be smooth. I saved the generated mask so I could investigate whether it was the problem - it wasn't.
I know that the Linux version of Skype does use Qt, so it should be possible to reproduce this. But how?
One possible approach I see is the following.
Prepare a nice high resolution pixmap with the circular button icon over transparent background.
Paint the pixmap on a square widget.
Then mask the widget leaving just a little bit of margin beyond the border of the circular icon so that the widget mask jaggedness won't touch the smooth border of the icon.
I managed to get a nice circular button without much code.
Here is the constructor of my custom button:
Button::Button(Type t, QWidget *parent) : QPushButton(parent) {
    setIcon(getIcon(t));
    resize(30, 30);
    setMouseTracking(true);
    // here I apply a centered mask, 2 pixels bigger than the button
    setMask(QRegion(QRect(-1, -1, 32, 32), QRegion::Ellipse));
}
and in the style sheet I have the following:
Button {
    border-radius: 15px;
    background-color: rgb(136, 0, 170);
}
With border-radius I get the visual circle and the mask doesn't corrupt the edges because it is 1 pixel away.
You are using the wrong approach for generating masks. I would generate them from the button images themselves:
QImage image(widget.size(), QImage::Format_Alpha8);
widget.render(&image);
widget.setMask(QBitmap::fromImage(image.createMaskFromColor(qRgba(0, 0, 0, 0))));
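For readers following the earlier questions in Python, a rough PyQt5 translation of that idea might look like this (a sketch, not a tested drop-in; QImage.Format_Alpha8 requires Qt 5.5 or later, and widget stands for whatever button widget is being masked):
from PyQt5.QtGui import QBitmap, QImage, qRgba

# render the widget into an alpha-only image
image = QImage(widget.size(), QImage.Format_Alpha8)
widget.render(image)
# build the mask from the fully transparent pixels, mirroring the C++ snippet
mask_image = image.createMaskFromColor(qRgba(0, 0, 0, 0))
widget.setMask(QBitmap.fromImage(mask_image))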

PySide: Resizable scene in QGraphicsView

I'm trying to find a way to mark the border of a QGraphicsScene, and make it resizable inside a QGraphicsView, to create something similar to Microsoft Paint.
In other words, my current QGraphicsView looks like this:
But my image is only this big, as indicated by the red box:
I want my QGraphicsView to be like this (the little black boxes are cornergrabbers for resizing the canvas):
Functionally, I want it to be similar to MS Paint:
The canvas (scene) is resizable, and the scrollbars on the window (view) appear when needed. The blue background color (solid gray background) appears behind the canvas.
How would I go about accomplishing this?
To try to get the grey background, I've been experimenting with QGraphicsView.setBackgroundBrush() and QGraphicsScene.setBackgroundBrush(). I've learned that QGraphicsView's background brush completely overrides QGraphicsScene's background brush if one is set. Even if I only set the background brush for QGraphicsScene, that background brush extends over the image's original boundaries.
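A minimal illustration of the two background-brush calls being compared (a sketch assuming PySide; the gray color is arbitrary):
from PySide.QtGui import QBrush, QColor, QGraphicsScene, QGraphicsView

scene = QGraphicsScene()
view = QGraphicsView(scene)
# the scene brush paints behind the items, across the whole exposed area...
scene.setBackgroundBrush(QBrush(QColor(160, 160, 160)))
# ...and a view brush, if set, completely overrides the scene brush:
# view.setBackgroundBrush(QBrush(QColor(60, 60, 60)))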
Here is a link to my test code.
Help is appreciated!
I had to struggle with your constructors... I don't know if it works on Windows, but I had to make it work on Linux. Try:
def setPixmap(self, pixmap):
    if self.pixmap_item:
        self.removeItem(self.pixmap_item)
    self.pixmap_item = self.addPixmap(pixmap)
    self.setPixBackGround()

def setPixBackGround(self):
    # put a background rect behind the image
    pixR = self.pixmap_item.pixmap().rect()
    bgRectangle = self.addRect(pixR.x() - 10, pixR.y() - 10,
                               pixR.width() + 20, pixR.height() + 20)
    # set color and Z value to put it behind the image
    bgColor = QColor(58, 176, 176)
    bgRectangle.setBrush(bgColor)
    bgRectangle.setZValue(-.1)
    # take coordinates for the grabbers
    bgR = bgRectangle.rect()
    grab1R = QRect(-5, -5, 10, 10)
    # place the grabbers as you wish...
    grab1 = self.addRect(grab1R)
    grab1.setPos(bgR.topLeft())
    grab2 = self.addRect(grab1R)
    grab2.setPos(bgR.topRight())
    # ....etc....
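The "....etc...." leaves wiring up the remaining grabbers to the reader. As a rough sketch of how one of those grabbers could actually be made draggable, the standard QGraphicsItem movable flag plus position-change notifications could be used (GrabberItem and on_moved are illustrative names, not part of the code above):
from PySide.QtGui import QGraphicsItem, QGraphicsRectItem

class GrabberItem(QGraphicsRectItem):
    """A draggable corner grabber that reports its new scene position."""
    def __init__(self, rect, on_moved, parent=None):
        super(GrabberItem, self).__init__(rect, parent)
        self.on_moved = on_moved  # callback receiving the new QPointF
        self.setFlag(QGraphicsItem.ItemIsMovable)
        self.setFlag(QGraphicsItem.ItemSendsScenePositionChanges)

    def itemChange(self, change, value):
        if change == QGraphicsItem.ItemScenePositionHasChanged:
            # the scene can resize the background rect (and pixmap) from here
            self.on_moved(value)
        return super(GrabberItem, self).itemChange(change, value)
The scene would then recompute the background rectangle from the grabber positions inside the on_moved callback.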

React on events generated by Chaco tools: how to get values out of a Chaco tool when an event is fired?

Actually this should be a pretty simple question, but I am experiencing the quite steep learning curve of Chaco and Traits...
I am currently writing an application to plot a medical image using Chaco and Traits, and I simply want to pick a pixel location from the image and use that pixel location to do evaluations on an image stack. So I started to write my own Chaco tool that reacts on mouse clicks on an image plot.
This works fine so far. When I click on the image plot I can see the mouse coordinates WITHIN the tool (a custom-made PixelPickerTool). However, since I want to use this coordinate value outside the tool, my question is: how can I hand the coordinates over to another object or variable OUTSIDE the tool when an event is fired?
To illustrate what I want to do, I attached the main structure of the two classes I am writing:
class PixelPickerTool(BaseTool):
    '''Pick a pixel coordinate from an image'''
    ImageCoordinates = [0, 0]

    def normal_left_down(self, event):
        print "Mouse:", event.x, event.y,
        click_x, click_y = self.component.map_data((event.x, event.y))
        img_x = int(click_x)
        img_y = int(click_y)
        coord = [img_x, img_y]
        if (img_x > self.ImageSizeX) or (img_x < 0):
            coord = [0, 0]
        if (img_y > self.ImageSizeY) or (img_y < 0):
            coord = [0, 0]
        print coord
        # this print gives the coordinates of the pixel that was clicked - this works fine...
        # so inside the picker tool I can get the coordinates,
        # but how can I use the coordinates outside this tool?

class ImagePlot(HasTraits):
    # create a simple Chaco plot of a 2D numpy image array,
    # with a simple interactor (PixelPickerTool)
    plot = Instance(Plot)
    string = String("hallo")
    picker = Instance(PixelPickerTool)

    traits_view = View(
        Item('plot', editor=ComponentEditor(), show_label=False,
             width=500, height=500, resizable=False),
        Item('string', show_label=False, springy=True,
             width=300, height=20, resizable=False),
        title="")

    def __init__(self, numpyImage):
        super(ImagePlot, self).__init__()
        npImage = np.flipud(np.transpose(numpyImage))
        plotdata = ArrayPlotData(imagedata=npImage)
        plot = Plot(plotdata)
        plot.img_plot("imagedata", colormap=gray)
        self.plot = plot
        # the image origin is at the top left!
        self.plot.default_origin = 'top left'
        pixelPicker = PixelPickerTool(plot)
        self.picker = pixelPicker
        plot.tools.append(pixelPicker)
I want to use the coordinates that are measured by the PixelPickerTool somewhere in this ImagePlot class, e.g. by handing them over to another object, like MyImageSeries.setCoordinate(xy_coordinateFromPickerTool).
So how can I hand the pixel coordinates over from the picker tool to some member variable in this class when an event is fired?
Maybe something like this: self.PixelCoordinates = picker.getPixelCoordinates() could work?
But how do I know then, when the normal_left_down function was executed in the picker?
In the end I want to hand the coordinates over to another class which holds more images, to process them and do a fit at the pixel position determined in the ImagePlot.
I tried to use something like "_picker_changed" in my ImagePlot class to detect whether an event had been fired in the PickerTool, but it didn't detect the event firing. So maybe I am doing something wrong...
Can anybody tell me how to get events and associated variables out of this picker tool?
Cheers,
Andre
"But how do I know then, when the on_normal_left_down function was executed in the picker?"
There are several ways you could probably do this, but one way would be to simply do exactly what you are asking and fire an event that you define explicitly.
for instance:
from traits.api import Event

class PickerTool(BaseTool):
    last_coords = SomeTrait
    i_fired = Event

    def normal_left_down(self, event):
        # do whatever necessary processing
        self.last_coords = do_some_stuff(event.some_attribute)
        # now notify your parent
        self.i_fired = True
and then listen to plot.picker.i_fired from wherever you want to display, and look in plot.picker.last_coords for the saved state.
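For instance, a sketch of how the ImagePlot class from the question could pick that up (this assumes the i_fired and last_coords traits sketched above live on its picker instance; the decorator is just one of several Traits notification styles):
from traits.api import HasTraits, Instance, on_trait_change

class ImagePlot(HasTraits):
    picker = Instance(PixelPickerTool)

    # ... plot and view setup as in the question ...

    @on_trait_change('picker:i_fired')
    def _on_pixel_picked(self):
        coord = self.picker.last_coords
        # hand the coordinates over to whoever needs them, e.g.
        # myImageSeries.setCoordinate(coord)
        print("picked pixel:", coord)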
Another thing you can do, which may be simpler if what you want to do with these coordinates is very straightforward, is to pass the data structures the picker needs to interact with on initialization (or get them with a chain of calls to self.parent) and do your work directly inside the picker.