I have a GUI that shows lines with a text label aligned to each line; I use QGraphicsSimpleTextItem() for the text.
Issue:
If no rotation is applied to the QGraphicsSimpleTextItem() instances, zoom and pan interaction (in a QGraphicsView subclass) runs quickly, but as soon as a rotation is assigned to those text items, zoom and pan interaction becomes very slow.
Question:
I have used Line Profiler to find the lines that consume the most time in the class, but nothing really stands out, as shown below. Is there any reason this would happen? How can I improve this?
Profiler output with the text rotation set (ang):
Profiler output with the text rotation line commented out (ang):
The Per Hit times do not show a great increase or decrease, but the user experience of zoom and pan interaction is very different when the line dict_Text[str(i)].setRotation(ang[i]) is commented out.
Reproduce Problem:
Below is code that reproduces the problem I am experiencing. First run the code as is and you will get very slow zoom and pan interaction; then comment out the line dict_Text[str(i)].setRotation(ang[i]) and zoom and pan interaction will be very fast.
Code:
from PyQt5 import QtWidgets, QtCore, QtGui
import numpy as np
import sys

print(QtCore.PYQT_VERSION_STR)


class GraphicsView(QtWidgets.QGraphicsView):
    #profile
    def __init__(self, scene, parent):
        super(GraphicsView, self).__init__(scene, parent)
        #Mouse Tracking
        self.setMouseTracking(True)
        #Zoom Anchor
        self.setTransformationAnchor(QtWidgets.QGraphicsView.AnchorUnderMouse)
        self.setResizeAnchor(QtWidgets.QGraphicsView.AnchorUnderMouse)
        #Antialiasing and indexing
        self.setRenderHints(QtGui.QPainter.Antialiasing | QtGui.QPainter.HighQualityAntialiasing | QtGui.QPainter.TextAntialiasing)
        self.setCacheMode(QtWidgets.QGraphicsView.CacheBackground)
        self.resetCachedContent()
        scene.setItemIndexMethod(QtWidgets.QGraphicsScene.NoIndex)
        #Pan variable
        self.pos_init_class = None

    #profile
    def mousePressEvent(self, event):
        pos = self.mapToScene(event.pos())
        #Mouse Pan
        if event.button() == QtCore.Qt.MiddleButton:
            self.pos_init_class = pos
        super(GraphicsView, self).mousePressEvent(event)

    #profile
    def mouseReleaseEvent(self, event):
        if self.pos_init_class and event.button() == QtCore.Qt.MiddleButton:
            #Mouse Pan
            self.pos_init_class = None
        super(GraphicsView, self).mouseReleaseEvent(event)

    #profile
    def mouseMoveEvent(self, event):
        if self.pos_init_class:
            #Mouse Pan
            delta = self.pos_init_class - self.mapToScene(event.pos())
            r = self.mapToScene(self.viewport().rect()).boundingRect()
            self.setSceneRect(r.translated(delta))
        super(GraphicsView, self).mouseMoveEvent(event)

    #profile
    def wheelEvent(self, event):
        #Mouse Zoom
        if event.angleDelta().y() > 0:
            self.scale(1.5, 1.5)
        else:
            self.scale(1 / 1.5, 1 / 1.5)


class Ui_MainWindow(object):
    def __init__(self):
        super(Ui_MainWindow, self).__init__()

    def plt_plot(self):
        #Create data set
        size = 200
        x = np.random.randint(0, high=1000, size=size, dtype=int)
        y = np.random.randint(0, high=1000, size=size, dtype=int)
        ang = np.random.randint(1, high=360, size=size, dtype=int)
        #Store Text in Dict
        dict_Text = {}
        for i in range(len(x)):
            #Create Text Item
            dict_Text[str(i)] = QtWidgets.QGraphicsSimpleTextItem()
            #Set text
            dict_Text[str(i)].setText('nn-mm \nL: 50.6 m \nD: 1500 mm')
            #Set Pos
            dict_Text[str(i)].setPos(x[i], y[i])
            #Set rotation angle
            dict_Text[str(i)].setRotation(ang[i])
            #Add to Scene
            self.graphicsView.scene().addItem(dict_Text[str(i)])

    def setupUi(self, MainWindow):
        #Central Widget
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        MainWindow.setCentralWidget(self.centralwidget)
        main_width, main_heigth = 1200, 800
        MainWindow.resize(main_width, main_heigth)
        #Create GraphicsView and Scene
        self.scene = QtWidgets.QGraphicsScene()
        self.graphicsView = GraphicsView(scene=self.scene, parent=self.centralwidget)
        #Set Geometry
        self.graphicsView.setGeometry(QtCore.QRect(0, 0, main_width, main_heigth))
        #plot dummy data set
        self.plt_plot()


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
While the QGraphicsView framework documentation states that it's pretty fast, that doesn't mean that it's always fast.
The rendering speed depends on many factors, and individual item transformation can drastically decrease the overall performance.
Consider that all items are drawn as individual raster painting operations (the backend is almost entirely Qt's own rendering: its optimization, while normally good, is not perfect).
For each item that has an individual transformation, the painter will need to do the painting based on that transformation.
If you have 200 items, each one with its own transformation, that means a lot of computing.
Note: a transformation is a matrix that transforms the painting (meaning that everything needs special and additional computation); a small illustration follows the list below.
Qt's transformations are pretty standard:
translation
scale
shear
[projection]
rotation (which is done by combining shearing and scaling, hence the complexity)
perspective (which is done by combining projection and scaling)
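As a small illustration (my addition, not part of the original answer) of what "a transformation is a matrix" means in practice, a rotated item carries a full QTransform with non-zero off-diagonal elements, so every painted coordinate needs extra multiplications:
from PyQt5 import QtGui

# Inspect the matrix behind a 30 degree rotation.
t = QtGui.QTransform()
t.rotate(30)
print(t.m11(), t.m12())  # cos(30), sin(30)
print(t.m21(), t.m22())  # -sin(30), cos(30)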
Then you have to add the fact that you're not drawing simple items, but text-based items. Text painting requires a lot of computation, despite all the optimization that Qt and the underlying system provide.
I won't go into depth on how text painting is done, but you have to consider a multitude of aspects; let's just consider a few of them:
each letter is composed of many complex polygons (many of them using bezier curves)
each letter has different sizes and spacings, including per-letter and per letter-pair spacing (kerning)
some fonts have even more advanced features, like ligatures
even simple alignment has to be taken into account, possibly according to the system, widget or even text option layout direction
lots of other things...
Consider this (it doesn't work exactly like this, but it's just for the sake of the example): each of your texts has about 20 drawable characters.
Imagine every character as an individual, newly created instance of QPainterPath, containing lots of lines and Bézier curves (as almost any character does). With 200 items, that's about 4000 individual paths with their own curves, each one created every time they are drawn.
Then you also need to apply a transformation matrix to each of them, due to the rotation (which, as explained before, combines shear and scale).
I need to remark that the above is an over-simplification of how text drawing is done (as Qt also partially relies on the underlying system font rendering).
So, is there a solution?
Well, "not really" and "not always".
First of all, instead of using setSceneRect(), you could get some slight improvement by scrolling the contents of the scene. This is done by setting a (much) bigger sceneRect, hiding the scroll bars with set<Orientation>ScrollBarPolicy(ScrollBarAlwaysOff), and then moving the visible area by applying the delta to the scroll bar values. Moving the scroll bars only causes a repaint of the viewport, while setSceneRect() also requires (recursive) computation of the visible area based on the transformation and the scroll bar sizes.
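For illustration only, here is a minimal sketch of that scroll-bar based panning, adapted to a view like the one in the question; the class name ScrollPanView and the 1e5 scene-rect margin are my own arbitrary choices, not part of the original answer:
from PyQt5 import QtCore, QtWidgets

class ScrollPanView(QtWidgets.QGraphicsView):
    """Hypothetical variant of the question's view that pans via scroll bars."""
    def __init__(self, scene, parent=None):
        super(ScrollPanView, self).__init__(scene, parent)
        self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
        self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
        # Arbitrarily large scene rect; pick something larger than your data extents.
        self.setSceneRect(-1e5, -1e5, 2e5, 2e5)
        self.pan_start = None

    def mousePressEvent(self, event):
        if event.button() == QtCore.Qt.MiddleButton:
            self.pan_start = event.pos()  # viewport (pixel) coordinates
        super(ScrollPanView, self).mousePressEvent(event)

    def mouseMoveEvent(self, event):
        if self.pan_start is not None:
            delta = event.pos() - self.pan_start
            self.pan_start = event.pos()
            # Shifting the scroll bars only triggers a viewport repaint.
            self.horizontalScrollBar().setValue(
                self.horizontalScrollBar().value() - delta.x())
            self.verticalScrollBar().setValue(
                self.verticalScrollBar().value() - delta.y())
        super(ScrollPanView, self).mouseMoveEvent(event)

    def mouseReleaseEvent(self, event):
        if event.button() == QtCore.Qt.MiddleButton:
            self.pan_start = None
        super(ScrollPanView, self).mouseReleaseEvent(event)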
Then, there is the OpenGL alternative, which might improve performance:
In order to accurately and quickly apply transformations and effects to items, Graphics View is built with the assumption that the user's hardware is able to provide reasonable performance for floating point instructions.
[...]
As a result, certain kinds of effects may be slower than expected on certain devices. It may be possible to compensate for this performance hit by making optimizations in other areas; for example, by using OpenGL to render a scene.
See OpenGL Rendering about that, but consider that it does not always guarantee better performance.
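For reference, a minimal, hedged sketch of switching the view to an OpenGL viewport (assuming PyQt5's QtWidgets.QOpenGLWidget is available on your platform; "view" stands for the GraphicsView instance from the question):
from PyQt5 import QtGui, QtWidgets

fmt = QtGui.QSurfaceFormat()
fmt.setSamples(4)  # optional multisampling for smoother edges
gl_widget = QtWidgets.QOpenGLWidget()
gl_widget.setFormat(fmt)
view.setViewport(gl_widget)  # "view" is the GraphicsView instance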
Finally, if you need to show that many individual text items, each one with its own rotation, you must expect that performance will drastically decrease. The only possible alternative is to render those text items as (bigger) images and then use QGraphicsPixmapItem, but in order to get reliable results (bitmap based objects are prone to aliasing when transformed) you'd need to use bigger sizes for each item.
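As a rough sketch of that last idea (my own illustration, not part of the original answer; the helper name make_text_pixmap_item and the 4x oversampling factor are arbitrary), each label could be pre-rendered once into a pixmap and added as a rotated QGraphicsPixmapItem:
from PyQt5 import QtWidgets, QtCore, QtGui

def make_text_pixmap_item(text, pos, angle, oversample=4):
    """Hypothetical helper: render the text into a pixmap once and wrap it in a
    QGraphicsPixmapItem, oversampled to reduce aliasing when rotated."""
    font = QtGui.QFont()
    metrics = QtGui.QFontMetrics(font)
    rect = metrics.boundingRect(QtCore.QRect(0, 0, 1000, 1000), 0, text)
    pixmap = QtGui.QPixmap(rect.size() * oversample)
    pixmap.fill(QtCore.Qt.transparent)
    painter = QtGui.QPainter(pixmap)
    painter.setRenderHint(QtGui.QPainter.TextAntialiasing)
    painter.scale(oversample, oversample)
    painter.setFont(font)
    painter.drawText(rect, 0, text)
    painter.end()
    item = QtWidgets.QGraphicsPixmapItem(pixmap)
    item.setTransformationMode(QtCore.Qt.SmoothTransformation)
    item.setScale(1.0 / oversample)  # compensate for the oversampling
    item.setPos(float(pos[0]), float(pos[1]))
    item.setRotation(angle)
    return item

# Usage, replacing the QGraphicsSimpleTextItem creation in plt_plot():
# item = make_text_pixmap_item('nn-mm \nL: 50.6 m \nD: 1500 mm', (x[i], y[i]), ang[i])
# self.graphicsView.scene().addItem(item)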
Related
I'm new to OpenCV and I'm trying to detect a person through cv2.findContours with a morphological transformation of the video. Here is the code snippet:
import numpy as np
import imutils
import cv2 as cv
import time

cap = cv.VideoCapture(0)
avg = None  # running average frame; presumably initialized in the omitted part of the original script
while cap.isOpened():
    ret, frame = cap.read()
    #frame = imutils.resize(frame, width=700,height=100)
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    gray = cv.GaussianBlur(gray, (21, 21), 0)
    if avg is None:
        avg = gray.astype("float")
    cv.accumulateWeighted(gray, avg, 0.5)
    mask2 = cv.absdiff(gray, cv.convertScaleAbs(avg))
    mask = cv.absdiff(gray, cv.convertScaleAbs(avg))
    contours0, hierarchy = cv.findContours(mask2, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    for cnt in contours0:
        ...
The rest of the code has the logic of a contour passing a line and incrementing the count.
The problem I'm encountering is that cv.findContours detects every movement/change in the frame (not just the person). What I want is for cv.findContours to detect only the person and not any other movement. I know that person detection can be achieved through a Haar cascade, but is there any way I can implement detection using cv2.findContours?
If not, is there a way I can still do the morphological transformation and detect people? The project I'm working on requires filtering out noise and much of the background to detect the person and increment its count on passing the line.
I will show you two options to do this.
The first is the method I mentioned in the comments, which uses Yolo to detect humans:
Use saliency to detect the standout parts of the video
Apply K-Means Clustering to cluster the objects into individual clusters.
Apply background subtraction and erosion or dilation (or both; it depends on the video, so try them all and see which does the best job).
Crop the objects
Send the cropped objects to Yolo
If the class name is pedestrian or human, draw the bounding boxes on them (a partial sketch of steps 3 and 4 follows this list).
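A hedged, partial sketch of steps 3 and 4 of the first option (my illustration, not the answerer's code; the kernel size and the 500-pixel area threshold are arbitrary choices, and the Yolo call itself is omitted):
import cv2 as cv
import numpy as np

cap = cv.VideoCapture(0)
backsub = cv.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = np.ones((5, 5), np.uint8)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = backsub.apply(frame)                      # background subtraction
    fgmask = cv.erode(fgmask, kernel, iterations=1)    # then erosion/dilation
    fgmask = cv.dilate(fgmask, kernel, iterations=2)
    contours, hierarchy = cv.findContours(fgmask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    crops = []
    for cnt in contours:
        if cv.contourArea(cnt) > 500:                  # ignore small blobs/noise
            x, y, w, h = cv.boundingRect(cnt)
            crops.append(frame[y:y + h, x:x + w])      # these crops would be sent to Yolo
cap.release()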
Using OpenCV's builtin pedestrian detection, which is much easier:
Convert frames to black and white
Use pedestrian_cascade.detectMultiScale() on the grey frames.
Draw a bounding box over each pedestrian
The second method is much simpler (a hedged sketch of it is shown below), but it depends on what is expected of you for this project.
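For the second option, a minimal sketch (the cascade file haarcascade_fullbody.xml ships with OpenCV, but the exact path and the detectMultiScale parameters here are assumptions you should tune):
import cv2 as cv

pedestrian_cascade = cv.CascadeClassifier(
    cv.data.haarcascades + 'haarcascade_fullbody.xml')
cap = cv.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    grey = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)   # black and white frame
    pedestrians = pedestrian_cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in pedestrians:               # bounding box per pedestrian
        cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv.imshow('pedestrians', frame)
    if cv.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv.destroyAllWindows()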
MDAnalysis distance selection commands like 'around' and 'sphzone' select atoms from the periodic image (I am using a rectangular box).
universe.select_atoms("name OW and around 4 (resid 20 and name O2)")
However, the coordinates of the atoms from the PBC box reside on the other side of the box. In other words, I have to manually translate the atoms to ensure that they actually are within the 4 Angstrom distance.
Is there a selection feature to achieve this using the select_atoms function?
If I understand correctly, you would like to get the atoms around a given selection, taken from the periodic image that is closest to that selection.
universe.select_atoms does not modify the coordinates, and I am not aware of a function that gives you what you want. The following function could work for an orthorhombic box like yours:
import numpy

def pack_around(atom_group, center):
    """
    Translate atoms to their periodic image closest to a given point.

    The function assumes that the center is in the main periodic image.
    """
    # Get the box for the current frame
    box = atom_group.universe.dimensions
    # The next steps assume that all the atoms are in the same
    # periodic image, so let's make sure it is the case
    atom_group.pack_into_box()
    # AtomGroup.positions is a property rather than a simple attribute.
    # It does not always propagate changes very well so let's work with
    # a copy of the coordinates for now.
    positions = atom_group.positions.copy()
    # Identify the *coordinates* to translate.
    sub = positions - center
    culprits = numpy.where(numpy.sqrt(sub**2) > box[:3] / 2)
    # Actually translate the coordinates (using the box fetched above
    # rather than a global universe object).
    positions[culprits] -= (box[culprits[1]]
                            * numpy.sign(sub[culprits]))
    # Propagate the new coordinates.
    atom_group.positions = positions
Using that function, I got the expected behavior on one of the MDAnalysis test files. You need MDAnalysisTests to be installed to run the following piece of code:
import numpy
import MDAnalysis as mda
from MDAnalysisTests.datafiles import PDB_sub_sol
u = mda.Universe(PDB_sub_sol)
selection = u.select_atoms('around 15 resid 32')
center = u.select_atoms('resid 32').center_of_mass()
# Save the initial file for later comparison
u.atoms.write('original.pdb')
selection.write('selection_original.pdb')
# Translate the coordinates
pack_around(selection, center)
# Save the new coordinates
u.atoms.write('modified.pdb')
selection.write('selection_modified.pdb')
I'm looking for a more efficient way to draw continuous lines in PsychoPy. This is what I've come up with, for now...
edit: the only improvement I could think of is to add a new line only if the mouse has really moved, by adding if (mspos1-mspos2).any():
ms = event.Mouse(myWin)
lines = []
mspos1 = ms.getPos()
while True:
    mspos2 = ms.getPos()
    if (mspos1 - mspos2).any():
        lines.append(visual.Line(myWin, start=mspos1, end=mspos2))
    for j in lines:
        j.draw()
    myWin.flip()
    mspos1 = mspos2
edit: I tried it with ShapeStim (code below), hoping that it would work better, but it gets edgy even more quickly...
vertices = [ms.getPos()]
con_line = visual.ShapeStim(myWin,
                            lineColor='red',
                            closeShape=False)
myclock.reset()
i = 0
while myclock.getTime() < 15:
    new_pos = ms.getPos()
    if (vertices[i] - new_pos).any():
        vertices.append(new_pos)
        i += 1
        con_line.vertices = vertices
    con_line.draw()
    myWin.flip()
The problem is that it becomes too resource demanding to draw that many visual.Line stimuli, or to manipulate that many vertices in the visual.ShapeStim, on each iteration of the loop. So the draw (for Lines) or the vertex assignment (for ShapeStim) will hang long enough that the mouse has moved a noticeable distance in the meantime, and the line shows discontinuities ("edgy").
So it's a performance issue. Here are two ideas:
Have a lower threshold for the minimum distance travelled by the mouse before you add a new coordinate to the line. In the example below I impose the criterion that the mouse position should be at least 10 pixels away from the previous vertex to be recorded. In my testing, this compressed the number of vertices recorded per second to about a third. This strategy alone will postpone the performance issue but not prevent it, so on to...
Use the ShapeStim solution but regularly switch to new ShapeStims, each with fewer vertices, so that the stimulus to be updated isn't too complex. In the example below I set the limit at 500 vertices before shifting to a new stimulus. There might be a small glitch while generating the new stimulus, but nothing I've noticed.
So combining these two strategies, starting and ending mouse drawing with a press on the keyboard:
# Setting things up
from psychopy import visual, event, core
import numpy as np

# The crucial controls for performance. Adjust to your system/liking.
distance_to_record = 10  # number of pixels between coordinate recordings
screenshot_interval = 500  # number of coordinate recordings before shifting to a new ShapeStim

# Stimuli
myWin = visual.Window(units='pix')
ms = event.Mouse()
myclock = core.Clock()

# The initial ShapeStim in the "stimuli" list. We can refer to the latest
# as stimuli[-1] and will do that throughout the script. The others are
# "finished" and will only be used for draw.
stimuli = [visual.ShapeStim(myWin,
                            lineColor='white',
                            closeShape=False,
                            vertices=np.empty((0, 2)))]

# Wait for a key, then start with this mouse position
event.waitKeys()
stimuli[-1].vertices = np.array([ms.getPos()])
myclock.reset()
while not event.getKeys():
    # Get mouse position
    new_pos = ms.getPos()

    # Calculate the distance moved since the last recorded vertex. Pure Pythagoras.
    # Index -1 is the last row.
    distance_moved = np.sqrt((stimuli[-1].vertices[-1][0] - new_pos[0])**2 +
                             (stimuli[-1].vertices[-1][1] - new_pos[1])**2)

    # If the mouse has moved the minimum required distance, add the new vertex to the ShapeStim.
    if distance_moved > distance_to_record:
        stimuli[-1].vertices = np.append(stimuli[-1].vertices, np.array([new_pos]), axis=0)

    # ... and show it (along with any "full" ShapeStims)
    for stim in stimuli:
        stim.draw()
    myWin.flip()

    # Add a new ShapeStim once the old one is too full
    if len(stimuli[-1].vertices) > screenshot_interval:
        print("new shapestim now!")
        stimuli.append(visual.ShapeStim(myWin,
                                        lineColor='white',
                                        closeShape=False,
                                        vertices=[stimuli[-1].vertices[-1]]))  # start from the last vertex
I have images (4000x2000 pixels) that are derived from the same image, but with subtle differences in less than 1% of the pixels. I'd like to plot the two images side by side and highlight the regions of the arrays that are different (by highlight I mean I want the pixels that differ to jump out, but still display the color that matches their value). I've been using unfilled rectangles to outline the edges of such pixels so far. I can do this very nicely in small images (~50x50) with:
fig = figure(figsize=(20, 15))
ax1 = fig.add_subplot(1, 2, 1)
imshow(image1, interpolation='nearest', origin='lower left')
colorbar()
ax2 = fig.add_subplot(122, sharex=ax1, sharey=ax1)
imshow(image2, interpolation='nearest', origin='lower left')
colorbar()
# now show differences
Xspots = im1 != im2
Xx, Xy = nonzero(Xspots)
for x, y in zip(Xx, Xy):
    rect = Rectangle((y - .5, x - .5), 1, 1, color='w', fill=False, ec='w')
    ax1.add_patch(rect)
    ax2.add_patch(rect)
However, this doesn't work so well when the image is very large. Strange things happen; for example, when I zoom in, the patches disappear. Also, this way sucks because it takes forever to redraw things when I zoom in/out.
I feel like there must be a better way to do this, maybe one where there is only one patch that determines where all of the highlighted pixels are, rather than a whole bunch of patches. I could do a scatter plot on top of the imshow image, but I don't know how to fix the markers so that the points stay exactly the size of a pixel when I zoom in/out.
Any ideas?
I would try something with the alpha channel:
import copy

import numpy as np
import matplotlib
import matplotlib.cm as cm
from matplotlib.pyplot import figure, imshow

N, M = 20, 40
test_data = np.random.rand(N, M)
mark_mask = np.random.rand(N, M) < .01  # mask 1%
# this is redundant in this case, but in general you need it
my_norm = matplotlib.colors.Normalize(vmin=0, vmax=1)
# grab a copy of the color map
my_cmap = copy.copy(cm.get_cmap('cubehelix'))
c_data = my_cmap(my_norm(test_data))
c_data[:, :, 3] = .5  # make everything half alpha
c_data[mark_mask, 3] = 1  # reset the marked pixels to full opacity
# plot it
figure()
imshow(c_data, interpolation='none')
No idea if this will work with your data or not.
Is there a way to place patterns into selected areas on an imshow graph? To be precise, I need to make it so that, in addition to the numerical-data-carrying colored squares, different patterns in other squares indicate different failure modes of the experiment (and I also need to generate a key explaining the meaning of these patterns). An example of a useful pattern would be various types of crosshatches. I need to be able to do this without disrupting the main color-to-number relationship on the graph.
Due to the back-end I am working within for the GUI containing the graph, I cannot use patches (they fail to pickle and so don't make it from the back-end to the front-end via the multiprocessing package). I was wondering if anyone knew of another way to do this.
grid = np.ma.array(grid, mask=np.isnan(grid))
ax.imshow(grid, interpolation='nearest', aspect='equal', vmax=private.vmax, vmin=private.vmin)
# Up to here works fine and draws the graph showing only the data, with white spaces for any point that failed
if show_fail and faildat != []:
    faildat = faildat[np.lexsort((faildat[:, yind], faildat[:, xind]))]
    fails = []
    for i in range(len(faildat)):  # gives coordinates with failures as (x,y)
        fails.append((faildat[i, 1], faildat[i, 0]))
    for F in fails:
        ax.FUNCTION NEEDED HERE
ax.minorticks_off()
ax.set_xticks(range(len(placex)))
ax.set_yticks(range(len(placey)))
ax.set_xticklabels(placex)
ax.set_yticklabels(placey, rotation=0)
ax.colorbar()
ax.show()