Pygame with strange variable issues - a couple of variables are increasing for an unknown reason

I am having a really strange problem with Pygame, and it's got me stumped for the last few (OK, more like 5) hours. There are free programs out there for making photo mosaics, but ever since my early days tinkering with VB5, I've wanted to write my own version. You know how that is. I have all kinds of cool parts written for loading source images, finding color averages and everything. But here I'm stuck and confused. So very stuck and confused.
This part of the program converts a 'target image' (the one that will be made up of small source images) into smaller blocks of color that source images will later try to match and eventually replace. But for some reason, the size of the blocks keeps increasing with every iteration. I've tried so many different things that I've had to go through the script, delete a bunch of code, and add a couple more comments before posting.
The target image is a 1280x800 random Google image, but any other picture should behave the same. Watch as the Y size of the blit increases with every block going down, and the X size increases as new rows are made. I hard-coded a set size for the solid color rectangle (2 pixels across, much smaller than I'll use), but for whatever reason this keeps increasing too. The first row of blits is so small right now that it's hard to see; that quickly changes.
Here's the link to the image I'm using (http://www.travelimg.org/wallpapers/2012/01/iceland-golden-falls-druffix-europe-golden-falls-golden-falls-iceland-natur-waterfall-waterfalls-800x1280.jpg), but any other pic/size renamed to target.jpg should do the same.
If anyone can point me in the right direction it would be much appreciated. I want to cover the whole source pic in nice 12x12 blocks of solid color to start with. I can't figure out what is changing these block sizes as it goes.
import pygame
import os
from time import sleep

okformats = ['png','jpg','bmp','pcx','tif','lbm','pbm','pgm','ppm','xpm']
targetimg = 'C:\\Python27\\mosaic\\target.jpg'
if targetimg[-3:] not in okformats:
    print 'That format is unsupported, get ready for some errors...'
else:
    print 'Loading...'

pygame.init()
screen = pygame.display.set_mode((100,100)) #picked a size just to start it out
clock = pygame.time.Clock() #possibly not needed in this script

targetpic = pygame.image.load(targetimg).convert()
targetrect = targetpic.get_rect() #returns something like [0,0,1280,800]
targetsize = targetrect[2:]
targetw = targetrect[2]
targeth = targetrect[3]

numpicsx = 100 #number of pictures that make up the width
sourceratio = 1 #testing with square pics for now
picxsize = targetw/numpicsx
numpicsy = targeth/(picxsize*sourceratio)
picysize = targeth/numpicsy

print 'Blitting target image'
screen = pygame.display.set_mode(targetsize)
screen.fill((255,255,255)) #set to white in case of transparency
screen.blit(targetpic,(0,0))

#update screen
pygame.display.update()
pygame.display.flip()
clock.tick(30)

SLOWDOWN = .1 #temp slow down to watch it

print numpicsx #here are some print statements just to show all the starting values are correct
print numpicsy
print '---'
print picxsize
print picysize
sleep(1)

for x in xrange(numpicsx):
    for y in xrange(numpicsy):
        currentrect = [x*picxsize,y*picysize,x*picxsize+picxsize,y*picysize+picysize]
        avgc = pygame.transform.average_color((targetpic), currentrect) #average color
        avgc = avgc[:3] #drops out the alpha if there is one
        pygame.draw.rect(screen, avgc, currentrect)
        #pygame.draw.rect(screen, avgc, (currentrect[0],currentrect[1],currentrect[0]+2,currentrect[1]+2)) #hard coded 2s (rather than 12s in this case) to help pin point the problem
        pygame.display.update()
        pygame.display.flip()
        clock.tick(30) #probably not needed
        sleep(SLOWDOWN)

print 'Done.\nSleeping then quitting...'
sleep(3)
pygame.quit()

A friend of mine took a look at my code and showed me the problem. I was thinking that the rect format for drawing was (x1,y1,x2,y2), but it's actually (x,y,width,height). This is the new line:
currentrect = [x*picxsize,y*picysize,picxsize,picysize]
I also dropped the clock.tick(30) lines to speed it all up.
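To make the (x, y, width, height) convention explicit, here's a minimal sketch of the fixed inner loop. It reuses the same variables as the script above; pulling the display update out of the inner loop is only an optional speed-up, not part of the fix:
for x in xrange(numpicsx):
    for y in xrange(numpicsy):
        # a Rect is (left, top, width, height), so the block size never changes
        currentrect = pygame.Rect(x*picxsize, y*picysize, picxsize, picysize)
        avgc = pygame.transform.average_color(targetpic, currentrect)[:3]
        pygame.draw.rect(screen, avgc, currentrect)
    pygame.display.update()  # refresh once per column instead of once per block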

Related

VTK cutter output

I am looking for a way to connect all the lines that have the same slope and share a common point. For example, after I load an STL file and cut it with a plane, the cutter output includes the points defining the contour. Connecting them one by one forms one (or multiple) polylines. However, some lines can be merged when their slopes are the same and they share a common point; e.g., [[0,0,0],[0,0,1]] and [[0,0,1],[0,0,2]] can be represented by the single line [[0,0,0],[0,0,2]].
I wrote a function that analyses all the lines and connects them if they can be merged. But when the number of lines is huge, this process is slow. I am wondering: within the VTK pipeline, is there a way to do the line merging?
Cheers!
plane = vtk.vtkPlane()
plane.SetOrigin([0,0,5])
plane.SetNormal([0,0,1])
cutter = vtk.vtkCutter()
cutter.SetCutFunction(plane)
cutter.SetInput(triangleFilter.GetOutput())
cutter.Update()
cutStrips = vtk.vtkStripper()
cutStrips.SetInputConnection(cutter.GetOutputPort())
cutStrips.Update()
cleanDataFilter = vtk.vtkCleanPolyData()
cleanDataFilter.AddInput(cutStrips.GetOutput())
cleanDataFilter.Update()
cleanData = cleanDataFilter.GetOutput()
print cleanData.GetPoint(0)
print cleanData.GetPoint(1)
print cleanData.GetPoint(2)
print cleanData.GetPoint(3)
print cleanData.GetPoint(4)
The output is:
(0.0, 0.0, 5.0)
(5.0, 0.0, 5.0)
(10.0, 0.0, 5.0)
(10.0, 5.0, 5.0)
(10.0, 10.0, 5.0)
Connecting the above points one by one forms a polyline representing the cut result. As we can see, the lines [point0, point1] and [point1, point2] can be merged.
Below is the code for merging the lines:
Assume that LINES is represented by the list: [[(p0),(p1)],[(p1),(p2)],[(p2),(p3)],...]
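With the cutter output above, LINES would presumably start out something like this (first few entries only):
LINES = [[(0.0, 0.0, 5.0), (5.0, 0.0, 5.0)],
         [(5.0, 0.0, 5.0), (10.0, 0.0, 5.0)],
         [(10.0, 0.0, 5.0), (10.0, 5.0, 5.0)],
         ...]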
appended = 0
CurrentLine = LINES[0]
CurrentConnectedLine = CurrentLine
tempLineCollection = LINES[1:len(LINES)]
while True:
    for HL in tempLineCollection:
        QCoreApplication.processEvents()
        if checkParallelAndConnect(CurrentConnectedLine, HL):
            appended = 1
            LINES.remove(HL)
            CurrentConnectedLine = ConnectLines(CurrentConnectedLine, HL)
    processedPool.append(CurrentConnectedLine)
    if len(tempLineCollection) == 1:
        processedPool.append(tempLineCollection[0])
    LINES.remove(CurrentLine)
    if len(LINES) >= 2:
        CurrentLine = LINES[0]
        CurrentConnectedLine = CurrentLine
        tempLineCollection = LINES[1:len(LINES)]
        appended = 0
    else:
        break
Solution:
I figured out a way of further accelerating this process using a VTK data structure. I found out that a polyline is stored in a cell, which can be checked with GetCellType(). Since the point order of a polyline is already sorted, we do not need to search globally for which lines are colinear with the current one. For each point on the polyline, I just need to check point[i-1], point[i], point[i+1], and if they are colinear, the end of the current line is updated to the next point. This continues until the end of the polyline is reached. The speed increases by a huge amount compared with the global search approach.
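Roughly, that per-polyline check looks like the sketch below (not the exact code used; it assumes the ordered points of one polyline cell have already been pulled into a list, and uses numpy only for the cross product):
import numpy as np

def merge_colinear(points, tol=1e-9):
    """Collapse runs of colinear points in an ordered polyline.
    points: ordered list of (x, y, z) tuples from one polyline cell."""
    if len(points) < 3:
        return list(points)
    merged = [points[0]]
    for i in range(1, len(points) - 1):
        prev = np.array(merged[-1])
        cur = np.array(points[i])
        nxt = np.array(points[i + 1])
        # keep the middle point only if the two segments are not parallel
        if np.linalg.norm(np.cross(cur - prev, nxt - cur)) > tol:
            merged.append(points[i])
    merged.append(points[-1])
    return merged

For the sample output above, this reduces the five points to [(0.0, 0.0, 5.0), (10.0, 0.0, 5.0), (10.0, 10.0, 5.0)].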
Not sure if it is the main source of slowness (it depends on how many positive colinearity hits you get), but removing items from a vector is costly (O(n)), since it requires reorganizing the rest of the vector, so you should avoid it. Even without colinearity hits, the LINES.remove(CurrentLine) call is surely slowing things down, and there isn't really any need for it: just leave the vector untouched, write the final results to a new vector (processedPool), and get rid of the LINES vector at the end. You can modify your algorithm by making a bool array (vector) initialized to "false" for each item; then, instead of actually removing a line, you only mark it as "true" and skip all lines already marked "true", i.e. something like this (I don't speak Python so the syntax is not accurate):
wasRemoved = bool vector of the size of LINES initialized at false for each entry
for CurrentLineIndex = 0; CurrentLineIndex < sizeof(LINES); CurrentLineIndex++:
    if (wasRemoved[CurrentLineIndex])
        continue  // skip a segment that was already removed
    CurrentConnectedLine = LINES[CurrentLineIndex]
    for HLIndex = CurrentLineIndex + 1; HLIndex < sizeof(LINES); HLIndex++:
        if (wasRemoved[HLIndex])
            continue
        HL = LINES[HLIndex]
        QCoreApplication.processEvents()
        if checkParallelAndConnect(CurrentConnectedLine, HL):
            wasRemoved[HLIndex] = true
            CurrentConnectedLine = ConnectLines(CurrentConnectedLine, HL)
    processedPool.append(CurrentConnectedLine)
    wasRemoved[CurrentLineIndex] = true  // technically not needed since you won't go back in the vector anyway
LINES = processedPool
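In actual Python, the same marking idea looks roughly like this (a sketch only; LINES, checkParallelAndConnect and ConnectLines are the question's own names and are assumed to behave as described there):
was_removed = [False] * len(LINES)
processedPool = []
for i in range(len(LINES)):
    if was_removed[i]:
        continue                      # already merged into an earlier line
    current = LINES[i]
    for j in range(i + 1, len(LINES)):
        if was_removed[j]:
            continue
        if checkParallelAndConnect(current, LINES[j]):
            was_removed[j] = True     # mark instead of calling LINES.remove(...)
            current = ConnectLines(current, LINES[j])
    processedPool.append(current)
LINES = processedPool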
BTW, the really correct data structure for LINES in this kind of algorithm would be a linked list, since then you would have O(1) removal and wouldn't need the boolean array. But a quick googling showed that that's not how lists are implemented in Python, and I don't know whether switching would interfere with other parts of your program. Alternatively, using a set might make it faster (though I would expect times similar to my "bool array" solution); see "python 2.7 set and list remove time complexity".
If this does not do the trick, I suggest you measure times of individual parts of the program to find the bottleneck.

Maya: Having trouble writing a script to cut a mesh into equal pieces

I want to split a mesh into sections based on a number of vertices. Essentially, I want a mesh cut into sections of 300 verts each with a remainder section of whatever is left over.
I've done this for the most part (I can get verts/faces, etc.), but I'm having trouble figuring out a graceful way of iterating through the extracted meshes.
I'm using polyChipOff, which has no return value for the faces it chipped off, so entirely new objects are created that I have no handle to, and I can't just continue chipping away at the previous piece since it no longer exists.
Any advice on how to go about this better?
I've thought of either iterating through all meshes in the scene looking for new ones (cache them at the start) or using a scriptJob to detect new objects being made. Both of those seem very hacky, so I was curious if anyone had advice.
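For what it's worth, the "cache the meshes at the start and diff afterwards" idea doesn't have to be too hacky. A minimal sketch, where do_extract is a hypothetical placeholder for whatever polyChipOff/polySeparate step actually creates the new pieces:
import maya.cmds as cmds

def extract_and_find_new(do_extract):
    """Run an operation that creates new mesh objects and return only
    the meshes it created. do_extract is a placeholder callable."""
    before = set(cmds.ls(type='mesh', long=True))   # cache the existing shapes
    do_extract()                                    # run the chip-off / separate step
    after = set(cmds.ls(type='mesh', long=True))
    new_shapes = list(after - before)               # only the newly created pieces
    # return their transforms so the new pieces are easy to keep working on
    return cmds.listRelatives(new_shapes, parent=True, fullPath=True) if new_shapes else []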
You can try this method:
import maya.cmds as cmds

# Run this with the faces you want to split off selected.
selectedFace = cmds.ls(sl=True)                 # the selected faces
shape = cmds.ls(sl=True, o=True)                # shape node owning them (the original snippet used listRelatives with an undefined variable here)
object = cmds.listRelatives(shape[0], p=True)   # its transform node

# Toggle-select every face of the object, leaving only the faces that
# were NOT in the original selection selected.
cmds.select(object[0] + '.f[:]', tgl=True)
unselectedFace = cmds.ls(sl=True)

# Duplicate the object, strip its construction history, then delete the
# originally selected faces from the original mesh.
duplicated = cmds.duplicate(object, un=True)[0]
cmds.delete(duplicated, ch=True)
cmds.delete(selectedFace)

# Delete the complementary faces from the duplicate and select the result.
for i in range(len(unselectedFace)):
    unselectedFace[i] = unselectedFace[i].replace(object[0], duplicated)
cmds.delete(unselectedFace)
cmds.select(duplicated)

Implementing real time plot with Qt5 charts

I am new to Qt and trying to implement a real-time plot using QSplineSeries with Qt 5.7. I need to scroll the x-axis as new data comes in every 100 ms. It seems the CPU usage reaches 100% if I do not purge the old data appended to the series using graphSeriesX->remove(0). I found two ways of scrolling the x-axis.
const uint8_t X_RANGE_COUNT = 50;
const uint8_t X_RANGE_MAX = X_RANGE_COUNT - 1;

qreal y = (axisX->max() - axisX->min()) / axisX->tickCount();
m_x += y;
if (m_x > axisX->max()) {
    axisX->setMax(m_x);
    axisX->setMin(m_x - 100);
}
if (graphSeries1->count() > X_RANGE_COUNT) {
    graphSeries1->remove(0);
    graphSeries2->remove(0);
    graphSeries3->remove(0);
}
The problem with the above is that m_x is of type qreal, and if I keep the demo running continuously, it will eventually reach its maximum value; the axisX->setMax call will then fail and the plot stops working. What would be the correct way to fix this use case?
qreal x = plotArea().width() / X_RANGE_MAX;
chart->scroll(x, 0);
if (graphSeries1->count() > X_RANGE_COUNT) {
    graphSeries1->remove(0);
    graphSeries2->remove(0);
    graphSeries3->remove(0);
}
However it's not clear to me how I can use the graphSeriesX->remove(0) call in this scenario. The graph keeps getting wiped out: once the series has X_RANGE_COUNT values appended, the if block is always true and removes the 0th value, but the scroll somehow does not behave the way manually setting the x-axis maximum does, and after a while I have no graph. scroll works if I do not call remove, but then my CPU usage reaches 100%.
Can someone point me in the right direction on how to use scroll while also using remove to keep the CPU usage low?
It seems like the best way to update data for a QChart is through void QXYSeries::replace(QVector<QPointF> points). From the documentation, it's much faster than clearing all the data (and don't forget to use a vector instead of a list). The audio example from the documentation does exactly that. Updating the axes with setMin, setMax and setRange all seem to use a lot of CPU. I'll try to see if there's a way around that.
What do you mean by "does not work the way manually setting maximum for x axis works"? The second method you have shown works if you define the x-axis range to be between 0 and X_RANGE_MAX. Is this not what you are after?
Something like: chart->axisX()->setRange(0, X_RANGE_MAX);

libgdx camera position using viewport

I am a rather experienced libgdx developer, but I've been struggling with one issue for some time, so I decided to ask here.
I use FillViewport, TiledMap, Scene2d and an OrthographicCamera. I want the camera to follow my player instance, but there are bounds defined (equal to the map size). The camera should never leave the map, so when the player approaches the edge of the map the camera stops following and the player simply moves toward the edge of the screen. It may sound complicated, but it's simple and I am sure you know what I mean; it's used in every game.
I calculated 4 values:
minCameraX = camera.viewportWidth / 2;
minCameraY = camera.viewportHeight / 2;
maxCameraX = mapSize.x - camera.viewportWidth / 2;
maxCameraY = mapSize.y - camera.viewportHeight / 2;
I removed unnecessary stuff like unit conversion, camera.zoom, etc. Then I set the camera position like this:
camera.position.set(Math.min(maxCameraX, Math.max(posX, minCameraX)), Math.min(maxCameraY, Math.max(posY, minCameraY)), 0);
(posX, posY is the player position.) This basically sets the camera to the player position, but if that is too high or too low it uses the max or min defined above for the corresponding axis. (I also tried MathUtils.clamp() and it works the same.)
Everything is perfect until now. The problem occurs when the aspect ratio changes. By default I use 1280x768, but my phone has 1280x720. Because of the way FillViewport works, the bottom and top edges of the screen are cut off, and with them part of my map.
I tried modifying the maximums and minimums, calculating the differences in ratio and adding them to the calculations, changing the camera size, different viewports and some other stuff, but without success.
Can you guys help?
Thanks
I tried the solutions from noone and Tenfour04 in the comments above. Neither is perfect, but I am satisfied enough, I guess:
noone:
camera.position.x = MathUtils.clamp(camera.position.x, screenWidth/2 + leftGutter, UnitConverter.toBox2dUnits(mapSize.x) - screenWidth/2 + rightGutter);
camera.position.y = MathUtils.clamp(camera.position.y, screenHeight/2 + bottomGutter, UnitConverter.toBox2dUnits(mapSize.y) - screenHeight/2 - topGutter);
It worked, but only for a small range of resolutions. For odd resolutions where the aspect ratio is much different from the default one, I've seen white stripes beyond the border, meaning the whole border was rendered plus some part of the world outside of it. I don't know why.
Tenfour04:
I changed the viewport to ExtendViewport. Nothing is cut off, but at different aspect ratios I can also see the world outside the borders.
The solution for both is to clear the screen with the same color as the border and render the level background separately, which gave a satisfying result in both cases.
It still has some limitations. Since the border is part of the world (tiled blocks), it's fine as long as it is a single color. If the border had different colors, rendering one color outside the borders wouldn't be a solution.
Thanks noone and Tenfour04, and I am still open to suggestions :)
Here are some screenshots:
https://www.dropbox.com/sh/00h947wkzo73zxa/AAADHehAF4WI8aJ8bu4YzB9Va?dl=0
Why don't you use FitViewport instead of FillViewport? That way it won't cut off your screen, right?
It is a little bit late, but I have a solution for you without compromises!
Here width and height are the world size in pixels. I use this code with FillViewport and everything works perfectly!
float playerX = player.getBody().getPosition().x * PPM;
float playerY = player.getBody().getPosition().y * PPM;

float visibleW = viewport.getWorldWidth() / 2 + (float) viewport.getScreenX() / (float) viewport.getScreenWidth() * viewport.getWorldWidth();   // half of the visible world width
float visibleH = viewport.getWorldHeight() / 2 + (float) viewport.getScreenY() / (float) viewport.getScreenHeight() * viewport.getWorldHeight(); // half of the visible world height

float cameraPosx = 0;
float cameraPosy = 0;

if (playerX < visibleW) {
    cameraPosx = visibleW;
} else if (playerX > width - visibleW) {
    cameraPosx = width - visibleW;
} else {
    cameraPosx = playerX;
}

if (playerY < visibleH) {
    cameraPosy = visibleH;
} else if (playerY > height - visibleH) {
    cameraPosy = height - visibleH;
} else {
    cameraPosy = playerY;
}

camera.position.set(cameraPosx, cameraPosy, 0);
camera.update();

DirectX 11.1 Disable the depth buffer

This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and they all combine to form one "scene." Each has a large percentage of transparency with one or two smaller images, and when they are lined up, I should be able to see the full scene. I expected this to work without much issue, but when I went to test it, I see the top-most texture, and anywhere it would have transparency, it is just the clear color.
At first, I thought it was an issue with how I was loading the image and somehow disabling the alpha, but after playing around with the clear color, I realized that there was some transparency.
The second thing I tried was to enable blending - this works if all the textures are combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass, all the same size, lined up in a row. Each pane of glass has one image somewhere on it. When you look through all the glass panes, you get one awesome combined image of all the smaller images. For me, DirectX or the pixel shader is only drawing the first glass pane and filling all the transparency of the first pane with the clear/background color.
Edit: The code I'm using to create the depthstencilview:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code is literally right above my depth test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming that this is what creates the depth buffer.
I think you might need to sort the order in which you render your vertices if you want to render semi-transparencies with a depth buffer. If you don't want to use a depth buffer - perhaps just don't define/create/set it?