PyQt5: set coordinates for items in graphics scene

I have a scene = QGraphicsScene() and I added an ellipse via scene.addEllipse(100, 100, 10, 10, greenPen, greenBrush). The pen and brush are set beforehand. Right after that I add the QGraphicsScene to a QGraphicsView with MyGraphicsView.setScene(scene). All of this works, except that the ellipse always ends up in the center of the view. The first two parameters of addEllipse() should be the coordinates (in this case 100, 100), but no matter what I put there, the ellipse stays in the center. Any ideas?
EDIT: I have now added 3 ellipses like this (the one from the description is deleted):
scene.addEllipse(10, 10, 10, 10, greenPen, greenBrush)
scene.addEllipse(-100, -10, 30, 30, bluePen, blueBrush)
scene.addEllipse(-100, -100, 60, 60, bluePen, blueBrush)
and my result is this:
So clearly the coordinates do something, but I still don't get what exactly. Do I have to set an origin for the scene?
And if I do this:
particleList = scene.items()
print(particleList[0].x())
print(particleList[1].x())
print(particleList[2].x())
I get:
0.0
0.0
0.0
At this point I'm totally confused and I'd really appreciate some help.

An important thing that must always be kept in mind is that the position of a QGraphicsItem does not reflect its "top left" coordinates.
In fact, you can have a QGraphicsRectItem whose QRectF is positioned at (100, 100) while the item's position is at (50, 50). This means that the rectangle will be shown at (150, 150): the coordinates of the shape are relative to the position of the item.
All add[Shape]() functions of QGraphicsScene have this important note in their documentation:
Note that the item's geometry is provided in item coordinates, and its position is initialized to (0, 0).
Even if you create a QGraphicsEllipseItem with coordinates (-100, -100), it will still be positioned at (0, 0), because the values given to addEllipse() (as with all the other add[Shape]() functions) only describe the coordinates of the shape, not the position of the item.
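A minimal sketch of the difference (pen and brush omitted, so Qt's defaults are used):

from PyQt5.QtWidgets import QApplication, QGraphicsScene

app = QApplication([])
scene = QGraphicsScene()

# The shape is drawn at (100, 100) in *item* coordinates...
item = scene.addEllipse(100, 100, 10, 10)
print(item.rect())  # PyQt5.QtCore.QRectF(100.0, 100.0, 10.0, 10.0)

# ...but the item's *position* in the scene is still (0, 0).
print(item.pos())   # PyQt5.QtCore.QPointF(0.0, 0.0)

# Moving the item changes pos(); the shape keeps its item coordinates,
# so the ellipse is now rendered at (150, 150) in scene coordinates.
item.setPos(50, 50)
print(item.pos())   # PyQt5.QtCore.QPointF(50.0, 50.0)

This is also why the x() calls in the question all print 0.0: x() queries the item's position, which addEllipse() leaves at (0, 0).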
Then, when a QGraphicsScene is created, its sceneRect() is not explicitly set, and by default it corresponds to the bounding rectangle of all items. When the scene is added to a view, the view automatically positions the scene according to the alignment(), which defaults to Qt.AlignCenter:
If the whole scene is visible in the view, (i.e., there are no visible scroll bars,) the view's alignment will decide where the scene will be rendered in the view. For example, if the alignment is Qt::AlignCenter, which is default, the scene will be centered in the view, and if the alignment is (Qt::AlignLeft | Qt::AlignTop), the scene will be rendered in the top-left corner of the view.
This also means that if you have items at negative coordinates, or items whose shapes extend into negative coordinates, the view will still show the scene centered on the bounding rect of all items.
So you set either the scene's sceneRect or the view's sceneRect, depending on your needs. If the view's sceneRect is not set, it defaults to the scene's sceneRect.
If you want to display the items according to their position while also ensuring that negative coordinates are correctly "outside" the center, you must decide the size of the visible sceneRect and set it accordingly:
boundingRect = scene.itemsBoundingRect()
scene.setSceneRect(0, 0, boundingRect.right(), boundingRect.bottom())
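Putting it together, a minimal self-contained sketch (the window size and the alignment call are illustrative choices, not requirements):

import sys
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView

app = QApplication(sys.argv)
scene = QGraphicsScene()
scene.addEllipse(10, 10, 10, 10)
scene.addEllipse(-100, -100, 60, 60)

# Anchor the visible scene rect at (0, 0) so shapes at negative
# coordinates land above and to the left of the origin, instead of
# the whole bounding rect being recentered by the view.
boundingRect = scene.itemsBoundingRect()
scene.setSceneRect(0, 0, boundingRect.right(), boundingRect.bottom())

view = QGraphicsView(scene)
view.setAlignment(Qt.AlignLeft | Qt.AlignTop)
view.resize(400, 300)
view.show()
sys.exit(app.exec_())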

Related

SCNNode getBoundingBoxMin

I have a node that I'm extracting from an SCNScene. I've got some information about it, but I'm confused about one thing - how the bounding box is calculated. The node is positioned once it is loaded at vector 0,0,0 using:
[myNode setPosition: SCNVector3Make(0, 0, 0)];
However, the bounding box still reports a min.x of -1.
How can this be if I've just positioned it at 0? Additionally, it doesn't matter what vector values I give: it always has a min.x of -1, despite the node actually moving around the screen as expected.
It's hard to answer definitively without more information about what's in your scene and what you're doing with it, but it's probably one or both of these issues:
The bounding box tells you the extent of the node's content, not its position — e.g. if you have a cube 2 units wide positioned at (0, 0, 0), its bounding box min and max corners will be (-1, -1, -1) and (1, 1, 1).
Depending on what tricks you're using to move your node around, the node itself may not be changing position — SCNActions, CAAnimations, and physics all work on a separate copy of the node hierarchy. (Partly this is to support implicit animation — e.g. set the position of a node, and it'll appear to animate from the current position to the position you set, but you can still read back position and have it be the value you set it to.)
You get to this separate hierarchy with the node's presentationNode property — ask the presentation node for its bounding box and you'll see its position/extent as affected by actions/animations/physics and currently rendered.
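A framework-agnostic sketch of the first point (plain geometry, not SceneKit API): the box is computed from the node's own geometry, so it does not move with the node's position.

# A bounding box describes the extent of the geometry in the node's
# *local* space, so it ignores where the node is placed.
def bounding_box(vertices):
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# A cube 2 units wide, modelled around its own origin.
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

position = (5, 0, 0)  # moving the node changes where it is drawn...
print(bounding_box(cube))  # ...but min stays (-1, -1, -1) either way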

How to change the anchor point from the top-left corner of a transformation matrix to the bottom-left corner?

Say, I have an image on an HTML page.
I apply an affine transformation to the image using CSS3 matrix function.
It looks like:
img#myimage {
  transform: matrix(a, b, c, d, tx, ty);
  /* use -webkit-transform, -moz-transform etc. */
}
The origin of an HTML page is the top-left corner and the y-axis is inverted.
I'm trying to put the same image in an environment (cocos2d) where the origin is the bottom-left corner and the y-axis is upright.
To get the same result in the other environment, I need to transform the origin somehow and reflect that in the resulting CGAffineTransform.
It would be great if I can get some help with the matrix math that goes here. (I'm not so good with matrices.)
The following formula works for converting a y position from CSS3 to cocos2d:
cocos2dY = screenHeight - cssY - objectHeight
Explanation:
To give the cocos2d environment the same origin as the CSS3 environment, we only have to add the screen height to the y coordinate of the cocos2d body.
E.g. the screen size is (100, 100) and the body is a point object. If you place it at (0, 0) in CSS3, it sits at the top-left corner. If we add the screen height to the y coordinate for cocos2d, the object is placed at (0, 100), which is the top-left corner for cocos2d as well.
Since the y-axis is inverted, to make the coordinates match we have to subtract the y coordinate given in CSS3 from the screen height. Suppose we place the same point object from the previous example at (0, 10) in CSS3; we would place it at (0, 100 - 10) in cocos2d, which is the same position on the screen.
Since our body will NOT always be a point object, we also have to take care of its anchor point. Suppose the body's height is 20 and we place it at (0, 10) in CSS3: it is placed with its top edge at that position and extends downwards, because the y-axis is inverted.
Hence we also have to subtract the body's total height, placing it at (0, 100 - 10 - 20), which puts the body at the same place in the cocos2d environment.
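The formula is small enough to transcribe directly (the names are illustrative, not from any framework):

# Convert a CSS3 y coordinate (origin top-left, y pointing down) to a
# cocos2d y coordinate (origin bottom-left, y pointing up).
def css_to_cocos_y(screen_height, css_y, object_height):
    return screen_height - css_y - object_height

# The worked example from the text: screen 100 high, body 20 high,
# placed at y = 10 in CSS3.
print(css_to_cocos_y(100, 10, 20))  # 70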
I hope I am correct and clear :)

How to set MKPinAnnotation so that bottom of image points to location instead of center of image

How can I move the pin image so that the bottom of the image points to the location (like the default pin)? Currently the center of the image points to that location (San Francisco).
You'll need to calculate the number of pixels to offset and then set the centerOffset property.
By default, the center point of an annotation view is placed at the coordinate point of the associated annotation. You can use this property to reposition the annotation view as needed. The x and y offset values are measured in pixels. Positive offset values move the annotation view down and to the right, while negative values move it up and to the left.
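As a sketch of the calculation (the image height here is an assumed value, not taken from the question): for the bottom edge of the image to touch the coordinate, the view's center must sit half the image height above it, so the y offset is negative.

# Hypothetical pin image 40 points tall; a negative y offset moves
# the annotation view up, putting its bottom edge on the coordinate.
image_height = 40
center_offset = (0, -image_height / 2)
print(center_offset)  # (0, -20.0)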

Visualizing the Anchor Point of a UIImageView

Is there an easy way of putting a mark (like a cross, for example) on the anchor point of a UIImageView? I'm trying to line up several rotating images by their anchor points, and being able to see these points would make the job a lot easier.
Many thanks.
You are asking how to visualize the anchor point within a view, but it seems to me that you are really asking so that you can align the anchor points. I'll try to answer both questions.
Visualizing the anchor point.
Every view on iOS has an underlying layer, and that layer has an anchor point. The anchor point is in the unit coordinate space of the layer (x and y go from 0 to 1). This means that you can multiply x by the width and y by the height to get the position of the anchor point inside the layer, in the coordinate space of the view/layer. You can then place a subview/sublayer there to show the location of the anchor point.
In code you could do something like this to display a small black dot where the anchor point is.
CALayer *anchorPointLayer = [CALayer layer];
anchorPointLayer.backgroundColor = [UIColor blackColor].CGColor;
anchorPointLayer.bounds = CGRectMake(0, 0, 6, 6);
anchorPointLayer.cornerRadius = 3;
CGPoint anchor = viewWithVisibleAnchorPoint.layer.anchorPoint;
CGSize size = viewWithVisibleAnchorPoint.layer.bounds.size;
anchorPointLayer.position = CGPointMake(anchor.x * size.width,
                                        anchor.y * size.height);
[viewWithVisibleAnchorPoint.layer addSublayer:anchorPointLayer];
You can see the result in the image below for four different rotations.
Aligning layers by their anchor point
That is cool and all, but it's actually easier than that to align anchor points.
The key trick is that the position and the anchorPoint are always the same point, only expressed in two different coordinate spaces. The position is specified in the coordinate space of the superlayer. The anchor point is specified in the unit coordinate space of the layer.
The nice thing about this is that views that have their position property aligned will automatically have their anchorPoint aligned. Note that the content is drawn relative to the anchor point. Below is an example of a bunch of views that all have the same y component of their position, thus they are aligned in y.
There really isn't any special code to do this. Just make sure that the position properties are aligned.

Applying a scale and translate transformation to UIBezierPath

I have a UIBezierPath and I would like to:
Move to any coordinate on the UIView
Make bigger or smaller
I am drawing the UIBezierPath based off of a list of predefined coordinates. I implemented this code:
CGAffineTransform move = CGAffineTransformMakeTranslation(0, 0);
CGAffineTransform moveAndScale = CGAffineTransformScale(move, 1.0f, 1.0f);
[shape applyTransform:moveAndScale];
I have also tried scaling and then moving the shape; it seems to make little to no difference.
Using this code:
[shape moveToPoint:CGPointMake(0, 0)];
I start drawing the shape at (0, 0), but this is what happens. I assume this is because a line is being drawn from 0, 0 to the next point in the list.
When I set the move transformation to (0, 0) this is where it draws. Here, moveToPoint is set to the first coordinate pair in the list. As you can see, it is not at 0, 0.
Finally, increasing the 1.0f values moves the shape off the screen completely, no matter where I tell the shape to move.
Can someone help me understand why the shape is not drawing at (0, 0), and why it moves off the screen when I scale it?
(As requested by the OP in a comment above)
I might be wrong on this one, but doesn't this code
CGAffineTransformMakeTranslation(0, 0);
just say that something should be moved 0 pixels along the x-axis and 0 pixels along the y-axis? (reference) It won't actually move anything to the origin (0, 0), as it seems you are trying to do.
Also, it seems like you have slightly misunderstood how to properly use moveToPoint:. Think of it as a way to move your cursor, but without actually drawing anything. It is just a way to say 'I want to start drawing at this point'. The drawing itself is performed by other methods. If you wanted to e.g. draw a square with sides of length L, then you could do something like this:
// 'shape' is a UIBezierPath
NSInteger L = 100;
CGPoint origin = CGPointMake(50, 50);
[shape moveToPoint:origin]; // Initial point to draw from
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y)]; // Draw from origin to the right
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y+L)]; // Draw a vertical line
[shape addLineToPoint:CGPointMake(origin.x, origin.y+L)]; // Draw bottom line
[shape addLineToPoint:origin]; // Draw vertical line back to origin
Note that this code is not tested at all, but it should give you the idea of how to use moveToPoint: and addLineToPoint:.
You need to be careful about the order in which you apply the transforms, and you should think about concatenating the transforms together and applying them in one go.
The order is important because each transform affects all x,y positions in the path, so the translation is affected by the scale. Reverse the order and the path will instead be scaled and then moved.
Also, the coordinate system is important, particularly if you are scaling. Ensure you draw around (0, 0), then scale, then translate. This is easiest if you normalise the points. Normalising lat/long values means dividing latitude by 90 and longitude by 180 (this actually gives you the range -1..1). When doing this you should first scale the path, then translate it to the centre of the view, then apply your desired translation.
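A framework-agnostic sketch of why the order matters, using plain 3x3 affine matrices and column vectors (the numbers are arbitrary):

# With column vectors, the matrix nearest the point acts first.
def mat_mul(a, b):
    # Multiply two 3x3 matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_to_point(m, x, y):
    # Apply a 3x3 affine matrix to the point (x, y).
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scaling(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

# Scale first, then translate: the point is scaled, then offset.
scale_then_move = mat_mul(translation(10, 10), scaling(2, 2))
print(apply_to_point(scale_then_move, 1, 1))  # (12, 12)

# Translate first, then scale: the scale multiplies the offset too,
# which is how a path can end up far off screen.
move_then_scale = mat_mul(scaling(2, 2), translation(10, 10))
print(apply_to_point(move_then_scale, 1, 1))  # (22, 22)

The same principle applies when concatenating CGAffineTransforms: whichever transform acts later also transforms everything the earlier one did, including its translation.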