Issues setting UIEdgeInsetsMake for UNEVEN image - objective-c

I need help setting up resizableImageWithCapInsets:UIEdgeInsetsMake for an uneven image. I have used this successfully with even images and it works fine, but I'm having a lot of difficulty finding correct values for this particular image. I have a callout bubble image (attached) of size 49 x 158 and am using the following values for resizableImageWithCapInsets:UIEdgeInsetsMake:
dialogueBubbleImage = [[UIImage imageNamed:@"BubbleBottomRightLong_1.png"]
resizableImageWithCapInsets:UIEdgeInsetsMake(20, 23, 138, 23)]; // image is 49 x 158; UIEdgeInsetsMake takes CGFloat top, left, bottom, right
The whole idea is to display a label with text inside the white box area, while keeping the callout arrow as it is.
Here is the image I am using:

What exactly is the problem? Your value for top is too high: top (20) plus bottom (138) equals the full image height of 158, so you are leaving a 0 px vertical area to tile. Ideally you would leave a single pixel to tile, so I would choose values like:
dialogueBubbleImage = [[UIImage imageNamed:@"BubbleBottomRightLong_1.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(12, 24, 145, 24)];
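For completeness, here is a minimal sketch of how the resizable image could then be used (the image name comes from the question; the frames, label text, and the assumption that this runs in a view controller are made up for illustration):
// Build the stretchable bubble: the cap areas are preserved, the 1 px strip between them is stretched.
UIImage *dialogueBubbleImage = [[UIImage imageNamed:@"BubbleBottomRightLong_1.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(12, 24, 145, 24)];
// Assumed frame: wider than the original 49 x 158, so the white box stretches horizontally while the arrow keeps its shape.
UIImageView *bubbleView = [[UIImageView alloc] initWithFrame:CGRectMake(20, 100, 200, 158)];
bubbleView.image = dialogueBubbleImage;
// Assumed label frame roughly covering the white box area of the bubble.
UILabel *textLabel = [[UILabel alloc] initWithFrame:CGRectMake(10, 8, 180, 100)];
textLabel.numberOfLines = 0;
textLabel.backgroundColor = [UIColor clearColor];
textLabel.text = @"Some callout text";
[bubbleView addSubview:textLabel];
[self.view addSubview:bubbleView];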

Related

PyQt5: set coordinates for items in graphics scene

I have a scene = QGraphicsScene() and I added an ellipse via scene.addEllipse(100, 100, 10, 10, greenPen, greenBrush). The brush and the pen are set beforehand. Right after that I add the QGraphicsScene to a QGraphicsView with MyGraphicsView.setScene(scene). All of this works, except that the ellipse always ends up in the center. The first 2 parameters of the addEllipse() function should be the coordinates (in this case 100, 100), but no matter what I put there, the ellipse is always in the center. Any ideas?
EDIT: I have now added 3 ellipses like this (the one from the description has been deleted):
scene.addEllipse(10, 10, 10, 10, greenPen, greenBrush)
scene.addEllipse(-100, -10, 30, 30, bluePen, blueBrush)
scene.addEllipse(-100, -100, 60, 60, bluePen, blueBrush)
and my result is this:
So clearly the coordinates work somehow, but I still don't get how exactly. Do I have to set an origin to the scene?
And if I do this:
particleList = scene.items()
print(particleList[0].x())
print(particleList[1].x())
print(particleList[2].x())
I get:
0.0
0.0
0.0
At this point I'm totally confused and I'd really appreciate some help.
An important thing that must be always kept in mind is that the position of a QGraphicsItem does not reflect its "top left" coordinates.
In fact, you can have a QGraphicsRectItem whose QRectF is placed at (100, 100) while the item's position is (50, 50). This means that the rectangle will be shown at (150, 150): the coordinates of the shape are relative to the position of the item.
All add[Shape]() functions of QGraphicsScene have this important note in their documentation:
Note that the item's geometry is provided in item coordinates, and its position is initialized to (0, 0).
Even if you create a QGraphicsEllipseItem with coordinates (-100, -100), it will still be positioned at (0, 0), and that's because the values in the addEllipse() (as with all other functions) only describe the coordinates of the shape.
When a QGraphicsScene is created, its sceneRect() is not explicitly set, and by default it corresponds to the bounding rectangle of all items. When the scene is set on a view, the view automatically positions the scene according to its alignment(), which defaults to Qt.AlignCenter:
If the whole scene is visible in the view, (i.e., there are no visible scroll bars,) the view's alignment will decide where the scene will be rendered in the view. For example, if the alignment is Qt::AlignCenter, which is default, the scene will be centered in the view, and if the alignment is (Qt::AlignLeft | Qt::AlignTop), the scene will be rendered in the top-left corner of the view.
This also means that if you have items at negative coordinates or with their shapes at negative coordinates, the view will still show the scene centered to the center of the bounding rect of all items.
So you either set the scene's sceneRect or the view's sceneRect, depending on your needs. If the view's sceneRect is not set, it defaults to the scene's sceneRect.
If you want to display the items according to their position while also ensuring that negative coordinates are correctly "outside" the center, you must decide the size of the visible sceneRect and set it accordingly:
boundingRect = scene.itemsBoundingRect()
scene.setSceneRect(0, 0, boundingRect.right(), boundingRect.bottom())

How does positioning work on overlay image in cloudinary?

given the url(image) below as an example
https://res.cloudinary.com/demo/image/upload/w_220,h_140,c_fill/l_brown_sheep,w_220,h_140,c_fill,x_220,y_140/l_horses,w_220,h_140,c_fill,x_220,y_140/yellow_tulip.jpg
From what I understand, the first image, yellow_tulip, is drawn at (0, 0), which is the top-left corner. The second image, brown_sheep, draws from (220, 140), which is the bottom-right corner of yellow_tulip, because (0, 0) starts from the top left of the canvas.
Everything makes sense to me until the third image kicks in. horses also starts from (220, 140), but how come it starts from the center of the second image, brown_sheep? I'm really confused.
The dimensions of the image change when you apply an overlay, so that should be taken into consideration when applying the x and y coordinates.
The coordinates are calculated from the center of the image but since the size of the canvas in the first image is 220 by 140, setting the brown sheep overlay's coordinates to 220 by 140 will double the size of the canvas to 440 by 280.
Meaning the following URL is now 440 by 280: https://res.cloudinary.com/demo/image/upload/w_220,h_140,c_fill/l_brown_sheep,w_220,h_140,c_fill,x_220,y_140/l_horses,w_220,h_140,c_fill/yellow_tulip.jpg
To now overlay the horses over the brown sheep you will need to recalculate the coordinates as follows: https://res.cloudinary.com/demo/image/upload/w_220,h_140,c_fill/l_brown_sheep,w_220,h_140,c_fill,x_220,y_140/l_horses,w_220,h_140,c_fill,x_110,y_70/yellow_tulip.jpg
Or
https://res.cloudinary.com/demo/image/upload/w_220,h_140,c_fill/l_brown_sheep,w_220,h_140,c_fill,x_220,y_140/l_horses,w_220,h_140,c_fill,x_330,y_210/yellow_tulip.jpg

Visualizing the Anchor Point of a UIImageView

Is there an easy way of putting a mark (a cross, for example) on the anchor point of a UIImageView? I'm trying to line up several rotating images by their anchor points, and being able to see these points would make the job a lot easier.
Many thanks.
You are asking how to visualize the anchor point within a view, but it seems to me that you really want this so that you can align the anchor points more easily. I'll try to answer both questions.
Visualizing the anchor point
Every view on iOS has an underlying layer, and that layer has an anchor point. The anchor point is in the unit coordinate space of the layer (x and y go from 0 to 1). This means that you can multiply x by the width and y by the height to get the position of the anchor point inside the layer, in the coordinate space of the view/layer. You can then place a subview/sublayer there to show the location of the anchor point.
In code you could do something like this to display a small black dot where the anchor point is.
CALayer *anchorPointLayer = [CALayer layer];
anchorPointLayer.backgroundColor = [UIColor blackColor].CGColor;
anchorPointLayer.bounds = CGRectMake(0, 0, 6, 6);
anchorPointLayer.cornerRadius = 3;
CGPoint anchor = viewWithVisibleAnchorPoint.layer.anchorPoint;
CGSize size = viewWithVisibleAnchorPoint.layer.bounds.size;
anchorPointLayer.position = CGPointMake(anchor.x * size.width,
anchor.y * size.height);
[viewWithVisibleAnchorPoint.layer addSublayer:anchorPointLayer];
You can see the result in the image below for four different rotations.
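To reproduce one of those rotations (a quick sketch; the 45 degree angle is just an example), give the view a rotation transform after adding the dot. The dot keeps its place on screen because the rotation pivots around the anchor point:
viewWithVisibleAnchorPoint.transform = CGAffineTransformMakeRotation(M_PI_4); // rotate 45 degrees around the anchor point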
Aligning layers by their anchor point
That is cool and all, but it's actually easier than that to align anchor points.
The key trick is that the position and the anchorPoint are always the same point, only expressed in two different coordinate spaces. The position is specified in the coordinate space of the super layer. The anchor point is specified in the unit coordinate space of the layer.
The nice thing about this is that views whose position properties are aligned automatically have their anchor points aligned as well. Note that the content is drawn relative to the anchor point. Below is an example of a bunch of views that all have the same y component of their position, so they are aligned in y.
There really isn't any special code to do this. Just make sure that the position properties are aligned.
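As a rough sketch of that (the view names, the x spacing, and the shared y value are assumptions for illustration), aligning a row of views in y only requires giving their layers the same y position, whatever each layer's anchorPoint happens to be:
CGFloat sharedY = 200.0; // assumed common y coordinate in the superview
CGFloat x = 40.0;
for (UIView *view in @[viewA, viewB, viewC]) { // hypothetical views already in the view hierarchy
    // Only the position is aligned; each layer keeps whatever anchorPoint it already has.
    view.layer.position = CGPointMake(x, sharedY);
    x += 80.0;
}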

Draw rotated text to parent coordinate system

I have a UIView which I'm drawing manually in the drawRect: method.
It is basically a coordinate system with values on the y-axis and time on the x-axis.
Due to space issues, I want the timestamps to be vertical instead of horizontal.
For this purpose, I use:
CGContextSaveGState(ctx); //Saves the current graphic context state
CGContextRotateCTM(ctx, M_PI_2); //Rotates the context by 90° clockwise
strPos = CGContextConvertPointToUserSpace(ctx, strPos); //SHOULD convert to Usercoordinates
[str drawAtPoint:strPos withFont:fnt]; //Draws the text to the rotated CTM
CGContextRestoreGState(ctx); //Restores the CTM to the previous state.
ctx (CGContextRef), strPos (CGPoint) and str (NSString) are variables that have been initialized properly for 'horizontal text', with a width equal to the text height.
While this code works flawlessly on the iPhone 3, it gives me a complete mess on the iPhone 4 (Retina), because CGContextConvertPointToUserSpace produces completely different results, even though the coordinate system of the iPhone is supposed to remain the same.
I also tried using CGAffineTransform, but with the same results.
To summarize my question: How do I draw a text to a calculated position in the parent coordinate system (0, 0 being top left)?
After studying the Apple docs regarding Quartz 2D once more, I came to realize that the rotation by Pi/2 moves all my writing off screen to the left.
I can make the writing appear in a vertical line by translating the CTM by +height.
I'll keep trying, but would still be happy to get an answer.
Edit: Thanks to lawicko's heads-up I was able to fix the problem. See Answer for details.
I would like to thank lawicko for pointing this out.
During my tests I made two mistakes... but he is of course correct. Using CGContextShowTextAtPoint is the simplest solution, since it doesn't require rotating the entire CTM.
Again, THANK you.
Now, for the actual answer to my question.
To draw a rotated text at position x/y, the following code works for me.
CGAffineTransform rot = CGAffineTransformMakeRotation(M_PI_2); //Creates the rotation
CGContextSelectFont(ctx, "TrebuchetMS", 10, kCGEncodingMacRoman); //Selects the font
CGContextSetTextMatrix(ctx, CGAffineTransformScale(rot, 1, -1)); //Mirrors the rotated text, so it will be displayed correctly.
CGContextShowTextAtPoint(ctx, strPos.x, strPos.y, TS, 5); //Draws the text
ctx is the CGContext, strPos is the desired position in the parent coordinate system, and TS is a char array.
Again, thank you lawicko.
I probably would've searched forever if not for your suggestion.
Maybe this answer will help someone else, who comes across the same problem.
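For readers on newer SDKs, note that CGContextSelectFont and CGContextShowTextAtPoint have since been deprecated. A rough equivalent (a sketch using the same ctx, strPos and str as above; not part of the original answer) is to rotate the CTM and draw the NSString directly:
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, strPos.x, strPos.y); // move the origin to the desired position in the parent coordinate system
CGContextRotateCTM(ctx, M_PI_2); // rotate subsequent drawing by 90 degrees
[str drawAtPoint:CGPointZero withAttributes:@{ NSFontAttributeName: [UIFont fontWithName:@"TrebuchetMS" size:10] }];
CGContextRestoreGState(ctx);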

Why is UILabel's text blurry on iPad if width is not even?

Following phenomenon: my text is "Search". I create a UILabel with the small system font size and call sizeToFit.
The result is 39 units wide and the text looks kind of blurry.
If I adjust the width to 40 it looks perfect.
I read that the text gets blurry if you hit sub pixels, meaning the width would be something like 39.5, but it seems it has to be even.
Can somebody confirm or even explain what is going on?
In my case, having set shouldRasterize = YES on the CALayer of the UILabel's superview was the culprit. Removing that line made the text nice and crisp.
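A side note beyond that answer (an assumption about the setup, not something confirmed in the question): if rasterization is actually needed, the Retina blur it introduces can usually be avoided by matching the layer's rasterization scale to the screen scale. Here superview stands for the UILabel's superview:
superview.layer.shouldRasterize = YES;
superview.layer.rasterizationScale = [UIScreen mainScreen].scale; // rasterize at the device's pixel density instead of 1x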
UIView instances are positioned by their center, which for an odd size lands on a half pixel (19.5 for a width of 39). This alignment causes pixel averaging, which causes the fuzziness.
One way to avoid it is to use an even width.
Another is to place the view by its center at a suitable point, using:
@property(nonatomic) CGPoint center
For example, for a label frame of (10, 10, 39, 19) one could instead use:
label.center = CGPointMake(50, 20);
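As a general alternative (a hedged sketch assuming the blur comes from a fractional origin or size left over after sizeToFit or centering), the label's frame can simply be snapped to whole points:
UILabel *label = [[UILabel alloc] init];
label.text = @"Search";
label.font = [UIFont systemFontOfSize:[UIFont smallSystemFontSize]];
[label sizeToFit];
CGPoint desiredCenter = CGPointMake(100, 30); // hypothetical target point
label.center = desiredCenter;
label.frame = CGRectIntegral(label.frame); // round origin and size to whole points so the text lands on pixel boundaries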