I'm drawing a map of hexagons inside a UIView, and the next step is to show it in an isometric view. To do this, I transformed the map using CGAffineTransform (rotation and scaling), which achieves what I want.
Now, when the map gets bigger and I rotate it, the lower-left corner goes off screen. This is the frame before and after the transformation:
2012-03-07 17:08:06.160 PratoFiorito[941:f803] X: 0.000000 - Y: 0.000000 || Width: 1408.734619 - Height: 1640.000000
2012-03-07 17:08:06.163 PratoFiorito[941:f803] X: -373.523132 - Y: 281.054779 || Width: 2155.781006 - Height: 1077.890503
I simply can't understand what the new point of origin is, or how to calculate it so I can reposition the view correctly. Can somebody help me?
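In case it helps, here is a rough Swift sketch (not the asker's code; mapView, angle and scale are placeholder names) of how UIKit arrives at that frame. The transform is applied about the view's center, so the new frame is the bounding box of the transformed bounds, re-centred on the unchanged center, and the view can be moved back afterwards by setting its center:

let oldCenter = mapView.center
mapView.transform = CGAffineTransform(rotationAngle: angle).scaledBy(x: scale, y: scale)

// The frame UIKit now reports is the bounding box of the transformed bounds,
// positioned so that the view's center stays where it was:
let box = mapView.bounds.applying(mapView.transform)
let origin = CGPoint(x: oldCenter.x - box.width / 2,
                     y: oldCenter.y - box.height / 2)   // matches the logged X/Y

// To keep the whole map on screen, reposition after transforming,
// e.g. pin the new frame's origin to (0, 0):
mapView.center = CGPoint(x: box.width / 2, y: box.height / 2)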
I want my Vue Konva Text element to completely fill the given height, like I'd expect of a rectangle.
The issue becomes obvious when pairing it with text images (SVG text converted to canvas) that properly match the given dimensions:
<v-text
  :config="{
    x: 50,
    y: 50,
    width: 1000,
    height: 60,
    fontSize: 60,
    fontStyle: 'bold',
    fontFamily: 'Campton Book',
    text: 'WELT'
  }"
/>
<v-rect
:config="{ x: 50, y: 50, fill: 'black', height: 60, width: 200 }"
/>
Second part: is there any way to always pixel-perfectly align the left side of the text with the border? The x coordinate matches the border.
Is this due to font constraints? What am I missing?
I tried to get the height of the text node to fix the positioning, but it just returns the height that was passed down as a prop.
Text is defined as having parts above and below the baseline. Above the baseline are the 'ascenders' and below are the 'descenders', which are required for lower-case letters like j, y and g.
Setting the text fontSize to 60 does not say 'whatever the string, make it fill a space 60px high'. Instead it says 'Make text in a 60px font', which makes space for the descenders because they will generally be required.
If you know for sure that the text will be all caps, then a solution is to measure the height used and increase the font size by a computed factor so that the font fills the line height.
To do this you'll need to get the glyph measurements as follows:
const lineHeight = 60; // following your code
// make your text shape here...left out for brevity
// measure with any 2D canvas context whose font matches the text shape
const ctx = document.createElement('canvas').getContext('2d');
ctx.font = 'bold 60px "Campton Book"';
const metrics = ctx.measureText('YOUR CAPS TEXT');
const capsHeight = Math.abs(metrics.actualBoundingBoxAscent);
const fontSize = (lineHeight * lineHeight) / capsHeight;
If I got that right, your 60px case should give a value around 75. That's based on the convention that ascenders are 80% of the line height. Now you set the font size of your shape to this new value and you should be filling the entire line height.
Regarding the left-alignment, this relies on what the font gurus call the a-b-c widths. The left gap is the a-space, the b is the character width (where the ink falls) and the c-space is the same as the a-space but on the right hand side.
Sadly unless someone else can tell me I am wrong, you don't get a-b-c widths in the canvas TextMetric. There is a workaround which is rather convoluted but viable. You would draw the text in black on an off-screen canvas filled with a transparent background. Then get the canvas pixel data and walk horizontal lines from the left of the canvas inspecting pixels and looking for the first colored pixel. Once you find it you have the measurement to offset the text shape horizontally.
I have an app written with RxSwift which processes 500+ days of HealthKit data to draw a chart for the user.
The chart image is drawn incrementally using the code below. Starting with a black image, the previous image is drawn into a graphics context, then a new segment is drawn over it at a certain offset. The combined image is saved and the process repeats around 70+ times; each time the image is saved, the user sees the update. The result is a single chart image which the user can export from the app.
Even with an autorelease pool, I see spikes of memory usage up to 1 GB, which prevents me from doing other resource-intensive processing.
How can I optimize incremental drawing of a very large (1440 × 5000 pixel) image?
When the image is displayed or saved at 3x scale, it is actually 4320 × 15360.
Is there a better way than trying to draw over an image?
autoreleasepool {
    // activeEnergyCanvas is a custom data-processing class
    let newActiveEnergySegment = activeEnergyCanvas.draw(in: CGRect(x: 0, y: 0, width: 1440, height: days * 10),
                                                         with: energyPalette)

    let size = CGSize(width: 1440, height: height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)

    // draw the existing image
    self.activeEnergyImage.draw(in: CGRect(origin: CGPoint(x: 0, y: 0), size: size))

    // calculate where to draw the smaller image over the larger one
    let offsetRect = CGRect(origin: CGPoint(x: 0, y: offset * 10),
                            size: newActiveEnergySegment.size)
    newActiveEnergySegment.draw(in: offsetRect)

    // get the combined image
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // assign the combined image to be displayed
    if let unwrappedImage = newImage {
        self.activeEnergyImage = unwrappedImage
    }
}
Turns out my mistake was passing a drawing scale of 0.0 when creating the graphics context, which defaults to drawing at the device's native screen scale.
In the case of the iPhone 8 that was 3.0. The result is needing extreme amounts of memory to draw, zoom and export these images: even though all debug logging prints that the image is 1440 pixels wide, the actual canvas ends up being 1440 * 3.0 = 4320 pixels.
Passing 1.0 as the drawing scale makes the image more fuzzy, but reduces memory usage to less than 200 MB.
// passing 0.0 as the scale made the context use the device's @3x scale,
// even though all display-size printouts showed 1440 px
let drawingScale: CGFloat = 1.0
UIGraphicsBeginImageContextWithOptions(size, true, drawingScale)
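For what it's worth, the same explicit-scale idea expressed with the newer UIGraphicsImageRenderer API looks roughly like this (a sketch, not the original code; height, offset, activeEnergyImage and newActiveEnergySegment are the values from the snippet above):

import UIKit

// Compose the chart at an explicit 1x scale so the backing store stays
// 1440 px wide instead of ballooning to the device's @3x scale.
let format = UIGraphicsImageRendererFormat()
format.scale = 1.0      // 1 point == 1 pixel
format.opaque = true

let size = CGSize(width: 1440, height: height)
let renderer = UIGraphicsImageRenderer(size: size, format: format)

let combined = renderer.image { _ in
    // existing chart drawn first
    activeEnergyImage.draw(in: CGRect(origin: .zero, size: size))
    // new segment drawn over it at its vertical offset
    newActiveEnergySegment.draw(in: CGRect(origin: CGPoint(x: 0, y: offset * 10),
                                           size: newActiveEnergySegment.size))
}
activeEnergyImage = combined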
In the fragment function of a Metal shader file, is there a way to redefine the "bounds" of the texture with respect to what the sampler will consider its normalized coordinates to be?
By default, a sample coordinate of (0, 0) is the top-left "pixel" and (1, 1) is the bottom-right "pixel" of the texture. However, I'm re-using textures for drawing, and at any given render pass only a portion of the texture contains the relevant data.
For example, in a texture of width 500 and height 500, I might have only copied data into the region (0, 0, 250, 250). In my fragment function, I'd like the sampler to interpret a normalized coordinate of 1.0 as 250 rather than 500. Is that possible?
I realize I can just change the sampler to use pixel addressing, but that comes with a few restrictions, as noted in the Metal Shading Language specification.
No, but if you know the region you want to sample from, it's quite easy to do a little math in the shader to fix up your sampling coordinates. This is used often with texture atlases.
Suppose you have an image that's 500x500 and you want to sample the bottom-right 125x125 region (just to make things more interesting). You could pass this sampling region in as a float4, storing the bounds as (left, top, width, height) in the xyzw components. In this case, the bounds would be (375, 375, 125, 125). Your incoming texture coordinates are "normalized" with respect to this square. The shader simply scales and biases these coordinates into texel coordinates, then normalizes them to the dimensions of the whole texture:
fragment float4 fragment_main(FragmentParams in [[stage_in]],
                              texture2d<float, access::sample> tex2d [[texture(0)]],
                              sampler sampler2d [[sampler(0)]],
                              // ...
                              constant float4 &spriteBounds [[buffer(0)]])
{
    // original coordinates, normalized with respect to the subimage
    float2 texCoords = in.texCoords;

    // texture dimensions
    float2 texSize = float2(tex2d.get_width(), tex2d.get_height());

    // adjusted texture coordinates, normalized with respect to the full texture
    texCoords = (texCoords * spriteBounds.zw + spriteBounds.xy) / texSize;

    // sample color at the modified coordinates
    float4 color = tex2d.sample(sampler2d, texCoords);

    // ...
}
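On the CPU side, that float4 can be bound in whatever way you already pass fragment data; as a hedged sketch (the encoder and buffer index 0 are assumptions matching the [[buffer(0)]] attribute above), setFragmentBytes is enough for a value this small:

import Metal
import simd

/// Sketch: bind the sampling region for the fragment shader above.
/// `bounds` is (left, top, width, height) in texels of the full texture.
func bindSpriteBounds(_ bounds: SIMD4<Float>, on encoder: MTLRenderCommandEncoder) {
    var spriteBounds = bounds
    // matches `constant float4 &spriteBounds [[buffer(0)]]` in the shader
    encoder.setFragmentBytes(&spriteBounds,
                             length: MemoryLayout<SIMD4<Float>>.stride,
                             index: 0)
}

// e.g. the bottom-right 125x125 region of a 500x500 texture:
// bindSpriteBounds(SIMD4<Float>(375, 375, 125, 125), on: encoder)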
I want to have a label which should be displayed upside down. That means after creating the label I want to turn it by 90 degrees. That works, but now the label ends up in the wrong place, and I don't know HOW the label is rotated. Maybe someone could help me. The code is the following:
let label = CreatorClass.createLabelWithFrame(CGRect(x: 10, y: 10, width: 150, height: 15), text: "aString", size: 12.0, bold: false, textAlignment: .Left, textColor: UIColor.whiteColor(), addToView: self)
label.transform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
CreatorClass creates a label and adds it to a given view (it adds it to self because this code is called in a subclass of UIView). Actually I think it's self-explanatory.
Your label rotates around its center. That means:
Your label rotates around x = 10 + 75 = 85, y = 10 + 7.5 = 17.5.
This point is also the center of the new position, which has a width of 15.0 and a height of 150.
The new rect of your view, after the transform, is:
x = 85 - 7.5 = 77.5
y = 17.5 - 75 = -57.5
width = 15, height = 150.
So it can end up outside the bounds of self.
If you want the label back at its initial position, just set the frame again:
label.frame = CGRect(x: 10, y: 10, width: 15, height: 150)
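As a side note (a sketch, not part of the original answer): UIKit documents frame as undefined once transform is anything other than the identity, so it can be safer to reposition a transformed label via its center instead; in modern Swift syntax:

label.transform = CGAffineTransform(rotationAngle: .pi / 2)
// center of the desired final rect (x: 10, y: 10, width: 15, height: 150)
label.center = CGPoint(x: 17.5, y: 85)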
I've got a little Objective-C utility program that renders a convex hull. (This is to troubleshoot a bug in another program that calculates the convex hull in preparation for spatial statistical analysis.) I'm trying to render a set of triangles, each with an outward-pointing vector. I can get the triangles without problems, but the vectors are driving me crazy.
I'd like the vectors to be simple cylinders. The problem is that I can't just declare coordinates for where the top and bottom of the cylinders belong in 3D (e.g., like I can for the triangles). I have to make them and then rotate and translate them from their default position along the z-axis. I've read a ton about Euler angles, and angle-axis rotations, and quaternions, most of which is relevant, but not directed at what I need: most people have a set of objects and then need to rotate the object in response to some input. I need to place the object correctly in the 3D "scene".
I'm using the Cocoa3DTutorial classes to help me out, and they work great as far as I can tell, but the rotation bit is killing me.
Here is my current effort. It gives me cylinders that are located correctly, but they all point along the z-axis (as in this image). We are looking in the -z direction. The triangle poking out behind is not part of the hull; it's just there for testing/debugging. The orthogonal cylinders are coordinate axes, more or less, and the spheres are there to make sure the axes are located correctly, since I have to use rotation to place those cylinders correctly. (And BTW, when I use that algorithm the out-vectors fail as well, although in a different way: they come out normal to the planes, but all pointing in +z instead of some in -z.)
from Render3DDocument.m:
// Make the out-pointing vector
C3DTCylinder *outVectTube;
C3DTEntity *outVectEntity;
Point3DFloat *sideCtr = [thisSide centerOfMass];
outVectTube = [C3DTCylinder cylinderWithBase: tubeRadius top: tubeRadius height: tubeRadius*10 slices: 16 stacks: 16];
outVectEntity = [C3DTEntity entityWithStyle:triColor
                                   geometry:outVectTube];
Point3DFloat *outVect = [[thisSide inVect] opposite];
Point3DFloat *unitZ = [Point3DFloat pointWithX:0 Y:0 Z:1.0f];
Point3DFloat *rotAxis = [outVect crossWith:unitZ];
double rotAngle = [outVect angleWith:unitZ];
[outVectEntity setRotationX: rotAxis.x
                          Y: rotAxis.y
                          Z: rotAxis.z
                          W: rotAngle];
[outVectEntity setTranslationX:sideCtr.x - ctrX
                             Y:sideCtr.y - ctrY
                             Z:sideCtr.z - ctrZ];
[aScene addChild:outVectEntity];
(Note that Point3DFloat is basically a vector class, and that a Side (like thisSide) is a set of four Point3DFloats, one for each vertex, and one for a vector that points towards the center of the hull).
from C3DTEntity.m:
if (_hasTransform) {
    glPushMatrix();

    // Translation
    if ((_translation.x != 0.0) || (_translation.y != 0.0) || (_translation.z != 0.0)) {
        glTranslatef(_translation.x, _translation.y, _translation.z);
    }

    // Scaling
    if ((_scaling.x != 1.0) || (_scaling.y != 1.0) || (_scaling.z != 1.0)) {
        glScalef(_scaling.x, _scaling.y, _scaling.z);
    }

    // Rotation
    glTranslatef(-_rotationCenter.x, -_rotationCenter.y, -_rotationCenter.z);
    if (_rotation.w != 0.0) {
        glRotatef(_rotation.w, _rotation.x, _rotation.y, _rotation.z);
    } else {
        if (_rotation.x != 0.0)
            glRotatef(_rotation.x, 1.0f, 0.0f, 0.0f);
        if (_rotation.y != 0.0)
            glRotatef(_rotation.y, 0.0f, 1.0f, 0.0f);
        if (_rotation.z != 0.0)
            glRotatef(_rotation.z, 0.0f, 0.0f, 1.0f);
    }
    glTranslatef(_rotationCenter.x, _rotationCenter.y, _rotationCenter.z);
}
I added the bit in the above code that uses a single rotation around an axis (the "if (_rotation.w != 0.0)" bit), rather than a set of three rotations. My code is likely the problem, but I can't see how.
If your outvects don't all point in the correct direction, you might have to check your triangles' winding - are they all oriented the same way?
Additionally, it might be helpful to draw a line for each outvect (use the average of the three vertices of your triangle as the origin, and draw a line of a few units' length, depending on your scene's scale, in the direction of the outvect). This way, you can be sure that all your vectors are oriented correctly.
How do you calculate your outvects?
The problem appears to be that glRotatef() expects degrees and I was giving it radians. In addition, clockwise rotation is taken to be positive, so the sign of the rotation was wrong. This is the corrected code:
double rotAngle = -[outVect angleWith:unitZ]; // radians
[outVectEntity setRotationX: rotAxis.x
                          Y: rotAxis.y
                          Z: rotAxis.z
                          W: rotAngle * 180.0 / M_PI];
I can now see that my other program has the inVects wrong (the outVects are poking through the hull instead of pointing out from each face), and I can now track down that bug in the other program... tomorrow.
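For the record, here is the same axis-angle computation sketched in Swift with simd (assuming directions are SIMD3<Float>; this is not the Cocoa3DTutorial API, and the sign convention depends on the cross-product order, which is why the answer above needed the negation):

import Foundation
import simd

/// Axis and angle (in degrees, as glRotatef expects) that carry the
/// cylinder's default +Z direction onto an outward-pointing face normal.
/// Note: the axis is zero-length when outVect is (anti)parallel to +Z.
func rotationAligning(zAxisTo outVect: SIMD3<Float>) -> (axis: SIMD3<Float>, degrees: Float) {
    let unitZ = SIMD3<Float>(0, 0, 1)
    let dir = simd_normalize(outVect)
    let axis = simd_cross(unitZ, dir)                      // rotation axis
    let radians = acos(min(max(simd_dot(unitZ, dir), -1), 1))
    return (axis, radians * 180 / .pi)                     // degrees for glRotatef
}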