How do I draw truncated text with ATSUI? I have a legacy app that uses the QuickDraw API; it uses the StringWidth, TruncString, and DrawString functions. I could replace StringWidth and DrawString with ATSUI's ATSUMeasureTextImage and ATSUDrawText. However, I could not find a way to truncate a string so that it fits into a rect.
I used Quartz with QuickDraw before choosing ATSUI. Quartz does not provide any functions to estimate the size (in pixels) of the drawn text.
CGContextSelectFont(cgContext, "Geneva", 12.0, kCGEncodingMacRoman);
CGContextSetTextMatrix(cgContext, CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, 0.0));
CGContextShowTextAtPoint(cgContext, inPoint.h, inPoint.v + 12.0,
                         (const char *)&(inString[1]), inString[0]);
Is there any function in ATSUI that does string truncation like TruncString? If not, how do I draw a string truncated to a rect?
Thanks,
Abhinay.
You want to use HIThemeGetTextDimensions to measure the string with a truncation policy. It will give you the rectangle's width and height and the baseline; you can make a CGRect with that width and height and set its origin to wherever you want the text.
Amazingly, this function appears to still be supported in 64-bit, although it has never been documented (there has never been any reference documentation at all for HITheme). Look it up in the headers for details.
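For what it's worth, a rough sketch of how that might look (untested; the struct fields and constants are from my reading of the HITheme and Appearance headers, and string/maxWidth stand in for your own values):

HIThemeTextInfo info = {0};
info.version            = 0;
info.state              = kThemeStateActive;
info.fontID             = kThemeApplicationFont;
info.truncationPosition = kHIThemeTextTruncationEnd; // analogous to QuickDraw's truncEnd
info.truncationMaxLines = 1;

CGFloat width, height, baseline;
HIThemeGetTextDimensions(string,    // the CFStringRef to measure
                         maxWidth,  // available width for truncation
                         &info, &width, &height, &baseline);

CGRect bounds = CGRectMake(inPoint.h, inPoint.v, width, height);
HIThemeDrawTextBox(string, &bounds, &info, cgContext, kHIThemeOrientationNormal);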
I am trying to write code that can crop an existing image down to some specified size/region. I am working with DICOM images, and the API I am using allows me to get pixel values directly. I've placed pixel values of the area of interest within the image into an array of floats (dstImage, below).
Where I'm encountering trouble is with the actual construction/creation of the new, cropped image file using this pixel data. The source image is grayscale, however all of the examples I have found online (like this one) have been for RGB images. I tried to follow the example in that link, adjusting for grayscale and trying numerous different values, but I continue to get errors on the CGBitmapContextCreate line of code and still do not clearly understand what those values are supposed to be.
My intensity values for the source image go above 255, so my impression is that this is not 8-bit grayscale but 16-bit grayscale.
Here is my code:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(dstImage,      // pixel data from the region of interest
                                             dstWidth,      // width of the region of interest
                                             dstHeight,     // height of the region of interest
                                             16,            // bits per component
                                             2 * dstWidth,  // bytes per row
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);
CFRelease(colorSpace);

CGImageRef cgImage = CGBitmapContextCreateImage(context);

CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                             CFSTR("test.png"),
                                             kCFURLPOSIXPathStyle,
                                             false);
CFStringRef type = kUTTypePNG;
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url, type, 1, 0);
CGImageDestinationAddImage(dest, cgImage, 0);
CFRelease(cgImage);
CFRelease(context);
CGImageDestinationFinalize(dest);
free(dstImage);
The error I keep receiving is:
CGBitmapContextCreate: unsupported parameter combination: 16 integer bits/component; 32 bits/pixel; 1-component color space; kCGImageAlphaNoneSkipLast; 42 bytes/row.
The ultimate goal is to create an image file from the pixel data in dstImage and save it to the hard drive. Help on this would be greatly appreciated as would insight into how to determine what values I should be using in the CGBitmapContextCreate call.
Thank you
First, you should familiarize yourself with the "Supported Pixel Formats" section of Quartz 2D Programming Guide: Graphics Contexts.
If your image data is in an array of float values, then it's 32-bits-per-component, not 16. Therefore, you have to use kCGImageAlphaNone | kCGBitmapFloatComponents.
However, I believe that Core Graphics will interpret floating-point components as being between 0.0 and 1.0. If your values fall outside that range, you may need to convert them using something like (value - minimumValue) / (maximumValue - minimumValue). An alternative may be to use CGColorSpaceCreateCalibratedGray(), or to create a CGImage with CGImageCreate(), specifying an appropriate decode parameter to remap the values, and then draw that image into a bitmap context.
In fact, if you're not drawing into your bitmap context, you should just be creating a CGImage instead, anyway.
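To illustrate, a minimal sketch of a call matching one of the documented gray formats (32 bits per component, float, no alpha), assuming dstImage is a float * with one component per pixel and minimumValue/maximumValue computed from your data:

// Normalize the data into 0.0–1.0 before handing it to Core Graphics.
for (size_t i = 0; i < (size_t)dstWidth * dstHeight; i++)
    dstImage[i] = (dstImage[i] - minimumValue) / (maximumValue - minimumValue);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(dstImage,
                                             dstWidth,
                                             dstHeight,
                                             32,                       // bits per component: float
                                             sizeof(float) * dstWidth, // bytes per row
                                             colorSpace,
                                             kCGImageAlphaNone | kCGBitmapFloatComponents);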
I'm using the following code to scale down my image:
NSImage *smallImage = [[NSImage alloc] initWithSize:CGSizeMake(width, height)];
[smallImage lockFocus];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[image drawInRect:CGRectMake(0, 0, width, height)
         fromRect:NSZeroRect
        operation:NSCompositeCopy
         fraction:1.0];
[smallImage unlockFocus];
Basically, this works fine, but if I set the width and height to exactly those of the original and compare the images pixel by pixel, some pixels have still changed.
And since my app is pixel-sensitive, I need to make sure every pixel is correct, so I'm wondering: how can I keep the pixels as they are during such a scale-down? Is that possible?
Yes, NSImage will change the image data in various ways. It attempts to optimize the "payload" image data according to the size needed for its graphical representation on the UI.
Scaling it down and up again is generally not a good idea.
AFAIK you can only avoid that by keeping the original image data somewhere else (e.g. on disk or in a separate NSData container or so).
If you need to apply calculations or manipulations to the image data that must be 100% accurate down to each pixel, then work with NSData or C strings/byte arrays only. Avoid NSImage unless
a) the result is for presentations on the device only
b) you really need functionality that comes with NSImage objects.
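A minimal sketch of that separation (imagePath and the variable names are placeholders):

// Keep a pristine copy of the bytes outside NSImage; do all pixel-exact work on this.
NSData *originalData = [NSData dataWithContentsOfFile:imagePath];

// Build an NSImage from it only for on-screen presentation; never read pixels back from it.
NSImage *displayImage = [[NSImage alloc] initWithData:originalData];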
I'll explain the problem in principle rather than scientifically.
Pixels have a fixed size, for technical reasons.
No, you can't keep your pixels when scaling down.
An example to explain: suppose a pixel is 0.25 inches square, and you want to fill a square that is 1.1 inches wide. It's impossible. How many pixels should be used? Four are too few, five too many. At that point the Cocoa libs (or wherever it happens) make a decision: either more pixels, enlarging the square, or fewer, reducing it. That is out of your control.
Another problem, also out of your control, is the way measurements are computed.
An example: 1 inch is nearly 2.54 cm, so 1.27 cm is 0.5 inch, but what is 1.25 cm? Values, not only measurements, are computed internally using a single unit; I think it's inches (as a DOUBLE with a fixed number of digits after the decimal point). When you use centimeters, they are converted internally to inches, some mathematical operations are done (e.g. how many pixels are necessary for the square?), and the result is sent back, possibly converted back to centimeters. The same happens with INTEGERs, which are computed internally as DOUBLEs and returned as INTEGERs. Funny things, i.e. unexpected values, result from this, especially after divisions, which are exactly what scaling down uses!
By the way: when an image is scaled, new pixels are often created for the scaled image. For example, if you have 4 pixels, 2 red and 2 blue, the new single pixel gets a mixed color, somewhere around violet. There is no way back, so always work on copies of an image!
I have a UIBezierPath and I would like to:
Move it to any coordinate on the UIView
Make it bigger or smaller
I am drawing the UIBezierPath based off of a list of predefined coordinates. I implemented this code:
CGAffineTransform move = CGAffineTransformMakeTranslation(0, 0);
CGAffineTransform moveAndScale = CGAffineTransformScale(move, 1.0f, 1.0f);
[shape applyTransform:moveAndScale];
I have also tried scaling and then moving the shape; it seems to make little to no difference.
Using this code:
[shape moveToPoint:CGPointMake(0, 0)];
I start drawing the shape at (0, 0), but this is what happens. I assume this is because a line is being drawn from (0, 0) to the next point in the list.
When I set the move transformation to (0, 0), this is where it draws. Here, moveToPoint is set to the first coordinate pair in the list. As you can see, it is not at (0, 0).
Finally, increasing the 1.0f moves the shape off the screen completely, no matter where I tell the shape to move.
Can someone help me understand why the shape is not drawing at (0, 0) and why it moves off the screen when I scale it?
(As requested by the OP in a comment above)
I might be wrong on this one, but doesn't this code
CGAffineTransformMakeTranslation(0, 0);
just say that something should be moved 0 pixels along the x-axis and 0 pixels along the y-axis? (reference) It won't actually move anything to the origin (0, 0), as it seems you are trying to do.
Also, it seems like you have slightly misunderstood how to properly use moveToPoint:. Think of it as a way to move your cursor, but without actually drawing anything. It is just a way to say 'I want to start drawing at this point'. The drawing itself is performed by other methods. If you wanted to, e.g., draw a square with sides of length L, then you could do something like this:
// 'shape' is a UIBezierPath
NSInteger L = 100;
CGPoint origin = CGPointMake(50, 50);
[shape moveToPoint:origin]; // Initial point to draw from
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y)];   // Top edge: from origin to the right
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y+L)]; // Right edge: vertical line down
[shape addLineToPoint:CGPointMake(origin.x, origin.y+L)];   // Bottom edge
[shape addLineToPoint:origin];                              // Left edge: back up to origin
Note that this code is not tested at all, but it should give you the idea of how to use moveToPoint: and addLineToPoint:.
You need to be careful of the order you apply the transforms in and you should think about concatenating the transforms together and applying them in one go.
The order is important as each transform affects all x,y positions in the path. So, the translation is affected by the scale. Reverse the order and the path will be scaled and then moved.
Also, the coordinate system is important, particularly if you are scaling. Ensure you draw around 0,0 and then scale and then translate. This is easiest if you normalise the points. Normalising for lat/long values means dividing latitude by 90 and longitude by 180 (this will actually give you a range -1..1). When doing this you should first scale the path, then translate it to the centre of the view, then apply your desired translation.
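A rough sketch of that concatenation (untested; viewCenter and scale are placeholder values):

// One combined transform: with Core Graphics' concatenation order, the scale
// here is applied to the path first, then the translation, so the offset is
// expressed in unscaled view points.
CGAffineTransform t = CGAffineTransformMakeTranslation(viewCenter.x, viewCenter.y);
t = CGAffineTransformScale(t, scale, scale);
[shape applyTransform:t];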
I want to draw tiled images and then transform them using the usual panning and zooming gestures. The problem that brings me here is that, whenever I have a scaling transformation with a large number of decimal places, a thin line of pixels (1 or 2) appears between the tiles. I managed to isolate the problem like this:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetFillColor(ctx, CGColorGetComponents([UIColor redColor].CGColor));
CGContextFillRect(ctx, rect); // rect from drawRect:
float scale = 0.7;
CGContextScaleCTM(ctx, scale, scale);
CGContextDrawImage(ctx, CGRectMake(50, 50, 100, 100), testImage);
CGContextDrawImage(ctx, CGRectMake(150, 50, 100, 100), testImage);
CGContextRestoreGState(ctx);
With a 0.7 scale, the two images appear correctly tiled:
With a 0.777777 scale (changing the scale line to float scale = 0.777777;), the visual artifact appears:
Is there any way to avoid this problem? This happens with CGImage, CGLayer, and primitive forms such as a rectangle. It also happens on Mac OS X.
Thanks for the help!
edit: added that this also happens with a primitive form, like CGContextFillRect
edit2: it also happens on Mac OS X!
Quartz has a floating point coordinate system, so scaling may result in values that are not on pixel boundaries, resulting in visible antialiasing at the edges. If you don't want that, you have two options:
Adjust your scale factor so that all your scaled coordinates are integral. This may not always be possible, especially if you're drawing lots of things.
Disable anti-aliasing for your graphics context using CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), false);. This will give you crisp pixel boundaries, but anything other than straight lines might not look very good.
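For the isolated example above, option 2 would look something like this (a sketch, reusing ctx and testImage from the question):

CGContextSetShouldAntialias(ctx, false); // crisp tile edges, no blended seam
CGContextScaleCTM(ctx, 0.777777f, 0.777777f);
CGContextDrawImage(ctx, CGRectMake(50, 50, 100, 100), testImage);
CGContextDrawImage(ctx, CGRectMake(150, 50, 100, 100), testImage);
CGContextSetShouldAntialias(ctx, true);  // restore for any later drawing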
When all is said and done, iOS is dealing with discrete pixels on integer boundaries. When your frames are scaled by 0.7, the 50 becomes 35, right on a pixel boundary. At 0.777777 it does not, so iOS adapts and moves/shrinks/blends as needed.
You really have two choices. If you want to scale the context, then round the desired value up or down so that it results in integral scaled frame values (your code shows 50 as the standard multiplication value).
Otherwise, don't scale the context; scale the content piece by piece, and use CGRectIntegral to round all dimensions up or down as needed.
EDIT: If my suspicion is right, there is yet another option for you. Let's say you want a scale factor of 0.777777 and a frame of 50,50,100,100. You take the 50, multiply it by the scale, and round the result up or down to an integer. Then you recompute the frame using that value divided by 0.777777, giving a fractional value that, when scaled by 0.777777, yields an integer. Quartz is really good at figuring out that you mean an integral value, so small rounding errors are ignored. I'd bet anything this will work just fine for you.
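A hedged sketch of that computation (ctx and testImage as in the question):

float scale = 0.777777f;
CGFloat desired  = 50.0f;                   // the unscaled coordinate you wanted
CGFloat snapped  = roundf(desired * scale); // integral device-space value: 39.0
CGFloat adjusted = snapped / scale;         // user-space value that scales back to an integer
CGContextScaleCTM(ctx, scale, scale);
CGContextDrawImage(ctx, CGRectMake(adjusted, adjusted, 100, 100), testImage);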
Which is better to use? I prefer CGRect.size.width because it looks nicer, but my colleague says CGRectGetWidth is better.
CGRectGetWidth/Height will normalize the width or height before returning them. Normalization is basically just checking if the width or height is negative, and negating it to make it positive if so.
Answered here
A rect's width and height can be negative. I have no idea when this would be true in practice, but according to Apple docs:
CGGeometry Reference defines structures for geometric primitives and functions that operate on them. The data structure CGPoint represents a point in a two-dimensional coordinate system. The data structure CGRect represents the location and dimensions of a rectangle. The data structure CGSize represents the dimensions of width and height.
The height and width stored in a CGRect data structure can be negative. For example, a rectangle with an origin of [0.0, 0.0] and a size of [10.0, 10.0] is exactly equivalent to a rectangle with an origin of [10.0, 10.0] and a size of [-10.0, -10.0]. Your application can standardize a rectangle—that is, ensure that the height and width are stored as positive values—by calling the CGRectStandardize function. All functions described in this reference that take CGRect data structures as inputs implicitly standardize those rectangles before calculating their results. For this reason, your applications should avoid directly reading and writing the data stored in the CGRect data structure. Instead, use the functions described here to manipulate rectangles and to retrieve their characteristics.
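To make the difference concrete, a small illustration (nothing assumed beyond the documentation quoted above):

CGRect r = CGRectMake(10.0, 10.0, -10.0, -10.0); // same area as {0, 0, 10, 10}
CGFloat raw = r.size.width;          // -10.0: the raw stored value
CGFloat normalized = CGRectGetWidth(r); // 10.0: standardized before returning
CGRect std = CGRectStandardize(r);   // {{0, 0}, {10, 10}}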