I'm using a Metal render pass to snapshot my SceneKit scene via an SCNRenderer. This approach is faster than the UIImage-producing SCNRenderer.snapshot(), but the output of the two methods differs: my method produces a darker image. I suspect this is due to either a color-space difference or an alpha issue.
The image on the right shows my custom method, in which the color doesn't look right.
The color space reported for the UIImage produced by the standard method and for my own appears to be the same (kCGColorSpaceModelRGB; sRGB IEC61966-2.1), so I don't think that's the issue.
I'll share elements of the custom render code that I believe are relevant.
I configure the MTLRenderPassDescriptor as follows:
renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadAction.clear
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreAction.store
I then create a texture to render into. I create a CGContext with:
bitsPerComponent: 8
bitsPerPixel: 32
colorSpace: CGColorSpaceCreateDeviceRGB()
bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.noneSkipFirst.rawValue
fillColor: UIColor.clear.cgColor
This is an area I'm concerned about. I've tried other color spaces, CGBitmapInfo and CGImageAlphaInfo flags, and other fill colors. The fill color does have an effect on the output, but I do need transparency, so clear does feel correct.
I create the texture with MTLTextureDescriptor.texture2DDescriptor, using .rgba8Unorm as the pixel format and a usage of MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue).
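Roughly, that texture setup looks like the following sketch (device, width and height stand in for my actual Metal device and output size):
// Sketch of the texture setup described above; device/width/height are placeholders.
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
textureDescriptor.usage = MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue)
let texture = device.makeTexture(descriptor: textureDescriptor)!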
I then go on to hand my texture to the render pass descriptor and run a render command.
renderPassDescriptor.colorAttachments[0].texture = texture
let commandBuffer = commandQueue.makeCommandBuffer()!
renderer.render(atTime: time, viewport: viewport, commandBuffer: commandBuffer,
passDescriptor: renderPassDescriptor)
commandBuffer.commit()
In my normal pipeline, I go on here to create a CVPixelBuffer, but I introduced the creation of a CGImage to be able to more easily preview the image in the Xcode debugger. I do this using the following:
var data = Array<UInt8>(repeatElement(0, count: 4*mtlTexture.width*mtlTexture.height))
mtlTexture.getBytes(&data, bytesPerRow: 4*mtlTexture.width, from: MTLRegionMake2D(0, 0, mtlTexture.width, mtlTexture.height), mipmapLevel: 0)
let bitmapInfo = CGBitmapInfo(rawValue: (CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue))
let colorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: &data,
width: mtlTexture.width,
height: mtlTexture.height,
bitsPerComponent: 8,
bytesPerRow: 4*mtlTexture.width,
space: colorSpace,
bitmapInfo: bitmapInfo.rawValue)
return context?.makeImage()
And this CGImage (or the CVPixelBuffer) is where I first observe the darkened image. So I believe that either the initial Metal render pass is creating the color disparity, or I'm performing a wrong conversion at each subsequent format change.
An issue that is perhaps related can be found here:
https://github.com/MetalPetal/MetalPetal/issues/76
That issue seems to involve a render view, and I don't use an SCNView or anything called a renderView. I have an SCNRenderer and I turn snapshots into images to write to video buffers, but the color issue presents itself earlier than those steps. The post does mention that the render view should use the bgra8Unorm_srgb format, so I wonder whether that should be introduced into my pipeline, but I can't work out where it belongs. Changing the pixelFormat from rgba8Unorm to bgra8Unorm_srgb in my MTLTextureDescriptor doesn't seem to make any difference.
Does this effect look familiar to anyone, or can anyone shed light on this?
It should work if you choose CGColorSpaceCreateWithName(kCGColorSpaceSRGB) for the bitmap context and MTLPixelFormatRGBA8Unorm_sRGB for the texture format.
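In Swift, that amounts to roughly the following sketch (width and height are placeholders):
// Use an sRGB color space for the bitmap context...
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)!
// ...and an sRGB pixel format for the Metal texture
// (.rgba8Unorm_srgb is the Swift spelling of MTLPixelFormatRGBA8Unorm_sRGB).
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
textureDescriptor.usage = [.renderTarget, .shaderRead]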
I'm building an application for easily scanning to multi-page PDF files. The project is on GitHub, in case you want to look at the full project code.
I'm having an issue with scanning in black & white.
This is the method that gets called when I press the button to start scanning.
- (IBAction)scan:(id)sender {
//Get the selected scanner and its functional unit
ICScannerDevice *scanner = [self selectedScanner];
ICScannerFunctionalUnit *unit = [scanner selectedFunctionalUnit];
//If there is no scan or overviewscan in progress
if (![unit overviewScanInProgress] && ![unit scanInProgress]) {
//Setup the functional unit and start the scan
[unit setScanArea:[self scanArea]];
[unit setResolution:[[unit supportedResolutions] indexGreaterThanOrEqualToIndex:[[resolutionPopUpButton selectedItem] tag]]];
[unit setBitDepth:ICScannerBitDepth8Bits];
[unit setMeasurementUnit:ICScannerMeasurementUnitCentimeters];
[unit setThresholdForBlackAndWhiteScanning:0];
[unit setUsesThresholdForBlackAndWhiteScanning:YES];
[unit setPixelDataType:[kindSegmentedControl selectedSegment]];
[scanner requestScan];
} else {
//Cancel the ongoing scan
[scanner cancelScan];
}
}
I'm setting the pixelDataType to an integer that I get from an NSSegmentedControl. When the first segment is selected this will return 0, which is the same as ICScannerPixelDataTypeBW.
However, despite everything working fine when the second and the third segment are selected (which are ICScannerPixelDataTypeGray and ICScannerPixelDataTypeRGB), the scanner does nothing when set to scan black & white.
There is very little documentation available for the scanning part of ImageCaptureCore. I found the properties describing a threshold for black & white scanning on this website, but none of them worked for me.
I know this is a part of the ImageCaptureCore API that doesn't get used by many people very often, but I really hope someone knows, or at least can find out, a solution to my problem.
Edit:
I added - (void)device:(ICDevice *)device didEncounterError:(NSError *)error to my implementation and logged the error, which is:
2014-02-01 21:55:16.260 Scanner[4131:903] Error Domain=com.apple.ImageCaptureCore Code=-9933 UserInfo=0x1005763f0 "An error occurred during scanning."
With a little hint (intentional or not) from the other answer, I figured things out by myself.
What you have to do is set bitDepth to ICScannerBitDepth1Bit, since what you're trying to scan is a 1-bit-per-pixel image.
This in turn disables scanning in grayscale or RGB.
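For illustration, here's a hedged sketch of the change (in Swift, assuming the usual bridging of the ImageCaptureCore constants; in Objective-C it's just swapping the constant in the question's setBitDepth: call). The wantsBlackAndWhite flag is hypothetical:
let wantsBlackAndWhite = true              // hypothetical flag, e.g. derived from the segmented control
if wantsBlackAndWhite {
    unit.bitDepth = .depth1Bit             // 1 bit per pixel, required for ICScannerPixelDataTypeBW
} else {
    unit.bitDepth = .depth8Bits            // grayscale and RGB scans
}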
Since I can't award bounties to my own questions, I will award the bounty to the question I got the hint from.
You can find the ImageCaptureCore error codes in ICCommonConstants.h.
The error you mention is
ICReturnReceivedUnsolicitedScannerErrorInfo = -9933
Maybe your scanner simply does not support scanning in Black & White? Since Grayscale works, do you even want to use BW?
From the headers:
/*!
@enum ICScannerPixelDataType
@abstract Pixel data types. Corresponds to "ICAP_PIXELTYPE" of the TWAIN Specification.
@constant ICScannerPixelDataTypeBW Monochrome 1 bit pixel image.
@constant ICScannerPixelDataTypeGray 8 bit pixel Gray color space.
@constant ICScannerPixelDataTypeRGB Color image RGB color space.
@constant ICScannerPixelDataTypePalette Indexed Color image.
@constant ICScannerPixelDataTypeCMY Color image in CMY color space.
@constant ICScannerPixelDataTypeCMYK Color image in CMYK color space.
@constant ICScannerPixelDataTypeYUV Color image in YUV color space.
@constant ICScannerPixelDataTypeYUVK Color image in YUVK color space.
@constant ICScannerPixelDataTypeCIEXYZ Color image in CIEXYZ color space.
*/
enum
{
ICScannerPixelDataTypeBW = 0,
ICScannerPixelDataTypeGray = 1,
ICScannerPixelDataTypeRGB = 2,
ICScannerPixelDataTypePalette = 3,
ICScannerPixelDataTypeCMY = 4,
ICScannerPixelDataTypeCMYK = 5,
ICScannerPixelDataTypeYUV = 6,
ICScannerPixelDataTypeYUVK = 7,
ICScannerPixelDataTypeCIEXYZ = 8
};
typedef NSUInteger ICScannerPixelDataType;
Do you really want to support a 1-bit-per-pixel image?
I'm new to OpenGL ES and can't figure out how to change the alpha/opacity of a texture loaded with GLKTextureLoader.
Right now I just draw the texture with the following code.
self.texture.effect.texture2d0.enabled = YES;
self.texture.effect.texture2d0.name = self.texture.textureInfo.name;
self.texture.effect.transform.modelviewMatrix = [self modelMatrix];
[self.texture.effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
NKTexturedQuad _quad = self.texture.quad;
long offset = (long)&_quad;
glVertexAttribPointer(GLKVertexAttribPosition,
2,
GL_FLOAT,
GL_FALSE,
sizeof(NKTexturedVertex),
(void *)(offset + offsetof(NKTexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0,
2,
GL_FLOAT, GL_FALSE,
sizeof(NKTexturedVertex),
(void *)(offset + offsetof(NKTexturedVertex, textureVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any advice would be very helpful :)
I'm no GL expert, but drawing with a changed alpha value does not seem to work as described by rickster.
As far as I understand, the values passed to glBlendColor are only used with the GL_CONSTANT_… blend factors in glBlendFunc.
This overrides the texture's alpha values and draws with a constant alpha instead:
glEnable(GL_BLEND);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, yourAlphaValue);
glDraw... // the draw operations
Further reference can be found here: http://www.opengl.org/wiki/Blending#Blend_Color
As long as you're in the OpenGL ES 1.1 world (or the emulated-1.1 world of GLKBaseEffect), alpha is a property either of the (per-pixel) bitmap data in the texture or of the (complete) OpenGL ES state you're drawing with. You can't set an opacity level for a texture as a whole, on its own. So, you have three options:
Change the alpha of the texture. This means changing the texture bitmap data itself: use the 2D image context of your choice to draw the image at half (or whatever) alpha, and read the resulting image into an OpenGL ES texture (see the sketch after this answer). Probably not a great idea unless the alpha you want will be constant for the life of your app, in which case you might as well go back to Photoshop (or whatever you're using to create your image assets) and set the alpha there.
Change the alpha you're drawing with. After you prepareToDraw, set up blending in GL:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, myDesiredAlphaValue);
glDraw... // whatever you're drawing
Don't forget to A) draw your partially transparent content after any content you want it blended on top of and B) disable blending before rendering opaque content again on the next frame.
Ditch GLKBaseEffect and write your own shaders. Shaders that work like the 1.1 fixed-function pipeline are a dime a dozen -- you can even get started by using the shaders packaged with the Xcode "OpenGL Game" project template or looking at the shaders GLKit writes in the Xcode Frame Capture tool. Once you have such shaders, changing the alpha of a color you got out of a texel lookup is a simple operation:
vec4 color = texture2D(texUnit, texCoord);
color.a = myDesiredAlphaValue;
gl_FragColor = color;
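For option 1 above, here's a minimal sketch (written in Swift, though the question is Objective-C) of baking a constant alpha into the bitmap before upload; sourceImage and alpha are placeholders and error handling is left out:
import GLKit
import UIKit

func makeTexture(from sourceImage: UIImage, alpha: CGFloat) throws -> GLKTextureInfo {
    // Redraw the image into a bitmap at the desired opacity...
    let renderer = UIGraphicsImageRenderer(size: sourceImage.size)
    let faded = renderer.image { _ in
        sourceImage.draw(at: .zero, blendMode: .normal, alpha: alpha)
    }
    // ...then load the result with GLKTextureLoader as usual.
    return try GLKTextureLoader.texture(with: faded.cgImage!, options: nil)
}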
I need to build an iPhone app that can look for an image pattern inside an image (something like this).
After much googling, I feel the only option I have is to use the template matching function in OpenCV, which has been ported to Objective-C.
I found an excellent starting point for a simple OpenCV project in Objective-C in this GitHub code.
But it only uses the edge detection and face detection features of OpenCV. I need an Objective-C example that uses the template matching function, cvMatchTemplate, on iPhone.
Below is the code I have at the moment. At least it isn't giving me an error, but it returns a completely black image; I was expecting a result image where the matched area would be brighter.
IplImage *imgTemplate = [self CreateIplImageFromUIImage:[UIImage imageNamed:@"laughing_man.png"]];
IplImage *imgSource = [self CreateIplImageFromUIImage:imageView.image];
CvSize sizeTemplate = cvGetSize(imgTemplate);
CvSize sizeSrc = cvGetSize(imgSource);
CvSize sizeResult = cvSize(sizeSrc.width - sizeTemplate.width+1, sizeSrc.height-sizeTemplate.height + 1);
IplImage *imgResult = cvCreateImage(sizeResult, IPL_DEPTH_32F, 1);
cvMatchTemplate(imgSource, imgTemplate, imgResult, CV_TM_CCORR_NORMED);
cvReleaseImage(&imgSource);
cvReleaseImage(&imgTemplate);
imageView.image = [self UIImageFromIplImage:imgResult];
cvReleaseImage(&imgResult);
P.S. Or should I try to recognize the object using cvHaarDetectObjects?
The result from cvMatchTemplate is a 32-bit floating point image. In order to display the result, you'll need to convert that to an unsigned char, 8-bit image (IPL_DEPTH_8U).
The CV_TM_CCORR_NORMED method produces values in the range [0, 1], and cvConvertScale provides an easy way to do the scaling and type conversion. Try adding the following to your code:
IplImage* displayImgResult = cvCreateImage( cvGetSize( imgResult ), IPL_DEPTH_8U, 1);
cvConvertScale( imgResult, displayImgResult, 255, 0 );
imageView.image = [self UIImageFromIplImage:displayImgResult];
I'm using an NSMutableAttributedString to build a string with formatting, which I then pass to Core Text to render into a frame. The problem is that I need superscript and subscript. Unless those glyph variants are available in the font (most fonts don't include them), setting the kCTSuperscriptAttributeName attribute does nothing at all.
So I guess I'm left with the only option, which is to fake it by changing the font size and moving the baseline. I can do the font-size part, but I don't know the code for altering the baseline. Can anyone help, please?
Thanks!
EDIT: I'm thinking, considering the amount of time I have available to sort this problem, of editing a font so that it's given a subscript "2"... Either that or finding a built-in iPad font which does. Does anyone know of any serif font with a subscript "2" I can use?
There is no baseline setting amongst the CTParagraphStyleSpecifiers or the defined string attribute name constants. I think it's therefore safe to conclude that CoreText does not itself support a baseline adjust property on text. There's a reference made to baseline placement in CTTypesetter, but I can't tie that to any ability to vary the baseline over the course of a line in the iPad's CoreText.
Hence, you probably need to interfere in the rendering process yourself. For example:
create a CTFramesetter, e.g. via CTFramesetterCreateWithAttributedString
get a CTFrame from that via CTFramesetterCreateFrame
use CTFrameGetLineOrigins and CTFrameGetLines to get an array of CTLines and where they should be drawn (ie, the text with suitable paragraph/line breaks and all your other kerning/leading/other positioning text attributes applied)
from those, for lines with no superscript or subscript, just use CTLineDraw and forget about it
for those with superscript or subscript, use CTLineGetGlyphRuns to get an array of CTRun objects describing the various glyphs on the line
on each run, use CTRunGetStringIndices to determine which source characters are in the run; if none that you want to superscript or subscript are included, just use CTRunDraw to draw the thing
otherwise, use CTRunGetGlyphs to break the run into individual glyphs and CTRunGetPositions to figure out where they would be drawn in the normal run of things
use CGContextShowGlyphsAtPoint as appropriate, having tweaked the text matrix for those you want in superscript or subscript
I haven't yet found a way to query whether a font has the relevant hints for automatic superscript/subscript generation, which makes things a bit tricky. If you're desperate and don't have a solution to that, it's probably easier just not to use CoreText's stuff at all — in which case you should probably define your own attribute (that's why [NS/CF]AttributedString allow arbitrary attributes to be applied, identified by string name) and use the normal NSString searching methods to identify regions that need to be printed in superscript or subscript from blind.
For performance reasons, binary search is probably the way to go on searching all lines, the runs within a line and the glyphs within a run for those you're interested in. Assuming you have a custom UIView subclass to draw CoreText content, it's probably smarter to do it ahead of time rather than upon every drawRect: (or the equivalent methods, if e.g. you're using a CATiledLayer).
Also, the CTRun methods have variants that request a pointer to a C array containing the things you're asking for copies of, possibly saving you a copy operation but not necessarily succeeding. Check the documentation. I've just made sure that I'm sketching a workable solution rather than necessarily plotting the absolutely optimal route through the CoreText API.
Here is some code based on Tommy's outline that does the job quite well (tested only on single lines, though). Set the baseline on your attributed string with @"MDBaselineAdjust", and this code draws the line to offset, a CGPoint. To get superscript, also lower the font size a notch. Preview of what's possible: http://cloud.mochidev.com/IfPF (the line that reads "[Xe] 4f14...")
Hope this helps :)
NSAttributedString *string = ...;
CGPoint origin = ...;
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)string);
CGSize suggestedSize = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, CFRangeMake(0, string.length), NULL, CGSizeMake(CGFLOAT_MAX, CGFLOAT_MAX), NULL);
CGPathRef path = CGPathCreateWithRect(CGRectMake(origin.x, origin.y, suggestedSize.width, suggestedSize.height), NULL);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, string.length), path, NULL);
NSArray *lines = (NSArray *)CTFrameGetLines(frame);
if (lines.count) {
CGPoint *lineOrigins = malloc(lines.count * sizeof(CGPoint));
CTFrameGetLineOrigins(frame, CFRangeMake(0, lines.count), lineOrigins);
int i = 0;
for (id aLine in lines) {
NSArray *glyphRuns = (NSArray *)CTLineGetGlyphRuns((CTLineRef)aLine);
CGFloat width = origin.x+lineOrigins[i].x-lineOrigins[0].x;
for (id run in glyphRuns) {
CFRange range = CTRunGetStringRange((CTRunRef)run);
NSDictionary *dict = [string attributesAtIndex:range.location effectiveRange:NULL];
CGFloat baselineAdjust = [[dict objectForKey:@"MDBaselineAdjust"] doubleValue];
CGContextSetTextPosition(context, width, origin.y+baselineAdjust);
CTRunDraw((CTRunRef)run, context, CFRangeMake(0, 0));
}
i++;
}
free(lineOrigins);
}
CFRelease(frame);
CGPathRelease(path);
CFRelease(framesetter);
You can now mimic subscripts using TextKit in iOS 7. Example:
NSMutableAttributedString *carbonDioxide = [[NSMutableAttributedString alloc] initWithString:@"CO2"];
[carbonDioxide addAttribute:NSFontAttributeName value:[UIFont systemFontOfSize:8] range:NSMakeRange(2, 1)];
[carbonDioxide addAttribute:NSBaselineOffsetAttributeName value:@(-2) range:NSMakeRange(2, 1)];
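For reference, the same thing in Swift (the font size and offset are arbitrary):
let carbonDioxide = NSMutableAttributedString(string: "CO2")
let subscriptRange = NSRange(location: 2, length: 1)
carbonDioxide.addAttribute(.font, value: UIFont.systemFont(ofSize: 8), range: subscriptRange)
carbonDioxide.addAttribute(.baselineOffset, value: -2, range: subscriptRange)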
I've been having trouble with this myself. Apple's Core Text documentation claims that there has been support in iOS since version 3.2, but for some reason it still just doesn't work. Even in iOS 5... how very frustrating >.<
I managed to find a workaround if you only really care about superscript or subscript numbers. Say you have a block of text that might contain a "sub2" tag where you want a subscript number 2. Use NSRegularExpression to find the tags, and then use the replacementStringForResult: method on your regex object to replace each tag with unicode characters:
if ([match isEqualToString:@"<sub2/>"])
{
replacement = @"₂";
}
If you use the OSX character viewer, you can drop unicode characters right into your code. There's a set of characters in there called "Digits" which has all the superscript and subscript number characters. Just leave your cursor at the appropriate spot in your code window and double-click in the character viewer to insert the character you want.
With the right font, you could probably do this with any letter as well, but the character map only has a handful of non-numbers available for this that I've seen.
Alternatively you can just put the unicode characters in your source content, but in a lot of cases (like mine), that isn't possible.
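If you need this for arbitrary digits, here's a small Swift sketch of the same substitution idea using the Unicode subscript digits (U+2080–U+2089); the helper name is mine:
// Replace ordinary digit characters with their Unicode subscript equivalents; other characters pass through.
func withSubscriptDigits(_ text: String) -> String {
    return String(text.map { character -> Character in
        guard let digit = character.wholeNumberValue, (0...9).contains(digit),
              let scalar = Unicode.Scalar(UInt32(0x2080 + digit)) else { return character }
        return Character(scalar)
    })
}

// withSubscriptDigits("CO2") == "CO₂"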
Swift 4
Very loosely based on Graham Perks' answer. I couldn't make his code work as-is, but after three hours of work I've created something that works great! If you'd prefer a full implementation of this along with a bunch of nifty other performance and feature add-ons (links, async drawing, etc.), check out my single-file library DYLabel. If not, read on.
I explain everything I'm doing in the comments. This is the draw method, to be called from drawRect:
/// Draw text on a given context. Supports superscript using NSBaselineOffsetAttributeName
///
/// This method works by drawing the text backwards (i.e. last line first). This is very very important because it's how we ensure superscripts don't overlap the text above it. In other words, we need to start from the bottom, get the height of the text we just drew, and then draw the next text above it. This could be done in a forward direction but you'd have to use lookahead which IMO is more work.
///
/// If you have to modify this, remember that CT uses a mathematical origin (i.e. 0,0 is bottom left, like a Cartesian plane)
/// - Parameters:
/// - context: A core graphics draw context
/// - attributedText: An attributed string
func drawText(context:CGContext, attributedText: NSAttributedString) {
//Create our CT boiler plate
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = bounds
let path = CGPath(rect: textRect, transform: nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
//Fetch our lines, bridging to swift from CFArray
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
//Get the line origin coordinates. These are used for calculating stock line height (w/o baseline modifications)
var lineOrigins = [CGPoint](repeating: CGPoint.zero, count: lineCount)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
//Since we're starting from the bottom of the container we need to get our bottom offset/padding (so text isn't slammed to the bottom or cut off)
var ascent:CGFloat = 0
var descent:CGFloat = 0
var leading:CGFloat = 0
if lineCount > 0 {
CTLineGetTypographicBounds(lines.last as! CTLine, &ascent, &descent, &leading)
}
//This variable holds the current draw position, relative to CT origin of the bottom left
//https://stackoverflow.com/a/27631737/1166266
var drawYPositionFromOrigin:CGFloat = descent
//Again, draw the lines in reverse so we don't need lookahead
for lineIndex in (0..<lineCount).reversed() {
//Calculate the current line height so we can accurately move the position up later
let lastLinePosition = lineIndex > 0 ? lineOrigins[lineIndex - 1].y: textRect.height
let currentLineHeight = lastLinePosition - lineOrigins[lineIndex].y
//Throughout the loop below this variable will be updated to the tallest value for the current line
var maxLineHeight:CGFloat = currentLineHeight
//Grab the current run glyph. This is used for attributed string interop
let glyphRuns = CTLineGetGlyphRuns(lines[lineIndex] as! CTLine) as [AnyObject]
for run in glyphRuns {
let run = run as! CTRun
//Convert the format range to something we can match to our string
let runRange = CTRunGetStringRange(run)
let attributesAtPosition = attributedText.attributes(at: runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attributesAtPosition[NSAttributedStringKey.baselineOffset] as? NSNumber {
//We have a baseline offset!
baselineAdjustment = CGFloat(adjust.floatValue)
}
//Check if this glyph run is tallest, and move it if it is
maxLineHeight = max(currentLineHeight + baselineAdjustment, maxLineHeight)
//Move the draw head. Note that we're drawing from the not-yet-updated drawYPositionFromOrigin. This is again thanks to CT's Cartesian plane, where we draw from the bottom left of the text too.
context.textPosition = CGPoint.init(x: lineOrigins[lineIndex].x, y: drawYPositionFromOrigin)
//Draw!
CTRunDraw(run, context, CFRangeMake(0, 0))
}
//Move our position because we've completed the drawing of the line which is at most `maxLineHeight`
drawYPositionFromOrigin += maxLineHeight
}
}
I also made a method which calculates the required height of the text given a width. It's exactly the same code except it doesn't draw anything.
/// Calculate the height if it were drawn using `drawText`
/// Uses the same code as drawText except it doesn't draw.
///
/// - Parameters:
/// - attributedText: The text to calculate the height of
/// - width: The constraining width
/// - estimationHeight: Optional parameter, default 30,000 px. This is the container height used to lay out the text. DO NOT USE CGFLOAT_MAX, AS CORE TEXT CANNOT CREATE A FRAME OF THAT SIZE.
/// - Returns: The size required to fit the text
static func size(of attributedText:NSAttributedString,width:CGFloat, estimationHeight:CGFloat?=30000) -> CGSize {
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = CGRect.init(x: 0, y: 0, width: width, height: estimationHeight!)
let path = CGPath(rect: textRect, transform: nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
//Fetch our lines, bridging to swift from CFArray
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
//Get the line origin coordinates. These are used for calculating stock line height (w/o baseline modifications)
var lineOrigins = [CGPoint](repeating: CGPoint.zero, count: lineCount)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
//Since we're starting from the bottom of the container we need to get our bottom offset/padding (so text isn't slammed to the bottom or cut off)
var ascent:CGFloat = 0
var descent:CGFloat = 0
var leading:CGFloat = 0
if lineCount > 0 {
CTLineGetTypographicBounds(lines.last as! CTLine, &ascent, &descent, &leading)
}
//This variable holds the current draw position, relative to CT origin of the bottom left
var drawYPositionFromOrigin:CGFloat = descent
//Again, draw the lines in reverse so we don't need lookahead
for lineIndex in (0..<lineCount).reversed() {
//Calculate the current line height so we can accurately move the position up later
let lastLinePosition = lineIndex > 0 ? lineOrigins[lineIndex - 1].y: textRect.height
let currentLineHeight = lastLinePosition - lineOrigins[lineIndex].y
//Throughout the loop below this variable will be updated to the tallest value for the current line
var maxLineHeight:CGFloat = currentLineHeight
//Grab the current run glyph. This is used for attributed string interop
let glyphRuns = CTLineGetGlyphRuns(lines[lineIndex] as! CTLine) as [AnyObject]
for run in glyphRuns {
let run = run as! CTRun
//Convert the format range to something we can match to our string
let runRange = CTRunGetStringRange(run)
let attributesAtPosition = attributedText.attributes(at: runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attributesAtPosition[NSAttributedStringKey.baselineOffset] as? NSNumber {
//We have a baseline offset!
baselineAdjustment = CGFloat(adjust.floatValue)
}
//Check if this glyph run is tallest, and move it if it is
maxLineHeight = max(currentLineHeight + baselineAdjustment, maxLineHeight)
//Skip drawing since this is a height calculation
}
//Move our position because we've completed the drawing of the line which is at most `maxLineHeight`
drawYPositionFromOrigin += maxLineHeight
}
return CGSize.init(width: width, height: drawYPositionFromOrigin)
}
Like everything I write, I also did some benchmarks against some public libraries and system functions (even though they won't work here). I used a huge, complex string here to keep anyone from taking unfair shortcuts.
---HEIGHT CALCULATION---
Runtime for 1000 iterations (ms) BoundsForRect: 5415.030002593994
Runtime for 1000 iterations (ms) layoutManager: 5370.990991592407
Runtime for 1000 iterations (ms) CTFramesetterSuggestFrameSizeWithConstraints: 2372.151017189026
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame ObjC: 2300.302028656006
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame-Swift: 2313.6669397354126
Runtime for 1000 iterations (ms) THIS ANSWER size(of:): 2566.351056098938
---RENDER---
Runtime for 1000 iterations (ms) AttributedLabel: 35.032033920288086
Runtime for 1000 iterations (ms) UILabel: 45.948028564453125
Runtime for 1000 iterations (ms) TTTAttributedLabel: 301.1329174041748
Runtime for 1000 iterations (ms) THIS ANSWER: 20.398974418640137
So summary time: we did very well! size(of...) is nearly equal to stock CT layout which means that our addon for superscript is fairly cheap despite using a hash table lookup. We do, however, flat out win on draw calls. I suspect that this is due to the very expensive 30k pixel estimation frame we have to create. If we make a better estimate performance will be better. I've already been working for about three hours so I'm calling it quits and leaving that as an exercise to the reader.
I struggled with this problem as well. It turns out, as some of the posters above suggested, that none of the fonts that come with iOS support superscripting or subscripting. My solution was to purchase and install two custom superscript and subscript fonts (they were $9.99 each; here's a link to the site: http://superscriptfont.com/).
Not really that hard to do. Just add the font files as resources and add Info.plist entries under "Fonts provided by application".
The next step was to search for the appropriate tags in my NSAttributedString, remove the tags and apply the font to the text.
Works great!
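A hedged sketch of that last step in Swift (the font name and the tagged range are hypothetical placeholders; the real name comes from whichever font you installed):
// Hypothetical: apply the purchased subscript font to the range your tag search found.
let text = NSMutableAttributedString(string: "CO2")
let taggedRange = NSRange(location: 2, length: 1)                         // placeholder range
if let subscriptFont = UIFont(name: "YourSubscriptFontName", size: 17) {  // placeholder font name
    text.addAttribute(.font, value: subscriptFont, range: taggedRange)
}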
A Swift 2 twist on Dimitri's answer; it effectively implements NSBaselineOffsetAttributeName.
I was working in a UIView, so I had a reasonable bounds rect to use; his answer calculated its own rect.
func drawText(context context:CGContextRef, attributedText: NSAttributedString) {
// All this CoreText iteration just to add support for superscripting.
// NSBaselineOffsetAttributeName isn't supported by CoreText. So we manually iterate through
// all the text ranges, rendering each, and offsetting the baseline where needed.
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = CGRectOffset(bounds, 0, 0)
let path = CGPathCreateWithRect(textRect, nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
// All the lines of text we'll render...
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
// And their origin coordinates...
var lineOrigins = [CGPoint](count: lineCount, repeatedValue: CGPointZero)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
for lineIndex in 0..<lineCount {
let lineObject = lines[lineIndex]
// Each run of glyphs we'll render...
let glyphRuns = CTLineGetGlyphRuns(lineObject as! CTLine) as [AnyObject]
for r in glyphRuns {
let run = r as! CTRun
let runRange = CTRunGetStringRange(run)
// What attributes are in the NSAttributedString here? If we find NSBaselineOffsetAttributeName,
// adjust the baseline.
let attrs = attributedText.attributesAtIndex(runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attrs[NSBaselineOffsetAttributeName as String] as? NSNumber {
baselineAdjustment = CGFloat(adjust.floatValue)
}
CGContextSetTextPosition(context, lineOrigins[lineIndex].x, lineOrigins[lineIndex].y - 25 + baselineAdjustment)
CTRunDraw(run, context, CFRangeMake(0, 0))
}
}
}
With iOS 11, Apple introduced a new string attribute name, kCTBaselineOffsetAttributeName, which works with Core Text.
Note that the offset direction is different from NSBaselineOffsetAttributeName used with NSAttributedStrings on UILabels etc (a positive offset moves the baseline downwards).
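A minimal sketch of applying it from Swift (the offset value is arbitrary; mind the sign convention noted above):
import CoreText
import UIKit

let key = NSAttributedString.Key(kCTBaselineOffsetAttributeName as String)
let text = NSMutableAttributedString(string: "CO2")
text.addAttribute(key, value: 2.0, range: NSRange(location: 2, length: 1))  // offset in points
// Draw `text` with Core Text (CTFramesetter / CTLineDraw) for this attribute to take effect;
// UILabel and friends still expect NSBaselineOffsetAttributeName.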