Preserve line width while scaling all points in the context with CGAffineTransform - core-graphics

I have a CGPath in some coordinate system that I'd like to draw. Doing so involves scaling the old coordinate system onto the context's. For that purpose I use CGContextConcatCTM(), which transforms all the points as it should. But because it is a scaling operation, the horizontal/vertical line widths get changed too. E.g. a scale of 10 in the x-direction but of 1 in the y-direction would make vertical lines ten times as thick as horizontal ones. Is there a way to keep the ease of use of transformation matrices (e.g. CGAffineTransform) without scaling line widths at the same time, e.g. a function like CGPathApplyAffineTransformToPoints?
Cheers
MrMage

Do the transform when you add the path, but then remove the transform before you stroke the path. Instead of this:
CGContextSaveGState(ctx);
CGContextScaleCTM(ctx, 10, 10); // scale path 10x
CGContextAddPath(ctx, somePath);
CGContextSetStrokeColorWithColor(ctx, someColor);
CGContextSetLineWidth(ctx, someWidth); // uh-oh, line width is 10x, too
CGContextStrokePath(ctx);
CGContextRestoreGState(ctx); // back to normal
Do this:
CGContextSaveGState(ctx);
CGContextScaleCTM(ctx, 10, 10); // scale path 10x
CGContextAddPath(ctx, somePath);
CGContextRestoreGState(ctx); // back to normal
CGContextSetStrokeColorWithColor(ctx, someColor);
CGContextSetLineWidth(ctx, someWidth);
CGContextStrokePath(ctx);

You can use CGPathApply to iterate through the elements in a path. It's a little more complex than a one-liner, but if you package it up in a simple helper function it might be useful for you. Here is one version that creates a new, transformed path:
typedef struct {
    CGMutablePathRef path;
    CGAffineTransform transform;
} PathTransformInfo;

static void
PathTransformer(void *info, const CGPathElement *element)
{
    PathTransformInfo *transformerInfo = info;
    switch (element->type) {
        case kCGPathElementMoveToPoint:
            CGPathMoveToPoint(transformerInfo->path, &transformerInfo->transform,
                              element->points[0].x, element->points[0].y);
            break;
        case kCGPathElementAddLineToPoint:
            CGPathAddLineToPoint(transformerInfo->path, &transformerInfo->transform,
                                 element->points[0].x, element->points[0].y);
            break;
        case kCGPathElementAddQuadCurveToPoint:
            CGPathAddQuadCurveToPoint(transformerInfo->path, &transformerInfo->transform,
                                      element->points[0].x, element->points[0].y,
                                      element->points[1].x, element->points[1].y);
            break;
        case kCGPathElementAddCurveToPoint:
            CGPathAddCurveToPoint(transformerInfo->path, &transformerInfo->transform,
                                  element->points[0].x, element->points[0].y,
                                  element->points[1].x, element->points[1].y,
                                  element->points[2].x, element->points[2].y);
            break;
        case kCGPathElementCloseSubpath:
            CGPathCloseSubpath(transformerInfo->path);
            break;
    }
}
To use it you would do (this is the part I would put inside a helper function):
PathTransformInfo info;
info.path = CGPathCreateMutable();
info.transform = CGAffineTransformMakeScale(2, 1);
CGPathApply(originalPath, &info, PathTransformer);
The transformed path is in info.path at this point.
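For reference: if you can target iOS 5 / OS X 10.7 or later, Core Graphics can do the same thing in one call, so the helper above is mainly needed on older systems. A brief sketch (originalPath as above):
CGAffineTransform transform = CGAffineTransformMakeScale(2, 1);
CGPathRef transformedPath = CGPathCreateCopyByTransformingPath(originalPath, &transform);
// ...add transformedPath to the context and stroke it with an unscaled CTM...
CGPathRelease(transformedPath);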

Related

Cesium: Having the camera in an entity's first person view

I would like to have my camera follow the first-person view of a moving entity. I do not believe that trackedEntity will work for this use case because I don't want to look at the entity, but I want to look out from it. I would also like the user to be able to use the mouse to turn the camera with respect to the moving entity (for example, to look out the left window of a moving plane).
In a traditional game engine, I would do this by attaching the camera to the entity so that it moves with it, while retaining its own local transform relative to the entity so that it is free to turn.
The only way I can think of right now is to keep track of the "user-controlled" transform separately and multiply it with the entity transform at every clock tick. Is there a better way?
Have a look at Cesium's Cardboard Sandcastle example. There you are on board a hot-air balloon and perceive the world from there. After scrolling out, you can pan with the mouse to look around. Since the calculations are quite complicated, I cannot give any details of how it works, but it seems that the camera view is aligned to the moving direction of the entity. The essential part of the script is:
// Set initial camera position and orientation to be in the model's reference frame.
var camera = viewer.camera;
camera.position = new Cesium.Cartesian3(0.25, 0.0, 0.0);
camera.direction = new Cesium.Cartesian3(1.0, 0.0, 0.0);
camera.up = new Cesium.Cartesian3(0.0, 0.0, 1.0);
camera.right = new Cesium.Cartesian3(0.0, -1.0, 0.0);

viewer.scene.postUpdate.addEventListener(function (scene, time) {
    var position = entity.position.getValue(time);
    if (!Cesium.defined(position)) {
        return;
    }
    var transform;
    if (!Cesium.defined(entity.orientation)) {
        transform = Cesium.Transforms.eastNorthUpToFixedFrame(position);
    } else {
        var orientation = entity.orientation.getValue(time);
        if (!Cesium.defined(orientation)) {
            return;
        }
        transform = Cesium.Matrix4.fromRotationTranslation(
            Cesium.Matrix3.fromQuaternion(orientation),
            position
        );
    }
    // Save camera state
    var offset = Cesium.Cartesian3.clone(camera.position);
    var direction = Cesium.Cartesian3.clone(camera.direction);
    var up = Cesium.Cartesian3.clone(camera.up);
    // Set camera to be in model's reference frame.
    camera.lookAtTransform(transform);
    // Reset the camera state to the saved state so it appears fixed in the model's frame.
    Cesium.Cartesian3.clone(offset, camera.position);
    Cesium.Cartesian3.clone(direction, camera.direction);
    Cesium.Cartesian3.clone(up, camera.up);
    Cesium.Cartesian3.cross(direction, up, camera.right);
});
Maybe you can modify the camera vectors, or multiply the transform by another rotation matrix, to simulate turning one's head (to look left/right/back) while keeping the initial perspective. For instance, you could combine the example above with code from the Cesium First Person Camera Controller repository.
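A hypothetical sketch of the "multiply the transform by another rotation matrix" idea, meant to replace the camera.lookAtTransform(transform) call inside the postUpdate listener above. headYaw would come from your own input handling, and whether the local Z axis is the right one to turn around depends on the entity's frame, so treat this as a starting point, not a finished recipe:
// Turn the head by rotating the entity's reference frame about its local Z axis.
var headYaw = Cesium.Math.toRadians(90.0); // e.g. look out the left window
var headRotation = Cesium.Matrix4.fromRotationTranslation(
    Cesium.Matrix3.fromRotationZ(headYaw),
    Cesium.Cartesian3.ZERO
);
var headTransform = Cesium.Matrix4.multiply(transform, headRotation, new Cesium.Matrix4());
camera.lookAtTransform(headTransform);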
Had to figure this out myself as well.
Camera.setView and self-defined utility functions are your friend.
E.g. here is a naive implementation of rotation (it does not work well when the camera pitch is too close to a bird's-eye view):
Cesium.Camera.prototype.rotateView = function(rotation) {
    let { heading, pitch, roll } = rotation;
    heading = this.heading + (heading || 0);
    pitch = this.pitch + (pitch || 0);
    roll = this.roll + (roll || 0);
    const destination = this.position;
    this.setView({
        destination,
        orientation: {
            heading,
            pitch,
            roll
        }
    });
};
Similarly, you can keep the camera moving with the entity by passing the entity's position as destination.
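For example, a usage sketch of the extension above, called from your own input handling:
// Turn the camera 5 degrees to the left of its current heading, keeping its position.
viewer.camera.rotateView({ heading: Cesium.Math.toRadians(-5) });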

GLSL 3.2 mapping shader arguments

I'm trying to create a high-level Objective-C OpenGL shader wrapper that allows me to execute various GL shaders without a lot of GL code that clutters the application logic.
So something like this for a shader with two 'in' arguments to create a quad with a different color in every corner:
OpenGLShader* theShader = [OpenGLShaderManager shaderWithName:@"MyShader"];
glUseProgram(theShader.program);
float colorsForQuad[4][4] = {{1.0f, 0.0f, 0.0f, 1.0f}, {0.0f, 1.0 ....}};
[theShader.arguments[@"inColor"] setValue:colorsForQuad forNumberOfVertices:4];
float positionsForQuad[4][4] = {{-1.0f, -1.0f, 0.0f, 1.0f}, {-1.0f, 1.0f, ....}};
[theShader.arguments[@"inPosition"] setValue:positionsForQuad forNumberOfVertices:4];
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
The setValue:forNumberOfVertices: function looks like this:
int bytesForGLType = numBytesForGLType(self.openGLValueType);
glBindVertexArray(self.vertexArrayObject);
GetError();
glBindBuffer(GL_ARRAY_BUFFER, self.vertexBufferObject);
GetError();
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_STATIC_DRAW);
GetError();
glEnableVertexAttribArray((GLuint)self.boundLocation);
GetError();
glVertexAttribPointer((GLuint)self.boundLocation, numVertices,
GL_FLOAT, GL_FALSE, 0, 0);
I think the problem is that each argument has its own VAO and VBO but the shader needs the data of all arguments when it is executed.
I can obviously only bind one buffer at a time.
The examples I've seen so far only use one VAO and one VBO and create a C structure containing all the data needed.
This however would make my current modular approach much harder.
Isn't there any option to have OpenGL copy the data so it doesn't need to be available and bound when glDraw... is called?
Edit
I found out that using a shared Vertex Array Object is enough to solve the issue.
However, I would appreciate some more insight on when things are actually copied to the GPU.
The glVertexAttribPointer function takes these parameters:
void glVertexAttribPointer(
    GLuint index,
    GLint size,
    GLenum type,
    GLboolean normalized,
    GLsizei stride,
    const GLvoid *pointer);
I think the problem is that you pass the number of vertices as the "size" parameter. That is not what it is meant for: it controls how many components the attribute has. When the vertex attribute is meant to be a vec3, "size" should be set to 3; when using single floats, it should be 1, and so on.
EDIT: As Reto Koradi pointed out, using 0 as "stride" is fine in this case.
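Applied to the question's code, the call would then look something like this (a sketch reusing the question's identifiers; the vertex count belongs in glDrawArrays instead):
glVertexAttribPointer((GLuint)self.boundLocation, // attribute location
                      4,                          // components per vertex (a vec4), NOT the vertex count
                      GL_FLOAT, GL_FALSE, 0, 0);
glDrawArrays(GL_TRIANGLE_FAN, 0, numVertices);    // this is where the vertex count goes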

Changing the alpha/opacity channel on a texture using GLKit

I am new to OpenGL ES, and I can't seem to figure out how to change the alpha/opacity on a texture loaded with GLKTextureLoader.
Right now I just draw the texture with the following code.
self.texture.effect.texture2d0.enabled = YES;
self.texture.effect.texture2d0.name = self.texture.textureInfo.name;
self.texture.effect.transform.modelviewMatrix = [self modelMatrix];
[self.texture.effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
NKTexturedQuad _quad = self.texture.quad;
long offset = (long)&_quad;
glVertexAttribPointer(GLKVertexAttribPosition,
                      2,
                      GL_FLOAT,
                      GL_FALSE,
                      sizeof(NKTexturedVertex),
                      (void *)(offset + offsetof(NKTexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0,
                      2,
                      GL_FLOAT, GL_FALSE,
                      sizeof(NKTexturedVertex),
                      (void *)(offset + offsetof(NKTexturedVertex, textureVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any advice would be very helpful :)
I am no GL expert, but drawing with a changed alpha value does not seem to work as described by rickster.
As far as I understand, the values passed to glBlendColor are only used with glBlendFunc constants like GL_CONSTANT_…
This will override the texture's alpha values and draw with a defined value:
glEnable(GL_BLEND);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, yourAlphaValue);
glDraw... // the draw operations
Further reference can be found here: http://www.opengl.org/wiki/Blending#Blend_Color
As long as you're in the OpenGL ES 1.1 world (or the emulated-1.1 world of GLKBaseEffect), alpha is a property either of the (per-pixel) bitmap data in the texture or of the (complete) OpenGL ES state you're drawing with. You can't set an opacity level for a texture as a whole, on its own. So, you have three options:
1. Change the alpha of the texture. This means changing the texture bitmap data itself: use the 2D image context of your choice to draw the image at half (or whatever) alpha, and read the resulting image into an OpenGL ES texture. Probably not a great idea unless the alpha you want will be constant for the life of your app, in which case you might as well just go back to Photoshop (or whatever you're using to create your image assets) and set the alpha there.
2. Change the alpha you're drawing with. After you prepareToDraw, set up blending in GL:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, myDesiredAlphaValue);
glDraw... // whatever you're drawing
Don't forget to A) draw your partially transparent content after any content you want it blended on top of and B) disable blending before rendering opaque content again on the next frame.
3. Ditch GLKBaseEffect and write your own shaders. Shaders that work like the 1.1 fixed-function pipeline are a dime a dozen; you can even get started by using the shaders packaged with the Xcode "OpenGL Game" project template or by looking at the shaders GLKit writes in the Xcode frame capture tool. Once you have such shaders, changing the alpha of a color you got from a texel lookup is a simple operation:
vec4 color = texture2D(texUnit, texCoord);
color.a = myDesiredAlphaValue;
gl_FragColor = color;

AndEngine visible rectangle

I'm trying to create an additional visibility rectangle on a main scene.
So I have a main camera, 480x800, showing the scene as it is, and I'd like to attach an additional entity or scene that will have a rectangle of visibility.
So if I drag items inside it, they will not disappear in a single moment; they will disappear gradually.
As described in my previous comment, you could stamp out a square alpha hole in your background Sprite. You could do this simply with an image editor, adding alpha pixels, or you could do it dynamically as follows:
//set the background to white - so we can see our square alpha
//cut out later
mScene.setBackground(new ColorBackground(1.0f, 1.0f, 1.0f));
//Create and load bitmap texture atlas
BitmapTextureAtlas mBitmapBGTextureAtlas = new BitmapTextureAtlas(1024, 1024, TextureOptions.BILINEAR_PREMULTIPLYALPHA);
mActivity.getEngine().getTextureManager().loadTextures(mBitmapBGTextureAtlas);
//Get image in assets and decode into bitmap
InputStream ims;
try {
    ims = mActivity.getAssets().open("gfx/my_backgound.jpg");
} catch (IOException e) {
    e.printStackTrace();
    return;
}
Bitmap Bitmap_bg = BitmapFactory.decodeStream(ims);
//In my case the image is different than the height and width of the camera
//so store the ratio of size and height that the image will be resized to
float XScale = Bitmap_bg.getWidth()/mCamera.getWidth();
float YScale = Bitmap_bg.getHeight()/mCamera.getHeight();
//Cut out the alpha square, if our camera is 480x800, the square will appear
//at (40,200) and will be size 400x400
Bitmap_bg = cutSquareOutOfBitmap(Bitmap_bg, 40 * XScale, 200 * YScale, 400 * XScale , 400 * YScale);
//Get our edited bitmap into a region of the texture atlas
BitmapTextureAtlasSource source = new BitmapTextureAtlasSource(Bitmap_bg);
mBackground = BitmapTextureAtlasTextureRegionFactory.createFromSource(mBitmapBGTextureAtlas, source, 0, 0);
Bitmap_bg.recycle();
//Finally, create our background sprite with this new texture region
Sprite mBackgroundSprite = new Sprite(0, 0, mCamera.getWidth(), mCamera.getHeight(), mBackground);
mBackgroundSprite.setZIndex(1);
mScene.attachChild(mBackgroundSprite);
And the function cutSquareOutOfBitmap()
public static Bitmap cutSquareOutOfBitmap(Bitmap MyImage, float Xpos, float Ypos, float Width, float Height) {
    Bitmap mBitmap = MyImage.copy(Bitmap.Config.ARGB_8888, true);
    Canvas mCanvas = new Canvas(mBitmap);
    Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    mPaint.setXfermode(new PorterDuffXfermode(Mode.SRC_OUT));
    mPaint.setColor(Color.TRANSPARENT);
    mCanvas.drawBitmap(mBitmap, 0, 0, null);
    mCanvas.drawRect(Xpos, Ypos, Xpos + Width, Ypos + Height, mPaint);
    return mBitmap;
}
If you run this, there is not a lot to look at: just a big white square. However, that square is a transparent region; what you actually see through it is the white scene background we set earlier.
To demonstrate how the contents will be obscured, you could create a scrollable area. As mentioned in my previous comment, I wrote a small container class you are welcome to use:
Custom ScrollView in andengine
Underneath the first code block in this answer, after
mScene.attachChild(mBackgroundSprite);
you could now add:
//Now we can use the ShapeScrollContainer, just as an example, so the user
//can scroll our container shapes around.
//Create it around the same area as the cut out.
ShapeScrollContainer mShapeScrollContainer = new ShapeScrollContainer(40, 200, 400, 400, new IShapeScrollContainerTouchListener() {
    @Override
    public void OnContentClicked(Shape pShape) {
        //Add code here for content click event
    }
});
//Disable the ShapeScrollContainer ability to change the visibility
//of contents - we no longer require this as the background will
//cover them outside of the bounds of the ShapeScrollContainer itself
mShapeScrollContainer.SetContentVisiblitiyControl(false);
//Disable alpha
mShapeScrollContainer.SetAlphaVisiblitiyControl(false);
//Allow user to scroll both horizontally and vertically
mShapeScrollContainer.SetScrollableDirections(true, true);
//Don't allow the user to scroll to nowhere
mShapeScrollContainer.SetScrollLock(true);
//Allow the user to scroll the container halfway over in either direction
mShapeScrollContainer.SetScrollLockPadding(50.0f, 50.0f);
//Attach the container to the scene and register the event listener
mScene.registerTouchArea(mShapeScrollContainer);
mScene.attachChild(mShapeScrollContainer);
//Finally add some content to the container: whatever extends Shape -
//Sprite, AnimatedSprite, Text, ChangeableText, etc.
Rectangle mRectangle = new Rectangle(200, 360, 80, 80);
mRectangle.setColor(0.0f, 1.0f, 0.0f);
mRectangle.setZIndex(0);
//Attach to the scene and the ShapeScrollContainer
mScene.attachChild(mRectangle);
mShapeScrollContainer.Add(mRectangle);
Rectangle mRectangle2 = new Rectangle(40, 360, 80, 80);
mRectangle2.setColor(0.0f, 0.0f, 1.0f);
mRectangle2.setZIndex(0);
mScene.attachChild(mRectangle2);
mShapeScrollContainer.Add(mRectangle2);
Rectangle mRectangle3 = new Rectangle(360, 360, 80, 80);
mRectangle3.setColor(1.0f, 1.0f, 0.0f);
mRectangle3.setZIndex(0);
mScene.attachChild(mRectangle3);
mShapeScrollContainer.Add(mRectangle3);
//And sort the order in which shapes are rendered
mScene.sortChildren();
Now, after scrolling, you should see the content shapes clipped by the square cut-out.
As another alternative, if you are going for a simpler background, you could forgo the bitmap manipulation and simply build the square out of four surrounding rectangles that run to the edges of the screen; see the sketch below.
Or you could physically split your background into four surrounding rectangles and a central square with an image editor. Then create five sprites: set the Z order of the four rectangles to 2, the square to 0, and any content sprites to 1.
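A minimal sketch of the four-rectangle variant, assuming the same 480x800 camera and the 400x400 window at (40, 200) used above (identifiers and colors are illustrative):
//Four opaque rectangles framing a 400x400 "window" at (40, 200) on a 480x800 camera
Rectangle top = new Rectangle(0, 0, 480, 200);
Rectangle bottom = new Rectangle(0, 600, 480, 200);
Rectangle left = new Rectangle(0, 200, 40, 400);
Rectangle right = new Rectangle(440, 200, 40, 400);
for (Rectangle frame : new Rectangle[] { top, bottom, left, right }) {
    frame.setColor(1.0f, 1.0f, 1.0f); //match the white scene background
    frame.setZIndex(2);               //draw above the content shapes (Z index 1)
    mScene.attachChild(frame);
}
mScene.sortChildren();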
Hope this is of use.

How can I fake superscript and subscript with Core Text and an Attributed String?

I'm using an NSMutableAttributedString to build a string with formatting, which I then pass to Core Text to render into a frame. The problem is that I need to use superscript and subscript. Unless these characters are available in the font (most fonts don't support them), setting the kCTSuperscriptAttributeName property does nothing at all.
So I guess I'm left with the only option, which is to fake it by changing the font size and moving the baseline. I can do the font size bit, but I don't know the code for altering the baseline. Can anyone help please?
Thanks!
EDIT: Considering the amount of time I have available to sort this problem, I'm thinking of editing a font so that it's given a subscript "2"... either that or finding a built-in iPad font which has one. Does anyone know of any serif font with a subscript "2" I can use?
There is no baseline setting amongst the CTParagraphStyleSpecifiers or the defined string attribute name constants. I think it's therefore safe to conclude that CoreText does not itself support a baseline adjust property on text. There's a reference made to baseline placement in CTTypesetter, but I can't tie that to any ability to vary the baseline over the course of a line in the iPad's CoreText.
Hence, you probably need to interfere in the rendering process yourself. For example:
- create a CTFramesetter, e.g. via CTFramesetterCreateWithAttributedString
- get a CTFrame from that via CTFramesetterCreateFrame
- use CTFrameGetLineOrigins and CTFrameGetLines to get an array of CTLines and where they should be drawn (i.e., the text with suitable paragraph/line breaks and all your other kerning/leading/other positioning text attributes applied)
- from those, for lines with no superscript or subscript, just use CTLineDraw and forget about it
- for those with superscript or subscript, use CTLineGetGlyphRuns to get an array of CTRun objects describing the various glyphs on the line
- on each run, use CTRunGetStringIndices to determine which source characters are in the run; if none that you want to superscript or subscript are included, just use CTRunDraw to draw the thing
- otherwise, use CTRunGetGlyphs to break the run into individual glyphs and CTRunGetPositions to figure out where they would be drawn in the normal run of things
- use CGContextShowGlyphsAtPoint as appropriate, having tweaked the text matrix for those you want in superscript or subscript
I haven't yet found a way to query whether a font has the relevant hints for automatic superscript/subscript generation, which makes things a bit tricky. If you're desperate and don't have a solution to that, it's probably easier just not to use CoreText's stuff at all — in which case you should probably define your own attribute (that's why [NS/CF]AttributedString allow arbitrary attributes to be applied, identified by string name) and use the normal NSString searching methods to identify regions that need to be printed in superscript or subscript from blind.
For performance reasons, binary search is probably the way to go on searching all lines, the runs within a line and the glyphs within a run for those you're interested in. Assuming you have a custom UIView subclass to draw CoreText content, it's probably smarter to do it ahead of time rather than upon every drawRect: (or the equivalent methods, if e.g. you're using a CATiledLayer).
Also, the CTRun methods have variants that request a pointer to a C array containing the things you're asking for copies of, possibly saving you a copy operation but not necessarily succeeding. Check the documentation. I've just made sure that I'm sketching a workable solution rather than necessarily plotting the absolutely optimal route through the CoreText API.
Here is some code based on Tommy's outline that does the job quite well (tested only on single lines, though). Set the baseline on your attributed string with @"MDBaselineAdjust", and this code draws the line at origin, a CGPoint. To get superscript, also lower the font size a notch. Preview of what's possible: http://cloud.mochidev.com/IfPF (the line that reads "[Xe] 4f14...")
Hope this helps :)
NSAttributedString *string = ...;
CGPoint origin = ...;
CGContextRef context = UIGraphicsGetCurrentContext(); // assumes drawing into the current UIKit context; supply your own CGContextRef otherwise

CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)string);
CGSize suggestedSize = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, CFRangeMake(0, string.length), NULL, CGSizeMake(CGFLOAT_MAX, CGFLOAT_MAX), NULL);
CGPathRef path = CGPathCreateWithRect(CGRectMake(origin.x, origin.y, suggestedSize.width, suggestedSize.height), NULL);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, string.length), path, NULL);
NSArray *lines = (NSArray *)CTFrameGetLines(frame);
if (lines.count) {
    CGPoint *lineOrigins = malloc(lines.count * sizeof(CGPoint));
    CTFrameGetLineOrigins(frame, CFRangeMake(0, lines.count), lineOrigins);
    int i = 0;
    for (id aLine in lines) {
        NSArray *glyphRuns = (NSArray *)CTLineGetGlyphRuns((CTLineRef)aLine);
        CGFloat width = origin.x + lineOrigins[i].x - lineOrigins[0].x;
        for (id run in glyphRuns) {
            CFRange range = CTRunGetStringRange((CTRunRef)run);
            NSDictionary *dict = [string attributesAtIndex:range.location effectiveRange:NULL];
            CGFloat baselineAdjust = [[dict objectForKey:@"MDBaselineAdjust"] doubleValue];
            CGContextSetTextPosition(context, width, origin.y + baselineAdjust);
            CTRunDraw((CTRunRef)run, context, CFRangeMake(0, 0));
        }
        i++;
    }
    free(lineOrigins);
}
CFRelease(frame);
CGPathRelease(path);
CFRelease(framesetter);
You can mimic subscripts now using TextKit in iOS 7. Example:
NSMutableAttributedString *carbonDioxide = [[NSMutableAttributedString alloc] initWithString:@"CO2"];
[carbonDioxide addAttribute:NSFontAttributeName value:[UIFont systemFontOfSize:8] range:NSMakeRange(2, 1)];
[carbonDioxide addAttribute:NSBaselineOffsetAttributeName value:@(-2) range:NSMakeRange(2, 1)];
I've been having trouble with this myself. Apple's Core Text documentation claims that there has been support in iOS since version 3.2, but for some reason it still just doesn't work. Even in iOS 5... how very frustrating >.<
I managed to find a workaround if you only really care about superscript or subscript numbers. Say you have a block of text that might contain a "sub2" tag where you want a subscript number 2. Use NSRegularExpression to find the tags, and then use the replacementStringForResult: method on your regex object to replace each tag with unicode characters:
if ([match isEqualToString:@"<sub2/>"])
{
    replacement = @"₂";
}
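For a single fixed tag you could also skip the per-match callback entirely; here is a sketch using NSRegularExpression's stringByReplacingMatchesInString: (where `input` is a hypothetical source string):
NSRegularExpression *regex = [NSRegularExpression regularExpressionWithPattern:@"<sub2/>"
                                                                       options:0
                                                                         error:NULL];
NSString *result = [regex stringByReplacingMatchesInString:input
                                                   options:0
                                                     range:NSMakeRange(0, input.length)
                                              withTemplate:@"₂"];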
If you use the OSX character viewer, you can drop unicode characters right into your code. There's a set of characters in there called "Digits" which has all the superscript and subscript number characters. Just leave your cursor at the appropriate spot in your code window and double-click in the character viewer to insert the character you want.
With the right font, you could probably do this with any letter as well, but the character map only has a handful of non-numbers available for this that I've seen.
Alternatively you can just put the unicode characters in your source content, but in a lot of cases (like mine), that isn't possible.
Swift 4
Very loosely based on Graham Perks' answer. I could not make his code work as is, but after three hours of work I've created something that works great! If you'd prefer a full implementation of this along with a bunch of nifty other performance and feature add-ons (links, async drawing, etc.), check out my single-file library DYLabel. If not, read on.
I explain everything I'm doing in the comments. This is the draw method, to be called from drawRect:
/// Draw text on a given context. Supports superscript using NSBaselineOffsetAttributeName.
///
/// This method works by drawing the text backwards (i.e. last line first). This is very important because it's how we ensure superscripts don't overlap the text above them. In other words, we need to start from the bottom, get the height of the text we just drew, and then draw the next text above it. This could be done in a forward direction, but you'd have to use lookahead, which IMO is more work.
///
/// If you have to modify this, remember that CT uses a mathematical origin (i.e. 0,0 is bottom left, like a Cartesian plane).
/// - Parameters:
///   - context: A Core Graphics draw context
///   - attributedText: An attributed string
func drawText(context: CGContext, attributedText: NSAttributedString) {
    //Create our CT boilerplate
    let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
    let textRect = bounds
    let path = CGPath(rect: textRect, transform: nil)
    let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)

    //Fetch our lines, bridging to Swift from CFArray
    let lines = CTFrameGetLines(frame) as [AnyObject]
    let lineCount = lines.count

    //Get the line origin coordinates. These are used for calculating stock line height (w/o baseline modifications)
    var lineOrigins = [CGPoint](repeating: CGPoint.zero, count: lineCount)
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins)

    //Since we're starting from the bottom of the container we need our bottom offset/padding (so text isn't slammed to the bottom or cut off)
    var ascent: CGFloat = 0
    var descent: CGFloat = 0
    var leading: CGFloat = 0
    if lineCount > 0 {
        CTLineGetTypographicBounds(lines.last as! CTLine, &ascent, &descent, &leading)
    }

    //This variable holds the current draw position, relative to the CT origin at the bottom left
    //https://stackoverflow.com/a/27631737/1166266
    var drawYPositionFromOrigin: CGFloat = descent

    //Again, draw the lines in reverse so we don't need lookahead
    for lineIndex in (0..<lineCount).reversed() {
        //Calculate the current line height so we can accurately move the position up later
        let lastLinePosition = lineIndex > 0 ? lineOrigins[lineIndex - 1].y : textRect.height
        let currentLineHeight = lastLinePosition - lineOrigins[lineIndex].y
        //Throughout the loop below this variable will be updated to the tallest value for the current line
        var maxLineHeight: CGFloat = currentLineHeight

        //Grab the current line's glyph runs. This is used for attributed string interop
        let glyphRuns = CTLineGetGlyphRuns(lines[lineIndex] as! CTLine) as [AnyObject]
        for run in glyphRuns {
            let run = run as! CTRun
            //Convert the run range to something we can match to our string
            let runRange = CTRunGetStringRange(run)
            let attributesAtPosition = attributedText.attributes(at: runRange.location, effectiveRange: nil)
            var baselineAdjustment: CGFloat = 0.0
            if let adjust = attributesAtPosition[NSAttributedStringKey.baselineOffset] as? NSNumber {
                //We have a baseline offset!
                baselineAdjustment = CGFloat(adjust.floatValue)
            }

            //Check if this glyph run is tallest, and record it if it is
            maxLineHeight = max(currentLineHeight + baselineAdjustment, maxLineHeight)

            //Move the draw head. Note that we're drawing from the unupdated drawYPositionFromOrigin. This is again thanks to the CT Cartesian plane, where we draw from the bottom left of the text too.
            context.textPosition = CGPoint(x: lineOrigins[lineIndex].x, y: drawYPositionFromOrigin)
            //Draw!
            CTRunDraw(run, context, CFRangeMake(0, 0))
        }
        //Move our position because we've completed the drawing of the line, which is at most `maxLineHeight`
        drawYPositionFromOrigin += maxLineHeight
    }
}
I also made a method which calculates the required height of the text given a width. It's exactly the same code except it doesn't draw anything.
/// Calculate the height if it were drawn using `drawText`
/// Uses the same code as drawText except it doesn't draw.
///
/// - Parameters:
///   - attributedText: The text to calculate the height of
///   - width: The constraining width
///   - estimationHeight: Optional parameter, default 30,000px. This is the container height used to lay out the text. DO NOT USE CGFloat.greatestFiniteMagnitude, as Core Text cannot create a frame of that size.
/// - Returns: The size required to fit the text
static func size(of attributedText: NSAttributedString, width: CGFloat, estimationHeight: CGFloat? = 30000) -> CGSize {
    let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
    let textRect = CGRect(x: 0, y: 0, width: width, height: estimationHeight!)
    let path = CGPath(rect: textRect, transform: nil)
    let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)

    //Fetch our lines, bridging to Swift from CFArray
    let lines = CTFrameGetLines(frame) as [AnyObject]
    let lineCount = lines.count

    //Get the line origin coordinates. These are used for calculating stock line height (w/o baseline modifications)
    var lineOrigins = [CGPoint](repeating: CGPoint.zero, count: lineCount)
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins)

    //Since we're starting from the bottom of the container we need our bottom offset/padding (so text isn't slammed to the bottom or cut off)
    var ascent: CGFloat = 0
    var descent: CGFloat = 0
    var leading: CGFloat = 0
    if lineCount > 0 {
        CTLineGetTypographicBounds(lines.last as! CTLine, &ascent, &descent, &leading)
    }

    //This variable holds the current measure position, relative to the CT origin at the bottom left
    var drawYPositionFromOrigin: CGFloat = descent

    //Again, walk the lines in reverse so we don't need lookahead
    for lineIndex in (0..<lineCount).reversed() {
        //Calculate the current line height so we can accurately move the position up later
        let lastLinePosition = lineIndex > 0 ? lineOrigins[lineIndex - 1].y : textRect.height
        let currentLineHeight = lastLinePosition - lineOrigins[lineIndex].y
        //Throughout the loop below this variable will be updated to the tallest value for the current line
        var maxLineHeight: CGFloat = currentLineHeight

        //Grab the current line's glyph runs. This is used for attributed string interop
        let glyphRuns = CTLineGetGlyphRuns(lines[lineIndex] as! CTLine) as [AnyObject]
        for run in glyphRuns {
            let run = run as! CTRun
            //Convert the run range to something we can match to our string
            let runRange = CTRunGetStringRange(run)
            let attributesAtPosition = attributedText.attributes(at: runRange.location, effectiveRange: nil)
            var baselineAdjustment: CGFloat = 0.0
            if let adjust = attributesAtPosition[NSAttributedStringKey.baselineOffset] as? NSNumber {
                //We have a baseline offset!
                baselineAdjustment = CGFloat(adjust.floatValue)
            }
            //Check if this glyph run is tallest, and record it if it is
            maxLineHeight = max(currentLineHeight + baselineAdjustment, maxLineHeight)
            //Skip drawing since this is a height calculation
        }
        //Move our position because we've measured the line, which is at most `maxLineHeight`
        drawYPositionFromOrigin += maxLineHeight
    }
    return CGSize(width: width, height: drawYPositionFromOrigin)
}
Like everything I write, I also did some benchmarks against some public libraries and system functions (even though they won't work here). I used a huge, complex string here to keep anyone from taking unfair shortcuts.
---HEIGHT CALCULATION---
Runtime for 1000 iterations (ms) BoundsForRect: 5415.030002593994
Runtime for 1000 iterations (ms) layoutManager: 5370.990991592407
Runtime for 1000 iterations (ms) CTFramesetterSuggestFrameSizeWithConstraints: 2372.151017189026
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame ObjC: 2300.302028656006
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame-Swift: 2313.6669397354126
Runtime for 1000 iterations (ms) THIS ANSWER size(of:): 2566.351056098938
---RENDER---
Runtime for 1000 iterations (ms) AttributedLabel: 35.032033920288086
Runtime for 1000 iterations (ms) UILabel: 45.948028564453125
Runtime for 1000 iterations (ms) TTTAttributedLabel: 301.1329174041748
Runtime for 1000 iterations (ms) THIS ANSWER: 20.398974418640137
So, summary time: we did very well! size(of:) is nearly equal to stock CT layout, which means that our add-on for superscript is fairly cheap despite using a hash table lookup. We do, however, flat-out win on draw calls. I suspect this is due to the very expensive 30k-pixel estimation frame we have to create. If we made a better estimate, performance would be better. I've already been working for about three hours, so I'm calling it quits and leaving that as an exercise for the reader.
I struggled with this problem as well. It turns out, as some of the posters above suggested, that none of the fonts that come with iOS support superscripting or subscripting. My solution was to purchase and install two custom superscript and subscript fonts (they were $9.99 each; here's a link to the site: http://superscriptfont.com/).
Not really that hard to do. Just add the font files as resources and add Info.plist entries for "Fonts provided by application".
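For example, in Info.plist ("Fonts provided by application" is the UIAppFonts key; the file names below are hypothetical, so use your actual font resources):
<key>UIAppFonts</key>
<array>
    <string>SuperscriptFont.ttf</string>
    <string>SubscriptFont.ttf</string>
</array>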
The next step was to search for the appropriate tags in my NSAttributedString, remove the tags and apply the font to the text.
Works great!
A Swift 2 twist on Dimitri's answer; effectively implements NSBaselineOffsetAttributeName.
When coding I was in a UIView so had a reasonable bounds rect to use. His answer calculated its own rect.
func drawText(context context: CGContextRef, attributedText: NSAttributedString) {
    // All this CoreText iteration just to add support for superscripting.
    // NSBaselineOffsetAttributeName isn't supported by CoreText. So we manually iterate through
    // all the text ranges, rendering each, and offsetting the baseline where needed.
    let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
    let textRect = CGRectOffset(bounds, 0, 0)
    let path = CGPathCreateWithRect(textRect, nil)
    let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)

    // All the lines of text we'll render...
    let lines = CTFrameGetLines(frame) as [AnyObject]
    let lineCount = lines.count

    // And their origin coordinates...
    var lineOrigins = [CGPoint](count: lineCount, repeatedValue: CGPointZero)
    CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins)

    for lineIndex in 0..<lineCount {
        let lineObject = lines[lineIndex]

        // Each run of glyphs we'll render...
        let glyphRuns = CTLineGetGlyphRuns(lineObject as! CTLine) as [AnyObject]
        for r in glyphRuns {
            let run = r as! CTRun
            let runRange = CTRunGetStringRange(run)

            // What attributes are in the NSAttributedString here? If we find NSBaselineOffsetAttributeName,
            // adjust the baseline.
            let attrs = attributedText.attributesAtIndex(runRange.location, effectiveRange: nil)
            var baselineAdjustment: CGFloat = 0.0
            if let adjust = attrs[NSBaselineOffsetAttributeName as String] as? NSNumber {
                baselineAdjustment = CGFloat(adjust.floatValue)
            }
            CGContextSetTextPosition(context, lineOrigins[lineIndex].x, lineOrigins[lineIndex].y - 25 + baselineAdjustment)
            CTRunDraw(run, context, CFRangeMake(0, 0))
        }
    }
}
With iOS 11, Apple introduced a new string attribute name, kCTBaselineOffsetAttributeName, which works with Core Text.
Note that the offset direction is different from NSBaselineOffsetAttributeName used with NSAttributedStrings on UILabels etc. (a positive offset moves the baseline downwards).
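A minimal sketch of using it (Swift 4; assumes the string is rendered through Core Text, e.g. CTFramesetter/CTRunDraw as in the answers above, where UIKit's NSBaselineOffsetAttributeName would not apply):
import CoreText
import UIKit

let co2 = NSMutableAttributedString(string: "CO2")
let range = NSRange(location: 2, length: 1)
// Positive offset moves the baseline down here, per the note above: a subscript "2".
co2.addAttribute(NSAttributedStringKey(kCTBaselineOffsetAttributeName as String),
                 value: 4.0, range: range)
co2.addAttribute(.font, value: UIFont.systemFont(ofSize: 8), range: range)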