How to programmatically add an active graphics layer to a map? - arcgis

I'm writing a WPF application, in C#, using ArcObjects.
I have an ESRI.ArcGIS.Controls.AxMapControl on my form, and I'm trying to draw some graphics elements on top of it.
The map I'm developing with is a customer-provided mdf of the state of Georgia.
I'm trying an example I found here: How to interact with map elements.
public void AddTextElement(IMap map, double x, double y)
{
IGraphicsContainer graphicsContainer = map as IGraphicsContainer;
IElement element = new TextElementClass();
ITextElement textElement = element as ITextElement;
//Create a point as the shape of the element.
IPoint point = new PointClass();
point.X = x;
point.Y = y;
element.Geometry = point;
textElement.Text = "Hello World";
graphicsContainer.AddElement(element, 0);
//Flag the new text to invalidate.
IActiveView activeView = map as IActiveView;
activeView.PartialRefresh(esriViewDrawPhase.esriViewGraphics, null, null);
}
It took a while to figure out how to project the lat/long of Atlanta to the coordinate system of the map, but I'm pretty sure that I've got it right. The x/y values I'm passing into AddTextElement() are clearly within the Atlanta area, according to the Location data I see when I use the Identify tool on the map.
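For context, the projection step looks roughly like this (a minimal sketch; the variable names are illustrative and the exact calls I used may differ slightly):
ISpatialReferenceFactory srFactory = new SpatialReferenceEnvironmentClass();
ISpatialReference wgs84 = srFactory.CreateGeographicCoordinateSystem((int)esriSRGeoCSType.esriSRGeoCS_WGS1984);

IPoint point = new PointClass { X = longitude, Y = latitude, SpatialReference = wgs84 };
point.Project(map.SpatialReference);   // re-project into the map's coordinate system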
But I'm not seeing the text. Everything seems to be working correctly, yet the text never appears.
I can see a number of possibilities:
The layer I'm adding the TextElement to isn't visible, or doesn't exist.
I need to apply a spatial reference system to the point I'm setting as the TextElement's geometry
The text is drawing fine, but there's something wrong with the font - it's invisibly small, or in a transparent color, etc.
I haven't a clue which.
I was hoping there was something obvious I was missing.
===
As I've continued to play with this, since my original posting, I've discovered that the problem is the scaling - the text is showing up where it should, only unreadably small.
This is what Rich Wawrzonek had suggested.
If I set a TextSymbol class with a specified Size, the size does apply, and I see my text larger or smaller. Unfortunately, the text still resizes as the map zooms in and out, and setting ScaleText = false doesn't fix it.
My latest attempt:
public void AddTextElement(IMap map, double x, double y, string text)
{
var textElement = new TextElementClass
{
Geometry = new PointClass() { X = x, Y = y },
Text = text,
ScaleText = false,
Symbol = new TextSymbolClass {Size = 25000}
};
(map as IGraphicsContainer)?.AddElement(textElement, 0);
(map as IActiveView)?.PartialRefresh(esriViewDrawPhase.esriViewGraphics, null, null);
}
I recognize that the above is organized very differently from the way it is usually done in ESRI sample code. I find the ESRI style very difficult to read, but switching from one style to the other is pretty mechanical.
This is the same function, organized in a more traditional manner. The behavior should be identical, and I'm seeing exactly the same behavior - the text is drawn to a specified size, but scales as the map zooms.
public void AddTextElement(IMap map, double x, double y, string text)
{
IPoint point = new PointClass();
point.X = x;
point.Y = y;
ITextSymbol textSymbol = new TextSymbolClass();
textSymbol.Size = 25000;
var textElement = new TextElementClass();
textElement.Geometry = point;
textElement.Text = text;
textElement.ScaleText = false;
textElement.Symbol = textSymbol;
var iGraphicsContainer = map as IGraphicsContainer;
Debug.Assert(iGraphicsContainer != null, "iGraphicsContainer != null");
iGraphicsContainer.AddElement(textElement, 0);
var iActiveView = (map as IActiveView);
Debug.Assert(iActiveView != null, "iActiveView != null");
iActiveView.PartialRefresh(esriViewDrawPhase.esriViewGraphics, null, null);
}
Any ideas as to why ScaleText is being ignored?

You are only setting the geometry and the text of the text element. You also need to set the Symbol and ScaleText properties. The boolean ScaleText property determines whether or not the text scales with the map. The Symbol property needs to be created and set via the ITextSymbol interface.
See here for an example by Esri.
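A minimal sketch of that (the size value here is illustrative; ITextSymbol also exposes Font, Color, etc.):
ITextSymbol textSymbol = new TextSymbolClass();
textSymbol.Size = 12;             // size in points

ITextElement textElement = new TextElementClass();
textElement.Text = "Hello World";
textElement.ScaleText = false;    // whether the text should scale with the map
textElement.Symbol = textSymbol;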

Related

Adding page number text to pdf copy gets flipped/mirrored with itext 7

So... I've been trying to use the example provided in the documentation of itext for merging documents and creating a TOC for the merged result. But the part that adds page number text to every page isn't working as I would expect. What happens is that the text added gets flipped over some horizontal axis as shown in the next picture:
Also, the Javadoc for the method used to set a fixed position for the added text (public T setFixedPosition(int pageNumber, float left, float bottom, float width)) doesn't make sense to me:
Sets values for a absolute repositioning of the Element. The coordinates specified correspond to the bottom-left corner of the element and it grows upwards.
But when I run setFixedPosition(pageNumber, 0, 0, 50) the text ends up in the upper-left corner, again also flipped. And if I use the width and height from the page size of the source PdfDocument as the left and bottom positions respectively, it doesn't even reach the bottom-right corner.
I might be doing something wrong or misunderstanding something. Either way, here is the code I'm using:
private static int copyPdfPages(PdfDocument source, Document document, Integer start, Integer pages, Integer number) {
int oldC;
int max = start + pages - 1;
Text text;
for (oldC = start; oldC <= max; oldC++) {
text = new Text(String.format("Page %d", number));
PageSize pageSize = source.getDefaultPageSize();
source.copyPagesTo(oldC, oldC, document.getPdfDocument());
document.add(new Paragraph(text).setBorder(new SolidBorder(ColorConstants.RED, 1))
.setFixedPosition(number++, pageSize.getWidth() - 55, pageSize.getHeight() - 30, 50));
}
return oldC - start;
}
public static void main(String[] args) throws IOException {
String path = "/path/to/target";
FileOutputStream fos = new FileOutputStream(path);
PdfDocument pdfDocTgt = new PdfDocument(new PdfWriter(fos));
Document document = new Document(pdfDocTgt);
PdfDocument pdfDocSrc = new PdfDocument(new PdfReader(new FileInputStream("path/to/source")));
copyPdfPages(pdfDocSrc, document, 1, pdfDocSrc.getNumberOfPages(), 1);
pdfDocTgt.close();
pdfDocSrc.close();
document.flush();
document.flush();
fos.flush();
fos.close();
}
And here is the pdf source: https://drive.google.com/open?id=11_9ptuoRqS91hI3fDcs2FRsIUEiX0a84
Help please (and sorry about my English).
The problem
The problem is that Document.add assumes that, at the end of the current page's existing content, the graphics state has essentially been restored to its initial state (or else that the effects of the differences on the output are desired).
In your sample PDF this assumption is not satisfied, in particular the page content instructions start with
0.750000 0.000000 0.000000 -0.750000 0.000000 841.920044 cm
which changes the current transformation matrix to
scale everything down to 75% and
flip the coordinate system vertically.
The former change causes your addition to end up not in a page corner but somewhere closer to the center; the latter causes it to be vertically mirrored and placed toward the bottom of the page instead of the top.
The fix
If one does not know whether the current contents of the page have an essentially restored graphics state at the end (usually the case when processing page content one has not generated oneself), one should refrain from adding content via a Document instance and instead use a PdfCanvas created with a constructor that wraps the current page content in a save-graphics-state ... restore-graphics-state envelope.
E.g. for your task:
private static int copyPdfPagesFixed(PdfDocument source, PdfDocument target, int start, int pages, int number) {
int oldC;
int max = start + pages - 1;
Text text;
for (oldC = start; oldC <= max; oldC++) {
text = new Text(String.format("Page %d", number));
source.copyPagesTo(oldC, oldC, target);
PdfPage newPage = target.getLastPage();
Rectangle pageSize = newPage.getCropBox();
try ( Canvas canvas = new Canvas(new PdfCanvas(newPage, true), target, pageSize) ) {
canvas.add(new Paragraph(text).setBorder(new SolidBorder(ColorConstants.RED, 1))
.setFixedPosition(number++, pageSize.getWidth() - 55, pageSize.getHeight() - 30, 50));
}
}
return oldC - start;
}
(AddPagenumberToCopy method)
The PdfCanvas constructor used above is documented as
/**
* Convenience method for fast PdfCanvas creation by a certain page.
*
* @param page page to create canvas from.
* @param wrapOldContent true to wrap all old content streams into q/Q operators so that the state of old
* content streams would not affect the new one
*/
public PdfCanvas(PdfPage page, boolean wrapOldContent)
Used like this
try ( PdfDocument pdfDocSrc = new PdfDocument(new PdfReader(SOURCE));
PdfDocument pdfDocTgt = new PdfDocument(new PdfWriter(TARGET)) ) {
copyPdfPagesFixed(pdfDocSrc, pdfDocTgt, 1, pdfDocSrc.getNumberOfPages(), 1);
}
(AddPagenumberToCopy test testLikeAibanezFixed)
the top of the first result page looks like this:

Adobe Photoshop Scripting - How to Select Bounding Box Around Current Selection?

Does anyone know whether it's possible, in Photoshop extend script, to convert an irregular selection (e.g. magic wand tool selection) into a rectangular selection encompassing the top, left, bottom and right bounds of the selection?
Here it is; I have documented the code so you can modify it later if you need. Also, check page 166 and following of Photoshop's JS reference manual, where you can read more about selections - you can set feather, extend/intersect/etc. the selection if you need to.
Made for CS6, should work with later versions.
#target photoshop
if (documents.length == 0) {
alert("nothing opened");
} else {
// start
//setup
var file = app.activeDocument;
var selec = file.selection;
//run
var bnds = selec.bounds; // get the bounds of current selection
var // save the particular pixel values
xLeft = bnds[0],
yTop = bnds[1],
xRight = bnds[2],
yBottom = bnds[3];
var newRect = [ [xLeft,yTop], [xLeft,yBottom], [xRight,yBottom], [xRight,yTop] ]; // set coords for selection, counter-clockwise
selec.deselect();
selec.select(newRect);
// end
}

flash cc createjs hittest works without hit

The rect should become alpha = 0.1 once the circle touches it, but the if statement isn't working: the rect becomes 0.1 opacity without any hit.
/* js
var circle = new lib.mycircle();
stage.addChild(circle);
var rect = new lib.myrect();
stage.addChild(rect);
rect.x=200;
rect.y=300;
circle.addEventListener('mousedown', downF);
function downF(e) {
stage.addEventListener('stagemousemove', moveF);
stage.addEventListener('stagemouseup', upF);
};
function upF(e) {
stage.removeAllEventListeners();
}
function moveF(e) {
circle.x = stage.mouseX;
circle.y = stage.mouseY;
}
if(circle.hitTest(rect))
{
rect.alpha = 0.1;
}
stage.update();
*/
The way you have used hitTest is incorrect. The hitTest method does not check object to object. It takes an x and y coordinate, and determines if that point in its own coordinate system has a filled pixel.
I modified your example to make it more correct, though it doesn't actually do what you are expecting:
circle.addEventListener('pressmove', moveF);
function moveF(e) {
circle.x = stage.mouseX;
circle.y = stage.mouseY;
if (rect.hitTest(circle.x, circle.y)) {
rect.alpha = 0.1;
} else {
rect.alpha = 1;
}
stage.update();
}
Key points:
Reintroduced the pressmove. It works fine.
Moved the circle update above the hitTest check. Otherwise you are checking where it was last time.
Moved the stage update to last. It should be the last thing you update. Note however that you can remove it completely, because you have a Ticker listener on the stage in your HTML file, which constantly updates the stage.
Added the else statement to turn the alpha back to 1 if the hitTest fails.
Then, the most important point is that I changed the hitTest to be on the rectangle instead. This essentially says: "Is there a filled pixel at the supplied x and y position inside the rectangle?" Since the rectangle bounds are -49.4, -37.9, 99, 76, this will be true when the circle's coordinates are within those ranges - which is just when it is at the top left of the canvas. If you replace your code with mine, you can see this behaviour.
So, to get it working more like you want, you can do a few things.
Transform your coordinates. Use localToGlobal, or just cheat and use localToLocal. This takes [0,0] in the circle, and converts that coordinate to the rectangle's coordinate space.
Example:
var p = circle.localToLocal(0, 0, rect);
if (rect.hitTest(p.x, p.y)) {
rect.alpha = 0.1;
} else {
rect.alpha = 1;
}
Don't use hitTest. Use getObjectsUnderPoint, pass the circle's x/y coordinate, and check if the rectangle is in the returned list.
Hope that helps. As I mentioned in a comment above, you can not do full shape collision, just point collision (a single point on an object).

How can I fake superscript and subscript with Core Text and an Attributed String?

I'm using an NSMutableAttributedString in order to build a string with formatting, which I then pass to Core Text to render into a frame. The problem is that I need to use superscript and subscript. Unless these characters are available in the font (most fonts don't support them), setting the kCTSuperscriptAttributeName property does nothing at all.
So I guess I'm left with the only option, which is to fake it by changing the font size and moving the base line. I can do the font size bit, but don't know the code for altering the base line. Can anyone help please?
Thanks!
EDIT: I'm thinking, considering the amount of time I have available to sort this problem, of editing a font so that it's given a subscript "2"... Either that or finding a built-in iPad font which does. Does anyone know of any serif font with a subscript "2" I can use?
There is no baseline setting amongst the CTParagraphStyleSpecifiers or the defined string attribute name constants. I think it's therefore safe to conclude that CoreText does not itself support a baseline adjust property on text. There's a reference made to baseline placement in CTTypesetter, but I can't tie that to any ability to vary the baseline over the course of a line in the iPad's CoreText.
Hence, you probably need to interfere in the rendering process yourself. For example:
create a CTFramesetter, e.g. via CTFramesetterCreateWithAttributedString
get a CTFrame from that via CTFramesetterCreateFrame
use CTFrameGetLineOrigins and CTFrameGetLines to get an array of CTLines and where they should be drawn (ie, the text with suitable paragraph/line breaks and all your other kerning/leading/other positioning text attributes applied)
from those, for lines with no superscript or subscript, just use CTLineDraw and forget about it
for those with superscript or subscript, use CTLineGetGlyphRuns to get an array of CTRun objects describing the various glyphs on the line
on each run, use CTRunGetStringIndices to determine which source characters are in the run; if none that you want to superscript or subscript are included, just use CTRunDraw to draw the thing
otherwise, use CTRunGetGlyphs to break the run into individual glyphs and CTRunGetPositions to figure out where they would be drawn in the normal run of things
use CGContextShowGlyphsAtPoint as appropriate, having tweaked the text matrix for those you want in superscript or subscript
I haven't yet found a way to query whether a font has the relevant hints for automatic superscript/subscript generation, which makes things a bit tricky. If you're desperate and don't have a solution to that, it's probably easier just not to use CoreText's stuff at all — in which case you should probably define your own attribute (that's why [NS/CF]AttributedString allow arbitrary attributes to be applied, identified by string name) and use the normal NSString searching methods to identify regions that need to be printed in superscript or subscript from blind.
For performance reasons, binary search is probably the way to go on searching all lines, the runs within a line and the glyphs within a run for those you're interested in. Assuming you have a custom UIView subclass to draw CoreText content, it's probably smarter to do it ahead of time rather than upon every drawRect: (or the equivalent methods, if e.g. you're using a CATiledLayer).
Also, the CTRun methods have variants that request a pointer to a C array containing the things you're asking for copies of, possibly saving you a copy operation but not necessarily succeeding. Check the documentation. I've just made sure that I'm sketching a workable solution rather than necessarily plotting the absolutely optimal route through the CoreText API.
Here is some code based on Tommy's outline that does the job quite well (tested on only single lines though). Set the baseline on your attributed string with @"MDBaselineAdjust", and this code draws the line to offset, a CGPoint. To get superscript, also lower the font size a notch. Preview of what's possible: http://cloud.mochidev.com/IfPF (the line that reads "[Xe] 4f14...")
Hope this helps :)
NSAttributedString *string = ...;
CGPoint origin = ...;
CGContextRef context = ...; // the current graphics context, e.g. from UIGraphicsGetCurrentContext()
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)string);
CGSize suggestedSize = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, CFRangeMake(0, string.length), NULL, CGSizeMake(CGFLOAT_MAX, CGFLOAT_MAX), NULL);
CGPathRef path = CGPathCreateWithRect(CGRectMake(origin.x, origin.y, suggestedSize.width, suggestedSize.height), NULL);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, string.length), path, NULL);
NSArray *lines = (NSArray *)CTFrameGetLines(frame);
if (lines.count) {
CGPoint *lineOrigins = malloc(lines.count * sizeof(CGPoint));
CTFrameGetLineOrigins(frame, CFRangeMake(0, lines.count), lineOrigins);
int i = 0;
for (id aLine in lines) {
NSArray *glyphRuns = (NSArray *)CTLineGetGlyphRuns((CTLineRef)aLine);
CGFloat width = origin.x+lineOrigins[i].x-lineOrigins[0].x;
for (id run in glyphRuns) {
CFRange range = CTRunGetStringRange((CTRunRef)run);
NSDictionary *dict = [string attributesAtIndex:range.location effectiveRange:NULL];
CGFloat baselineAdjust = [[dict objectForKey:@"MDBaselineAdjust"] doubleValue];
CGContextSetTextPosition(context, width, origin.y+baselineAdjust);
CTRunDraw((CTRunRef)run, context, CFRangeMake(0, 0));
}
i++;
}
free(lineOrigins);
}
CFRelease(frame);
CGPathRelease(path);
CFRelease(framesetter);
You can mimic subscripts now using TextKit in iOS7. Example:
NSMutableAttributedString *carbonDioxide = [[NSMutableAttributedString alloc] initWithString:@"CO2"];
[carbonDioxide addAttribute:NSFontAttributeName value:[UIFont systemFontOfSize:8] range:NSMakeRange(2, 1)];
[carbonDioxide addAttribute:NSBaselineOffsetAttributeName value:@(-2) range:NSMakeRange(2, 1)];
I've been having trouble with this myself. Apple's Core Text documentation claims that there has been support in iOS since version 3.2, but for some reason it still just doesn't work. Even in iOS 5... how very frustrating >.<
I managed to find a workaround if you only really care about superscript or subscript numbers. Say you have a block of text that might contain a "sub2" tag where you want a subscript number 2. Use NSRegularExpression to find the tags, and then use the replacementStringForResult method on your regex object to replace each tag with unicode characters:
if ([match isEqualToString:@"<sub2/>"])
{
replacement = @"₂";
}
If you use the OSX character viewer, you can drop unicode characters right into your code. There's a set of characters in there called "Digits" which has all the superscript and subscript number characters. Just leave your cursor at the appropriate spot in your code window and double-click in the character viewer to insert the character you want.
With the right font, you could probably do this with any letter as well, but the character map only has a handful of non-numbers available for this that I've seen.
Alternatively you can just put the unicode characters in your source content, but in a lot of cases (like mine), that isn't possible.
Swift 4
Very loosely based on Graham Perks' answer. I could not make his code work as is, but after three hours of work I've created something that works great! If you'd prefer a full implementation of this along with a bunch of nifty other performance and feature add-ons (links, async drawing, etc), check out my single file library DYLabel. If not, read on.
I explain everything I'm doing in the comments. This is the draw method, to be called from drawRect:
/// Draw text on a given context. Supports superscript using NSBaselineOffsetAttributeName
///
/// This method works by drawing the text backwards (i.e. last line first). This is very very important because it's how we ensure superscripts don't overlap the text above it. In other words, we need to start from the bottom, get the height of the text we just drew, and then draw the next text above it. This could be done in a forward direction but you'd have to use lookahead which IMO is more work.
///
/// If you have to modify this, remember that CT uses a mathematical origin (i.e. 0,0 is the bottom left, like a Cartesian plane)
/// - Parameters:
/// - context: A core graphics draw context
/// - attributedText: An attributed string
func drawText(context:CGContext, attributedText: NSAttributedString) {
//Create our CT boiler plate
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = bounds
let path = CGPath(rect: textRect, transform: nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
//Fetch our lines, bridging to swift from CFArray
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
//Get the line origin coordinates. These are used for calculating stock line height (w/o baseline modifications)
var lineOrigins = [CGPoint](repeating: CGPoint.zero, count: lineCount)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
//Since we're starting from the bottom of the container we need get our bottom offset/padding (so text isn't slammed to the bottom or cut off)
var ascent:CGFloat = 0
var descent:CGFloat = 0
var leading:CGFloat = 0
if lineCount > 0 {
CTLineGetTypographicBounds(lines.last as! CTLine, &ascent, &descent, &leading)
}
//This variable holds the current draw position, relative to CT origin of the bottom left
//https://stackoverflow.com/a/27631737/1166266
var drawYPositionFromOrigin:CGFloat = descent
//Again, draw the lines in reverse so we don't need look ahead
for lineIndex in (0..<lineCount).reversed() {
//Calculate the current line height so we can accurately move the position up later
let lastLinePosition = lineIndex > 0 ? lineOrigins[lineIndex - 1].y: textRect.height
let currentLineHeight = lastLinePosition - lineOrigins[lineIndex].y
//Throughout the loop below this variable will be updated to the tallest value for the current line
var maxLineHeight:CGFloat = currentLineHeight
//Grab the current run glyph. This is used for attributed string interop
let glyphRuns = CTLineGetGlyphRuns(lines[lineIndex] as! CTLine) as [AnyObject]
for run in glyphRuns {
let run = run as! CTRun
//Convert the format range to something we can match to our string
let runRange = CTRunGetStringRange(run)
let attributesAtPosition = attributedText.attributes(at: runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attributesAtPosition[NSAttributedStringKey.baselineOffset] as? NSNumber {
//We have a baseline offset!
baselineAdjustment = CGFloat(adjust.floatValue)
}
//Check if this glyph run is tallest, and move it if it is
maxLineHeight = max(currentLineHeight + baselineAdjustment, maxLineHeight)
//Move the draw head. Note that we're drawing from the unupdated drawYPositionFromOrigin. This is again thanks to the CT Cartesian plane where we draw from the bottom left of text too.
context.textPosition = CGPoint.init(x: lineOrigins[lineIndex].x, y: drawYPositionFromOrigin)
//Draw!
CTRunDraw(run, context, CFRangeMake(0, 0))
}
//Move our position because we've completed the drawing of the line which is at most `maxLineHeight`
drawYPositionFromOrigin += maxLineHeight
}
}
I also made a method which calculates the required height of the text given a width. It's exactly the same code except it doesn't draw anything.
/// Calculate the height if it were drawn using `drawText`
/// Uses the same code as drawText except it doesn't draw.
///
/// - Parameters:
/// - attributedText: The text to calculate the height of
/// - width: The constraining width
/// - estimationHeight: Optional parameter, default 30,000px. This is the container height used to lay out the text. DO NOT USE CGFLOAT_MAX, AS CORE TEXT CANNOT CREATE A FRAME OF THAT SIZE.
/// - Returns: The size required to fit the text
static func size(of attributedText:NSAttributedString,width:CGFloat, estimationHeight:CGFloat?=30000) -> CGSize {
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = CGRect.init(x: 0, y: 0, width: width, height: estimationHeight!)
let path = CGPath(rect: textRect, transform: nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
//Fetch our lines, bridging to swift from CFArray
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
//Get the line origin coordinates. These are used for calculating stock line height (w/o baseline modifications)
var lineOrigins = [CGPoint](repeating: CGPoint.zero, count: lineCount)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
//Since we're starting from the bottom of the container we need get our bottom offset/padding (so text isn't slammed to the bottom or cut off)
var ascent:CGFloat = 0
var descent:CGFloat = 0
var leading:CGFloat = 0
if lineCount > 0 {
CTLineGetTypographicBounds(lines.last as! CTLine, &ascent, &descent, &leading)
}
//This variable holds the current draw position, relative to CT origin of the bottom left
var drawYPositionFromOrigin:CGFloat = descent
//Again, draw the lines in reverse so we don't need look ahead
for lineIndex in (0..<lineCount).reversed() {
//Calculate the current line height so we can accurately move the position up later
let lastLinePosition = lineIndex > 0 ? lineOrigins[lineIndex - 1].y: textRect.height
let currentLineHeight = lastLinePosition - lineOrigins[lineIndex].y
//Throughout the loop below this variable will be updated to the tallest value for the current line
var maxLineHeight:CGFloat = currentLineHeight
//Grab the current run glyph. This is used for attributed string interop
let glyphRuns = CTLineGetGlyphRuns(lines[lineIndex] as! CTLine) as [AnyObject]
for run in glyphRuns {
let run = run as! CTRun
//Convert the format range to something we can match to our string
let runRange = CTRunGetStringRange(run)
let attributesAtPosition = attributedText.attributes(at: runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attributesAtPosition[NSAttributedStringKey.baselineOffset] as? NSNumber {
//We have a baseline offset!
baselineAdjustment = CGFloat(adjust.floatValue)
}
//Check if this glyph run is tallest, and move it if it is
maxLineHeight = max(currentLineHeight + baselineAdjustment, maxLineHeight)
//Skip drawing since this is a height calculation
}
//Move our position because we've completed the drawing of the line which is at most `maxLineHeight`
drawYPositionFromOrigin += maxLineHeight
}
return CGSize.init(width: width, height: drawYPositionFromOrigin)
}
Like everything I write, I also did some benchmarks against some public libraries and system functions (even though they won't work here). I used a huge, complex string here to keep anyone from taking unfair shortcuts.
---HEIGHT CALCULATION---
Runtime for 1000 iterations (ms) BoundsForRect: 5415.030002593994
Runtime for 1000 iterations (ms) layoutManager: 5370.990991592407
Runtime for 1000 iterations (ms) CTFramesetterSuggestFrameSizeWithConstraints: 2372.151017189026
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame ObjC: 2300.302028656006
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame-Swift: 2313.6669397354126
Runtime for 1000 iterations (ms) THIS ANSWER size(of:): 2566.351056098938
---RENDER---
Runtime for 1000 iterations (ms) AttributedLabel: 35.032033920288086
Runtime for 1000 iterations (ms) UILabel: 45.948028564453125
Runtime for 1000 iterations (ms) TTTAttributedLabel: 301.1329174041748
Runtime for 1000 iterations (ms) THIS ANSWER: 20.398974418640137
So summary time: we did very well! size(of...) is nearly equal to stock CT layout which means that our addon for superscript is fairly cheap despite using a hash table lookup. We do, however, flat out win on draw calls. I suspect that this is due to the very expensive 30k pixel estimation frame we have to create. If we make a better estimate performance will be better. I've already been working for about three hours so I'm calling it quits and leaving that as an exercise to the reader.
I struggled with this problem as well. It turns out, as some of the posters above suggested, that none of the fonts that come with iOS support superscripting or subscripting. My solution was to purchase and install two custom superscript and subscript fonts (they were $9.99 each; here's a link to the site: http://superscriptfont.com/).
Not really that hard to do. Just add the font files as resources and add info.plist entries for "Font provided by application".
The next step was to search for the appropriate tags in my NSAttributedString, remove the tags and apply the font to the text.
Works great!
A Swift 2 twist on Dimitri's answer; effectively implements NSBaselineOffsetAttributeName.
When coding I was in a UIView so had a reasonable bounds rect to use. His answer calculated its own rect.
func drawText(context context:CGContextRef, attributedText: NSAttributedString) {
// All this CoreText iteration just to add support for superscripting.
// NSBaselineOffsetAttributeName isn't supported by CoreText. So we manually iterate through
// all the text ranges, rendering each, and offsetting the baseline where needed.
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = CGRectOffset(bounds, 0, 0)
let path = CGPathCreateWithRect(textRect, nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
// All the lines of text we'll render...
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
// And their origin coordinates...
var lineOrigins = [CGPoint](count: lineCount, repeatedValue: CGPointZero)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
for lineIndex in 0..<lineCount {
let lineObject = lines[lineIndex]
// Each run of glyphs we'll render...
let glyphRuns = CTLineGetGlyphRuns(lineObject as! CTLine) as [AnyObject]
for r in glyphRuns {
let run = r as! CTRun
let runRange = CTRunGetStringRange(run)
// What attributes are in the NSAttributedString here? If we find NSBaselineOffsetAttributeName,
// adjust the baseline.
let attrs = attributedText.attributesAtIndex(runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attrs[NSBaselineOffsetAttributeName as String] as? NSNumber {
baselineAdjustment = CGFloat(adjust.floatValue)
}
CGContextSetTextPosition(context, lineOrigins[lineIndex].x, lineOrigins[lineIndex].y - 25 + baselineAdjustment)
CTRunDraw(run, context, CFRangeMake(0, 0))
}
}
}
With iOS 11, Apple introduced a new string attribute name:
kCTBaselineOffsetAttributeName which works with Core Text.
Note that the offset direction is different from NSBaselineOffsetAttributeName used with NSAttributedStrings on UILabels etc (a positive offset moves the baseline downwards).

How to detect an image border programmatically?

I'm searching for a program which detects the border of an image. For example, I have a square and the program detects its X/Y coordinates.
Example:
(Example image: http://img709.imageshack.us/img709/1341/22444641.png)
This is a very simple edge detector. It is suitable for binary images. It just calculates the differences between neighbouring pixels, like image.pos[1,1] = image.pos[1,1] - image.pos[1,2] for the horizontal direction, and the same for vertical differences. Bear in mind that you also need to normalize the result to the range of values 0..255.
But! if you just need a program, use Adobe Photoshop.
Code written in C#.
public void SimpleEdgeDetection()
{
BitmapData data = Util.SetImageToProcess(image);
if (image.PixelFormat != PixelFormat.Format8bppIndexed)
return;
unsafe
{
byte* ptr1 = (byte *)data.Scan0;
byte* ptr2;
int offset = data.Stride - data.Width;
int height = data.Height - 1;
int px;
for (int y = 0; y < height; y++)
{
ptr2 = (byte*)ptr1 + data.Stride;
for (int x = 0; x < data.Width; x++, ptr1++, ptr2++)
{
px = Math.Abs(ptr1[0] - ptr1[1]) + Math.Abs(ptr1[0] - ptr2[0]);
if (px > Util.MaxGrayLevel) px = Util.MaxGrayLevel;
ptr1[0] = (byte)px;
}
ptr1 += offset;
}
}
image.UnlockBits(data);
}
Method from Util Class
static public BitmapData SetImageToProcess(Bitmap image)
{
if (image != null)
return image.LockBits(
new Rectangle(0, 0, image.Width, image.Height),
ImageLockMode.ReadWrite,
image.PixelFormat);
return null;
}
If you need more explanation or a specific algorithm, just ask with more detail rather than being so general.
It depends what you want to do with the border. If you just want the values along the edge of the region, use a connected-components region algorithm; you must know the value of the region prior to using the algorithm. This will navigate around the border and collect the outside of the region. If you are trying to detect just the outside lines, take the gradient of the image and it will reveal where the lines are. To do this, convolve the image with an edge-detection filter such as Prewitt, Sobel, etc.
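As a rough illustration of the gradient step (a sketch only: it works on a plain 2D grayscale array rather than a Bitmap, and the kernels are the standard Sobel ones):
// Sobel gradient magnitude over a grayscale intensity array.
// Pixels with a large magnitude lie on edges/borders.
public static byte[,] SobelMagnitude(byte[,] gray)
{
    int h = gray.GetLength(0), w = gray.GetLength(1);
    var result = new byte[h, w];
    for (int y = 1; y < h - 1; y++)
    {
        for (int x = 1; x < w - 1; x++)
        {
            // Horizontal (gx) and vertical (gy) Sobel responses.
            int gx = -gray[y - 1, x - 1] - 2 * gray[y, x - 1] - gray[y + 1, x - 1]
                     + gray[y - 1, x + 1] + 2 * gray[y, x + 1] + gray[y + 1, x + 1];
            int gy = -gray[y - 1, x - 1] - 2 * gray[y - 1, x] - gray[y - 1, x + 1]
                     + gray[y + 1, x - 1] + 2 * gray[y + 1, x] + gray[y + 1, x + 1];
            int magnitude = Math.Abs(gx) + Math.Abs(gy);   // cheap approximation of sqrt(gx^2 + gy^2)
            result[y, x] = (byte)Math.Min(magnitude, 255); // clamp to 0..255
        }
    }
    return result;
}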
You can use any image processing library, such as OpenCV, which is available for C++ or Python.
You should look for edge detection functions such as Canny edge detection.
Of course this would require some diving into image processing.
The example image you gave should be straightforward to detect; how noisy/varied are the images going to be?
A shape recognition algorithm might help you out, providing it has a solid border of some kind, and the background colour is a solid one.
From the sounds of it, you just want a blob extraction algorithm. After that, the lowest/highest values for x/y will give you the coordinates of the corners.
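A minimal sketch of that last idea, assuming the blob has already been separated from the background by a simple threshold (the threshold value and the assumption that the shape is brighter than the background are illustrative):
// Bounding box of all "foreground" pixels in a grayscale intensity array.
public static Rectangle GetBoundingBox(byte[,] gray, byte threshold)
{
    int h = gray.GetLength(0), w = gray.GetLength(1);
    int minX = w, minY = h, maxX = -1, maxY = -1;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (gray[y, x] >= threshold)          // part of the blob
            {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
    if (maxX < 0)
        return Rectangle.Empty;                   // no foreground pixels found
    // Corners are (minX, minY), (maxX, minY), (minX, maxY), (maxX, maxY).
    return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}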