context.Path returning nil - core-graphics

I have the following code that draws into a CALayer subclass' context.
override func draw(in con: CGContext) {
// super.draw(in: con) //with/out makes no diff
let endAng = CGFloat(Float.pi * 2)
con.addArc(center: position,
radius: 30,
startAngle: 0,
endAngle: endAng,
clockwise: false)
con.setStrokeColor(UIColor.red.cgColor)
con.setLineWidth(CGFloat(thickness / 5))
con.strokePath()
self.path = con.path
}
In that last line, I'm trying to save the path so I can do more drawing with it when the user goes into another mode. But after the assignment, self.path is nil.
The docs simply say:
Returns a path object built from the current path information in a graphics context.
Why, if I have just been adding path components to my CALayer subclass' context, is the path getter returning nil? The documentation here does not help me debug.
So, all the drawing functions like addArc say they add these shapes to the current path (and I do indeed see them rendered in my layer), yet on the next line, when I query the current context's path, it is nil?

The documentation for the strokePath function says:
The current path is cleared as a side effect of calling this function.
Move your call to con.path earlier, before you stroke the path.
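For example, with the question's code reordered (same position, thickness, and path properties as above), a minimal sketch looks like this:
override func draw(in con: CGContext) {
    con.addArc(center: position,
               radius: 30,
               startAngle: 0,
               endAngle: CGFloat(Float.pi * 2),
               clockwise: false)
    // Capture the path while it is still the context's current path...
    self.path = con.path
    con.setStrokeColor(UIColor.red.cgColor)
    con.setLineWidth(CGFloat(thickness / 5))
    // ...because strokePath() clears the current path as a side effect.
    con.strokePath()
}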

Related

NSTextBlock backgroundColor is not drawn

I have an NSTextBlock subclass that has a specific backgroundColor set. Now when I add a custom paragraph style to a range of text like this
let block = MyTextBlock()
block.backgroundColor = myBackgroundColor
let pstyle = NSMutableParagraphStyle()
pstyle.textBlocks = [block]
attributedText.addAttribute(.paragraphStyle, value: pstyle, range: textRange)
// Append the attributed string to the text view's textStorage ...
the text block is shown without a background color. I know that the text block works, because rectForLayout gets called, but when I override drawBackground it never gets called.
Do I have to do something else for NSTextBlocks to draw their background?
PS: Borders seem to be ignored as well. I also tried to find a sample on GitHub, but it doesn't draw any backgrounds either, despite having a background color set.
After trying everything, I finally managed to get the background to show up by setting
setValue(100, type: .percentageValueType, for: .width)
It seems that the drawing logic expects some value for the content size. Really nice documentation job there. This requirement is nowhere to be found.
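Applied to the snippet above, the fix looks something like this (a sketch, assuming the same MyTextBlock, attributes, and range as in the question):
let block = MyTextBlock()
block.backgroundColor = myBackgroundColor
// Without an explicit content width, the background (and borders) never get drawn.
block.setValue(100, type: .percentageValueType, for: .width)

let pstyle = NSMutableParagraphStyle()
pstyle.textBlocks = [block]
attributedText.addAttribute(.paragraphStyle, value: pstyle, range: textRange)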

CAShapeLayer - bounds

Do you have to set the bounds of a CAShapeLayer?
I'm creating a shape layer and assigning it a path via a UIBezierPath; the shape is a simple circle the size of the view.
I'm not setting any position or bounds on the layer; is that wrong?
class View: UIView {
...
var backgroundLayer: CAShapeLayer!
func setup() {
// call from init
backgroundLayer = CAShapeLayer()
backgroundLayer.strokeColor = UIColor.redColor().CGColor
backgroundLayer.lineWidth = 3
backgroundLayer.fillColor = UIColor.clearColor().CGColor
layer.addSublayer(backgroundLayer)
...
}
override func layoutSubviews() {
super.layoutSubviews()
backgroundLayer?.path = circlePath(100)
...
}
func circlePath(progress: Int) -> CGPath {
let path = UIBezierPath()
let inverseProgress = 1 - CGFloat(progress) / 100
let endAngleOffset = CGFloat(2 * M_PI) * inverseProgress
path.addArcWithCenter(localCenter, radius: radius, startAngle: CGFloat(-M_PI), endAngle: CGFloat(M_PI) - endAngleOffset, clockwise: true)
return path.CGPath
}
...
}
As you've already seen, the layer will display just fine even without setting the bounds. So you don't "have to" set it, but not having a bounds (or having a bounds different from the path's bounding box) can sometimes be confusing when doing layout or transforms.
When it comes to layout, positioning, and transformation there are a few different coordinates to consider.
The layer is positioned relative to its parent's coordinate system
The path is positioned relative to the shape layer's coordinate system
Each point in the path is relative to the origin (0,0) of the path.
The shape layer is transformed relative to its center, and the position of the shape layer is also at the center of its bounds. This means that if the shape layer has a zero-size bounds (0×0), then any transformation (e.g. rotation) happens around the origin of the path (0,0), as opposed to the center of the path. It also means that when setting the position of the shape layer, you are conceptually positioning the origin of the path, as opposed to the center of the path. However, if the origin of the path happens to be the center of the path's bounding box (for example, a circle centered around (0,0)), then this isn't really an issue.
So, to recap: you don't have to set a bounds, but sometimes (depending on the path) positioning or transformation might be clearer when it's set.
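If you do want the layer's geometry to match the path, here is a small sketch using the same Swift 2-era API as the question (not required, just one way to make transforms pivot around the circle's center rather than (0,0)):
override func layoutSubviews() {
    super.layoutSubviews()
    let path = circlePath(100)
    backgroundLayer?.path = path
    // Size the layer to the path's bounding box and center it in the view,
    // so transforms (e.g. rotation) pivot around the circle's center.
    backgroundLayer?.bounds = CGPathGetBoundingBox(path)
    backgroundLayer?.position = CGPoint(x: CGRectGetMidX(bounds), y: CGRectGetMidY(bounds))
}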

Animating UIVisualEffectView Blur Radius?

As the title says, is there a way to animate a UIVisualEffectView's blur radius? I have a dynamic background behind the view, so the ImageEffects approach can't be used... The only thing I know of that can do this is animating the opacity, but iOS complains that doing so breaks the effect view, so it definitely seems like a bad idea... Any help would be gladly appreciated.
The answer is yes. Here's an example for animating from no blur -> blur:
// When creating your view...
let blurView = UIVisualEffectView()
// Later, when you want to animate...
UIView.animateWithDuration(1.0) { () -> Void in
blurView.effect = UIBlurEffect(style: .Dark)
}
This will animate the blur radius from zero (totally transparent, or rather - no blur effect at all) to the default radius (fully blurred) over the duration of one second. And to do the reverse animation:
UIView.animateWithDuration(1.0) { () -> Void in
blurView.effect = nil
}
The resulting animations transform the blur radius smoothly, even though you're actually adding/removing the blur effect entirely - UIKit just knows what to do behind the scenes.
Note that this wasn't always possible: Until recently (not sure when), a UIVisualEffectView had to be initialized with a UIVisualEffect, and the effect property was read-only. Now, effect is both optional and read/write (though the documentation isn't updated...), and UIVisualEffectView includes an empty initializer, enabling us to perform these animations.
The only restriction is that you cannot manually assign a custom blur radius to a UIVisualEffectView - you can only animate between 'no blur' and 'fully blurred'.
EDIT: In case anybody is interested, I've created a subclass of UIVisualEffectView that gives you full control over blur radius. The caveat is that it uses a private UIKit API, so you probably shouldn't submit apps for review using it. However, it's still interesting and useful for prototypes or internal applications:
https://github.com/collinhundley/APCustomBlurView

How can I fake superscript and subscript with Core Text and an Attributed String?

I'm using an NSMutableAttributedString to build a string with formatting, which I then pass to Core Text to render into a frame. The problem is that I need to use superscript and subscript. Unless these characters are available in the font (most fonts don't support them), setting the kCTSuperscriptAttributeName attribute does nothing at all.
So I guess I'm left with the only option, which is to fake it by changing the font size and moving the baseline. I can do the font size bit, but don't know the code for altering the baseline. Can anyone help please?
Thanks!
EDIT: Considering the amount of time I have available to sort this problem, I'm thinking of editing a font so that it's given a subscript "2"... Either that or finding a built-in iPad font which has one. Does anyone know of any serif font with a subscript "2" I can use?
There is no baseline setting amongst the CTParagraphStyleSpecifiers or the defined string attribute name constants. I think it's therefore safe to conclude that CoreText does not itself support a baseline adjust property on text. There's a reference made to baseline placement in CTTypesetter, but I can't tie that to any ability to vary the baseline over the course of a line in the iPad's CoreText.
Hence, you probably need to interfere in the rendering process yourself. For example:
create a CTFramesetter, e.g. via CTFramesetterCreateWithAttributedString
get a CTFrame from that via CTFramesetterCreateFrame
use CTFrameGetLineOrigins and CTFrameGetLines to get an array of CTLines and where they should be drawn (ie, the text with suitable paragraph/line breaks and all your other kerning/leading/other positioning text attributes applied)
from those, for lines with no superscript or subscript, just use CTLineDraw and forget about it
for those with superscript or subscript, use CTLineGetGlyphRuns to get an array of CTRun objects describing the various glyphs on the line
on each run, use CTRunGetStringIndices to determine which source characters are in the run; if none that you want to superscript or subscript are included, just use CTRunDraw to draw the thing
otherwise, use CTRunGetGlyphs to break the run into individual glyphs and CTRunGetPositions to figure out where they would be drawn in the normal run of things
use CGContextShowGlyphsAtPoint as appropriate, having tweaked the text matrix for those you want in superscript or subscript
I haven't yet found a way to query whether a font has the relevant hints for automatic superscript/subscript generation, which makes things a bit tricky. If you're desperate and don't have a solution to that, it's probably easier just not to use CoreText's stuff at all — in which case you should probably define your own attribute (that's why [NS/CF]AttributedString allow arbitrary attributes to be applied, identified by string name) and use the normal NSString searching methods to identify regions that need to be printed in superscript or subscript from blind.
For performance reasons, binary search is probably the way to go on searching all lines, the runs within a line and the glyphs within a run for those you're interested in. Assuming you have a custom UIView subclass to draw CoreText content, it's probably smarter to do it ahead of time rather than upon every drawRect: (or the equivalent methods, if e.g. you're using a CATiledLayer).
Also, the CTRun methods have variants that request a pointer to a C array containing the things you're asking for copies of, possibly saving you a copy operation but not necessarily succeeding. Check the documentation. I've just made sure that I'm sketching a workable solution rather than necessarily plotting the absolutely optimal route through the CoreText API.
Here is some code based on Tommy's outline that does the job quite well (tested only on single lines though). Set the baseline on your attributed string with the @"MDBaselineAdjust" attribute, and this code draws the line at origin, a CGPoint. To get superscript, also lower the font size a notch. Preview of what's possible: http://cloud.mochidev.com/IfPF (the line that reads "[Xe] 4f14...")
Hope this helps :)
NSAttributedString *string = ...;
CGPoint origin = ...;
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)string);
CGSize suggestedSize = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, CFRangeMake(0, string.length), NULL, CGSizeMake(CGFLOAT_MAX, CGFLOAT_MAX), NULL);
CGPathRef path = CGPathCreateWithRect(CGRectMake(origin.x, origin.y, suggestedSize.width, suggestedSize.height), NULL);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, string.length), path, NULL);
NSArray *lines = (NSArray *)CTFrameGetLines(frame);
if (lines.count) {
CGPoint *lineOrigins = malloc(lines.count * sizeof(CGPoint));
CTFrameGetLineOrigins(frame, CFRangeMake(0, lines.count), lineOrigins);
int i = 0;
for (id aLine in lines) {
NSArray *glyphRuns = (NSArray *)CTLineGetGlyphRuns((CTLineRef)aLine);
CGFloat width = origin.x+lineOrigins[i].x-lineOrigins[0].x;
for (id run in glyphRuns) {
CFRange range = CTRunGetStringRange((CTRunRef)run);
NSDictionary *dict = [string attributesAtIndex:range.location effectiveRange:NULL];
CGFloat baselineAdjust = [[dict objectForKey:@"MDBaselineAdjust"] doubleValue];
CGContextSetTextPosition(context, width, origin.y+baselineAdjust);
CTRunDraw((CTRunRef)run, context, CFRangeMake(0, 0));
}
i++;
}
free(lineOrigins);
}
CFRelease(frame);
CGPathRelease(path);
CFRelease(framesetter);
You can now mimic subscripts using TextKit in iOS 7. Example:
NSMutableAttributedString *carbonDioxide = [[NSMutableAttributedString alloc] initWithString:@"CO2"];
[carbonDioxide addAttribute:NSFontAttributeName value:[UIFont systemFontOfSize:8] range:NSMakeRange(2, 1)];
[carbonDioxide addAttribute:NSBaselineOffsetAttributeName value:@(-2) range:NSMakeRange(2, 1)];
I've been having trouble with this myself. Apple's Core Text documentation claims that there has been support in iOS since version 3.2, but for some reason it still just doesn't work. Even in iOS 5... how very frustrating >.<
I managed to find a workaround if you only really care about superscript or subscript numbers. Say you have a block of text that might contain a "sub2" tag where you want a subscript number 2. Use NSRegularExpression to find the tags, and then use the replacementStringForResult method on your regex object to replace each tag with Unicode characters:
if ([match isEqualToString:@"<sub2/>"])
{
replacement = @"₂";
}
If you use the OS X Character Viewer, you can drop Unicode characters right into your code. There's a set of characters in there called "Digits" which has all the superscript and subscript number characters. Just leave your cursor at the appropriate spot in your code window and double-click in the Character Viewer to insert the character you want.
With the right font, you could probably do this with any letter as well, but the character map only has a handful of non-numbers available for this that I've seen.
Alternatively you can just put the unicode characters in your source content, but in a lot of cases (like mine), that isn't possible.
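The same idea as a Swift sketch (the helper name is made up): the subscript digits live in a contiguous Unicode block, U+2080 through U+2089, so ordinary digits can be mapped to them directly.
// Map ordinary digits to their Unicode subscript equivalents (U+2080...U+2089).
func subscripted(digits: String) -> String {
    let map: [Character: Character] = [
        "0": "\u{2080}", "1": "\u{2081}", "2": "\u{2082}", "3": "\u{2083}",
        "4": "\u{2084}", "5": "\u{2085}", "6": "\u{2086}", "7": "\u{2087}",
        "8": "\u{2088}", "9": "\u{2089}"
    ]
    return String(digits.map { map[$0] ?? $0 })
}
// "CO" + subscripted(digits: "2")  ->  "CO₂"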
Swift 4
Very loosely based on Graham Perks' answer. I could not make his code work as-is, but after three hours of work I've created something that works great! If you'd prefer a full implementation of this along with a bunch of nifty other performance and feature add-ons (links, async drawing, etc.), check out my single-file library DYLabel. If not, read on.
I explain everything I'm doing in the comments. This is the draw method, to be called from drawRect:
/// Draw text on a given context. Supports superscript using NSBaselineOffsetAttributeName
///
/// This method works by drawing the text backwards (i.e. last line first). This is very important because it's how we ensure superscripts don't overlap the text above them. In other words, we need to start from the bottom, get the height of the text we just drew, and then draw the next text above it. This could be done in a forward direction but you'd have to use lookahead, which IMO is more work.
///
/// If you have to modify this, remember that CT uses a mathematical origin (i.e. 0,0 is bottom left, like a cartesian plane)
/// - Parameters:
/// - context: A core graphics draw context
/// - attributedText: An attributed string
func drawText(context:CGContext, attributedText: NSAttributedString) {
//Create our CT boiler plate
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = bounds
let path = CGPath(rect: textRect, transform: nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
//Fetch our lines, bridging to swift from CFArray
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
//Get the line origin coordinates. These are used for calculating stock line height (w/o baseline modifications)
var lineOrigins = [CGPoint](repeating: CGPoint.zero, count: lineCount)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
//Since we're starting from the bottom of the container we need to get our bottom offset/padding (so text isn't slammed to the bottom or cut off)
var ascent:CGFloat = 0
var descent:CGFloat = 0
var leading:CGFloat = 0
if lineCount > 0 {
CTLineGetTypographicBounds(lines.last as! CTLine, &ascent, &descent, &leading)
}
//This variable holds the current draw position, relative to CT origin of the bottom left
//https://stackoverflow.com/a/27631737/1166266
var drawYPositionFromOrigin:CGFloat = descent
//Again, draw the lines in reverse so we don't need lookahead
for lineIndex in (0..<lineCount).reversed() {
//Calculate the current line height so we can accurately move the position up later
let lastLinePosition = lineIndex > 0 ? lineOrigins[lineIndex - 1].y: textRect.height
let currentLineHeight = lastLinePosition - lineOrigins[lineIndex].y
//Throughout the loop below this variable will be updated to the tallest value for the current line
var maxLineHeight:CGFloat = currentLineHeight
//Grab the current line's glyph runs. These are used for attributed string interop
let glyphRuns = CTLineGetGlyphRuns(lines[lineIndex] as! CTLine) as [AnyObject]
for run in glyphRuns {
let run = run as! CTRun
//Convert the format range to something we can match to our string
let runRange = CTRunGetStringRange(run)
let attributesAtPosition = attributedText.attributes(at: runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attributesAtPosition[NSAttributedStringKey.baselineOffset] as? NSNumber {
//We have a baseline offset!
baselineAdjustment = CGFloat(adjust.floatValue)
}
//Check if this glyph run is the tallest on the line, and update maxLineHeight if it is
maxLineHeight = max(currentLineHeight + baselineAdjustment, maxLineHeight)
//Move the draw head. Note that we're drawing from the not-yet-updated drawYPositionFromOrigin. This is again thanks to the CT cartesian plane, where we draw from the bottom left of the text too.
context.textPosition = CGPoint.init(x: lineOrigins[lineIndex].x, y: drawYPositionFromOrigin)
//Draw!
CTRunDraw(run, context, CFRangeMake(0, 0))
}
//Move our position because we've completed the drawing of the line which is at most `maxLineHeight`
drawYPositionFromOrigin += maxLineHeight
}
}
I also made a method which calculates the required height of the text given a width. It's exactly the same code except it doesn't draw anything.
/// Calculate the height if it were drawn using `drawText`
/// Uses the same code as drawText except it doesn't draw.
///
/// - Parameters:
/// - attributedText: The text to calculate the height of
/// - width: The constraining width
/// - estimationHeight: Optional parameter, default 30,000px. This is the container height used to lay out the text. DO NOT USE CGFloat.greatestFiniteMagnitude, AS CORE TEXT CANNOT CREATE A FRAME OF THAT SIZE.
/// - Returns: The size required to fit the text
static func size(of attributedText:NSAttributedString,width:CGFloat, estimationHeight:CGFloat?=30000) -> CGSize {
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = CGRect.init(x: 0, y: 0, width: width, height: estimationHeight!)
let path = CGPath(rect: textRect, transform: nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
//Fetch our lines, bridging to swift from CFArray
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
//Get the line origin coordinates. These are used for calculating stock line height (w/o baseline modifications)
var lineOrigins = [CGPoint](repeating: CGPoint.zero, count: lineCount)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
//Since we're starting from the bottom of the container we need to get our bottom offset/padding (so text isn't slammed to the bottom or cut off)
var ascent:CGFloat = 0
var descent:CGFloat = 0
var leading:CGFloat = 0
if lineCount > 0 {
CTLineGetTypographicBounds(lines.last as! CTLine, &ascent, &descent, &leading)
}
//This variable holds the current draw position, relative to CT origin of the bottom left
var drawYPositionFromOrigin:CGFloat = descent
//Again, walk the lines in reverse so we don't need lookahead
for lineIndex in (0..<lineCount).reversed() {
//Calculate the current line height so we can accurately move the position up later
let lastLinePosition = lineIndex > 0 ? lineOrigins[lineIndex - 1].y: textRect.height
let currentLineHeight = lastLinePosition - lineOrigins[lineIndex].y
//Throughout the loop below this variable will be updated to the tallest value for the current line
var maxLineHeight:CGFloat = currentLineHeight
//Grab the current line's glyph runs. These are used for attributed string interop
let glyphRuns = CTLineGetGlyphRuns(lines[lineIndex] as! CTLine) as [AnyObject]
for run in glyphRuns {
let run = run as! CTRun
//Convert the format range to something we can match to our string
let runRange = CTRunGetStringRange(run)
let attributesAtPosition = attributedText.attributes(at: runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attributesAtPosition[NSAttributedStringKey.baselineOffset] as? NSNumber {
//We have a baseline offset!
baselineAdjustment = CGFloat(adjust.floatValue)
}
//Check if this glyph run is the tallest on the line, and update maxLineHeight if it is
maxLineHeight = max(currentLineHeight + baselineAdjustment, maxLineHeight)
//Skip drawing since this is a height calculation
}
//Move our position because we've completed the layout of the line, which is at most `maxLineHeight`
drawYPositionFromOrigin += maxLineHeight
}
return CGSize.init(width: width, height: drawYPositionFromOrigin)
}
Like everything I write, I also did some benchmarks against some public libraries and system functions (even though they won't work here). I used a huge, complex string here to keep anyone from taking unfair shortcuts.
---HEIGHT CALCULATION---
Runtime for 1000 iterations (ms) BoundsForRect: 5415.030002593994
Runtime for 1000 iterations (ms) layoutManager: 5370.990991592407
Runtime for 1000 iterations (ms) CTFramesetterSuggestFrameSizeWithConstraints: 2372.151017189026
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame ObjC: 2300.302028656006
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame-Swift: 2313.6669397354126
Runtime for 1000 iterations (ms) THIS ANSWER size(of:): 2566.351056098938
---RENDER---
Runtime for 1000 iterations (ms) AttributedLabel: 35.032033920288086
Runtime for 1000 iterations (ms) UILabel: 45.948028564453125
Runtime for 1000 iterations (ms) TTTAttributedLabel: 301.1329174041748
Runtime for 1000 iterations (ms) THIS ANSWER: 20.398974418640137
So, summary time: we did very well! size(of:) is nearly equal to stock CT layout, which means that our add-on for superscript is fairly cheap despite using a hash table lookup. We do, however, flat out win on draw calls. I suspect that this is due to the very expensive 30k-pixel estimation frame we have to create; if we make a better estimate, performance will be better. I've already been working on this for about three hours, so I'm calling it quits and leaving that as an exercise for the reader.
I struggled with this problem as well. It turns out, as some of the posters above suggested, that none of the fonts that come with iOS support superscripting or subscripting. My solution was to purchase and install two custom superscript and subscript fonts (they were $9.99 each; here's a link to the site: http://superscriptfont.com/).
Not really that hard to do. Just add the font files as resources and add Info.plist entries for "Fonts provided by application".
The next step was to search for the appropriate tags in my NSAttributedString, remove the tags and apply the font to the text.
Works great!
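That search-and-apply step might look roughly like this in Swift (the tag format and "SubscriptFontName" are placeholders for whatever font you installed, not real names):
// Hypothetical example: replace "<sub>...</sub>" tags with the same text
// rendered in a purchased subscript font.
let text = NSMutableAttributedString(string: "H<sub>2</sub>O")
let regex = try! NSRegularExpression(pattern: "<sub>(.*?)</sub>")
let matches = regex.matches(in: text.string, range: NSRange(location: 0, length: text.length))
// Walk matches in reverse so earlier ranges stay valid as we replace.
for match in matches.reversed() {
    let inner = (text.string as NSString).substring(with: match.range(at: 1))
    let replacement = NSAttributedString(string: inner,
                                         attributes: [.font: UIFont(name: "SubscriptFontName", size: 17)!])
    text.replaceCharacters(in: match.range, with: replacement)
}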
A Swift 2 twist on Dimitri's answer; effectively implements NSBaselineOffsetAttributeName.
When coding this I was in a UIView, so I had a reasonable bounds rect to use. His answer calculated its own rect.
func drawText(context context:CGContextRef, attributedText: NSAttributedString) {
// All this CoreText iteration just to add support for superscripting.
// NSBaselineOffsetAttributeName isn't supported by CoreText. So we manually iterate through
// all the text ranges, rendering each, and offsetting the baseline where needed.
let framesetter = CTFramesetterCreateWithAttributedString(attributedText)
let textRect = CGRectOffset(bounds, 0, 0)
let path = CGPathCreateWithRect(textRect, nil)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, nil)
// All the lines of text we'll render...
let lines = CTFrameGetLines(frame) as [AnyObject]
let lineCount = lines.count
// And their origin coordinates...
var lineOrigins = [CGPoint](count: lineCount, repeatedValue: CGPointZero)
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), &lineOrigins);
for lineIndex in 0..<lineCount {
let lineObject = lines[lineIndex]
// Each run of glyphs we'll render...
let glyphRuns = CTLineGetGlyphRuns(lineObject as! CTLine) as [AnyObject]
for r in glyphRuns {
let run = r as! CTRun
let runRange = CTRunGetStringRange(run)
// What attributes are in the NSAttributedString here? If we find NSBaselineOffsetAttributeName,
// adjust the baseline.
let attrs = attributedText.attributesAtIndex(runRange.location, effectiveRange: nil)
var baselineAdjustment: CGFloat = 0.0
if let adjust = attrs[NSBaselineOffsetAttributeName as String] as? NSNumber {
baselineAdjustment = CGFloat(adjust.floatValue)
}
CGContextSetTextPosition(context, lineOrigins[lineIndex].x, lineOrigins[lineIndex].y - 25 + baselineAdjustment)
CTRunDraw(run, context, CFRangeMake(0, 0))
}
}
}
With iOS 11, Apple introduced a new string attribute name: kCTBaselineOffsetAttributeName, which works with Core Text.
Note that the offset direction is different from NSBaselineOffsetAttributeName used with NSAttributedStrings on UILabels etc (a positive offset moves the baseline downwards).
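A minimal sketch (assuming iOS 11+ and a Core Text drawing pipeline like the ones above) of tagging one character with the new attribute:
import CoreText
import UIKit

let formula = NSMutableAttributedString(string: "CO2")
// Per the note above, with kCTBaselineOffsetAttributeName a positive value moves
// the baseline downwards, which is what we want for a subscript.
formula.addAttribute(NSAttributedString.Key(kCTBaselineOffsetAttributeName as String),
                     value: 4,
                     range: NSRange(location: 2, length: 1))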

dojox.drawing.Drawing - custom tool to create rectangle with rounded corners

I'm working with dojox.drawing.Drawing to create a simple diagramming tool. I have created a custom tool to draw a rounded rectangle by extending dojox.drawing.tools.Rect, as shown below -
dojo.provide("dojox.drawing.tools.custom.RoundedRect");
dojo.require("dojox.drawing.tools.Rect");
dojox.drawing.tools.custom.RoundedRect = dojox.drawing.util.oo.declare(
dojox.drawing.tools.Rect,
function(options){
},
{
customType:"roundedrect"
}
);
dojox.drawing.tools.custom.RoundedRect.setup = {
name:"dojox.drawing.tools.custom.RoundedRect",
tooltip:"Rounded Rect",
iconClass:"iconRounded"
};
dojox.drawing.register(dojox.drawing.tools.custom.RoundedRect.setup, "tool");
I was able to add my tool to the toolbar and use it to draw a rectangle on the canvas. Now, I would like to customize the rectangle created by my custom tool to have rounded corners, but I'm not able to figure out how.
I have checked the source of the dojox.drawing.tools.Rect class as well as its parent dojox.drawing.stencil.Rect class, and I can see the actual rectangle being created in dojox.drawing.stencil.Rect as follows -
_create: function(/*String*/shp, /*StencilData*/d, /*Object*/sty){
// summary:
// Creates a dojox.gfx.shape based on passed arguments.
// Can be called many times by implementation to create
// multiple shapes in one stencil.
//
//console.log("render rect", d)
//console.log("rect sty:", sty)
this.remove(this[shp]);
this[shp] = this.container.createRect(d)
.setStroke(sty)
.setFill(sty.fill);
this._setNodeAtts(this[shp]);
}
In dojox.gfx, rounded corners can be added to a rectangle by setting the r property.
With this context, could anybody please answer the following questions?
What's the mechanism in dojox.drawing to customize the appearance of the rectangle to have rounded corners?
In the code snippet above, StencilData is passed to the createRect call. What's the mechanism to customize this data? Can the r property of a rectangle, which governs rounded corners, be set in this data?
Adding rounded rectangles programmatically is easy. In the tests folder you'll find test_shadows.html which has a line that adds a rectangle with rounded corners:
myDrawing.addStencil("rect", {data:{x:50, y:175, width:100, height:50, r:10}});
You create a data object with x,y,width,height, and a value for r (otherwise it defaults to 0).
If you wanted to do it by extending Rect, the easiest way would just be to set the value in the constructor function (data.r = 10, for example), or you could create a pointsToData function to override Rect's version. Either you would have already set the value for this.data.r, or the default is used:
pointsToData: function(/*Array*/p){
// summary:
// Converts points to data
p = p || this.points;
var s = p[0];
var e = p[2];
this.data = {
x: s.x,
y: s.y,
width: e.x-s.x,
height: e.y-s.y,
r:this.data.r || 10
};
return this.data;
},
In that example I give r the value 10 as the default, instead of 0 as it was before. This works because every time the stencil goes to draw your rect, it converts canvas x,y points (all stencils remember their points) to data (which gfx uses to draw). In other words, this function will always be called before the rect renders.