NSColor and its hue component - objective-c

I don't quite understand why the hue component of NSColor behaves the way it does. Here is something strange:
NSColor *c = [NSColor colorWithCalibratedHue:0.1
                                  saturation:1.0
                                  brightness:1.0
                                       alpha:1.0];
CGFloat hue = 0.0;
[c getHue:&hue saturation:NULL brightness:NULL alpha:NULL];
NSLog(@"hue = %f", hue);
If you run this code you see "hue = 0.1" being logged. But if you run the following code:
NSColor *c = [NSColor colorWithCalibratedHue:0.0
                                  saturation:1.0
                                  brightness:1.0
                                       alpha:1.0];
CGFloat hue = 0.0;
[c getHue:&hue saturation:NULL brightness:NULL alpha:NULL];
NSLog(@"hue = %f", hue);
You see "hue = 1.0" being logged. Is this a bug? I read a lot of documentation on Color Spaces and Colors in general and couldn't find an answer.

In color theory, hue is an angular quantity, usually expressed in degrees modulo 360 (0° being the same as 360°).
NSColor maps 0° to the floating point value 0.0 and 360° to 1.0. Therefore, it's perfectly valid for getHue to return 1.0 instead of 0.0, because both values represent the same hue.
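If you need 0.0 and 1.0 to compare equal in your own code, you can wrap the value yourself. A minimal sketch (the helper name and the fmod wrap-around are my own suggestion, not anything NSColor provides):

#import <Foundation/Foundation.h>

// Wrap a hue into [0.0, 1.0) so that 1.0 compares equal to 0.0.
static CGFloat NormalizedHue(CGFloat hue) {
    CGFloat wrapped = fmod(hue, 1.0);
    return (wrapped < 0.0) ? wrapped + 1.0 : wrapped;
}

Comparing NormalizedHue(a) against NormalizedHue(b) within a small epsilon then treats the two endpoints as the same hue.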

Related

Determine UIFont size with CGSize Points or Pixels

I can make a CGRect with an exactly defined size in points, e.g. (20x20), and this way I can exactly calculate the real size (cm or inches) on the screen.
- (void)drawRect:(CGRect)rect {
    [super drawRect:rect];
    CGRect rectangle = CGRectMake(0, 0, 20, 20);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
    CGContextFillRect(context, rectangle);
}
I would like to determine the size of a UIFont (always just a single character, "A..Z" or "1..9") in a similar way, so that I can calculate at least the real height on the screen.
Is it possible to convert a CGSize into a UIFont point size so that it really matches on screen?
I have found a solution for this kind of problem.
If you size a UIFont to fit a CGRect or a label, in most cases it won't fit the CGRect. The reason is the font-specific padding.
If you want a font to fit a CGRect, you must know whether you'll be using lower- or upper-case letters, and then match font.xHeight or font.capHeight (respectively) against the desired height; the resulting font.pointSize will be larger than the height itself.
This little code snippet solved my problem:
+ (CGFloat)fitFontSize:(UIFont *)font {
    CGFloat targetHeight = font.pointSize; // the height we want the glyphs to reach
    BOOL fontSizeAdjusted = NO;
    while (!fontSizeAdjusted) {
        CGFloat xHeight = font.xHeight;
        if (targetHeight - xHeight > 0.1) {
            // The x-height is still too small: bump the point size and retry.
            font = [font fontWithSize:font.pointSize + 1];
        } else {
            fontSizeAdjusted = YES;
        }
    }
    return font.pointSize;
}
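For illustration, a hedged usage sketch (assuming the helper lives in a hypothetical MyFontUtils class and that you start from a font whose pointSize is the glyph height you actually want):

UIFont *base = [UIFont systemFontOfSize:20.0];           // want lower-case glyphs about 20 pt tall
CGFloat adjustedSize = [MyFontUtils fitFontSize:base];   // grows the font until xHeight reaches 20 pt
UIFont *fitted = [UIFont systemFontOfSize:adjustedSize];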
This article helped me learn more about UIFonts.

How can I find the UIFont pointSize necessary to get a desired lineHeight or other metric?

You can ask a UIFont instance for its lineHeight metric:
UIFont *const font = [UIFont systemFontOfSize: 10];
CGFloat lineHeight = [font lineHeight];
If I want a particular lineHeight (20, say), how can I create a UIFont with that size?
Answer
My limited analysis suggests, surprisingly to me, that linear interpolation can be used. Here's a category on UIFont that will do what's needed:
@interface UIFont (FontWithLineHeight)
+ (UIFont *)systemFontWithLineHeight:(CGFloat)lineHeight;
@end
With an implementation:
@implementation UIFont (FontWithLineHeight)
+ (UIFont *)systemFontWithLineHeight:(CGFloat)lineHeight
{
    const CGFloat lineHeightForSize1 = [[UIFont systemFontOfSize:1] lineHeight];
    const CGFloat pointSize = lineHeight / lineHeightForSize1;
    return [UIFont systemFontOfSize:pointSize];
}
@end
Used like this:
UIFont *const font = [UIFont systemFontWithLineHeight: 20];
NSLog(@"%f", [font lineHeight]);
Which outputs:
2014-01-08 16:04:19.614 SpikeCollectionView[6309:60b] 20.000000
Which is what was asked for.
Analysis
It seems that the lineHeight of a UIFont scales linearly with the pointSize. In other words, if you make the pointSize twice as much, then the lineHeight will be twice as much too. If you halve the pointSize you also halve the lineHeight. This means that interpolation can be used to find the pointSize that will provide a given lineHeight or other metric.
Here's code that shows the linearity:
const CGFloat lineHeight1 = [[UIFont systemFontOfSize: 1] lineHeight];
const CGFloat lineHeight10 = [[UIFont systemFontOfSize: 10] lineHeight];
const CGFloat lineHeight100 = [[UIFont systemFontOfSize: 100] lineHeight];
const CGFloat ratio1_10 = lineHeight10 / lineHeight1;
const CGFloat ratio10_100 = lineHeight100 / lineHeight10;
NSLog(@"%f", ratio1_10);
NSLog(@"%f", ratio10_100);
The output is:
2014-01-08 15:56:39.326 SpikeCollectionView[6273:60b] 9.999999
2014-01-08 15:56:39.329 SpikeCollectionView[6273:60b] 10.000001
Caution
I've only tested this for the system font on iOS 7. Other fonts may not scale their metrics linearly with pointSize. If someone could confirm whether this is guaranteed or when it won't work, that would be marvellous. If you try this with other UIFonts and it doesn't work, please comment, extend this answer, or add another.
If linear scaling of the metrics is not guaranteed, it would be necessary to search for the required pointSize. You will need a "root finding" numeric algorithm. The Illinois Algorithm would work for this.
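Should you hit such a font, a plain bisection search is the simplest fallback (the Illinois algorithm refines the same bracketing idea). Here's a sketch, assuming only that lineHeight grows monotonically with pointSize; the method name and bracketing interval are my own:

+ (UIFont *)systemFontWithLineHeight:(CGFloat)target tolerance:(CGFloat)tolerance {
    CGFloat lo = 0.1, hi = 1000.0; // assumed to bracket the answer
    UIFont *font = [UIFont systemFontOfSize:(lo + hi) / 2.0];
    for (int i = 0; i < 64 && fabs(font.lineHeight - target) > tolerance; i++) {
        if (font.lineHeight > target) {
            hi = font.pointSize; // overshot: search the lower half
        } else {
            lo = font.pointSize; // undershot: search the upper half
        }
        font = [UIFont systemFontOfSize:(lo + hi) / 2.0];
    }
    return font;
}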

Drawing board/grid with Cocoa

I'm writing a small board game for Mac OS X using Cocoa. The actual grid is drawn as follows:
- (void)drawRect:(NSRect)rect
{
    for (int x = 0; x < GRIDSIZE; x++) {
        for (int y = 0; y < GRIDSIZE; y++) {
            float ix = x * cellWidth;
            float iy = y * cellHeight;
            NSColor *color = (x % 2 == y % 2) ? boardColors[0] : boardColors[1];
            [color set];
            NSRect r = NSMakeRect(ix, iy, cellWidth, cellHeight);
            NSBezierPath *path = [NSBezierPath bezierPath];
            [path appendBezierPathWithRect:r];
            [path fill];
            [path stroke];
        }
    }
}
This works great, except that I see some color errors between the tiles. I guess this is due to antialiasing or something similar. See the screenshots below (hopefully you can see the same problem: there are some black lines where the tiles overlap):
Therefore I have these questions:
Is there any way I can remove these graphical artefacts while still maintaining a resizable/scalable board?
Should I rather use some other graphical library like Core Graphics or OpenGL?
Update:
const int GRIDSIZE = 16;
cellWidth = (frame.size.width / GRIDSIZE);
cellHeight = (frame.size.height / GRIDSIZE);
If you want crisp rectangles you need to align coordinates so that they match the underlying pixels. NSView has a method for this purpose: - (NSRect)backingAlignedRect:(NSRect)aRect options:(NSAlignmentOptions)options. Here's a complete example for drawing the grid:
const NSInteger GRIDSIZE = 16;

- (void)drawRect:(NSRect)dirtyRect {
    for (NSUInteger x = 0; x < GRIDSIZE; x++) {
        for (NSUInteger y = 0; y < GRIDSIZE; y++) {
            NSColor *color = (x % 2 == y % 2) ? [NSColor greenColor] : [NSColor redColor];
            [color set];
            [NSBezierPath fillRect:[self rectOfCellAtColumn:x row:y]];
        }
    }
}

- (NSRect)rectOfCellAtColumn:(NSUInteger)column row:(NSUInteger)row {
    NSRect frame = [self frame];
    CGFloat cellWidth = frame.size.width / GRIDSIZE;
    CGFloat cellHeight = frame.size.height / GRIDSIZE;
    CGFloat x = column * cellWidth;
    CGFloat y = row * cellHeight;
    NSRect rect = NSMakeRect(x, y, cellWidth, cellHeight);
    NSAlignmentOptions alignOpts = NSAlignMinXNearest | NSAlignMinYNearest |
                                   NSAlignMaxXNearest | NSAlignMaxYNearest;
    return [self backingAlignedRect:rect options:alignOpts];
}
Note that you don't need stroke to draw a game board. To draw pixel-aligned strokes, remember that coordinates in Cocoa point to the lower-left corners of pixels. For crisp lines you need to offset coordinates by half a pixel from integral coordinates so that they point to the centers of pixels. For example, to draw a crisp border for a grid cell you can do this:
NSRect rect = NSInsetRect([self rectOfCellAtColumn:column row:row], 0.5, 0.5);
[NSBezierPath strokeRect:rect];
First, make sure your stroke color is not black or gray. (You're setting color but is that stroke or fill color? I can never remember.)
Second, what happens if you simply fill with green, then draw red squares over it, or vice-versa?
There are other ways to do what you want, too. You can use the CICheckerboardGenerator to make your background instead.
Alternately, you could also use a CGBitmapContext that you filled by hand.
First of all, if you don't actually want your rectangles to have a border, you shouldn't call [path stroke].
Second, creating a bezier path for filling a rectangle is overkill. You can do the same with NSRectFill(r). This function is probably more efficient and, I suspect, less prone to introducing rounding errors into your floats – I assume you realize that your floats must not have a fractional part if you want pixel-precise rectangles. I believe that if the width and height of your view are a multiple of GRIDSIZE and you use NSRectFill, the artifacts should go away.
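For concreteness, here's the question's inner loop reduced to plain fills (a sketch reusing the original cellWidth, cellHeight, and boardColors):

for (int x = 0; x < GRIDSIZE; x++) {
    for (int y = 0; y < GRIDSIZE; y++) {
        NSColor *color = (x % 2 == y % 2) ? boardColors[0] : boardColors[1];
        [color set];
        NSRectFill(NSMakeRect(x * cellWidth, y * cellHeight, cellWidth, cellHeight));
    }
}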
Third, there's the obvious question as to how you want your board drawn if the view's width and height are not a multiple of GRIDSIZE. This is of course not an issue if the size of your view is fixed and a multiple of that constant. If it is not, however, you first have to clarify how you want the possible remainder of the width or height handled. Should there be a border? Should the last cell in the row or column take up the remainder? Or should it rather be distributed equally among the cells of the rows or columns? You might have to accept cells of varying width and/or height. What the best solution for your problem is, depends on your exact requirements.
You might also want to look into other ways of drawing a checkerboard, e.g. using CICheckerboardGenerator or creating a pattern color with an image ([NSColor colorWithPatternImage:yourImage]) and then filling the whole view with it.
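A sketch of the pattern-color route (assuming tileImage is a pre-rendered two-cell checker tile you supply):

NSColor *pattern = [NSColor colorWithPatternImage:tileImage];
[pattern set];
NSRectFill([self bounds]);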
There's also the possibility of (temporarily) turning off anti-aliasing. To do that, add the following line to the beginning of your drawing method:
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
My last observation is about your general approach. If your game is going to have more complicated graphics and animations, e.g. animated movement of pieces, you might be better off using OpenGL.
As of iOS 6, you can generate a checkerboard pattern using CICheckerboardGenerator.
You'll want to guard against the force unwraps in here, but here's the basic implementation:
var checkerboardImage: UIImage? {
    let filter = CIFilter(name: "CICheckerboardGenerator")!
    let width = NSNumber(value: Float(viewSize.width / 16))
    let center = CIVector(cgPoint: .zero)
    let darkColor = CIColor.red
    let lightColor = CIColor.green
    let sharpness = NSNumber(value: 1.0)
    filter.setDefaults()
    filter.setValue(width, forKey: "inputWidth")
    filter.setValue(center, forKey: "inputCenter")
    filter.setValue(darkColor, forKey: "inputColor0")
    filter.setValue(lightColor, forKey: "inputColor1")
    filter.setValue(sharpness, forKey: "inputSharpness")
    let context = CIContext(options: nil)
    // createCGImage(_:from:) takes a CGRect, so wrap the view's size in one.
    let cgImage = context.createCGImage(filter.outputImage!,
                                        from: CGRect(origin: .zero, size: viewSize))
    let uiImage = UIImage(cgImage: cgImage!, scale: UIScreen.main.scale, orientation: .up)
    return uiImage
}
Apple Developer Docs
Your squares overlap: ix + cellWidth is the same coordinate as ix in the next iteration of the loop.
You can fix this by setting the stroke color explicitly to transparent, or by not calling stroke.
[color set];
[[NSColor clearColor] setStroke];
or
[path fill];
// not [path stroke];

Problem with NSColors in cocoa

I am trying to compose colors using NSColor, and when I try to create an RGB color with the following values it just displays white instead:
(r,g,b):(50,50,50)
(r,g,b):(100,100,100)
(r,g,b):(150,150,150)
(r,g,b):(200,200,200)
etc...
the code used to create the colors is:
// the code to generate simple images with background colors
NSColor *myColor = [NSColor colorWithDeviceRed:100.0 green:100.0 blue:100.0 alpha:1.0];
NSImage *image1 = [[NSImage alloc] initWithSize:NSMakeSize(10.0, 100.0)];
NSRect imageBounds1 = NSMakeRect(0, 0, 10, 100);
[image1 lockFocus];
[myColor set];
NSRectFill(imageBounds1);
[image1 unlockFocus];
I couldn't find any resource or sample on the web that provides help with this. It would be highly appreciated if someone could share their wisdom on how I can achieve it.
Thanks in advance.
If I recall correctly, you'll want your RGB components in the range 0-1 as well.
NSColor components have values in [0..1] so you should normalize the values you have, e.g.:
NSColor * myColor = [NSColor colorWithDeviceRed:100.0/255 green:100.0/255 blue:100.0/255 alpha:1.0];
If you set a colour component to a value greater than 1, it's interpreted as 1, so your code is actually equivalent to
NSColor *myColor = [NSColor colorWithDeviceRed:1.0 green:1.0 blue:1.0 alpha:1.0];
which creates a white colour.
As stated in the documentation:
Values below 0.0 are interpreted as 0.0, and values above 1.0 are interpreted as 1.0
This means that your values (100, 100, 100) are going to be converted to (1.0, 1.0, 1.0), which is white. What you have to do is convert each channel value using the following proportion:
100 : 255 = x : 1.0 => x = 100/255
where x is the value that you will use for the method
+ (NSColor *)colorWithDeviceRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue alpha:(CGFloat)alpha;
You should have something like this in your code
[NSColor colorWithDeviceRed:100.0/255.0 green:100.0/255.0 blue:100.0/255.0 alpha:1.0];
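If you deal with 0-255 component values often, a small category keeps the division in one place (my own convenience helper, not part of AppKit):

@interface NSColor (EightBitComponents)
+ (NSColor *)deviceColorWith8BitRed:(NSUInteger)red green:(NSUInteger)green blue:(NSUInteger)blue;
@end

@implementation NSColor (EightBitComponents)
+ (NSColor *)deviceColorWith8BitRed:(NSUInteger)red green:(NSUInteger)green blue:(NSUInteger)blue {
    // Normalize each 0-255 channel into the 0.0-1.0 range NSColor expects.
    return [NSColor colorWithDeviceRed:red / 255.0
                                 green:green / 255.0
                                  blue:blue / 255.0
                                 alpha:1.0];
}
@end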

Why is my CGGradient not working with a preset UIColor?

I have this working code:
NSMutableArray *shadowColors = [NSMutableArray arrayWithCapacity:2];
color = [UIColor colorWithRed:0 green:0 blue:0 alpha:1]; // Declaration using components
[shadowColors addObject:(id)[color CGColor]];
color = [UIColor colorWithRed:1 green:1 blue:1 alpha:0.0]; // Declaration using components
[shadowColors addObject:(id)[color CGColor]];
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGGradientRef gradient = CGGradientCreateWithColors(space, (CFArrayRef)shadowColors, NULL);
CGColorSpaceRelease(space);
CGFloat sw = 10.0; // shadow width
CGPoint top1 = CGPointMake(x, y+width/2.0);
CGPoint top2 = CGPointMake(x + sw, y+width/2.0);
CGPoint side1 = CGPointMake(x+width/2.0, y);
CGPoint side2 = CGPointMake(x+width/2.0, y+sw);
CGContextDrawLinearGradient(c, gradient, top1, top2, 0);
CGContextDrawLinearGradient(c, gradient, side1, side2, 0);
CGGradientRelease(gradient);
The color declarations are the part I'm interested in, lines 2 and 4. When I declare them as shown, they work just fine, but if I replace those two lines with the equivalent (I thought, at least) [UIColor blackColor] and [UIColor clearColor] then my gradients disappear. The colors I use don't make any difference, I can use greenColor and redColor and they still don't work.
Am I missing something or is this a bug in Apple's frameworks?
Here's the code that doesn't work (this is just the first section; everything else is the same):
NSMutableArray *shadowColors = [NSMutableArray arrayWithCapacity:2];
color = [UIColor blackColor];
[shadowColors addObject:(id)[color CGColor]];
color = [UIColor clearColor];
[shadowColors addObject:(id)[color CGColor]];
The code looks fine to me. blackColor and clearColor are probably both in a white color space, but the documentation says CGGradientCreateWithColors will convert the colors to the color space you pass in, so that shouldn't matter.
The only thing I can think of would be to try passing NULL for the color space, letting the gradient convert the colors to Generic RGB instead of Device RGB. This may work, but shouldn't make a difference—as far as I can see, it should work either way.
I suggest filing a bug.
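For what it's worth, the NULL-colorspace variant suggested above would look like this (a sketch; whether it actually resolves the grayscale-to-RGB conversion issue isn't confirmed here):

// Pass NULL so CGGradientCreateWithColors picks a color space itself and
// converts the grayscale blackColor/clearColor endpoints into it.
CGGradientRef gradient = CGGradientCreateWithColors(NULL, (CFArrayRef)shadowColors, NULL);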