Custom MKOverlayView Drawing - objective-c

I am trying to create a custom MKOverlayView subclass that draws the overlay itself, since the standard MKOverlay and MKOverlayView classes do not suit the needs of the application I am building.
I am having a problem with the drawing of the polygon overlay. When I draw it, it looks nowhere near as good as when I let MKOverlayView draw it; the edges are not sharp and look pixelated when you zoom in on the map. The lines from point to point don't get drawn either, for some reason.
Also, when zooming in, some of the polygon's drawing gets clipped out until I zoom back out again.
Here is my draw code:
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context{
    MapOverlay *newOverlay = (MapOverlay *)self.overlay;
    CGColorRef ref2 = [newOverlay.strokeColor CGColor];
    CGContextSetLineWidth(context, newOverlay.lineWidth);
    CGColorRef ref = [newOverlay.fillColor CGColor];
    int _countComponents = CGColorGetNumberOfComponents(ref);
    if (_countComponents == 4) {
        const CGFloat *_components2 = CGColorGetComponents(ref);
        CGFloat red   = _components2[0];
        CGFloat green = _components2[1];
        CGFloat blue  = _components2[2];
        CGFloat alpha = _components2[3];
        CGContextSetRGBFillColor(context, red, green, blue, alpha);
    }
    for (int i = 0; i < newOverlay.coordinates.count; i++) {
        if (i % 2 == 0) {
            CLLocationCoordinate2D p;
            p.latitude  = (CLLocationDegrees)[(NSString *)[newOverlay.coordinates objectAtIndex:i+1] floatValue];
            p.longitude = (CLLocationDegrees)[(NSString *)[newOverlay.coordinates objectAtIndex:i] floatValue];
            //CLLocation *p = [[CLLocation alloc] initWithLatitude:(CLLocationDegrees)[(NSString *)[newOverlay.coordinates objectAtIndex:i+1] floatValue] longitude:(CLLocationDegrees)[(NSString *)[newOverlay.coordinates objectAtIndex:i] floatValue]];
            MKMapPoint point = MKMapPointForCoordinate(p);
            CGPoint point2 = [self pointForMapPoint:point];
            if (i == 0) {
                CGContextMoveToPoint(context, point2.x, point2.y);
            } else {
                CGContextAddLineToPoint(context, point2.x, point2.y);
            }
        }
    }
    CGContextDrawPath(context, kCGPathFillStroke);
    //CGContextStrokePath(context);
}
I have found very little information on custom MKOverlayView subclasses and how to draw on the map, but these are the two tutorials I was using:
http://spitzkoff.com/craig/?p=65
http://i.ndigo.com.br/2012/05/ios-maps-with-image-overlays/
UPDATE
I have a feeling it might have something to do with the bounding box of the overlay, because if I return the bounding box of the world it displays fine, but it is obviously not efficient to have every possible overlay drawn.
Here is how I compute the bounding box of my overlay in my custom MKOverlay class:
- (MKMapRect)boundingMapRect{
    double maxY = 0;
    double minY = 0;
    double maxX = 0;
    double minX = 0;
    for (NSUInteger i = 0; i < coordinates.count; i++) {
        if (i % 2 == 0) {
            CLLocationCoordinate2D tempLoc;
            tempLoc.latitude  = (CLLocationDegrees)[(NSString *)[coordinates objectAtIndex:(NSUInteger)i + 1] floatValue];
            tempLoc.longitude = (CLLocationDegrees)[(NSString *)[coordinates objectAtIndex:(NSUInteger)i] floatValue];
            MKMapPoint tempPoint = MKMapPointForCoordinate(tempLoc);
            if (i == 0) {
                minX = tempPoint.x;
                minY = tempPoint.y;
                if (tempPoint.x > maxX) {
                    maxX = tempPoint.x;
                }
                if (tempPoint.y > maxY) {
                    maxY = tempPoint.y;
                }
            } else {
                if (tempPoint.x > maxX) {
                    maxX = tempPoint.x;
                }
                if (tempPoint.x < minX) {
                    minX = tempPoint.x;
                }
                if (tempPoint.y > maxY) {
                    maxY = tempPoint.y;
                }
                if (tempPoint.y < minY) {
                    minY = tempPoint.y;
                }
            }
        }
    }
    MKMapRect b2 = MKMapRectMake(minX, maxY, minX-maxX, minY-maxY);
    return b2;
    //return MKMapRectWorld;
}
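One thing that stands out in the code above (an observation, not verified against the original project): MKMapRectMake takes an origin and a size, but the call passes maxY as the origin's y and negative quantities (minX-maxX, minY-maxY) as the width and height. That would explain why the overlay only renders reliably when MKMapRectWorld is returned. The intended computation is MKMapRectMake(minX, minY, maxX - minX, maxY - minY); the min/max scan can be sketched in plain C:

```c
#include <assert.h>

typedef struct { double x, y; } MapPoint;
typedef struct { double x, y, width, height; } MapRect;

/* Axis-aligned bounding rect of a set of projected map points.
 * Width and height must come out non-negative, which is why
 * MKMapRectMake(minX, maxY, minX - maxX, minY - maxY) looks suspect:
 * both size components would be negative. */
static MapRect bounding_rect(const MapPoint *pts, int count) {
    MapRect r = { pts[0].x, pts[0].y, 0, 0 };
    double maxX = pts[0].x, maxY = pts[0].y;
    for (int i = 1; i < count; i++) {
        if (pts[i].x < r.x) r.x = pts[i].x;
        if (pts[i].y < r.y) r.y = pts[i].y;
        if (pts[i].x > maxX) maxX = pts[i].x;
        if (pts[i].y > maxY) maxY = pts[i].y;
    }
    r.width  = maxX - r.x;   /* maxX - minX, not minX - maxX */
    r.height = maxY - r.y;
    return r;
}
```

With a valid (positive-size) rect, MapKit only asks the overlay view to draw tiles that actually intersect the overlay, so the world rect workaround becomes unnecessary.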

Related

Fastest way to check if NSImage is a template image

Let's say you have an NSImage with an NSBitmapImageRep (raster image) of 16x16 pixels.
It can be a color image, or it can contain only black pixels with an alpha channel.
When it has only black pixels, I can set .isTemplate on the NSImage and handle it correspondingly.
The question is: how do you quickly detect that it has only black pixels?
What is the fastest way to check whether a provided image is a template?
Here is how I do it. It works, but it requires walking through all the pixels and checking them one by one. Even at 16x16, it takes about a second to process 10-20 images, so I am looking for a more optimized approach:
+ (BOOL)detectImageIsTemplate:(NSImage *)image
{
BOOL result = NO;
if (image)
{
// If we have a valid image, assume it's a template until we face any non-black pixel
result = YES;
NSSize imageSize = image.size;
NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGContextRef ctx = CGBitmapContextCreate(NULL,
imageSize.width,
imageSize.height,
8,
0,
colorSpace,
kCGImageAlphaPremultipliedLast);
NSGraphicsContext* gctx = [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];
[NSGraphicsContext setCurrentContext:gctx];
[image drawInRect:imageRect];
// ......................................................
size_t width = CGBitmapContextGetWidth(ctx);
size_t height = CGBitmapContextGetHeight(ctx);
uint32_t* pixel = (uint32_t*)CGBitmapContextGetData(ctx);
for (unsigned y = 0; y < height; y++)
{
for (unsigned x = 0; x < width; x++)
{
uint32_t rgba = *pixel;
uint8_t red = (rgba & 0x000000ff) >> 0;
uint8_t green = (rgba & 0x0000ff00) >> 8;
uint8_t blue = (rgba & 0x00ff0000) >> 16;
if (red != 0 || green != 0 || blue != 0)
{
result = NO;
break;
}
pixel++; // Next pixel
}
if (result == NO) break;
}
// ......................................................
[NSGraphicsContext setCurrentContext:nil];
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
}
return result;
}
Pure black images fall into your category. Is it a color image, or one with only black pixels plus an alpha channel?
Why not judge the image type by the number of channels: RGBX, or only A?
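One optimization direction (a sketch, not a drop-in replacement for the Cocoa code above): since each pixel is already a 32-bit RGBA value, you can OR all pixels together under an RGB mask and test once at the end, removing the per-pixel branches and byte extraction. The core idea in plain C, assuming the same tightly packed layout as the question's loop:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Returns true if every pixel has zero red, green and blue components.
 * Pixels are assumed to be tightly packed 32-bit values with the color
 * bytes in the low 24 bits, matching the masks used in the question
 * (red 0x000000ff, green 0x0000ff00, blue 0x00ff0000).  OR-ing the
 * whole buffer together and testing once avoids a branch per pixel. */
static bool all_pixels_black(const uint32_t *pixels, size_t count) {
    uint32_t accum = 0;
    for (size_t i = 0; i < count; i++) {
        accum |= pixels[i];
    }
    return (accum & 0x00ffffffu) == 0;  /* ignore the alpha byte */
}
```

This cannot early-exit on the first colored pixel, but for 16x16 images the branch-free scan is typically faster anyway; the bigger win is usually avoiding the redraw into a fresh bitmap context for every image.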

Rotate already detected rectangle with OpenCV Swift

I am working on document edge detection using OpenCV in my iOS project and have successfully detected the edges of the document.
Now I want to rotate the image along with the detected rectangle. I have referred to this
GitHub project to detect the edges.
For that, I first rotated the image and tried to re-detect the edges by again finding the largest rectangle in the image. But unfortunately, it is not giving me the exact rectangle.
Can somebody suggest a way to re-detect the rotated document's edges, or should I rotate the detected rectangle along with the image?
Before Rotation Image
After Rotation Image
+(NSMutableArray *) getLargestSquarePoints: (UIImage *) image : (CGSize) size {
Mat imageMat;
CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
CGFloat cols = image.size.width;
CGFloat rows = image.size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, cols, rows, 8, cvMat.step[0], colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
CGContextRelease(contextRef);
imageMat = cvMat;
cv::resize(imageMat, imageMat, cvSize(size.width, size.height));
// UIImageToMat(image, imageMat);
std::vector<std::vector<cv::Point> >rectangle;
std::vector<cv::Point> largestRectangle;
getRectangles(imageMat, rectangle);
getlargestRectangle(rectangle, largestRectangle);
if (largestRectangle.size() == 4)
{
// Thanks to: https://stackoverflow.com/questions/20395547/sorting-an-array-of-x-and-y-vertice-points-ios-objective-c/20399468#20399468
NSArray *points = @[
[NSValue valueWithCGPoint:(CGPoint){(CGFloat)largestRectangle[0].x, (CGFloat)largestRectangle[0].y}],
[NSValue valueWithCGPoint:(CGPoint){(CGFloat)largestRectangle[1].x, (CGFloat)largestRectangle[1].y}],
[NSValue valueWithCGPoint:(CGPoint){(CGFloat)largestRectangle[2].x, (CGFloat)largestRectangle[2].y}],
[NSValue valueWithCGPoint:(CGPoint){(CGFloat)largestRectangle[3].x, (CGFloat)largestRectangle[3].y}] ];
CGPoint min = [points[0] CGPointValue];
CGPoint max = min;
for (NSValue *value in points) {
CGPoint point = [value CGPointValue];
min.x = fminf(point.x, min.x);
min.y = fminf(point.y, min.y);
max.x = fmaxf(point.x, max.x);
max.y = fmaxf(point.y, max.y);
}
CGPoint center = {
0.5f * (min.x + max.x),
0.5f * (min.y + max.y),
};
NSLog(@"center: %@", NSStringFromCGPoint(center));
NSNumber *(^angleFromPoint)(id) = ^(NSValue *value){
CGPoint point = [value CGPointValue];
CGFloat theta = atan2f(point.y - center.y, point.x - center.x);
CGFloat angle = fmodf(M_PI - M_PI_4 + theta, 2 * M_PI);
return @(angle);
};
NSArray *sortedPoints = [points sortedArrayUsingComparator:^NSComparisonResult(id a, id b) {
return [angleFromPoint(a) compare:angleFromPoint(b)];
}];
NSLog(@"sorted points: %@", sortedPoints);
NSMutableArray *squarePoints = [[NSMutableArray alloc] init];
[squarePoints addObject: [sortedPoints objectAtIndex:0]];
[squarePoints addObject: [sortedPoints objectAtIndex:1]];
[squarePoints addObject: [sortedPoints objectAtIndex:2]];
[squarePoints addObject: [sortedPoints objectAtIndex:3]];
imageMat.release();
return squarePoints;
}
else{
imageMat.release();
return nil;
}
}
void getRectangles(cv::Mat& image, std::vector<std::vector<cv::Point>>&rectangles) {
// blur will enhance edge detection
cv::Mat blurred(image);
GaussianBlur(image, blurred, cvSize(11,11), 0);
cv::Mat gray0(blurred.size(), CV_8U), gray;
std::vector<std::vector<cv::Point> > contours;
// find squares in every color plane of the image
for (int c = 0; c < 3; c++)
{
int ch[] = {c, 0};
mixChannels(&blurred, 1, &gray0, 1, ch, 1);
// try several threshold levels
const int threshold_level = 2;
for (int l = 0; l < threshold_level; l++)
{
// Use Canny instead of zero threshold level!
// Canny helps to catch squares with gradient shading
if (l == 0)
{
Canny(gray0, gray, 10, 20, 3); //
// Canny(gray0, gray, 0, 50, 5);
// Dilate helps to remove potential holes between edge segments
dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
}
else
{
gray = gray0 >= (l+1) * 255 / threshold_level;
}
// Find contours and store them in a list
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
// Test contours
std::vector<cv::Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
// approximate contour with accuracy proportional
// to the contour perimeter
approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if (approx.size() == 4 &&
fabs(contourArea(cv::Mat(approx))) > 1000 &&
isContourConvex(cv::Mat(approx)))
{
double maxCosine = 0;
for (int j = 2; j < 5; j++)
{
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
if (maxCosine < 0.3)
rectangles.push_back(approx);
}
}
}
}
}
void getlargestRectangle(const std::vector<std::vector<cv::Point> >&rectangles, std::vector<cv::Point>& largestRectangle)
{
if (!rectangles.size())
{
return;
}
double maxArea = 0;
int index = 0;
for (size_t i = 0; i < rectangles.size(); i++)
{
cv::Rect rectangle = boundingRect(cv::Mat(rectangles[i]));
double area = rectangle.width * rectangle.height;
if (maxArea < area)
{
maxArea = area;
index = i;
}
}
largestRectangle = rectangles[index];
}
double angle(cv::Point pt1, cv::Point pt2, cv::Point pt0) {
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
+(UIImage *) getTransformedImage: (CGFloat) newWidth : (CGFloat) newHeight : (UIImage *) origImage : (CGPoint [4]) corners : (CGSize) size {
cv::Mat imageMat;
CGColorSpaceRef colorSpace = CGImageGetColorSpace(origImage.CGImage);
CGFloat cols = size.width;
CGFloat rows = size.height;
cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                cols,           // Width of bitmap
                                                rows,           // Height of bitmap
                                                8,              // Bits per component
                                                cvMat.step[0],  // Bytes per row
                                                colorSpace,     // Colorspace
                                                kCGImageAlphaNoneSkipLast |
                                                kCGBitmapByteOrderDefault); // Bitmap info flags
CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), origImage.CGImage);
CGContextRelease(contextRef);
imageMat = cvMat;
cv::Mat newImageMat = cv::Mat( cvSize(newWidth,newHeight), CV_8UC4);
cv::Point2f src[4], dst[4];
src[0].x = corners[0].x;
src[0].y = corners[0].y;
src[1].x = corners[1].x;
src[1].y = corners[1].y;
src[2].x = corners[2].x;
src[2].y = corners[2].y;
src[3].x = corners[3].x;
src[3].y = corners[3].y;
dst[0].x = 0;
dst[0].y = -10;
dst[1].x = newWidth - 1;
dst[1].y = -10;
dst[2].x = newWidth - 1;
dst[2].y = newHeight + 1;
dst[3].x = 0;
dst[3].y = newHeight + 1;
dst[0].x = 0;
dst[0].y = 0;
dst[1].x = newWidth - 1;
dst[1].y = 0;
dst[2].x = newWidth - 1;
dst[2].y = newHeight - 1;
dst[3].x = 0;
dst[3].y = newHeight - 1;
cv::warpPerspective(imageMat, newImageMat, cv::getPerspectiveTransform(src, dst), cvSize(newWidth, newHeight));
//Transform to UIImage
NSData *data = [NSData dataWithBytes:newImageMat.data length:newImageMat.elemSize() * newImageMat.total()];
CGColorSpaceRef colorSpace2;
if (newImageMat.elemSize() == 1) {
colorSpace2 = CGColorSpaceCreateDeviceGray();
} else {
colorSpace2 = CGColorSpaceCreateDeviceGray();
// colorSpace2 = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGFloat width = newImageMat.cols;
CGFloat height = newImageMat.rows;
CGImageRef imageRef = CGImageCreate(width, height, 8, 8 * newImageMat.elemSize(),
newImageMat.step[0],
colorSpace2,
kCGImageAlphaNone | kCGBitmapByteOrderDefault, provider,
NULL, false, kCGRenderingIntentDefault);
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace2);
return image;
}
If you use cv2.minAreaRect, it gives the best enclosing rectangle of a contour along with its rotation in degrees, so you can rotate back.

Change color during the drawing on a NSBitmapImageRep

I implemented a method that returns an NSBitmapImageRep. Onto that bitmap, a 10x2 grid of rectangles should be drawn, and each rectangle should be filled with cyan, but for each rectangle the cyan value should be increased by 12 (starting at 0).
The resulting bitmap gets 20 rectangles, as expected, but the color doesn't differ between the rectangles: all rectangles have the same cyan value.
I have no idea what the problem is. Can somebody please give me a hint?
-(NSBitmapImageRep*)drawOntoBitmap
{
NSRect offscreenRect = NSMakeRect(0.0, 0.0, 1000.0, 400.0);
NSBitmapImageRep *image = nil;
image = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
pixelsWide:offscreenRect.size.width
pixelsHigh:offscreenRect.size.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:NO
isPlanar:NO
colorSpaceName:NSDeviceCMYKColorSpace
bitmapFormat:0
bytesPerRow:(4 * offscreenRect.size.width)
bitsPerPixel:32];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:image]];
NSRect colorRect;
NSBezierPath *thePath;
int cyan = 0;
int x = 0;
int y = 0;
int w = 0;
int h = 0;
for (int j = 0; j<2; j++)
{
y = j * 200;
h = y + 200;
for (int i = 0; i<10; i++)
{
x = i * 100;
w = x + 100;
colorRect = NSMakeRect(x, y, w, h);
thePath = [NSBezierPath bezierPathWithRect: colorRect];
cyan += 12;
[[NSColor colorWithDeviceCyan:cyan magenta:0 yellow:0 black:0 alpha:100] set];
[thePath fill];
}
}
[NSGraphicsContext restoreGraphicsState];
return image;
}
For each rect the same color value is used: the last cyan value that was set after both loops have run.
OK, found out that the NSColor component values have a range of 0.0 - 1.0.
So I have to make my cyan a float and increment it with floating-point operands:
cyan += 12.0f / 255.0f;
The value has to stay smaller than 1.0.
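A detail that is easy to miss in that fix: with two integer operands, 12/255 evaluates to 0 in C, so an increment written as `cyan += 12/255;` would leave the color unchanged on every iteration. The operands themselves must be floating point. Demonstrated in plain C:

```c
#include <assert.h>

/* Integer division truncates toward zero: with two int operands,
 * 12 / 255 is 0, so the cyan value would never advance.
 * Floating-point operands give the intended per-rectangle step. */
static const int   cyan_step_int   = 12 / 255;        /* == 0 */
static const float cyan_step_float = 12.0f / 255.0f;  /* ~= 0.047 */
```

The same range rule applies to the alpha argument of colorWithDeviceCyan:magenta:yellow:black:alpha:, which also expects 0.0 - 1.0.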

Center row of Buttons in UIScrollView

I am adding buttons in a UIScrollView dynamically, but some views only have two buttons and some have 10. I want to center the buttons in the scroll view so It doesn't look like I built them from the left to right. I've tried several tricks from SO and nothing seems to work. setting the content offset was my first approach, but doesn't have the effect I want.
Here is the code I am using:
- (void)addButton:(UIButton *)button {
CGFloat contentWidth = 0.0f;
CGRect buttonFrame = button.frame;
if ([_scrollView.subviews count] == 0) {
buttonFrame.origin.x = self.contentIndicatorWidth + kButtonSeperator;
contentWidth = self.contentIndicatorWidth + buttonFrame.size.width + self.contentIndicatorWidth;
} else {
contentWidth = _scrollView.contentSize.width;
UIButton *lastButton = [_scrollView.subviews lastObject];
buttonFrame.origin.x = lastButton.frame.origin.x + lastButton.frame.size.width + kButtonSeperator;
contentWidth += buttonFrame.size.width + kButtonSeperator;
}
button.frame = buttonFrame;
[_scrollView addSubview:button];
_scrollView.contentSize = CGSizeMake(contentWidth *kContentMultiplicityFactor, _scrollView.frame.size.height);
[_buttons setValue:button forKey:button.titleLabel.text];
totalWidth = contentWidth;
// [_scrollView scrollRectToVisible:[self getButtonAtIndex:0].frame animated:YES];
}
After you add the buttons, call this method:
- (void)centerButtons:(NSArray*)buttons {
CGSize scrollViewSize = _scrollView.bounds.size;
// Measure the total width taken by the buttons
CGFloat width = 0;
for (UIButton* b in buttons)
width += b.bounds.size.width + kButtonSeparator;
if (width > kButtonSeparator)
width -= kButtonSeparator;
// If the buttons width is shorter than the visible bounds, adjust origin
CGFloat origin = 0;
if (width < scrollViewSize.width)
origin = (scrollViewSize.width - width) / 2.f;
// Place buttons
CGFloat x = origin;
CGFloat y = 0;
for (UIButton* b in buttons) {
b.center = CGPointMake(x + b.bounds.size.width/2.f, y + b.bounds.size.height/2.f);
x += b.bounds.size.width + kButtonSeparator;
}
_scrollView.contentSize = CGSizeMake(MAX(scrollViewSize.width, width), scrollViewSize.height);
}
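The centering logic above boils down to simple arithmetic: measure the total width of the row (buttons plus separators), and if it is smaller than the visible width, start laying out at (visibleWidth - totalWidth) / 2 instead of 0. A minimal sketch of that arithmetic in plain C (names are illustrative, not from UIKit):

```c
#include <assert.h>
#include <math.h>

/* Given button widths, a separator, and the scroll view's visible
 * width, compute the x origin of the first button so the row is
 * centered when it fits (and left-aligned at 0 when it does not). */
static double centered_origin(const double *widths, int count,
                              double separator, double visible_width) {
    double total = 0;
    for (int i = 0; i < count; i++) {
        total += widths[i];
        if (i > 0) total += separator;  /* separators go between buttons only */
    }
    if (total >= visible_width) return 0;
    return (visible_width - total) / 2.0;
}
```

Each subsequent button then starts at the previous button's right edge plus the separator, exactly as the loop in centerButtons: does.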
In the end, this isn't a very hard problem. I guess you add these buttons on the fly.
You have a scrollview called ScrollView and you have already put in content size.
So you have 10 buttons of different widths :
int btnCount = 10;
int totalWidth = 0;
int spacing = 5;
NSMutableArray *buttons = [[NSMutableArray alloc] init];
for (int i = 0; i < btnCount; i++) {
    UIButton *b = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    [b setTitle:@"some title" forState:UIControlStateNormal];
    [b sizeToFit];
    totalWidth += b.frame.size.width;
    [buttons addObject:b];
}
totalWidth += (btnCount * spacing);
UIView *btnView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, totalWidth, 40)];
int priorButtonX = 0;
for (UIButton *b in buttons) {
    CGRect frame = b.frame;
    frame.origin.x = priorButtonX;
    b.frame = frame;
    priorButtonX = frame.origin.x + frame.size.width + spacing;
    [btnView addSubview:b];
}
int scrollSizeWidth = scrollView.contentSize.width;
int placeX = (scrollSizeWidth / 2) - (btnView.frame.size.width / 2);
CGRect btnViewFrame = btnView.frame;
btnViewFrame.origin.x = placeX;
btnView.frame = btnViewFrame;
[scrollView addSubview:btnView];
You might want to do something with the Y placement, but that part is very simple. This code could be written in fewer lines with fewer variables, but it is done this way to make it easier to read.

How to draw stars using Quartz Core?

I'm trying to adapt an example provided by Apple in order to programmatically draw stars in a line; the code is the following:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, aSize);
for (NSUInteger i=0; i<stars; i++)
{
CGContextSetFillColorWithColor(context, aColor);
CGContextSetStrokeColorWithColor(context, aColor);
float w = item.size.width;
double r = w / 2;
double theta = 2 * M_PI * (2.0 / 5.0); // 144 degrees
CGContextMoveToPoint(context, 0, r);
for (NSUInteger k=1; k<5; k++)
{
float x = r * sin(k * theta);
float y = r * cos(k * theta);
CGContextAddLineToPoint(context, x, y);
}
CGContextClosePath(context);
CGContextFillPath(context);
}
The code above draws a perfect star, but it is 1. displayed upside down and 2. black and without a border. What I want to achieve is to draw many stars on the same line and with the given style. I understand that I'm actually drawing the same path 5 times in the same position and that I somehow have to flip the context vertically, but after several tests I gave up (I lack the necessary math and geometry skills :P)... could you please help me?
UPDATE:
Ok, thanks to CocoaFu, this is my refactored and working draw utility:
- (void)drawStars:(NSUInteger)count inContext:(CGContextRef)context;
{
// constants
const float w = self.itemSize.width;
const float r = w/2;
const double theta = 2 * M_PI * (2.0 / 5.0);
const float flip = -1.0f; // flip vertically (default star representation)
// drawing center for the star
float xCenter = r;
for (NSUInteger i=0; i<count; i++)
{
// get star style based on the index
CGContextSetFillColorWithColor(context, [self fillColorForItemAtIndex:i]);
CGContextSetStrokeColorWithColor(context, [self strokeColorForItemAtIndex:i]);
// update position
CGContextMoveToPoint(context, xCenter, r * flip + r);
// draw the necessary star lines
for (NSUInteger k=1; k<5; k++)
{
float x = r * sin(k * theta);
float y = r * cos(k * theta);
CGContextAddLineToPoint(context, x + xCenter, y * flip + r);
}
// update horizontal center for the next star
xCenter += w + self.itemMargin;
// draw current star
CGContextClosePath(context);
CGContextFillPath(context);
CGContextStrokePath(context);
}
}
Here is code that will draw 3 stars in a horizontal line. It's not pretty, but it may help:
-(void)drawRect:(CGRect)rect
{
int aSize = 100.0;
const CGFloat color[4] = { 0.0, 0.0, 1.0, 1.0 }; // Blue
CGColorRef aColor = CGColorCreate(CGColorSpaceCreateDeviceRGB(), color);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, aSize);
CGFloat xCenter = 100.0;
CGFloat yCenter = 100.0;
float w = 100.0;
double r = w / 2.0;
float flip = -1.0;
for (NSUInteger i=0; i<3; i++)
{
CGContextSetFillColorWithColor(context, aColor);
CGContextSetStrokeColorWithColor(context, aColor);
double theta = 2.0 * M_PI * (2.0 / 5.0); // 144 degrees
CGContextMoveToPoint(context, xCenter, r*flip+yCenter);
for (NSUInteger k=1; k<5; k++)
{
float x = r * sin(k * theta);
float y = r * cos(k * theta);
CGContextAddLineToPoint(context, x+xCenter, y*flip+yCenter);
}
xCenter += 150.0;
}
CGContextClosePath(context);
CGContextFillPath(context);
}
Here's an algorithm to implement what buddhabrot implied:
- (void)drawStarInContext:(CGContextRef)context withNumberOfPoints:(NSInteger)points center:(CGPoint)center innerRadius:(CGFloat)innerRadius outerRadius:(CGFloat)outerRadius fillColor:(UIColor *)fill strokeColor:(UIColor *)stroke strokeWidth:(CGFloat)strokeWidth {
CGFloat arcPerPoint = 2.0f * M_PI / points;
CGFloat theta = M_PI / 2.0f;
// Move to starting point (tip at 90 degrees on outside of star)
CGPoint pt = CGPointMake(center.x - (outerRadius * cosf(theta)), center.y - (outerRadius * sinf(theta)));
CGContextMoveToPoint(context, pt.x, pt.y);
for (int i = 0; i < points; i = i + 1) {
// Calculate next inner point (moving clockwise), accounting for crossing of 0 degrees
theta = theta - (arcPerPoint / 2.0f);
if (theta < 0.0f) {
theta = theta + (2 * M_PI);
}
pt = CGPointMake(center.x - (innerRadius * cosf(theta)), center.y - (innerRadius * sinf(theta)));
CGContextAddLineToPoint(context, pt.x, pt.y);
// Calculate next outer point (moving clockwise), accounting for crossing of 0 degrees
theta = theta - (arcPerPoint / 2.0f);
if (theta < 0.0f) {
theta = theta + (2 * M_PI);
}
pt = CGPointMake(center.x - (outerRadius * cosf(theta)), center.y - (outerRadius * sinf(theta)));
CGContextAddLineToPoint(context, pt.x, pt.y);
}
CGContextClosePath(context);
CGContextSetLineWidth(context, strokeWidth);
[fill setFill];
[stroke setStroke];
CGContextDrawPath(context, kCGPathFillStroke);
}
Works for me for most basic stars. Tested from 2 points (which makes a good diamond!) up to 9 points.
If you want a star with a point facing down, change the subtraction to addition.
To draw multiples, create a loop and call this method multiple times, passing a new center each time. That should line them up nicely!
I prefer using a CAShapeLayer to implementing drawRect, as it can then be animated.
Here's a function that will create a path in the shape of a 5 point star:
func createStarPath(size: CGSize) -> CGPath {
let numberOfPoints: CGFloat = 5
let starRatio: CGFloat = 0.5
let steps: CGFloat = numberOfPoints * 2
let outerRadius: CGFloat = min(size.height, size.width) / 2
let innerRadius: CGFloat = outerRadius * starRatio
let stepAngle = CGFloat(2) * CGFloat(M_PI) / CGFloat(steps)
let center = CGPoint(x: size.width / 2, y: size.height / 2)
let path = CGPathCreateMutable()
for i in 0..<Int(steps) {
let radius = i % 2 == 0 ? outerRadius : innerRadius
let angle = CGFloat(i) * stepAngle - CGFloat(M_PI_2)
let x = radius * cos(angle) + center.x
let y = radius * sin(angle) + center.y
if i == 0 {
CGPathMoveToPoint(path, nil, x, y)
}
else {
CGPathAddLineToPoint(path, nil, x, y)
}
}
CGPathCloseSubpath(path)
return path
}
It can then be used with a CAShapeLayer like so:
let layer = CAShapeLayer()
layer.path = createStarPath(CGSize(width: 100, height: 100))
layer.lineWidth = 1
layer.strokeColor = UIColor.blackColor().CGColor
layer.fillColor = UIColor.yellowColor().CGColor
layer.fillRule = kCAFillRuleNonZero