Objective-C: save image pixel RGB values to an array

I'm trying some experimental image processing on iPad, and I want to store every pixel's color data in one array to speed up reading the color data of each pixel.
Right now I have a timer that calls my drawRect: as often as possible; in my drawRect: method I have this:
-(void)drawRect:(CGRect)rect
{
    UIGraphicsBeginImageContext(self.frame.size);
    [currentImage.image drawInRect:CGRectMake(0, 0, 768, 1004)];
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 0.3);
    r_x = r_x + 1;
    if (r_x == 768) {
        r_x = 1;
        r_y = r_y + 1;
    }
    if (r_y == 1004) {
        NSLog(@"color = %@", mijnArray_kleur);
    }
    CGPoint point2_1 = CGPointMake(r_x, r_y);
    GetColor *mycolor = [[GetColor alloc] init];
    UIColor *st = [mycolor getPixelColorAtLocation:point2_1];
    [mijnArray_kleur addObject:st];
    [mycolor release];
    CGContextSetFillColorWithColor(UIGraphicsGetCurrentContext(), [st CGColor]);
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(r_x, r_y, 1, 1));
}
and getPixelColorAtLocation: is a method on a custom class that returns the UIDeviceRGBColorSpace values of a pixel.
With this it takes me about 4 hours (yes, hours :p) to process one image. Is there anything faster, or any other improvement?
Thanks!
Thys

[Copied from comment for clarity] Not that I know Objective-C at all, but it seems to me that your function iterates over 768 * 1004 values and thus draws that many rectangles, one per frame. At roughly 60 frames/second, that works out to about 3h 40min. Am I wrong here?
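That math is exactly the problem: only one pixel is processed per drawRect: pass. The usual fix is to draw the image into a bitmap context once and then walk the raw buffer in a single loop. A minimal sketch of that approach (PixelColors is a hypothetical helper name; image is the UIImage being scanned):
NSMutableArray *PixelColors(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the image once into a bitmap buffer we own: RGBA, 8 bits per channel.
    // (Note: with premultiplied alpha, the RGB values are scaled by alpha.)
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    uint8_t *pixels = calloc(width * height * 4, 1);
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                             width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // One pass over the buffer instead of one drawRect: call per pixel.
    NSMutableArray *colors = [NSMutableArray arrayWithCapacity:width * height];
    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = pixels + i * 4;
        [colors addObject:[UIColor colorWithRed:p[0] / 255.0f
                                          green:p[1] / 255.0f
                                           blue:p[2] / 255.0f
                                          alpha:p[3] / 255.0f]];
    }

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return colors;
}
Even this way, boxing 768 * 1004 UIColor objects is expensive; if you only need the raw RGB values, keep the byte buffer itself instead of wrapping every pixel in an object.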


CGContextFillRects: No matching function for call

I'm trying to optimize the performance of one of my components. The component needs to draw some (10 to 200) rectangles in its drawRect: method, which is triggered about 20 times per second.
Everything works when I use the CGContextFillRect method on each CGRect separately. I want to test whether grouping the drawing into one single CGContextFillRects call on an array of CGRects would increase performance.
The CGContextFillRects call gives me a compiler error: No matching function for call to 'CGContextFillRects'.
This code is inside a .mm file. Should I import something before CGContextFillRects can be used?
This is what I'm trying to do:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetFillColorWithColor(context, self.fillColor.CGColor);
    //check if some objects are present
    if (self.leftDrawBuffer && self.rightDrawBuffer) {
        UInt32 xPosForRect = self.leftPadding;
        NSMutableArray *rectsToFill = [[NSMutableArray alloc] init];
        for (int drawBufferLRIndex = 0; drawBufferLRIndex < 2; drawBufferLRIndex++) {
            Float32 *drawBuffer_ptr = self.leftDrawBuffer;
            if (drawBufferLRIndex > 0) {
                drawBuffer_ptr = self.rightDrawBuffer;
            }
            for (int i = 0; i < kAmountOfBarsPerChannel; i = i + 1) {
                Float32 amp = drawBuffer_ptr[i];
                Float32 blockNumber = 1.0f;
                UInt32 yPosForRect = self.bounds.size.height - self.heightPerBlock;
                while (blockNumber <= self.blocksPerLine && blockNumber / self.blocksPerLine < amp) {
                    CGRect rect = CGRectMake(xPosForRect, yPosForRect, self.widthPerBlock, self.heightPerBlock);
                    [rectsToFill addObject:[NSValue valueWithCGRect:rect]];
                    //Using the method below works and gives me the expected result
                    //CGContextFillRect(context, rect);
                    blockNumber++;
                    yPosForRect -= self.heightPerBlock + self.vPaddingPerBlock;
                }
                xPosForRect += self.widthPerBlock + self.hPaddingPerBlock;
            }
        }
        //This is the added code where I try to use CGContextFillRects
        //1 -> transform to a C array of CGRects
        const CGRect *cRects[rectsToFill.count];
        for (int i = 0; i < rectsToFill.count; ++i) {
            CGRect rect = [[rectsToFill objectAtIndex:i] CGRectValue];
            cRects[i] = &rect;
        }
        size_t size = rectsToFill.count;
        //2 -> trigger the method to fill all rects at once
        //this method gives me the compiler error 'No matching function for call to 'CGContextFillRects''
        CGContextFillRects(context, cRects, size);
    }
    CGContextRestoreGState(context);
}
The problem is how you convert the rects to a C array. You store pointers to rects that live temporarily on the stack. There are two problems with this. First, each rect goes out of scope at the end of its loop iteration, so the pointers are left dangling. Second, you should pass a pointer to an array of CGRects, not an array of pointers to CGRect; since a .mm file is compiled as C++, that type mismatch is reported as "no matching function" rather than a mere warning.
This will likely solve it:
CGRect cRects[rectsToFill.count]; // Replace your lines from this
for (int i = 0; i < rectsToFill.count; ++i) {
    CGRect rect = [[rectsToFill objectAtIndex:i] CGRectValue];
    cRects[i] = rect;
}
size_t size = rectsToFill.count;
CGContextFillRects(context, cRects, size); // To this
Please note the re-declaration of the cRects array and the change in the assignment.
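As a side note (my own suggestion, not part of the fix above): since the file is Objective-C++ anyway, you could skip the NSMutableArray/NSValue boxing entirely and collect the rects in a std::vector, whose backing store is already the contiguous const CGRect * buffer that CGContextFillRects expects:
#include <vector>

std::vector<CGRect> rects;
// ...in place of [rectsToFill addObject:...] inside the loops:
rects.push_back(CGRectMake(xPosForRect, yPosForRect,
                           self.widthPerBlock, self.heightPerBlock));
// ...and after the loops, a single call over the contiguous buffer:
if (!rects.empty())
    CGContextFillRects(context, rects.data(), rects.size());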

Sprite Kit - Objective-C: slow FPS when I create a lot of nodes

I wanted to create a space background, so I made a for loop to create the stars. Here is the code:
for (int i = 0; i < 100; i++) {
    SKShapeNode *star = [SKShapeNode shapeNodeWithPath:Path.CGPath];
    star.fillColor = [UIColor whiteColor];
    star.physicsBody = nil;
    int xposition = arc4random() % 960;
    int yposition = arc4random() % 640;
    star.position = CGPointMake(xposition, yposition);
    float size = (arc4random() % 3 + 1) / 10.0;
    star.xScale = size;
    star.yScale = size;
    star.alpha = (arc4random() % 10 + 1) / 10.0;
    star.zPosition = -2;
    [self addChild:star];
}
But it takes a lot of my CPU: when the code runs, the CPU tops out at 78% (I checked the code in the iPhone Simulator).
Does somebody know how to fix it? Thanks.
Your physics bodies continue to be simulated even when they are off screen. You will need to remove them once they leave the frame, otherwise everything will slow to a crawl. (And, to echo what others have stated, you will eventually need to test on a real device.)
From this document: Jumping Into Sprite Kit
You can implement the "Did Simulate Physics" method to get rid of the stars that fell from the bottom of the screen like so:
-(void)didSimulatePhysics
{
    [self enumerateChildNodesWithName:@"star" usingBlock:^(SKNode *node, BOOL *stop) {
        if (node.position.y < 0) {
            [node removeFromParent];
        }
    }];
}
Note that you will first need to set the name of your star shapes via the name property, like so:
star.name = @"star";
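In the question's loop, that just means tagging each node as it is created:
for (int i = 0; i < 100; i++) {
    SKShapeNode *star = [SKShapeNode shapeNodeWithPath:Path.CGPath];
    star.name = @"star"; // lets enumerateChildNodesWithName: find it later
    // ...the rest of the setup from the question...
    [self addChild:star];
}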

Image processing in Obj-C

I want to do some scientific image processing on iOS in Objective-C, or of course C. All I need for this is a 3D array of the bit values of all the pixels' RGBA components. UIImage doesn't seem to have a built-in function for that. Do any of you know how to get the pixel values, or, preferably, of a predefined library with those functions in it?
Thanks in advance, JEM
You'd normally either create a CGBitmapContext, telling it to use some memory you'd allocated yourself (so you know where it is and how to access it), or let Core Graphics allocate the storage for you and call CGBitmapContextGetData (if you're targeting only iOS 4/OS X 10.6 and later), then draw whatever you want to inspect into it.
E.g. (error checking and a few setup steps deliberately omitted for brevity; watch for variables I use without defining, and check the function documentation):
CGBitmapInfo bitmapInfo =
    kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host;
context = CGBitmapContextCreate(
              NULL,
              width, height,
              8,
              width * 4,
              rgbColourSpace,
              bitmapInfo);
CGContextDrawImage(
    context,
    CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height),
    [image CGImage]);
uint8_t *pixelPointer = CGBitmapContextGetData(context);
for (size_t y = 0; y < height; y++)
{
    for (size_t x = 0; x < width; x++)
    {
        if (bitmapInfo & kCGBitmapByteOrder32Little)
        {
            NSLog(@"rgba: %02x %02x %02x %02x",
                  pixelPointer[2], pixelPointer[1],
                  pixelPointer[0], pixelPointer[3]);
        }
        else
        {
            NSLog(@"rgba: %02x %02x %02x %02x",
                  pixelPointer[1], pixelPointer[2],
                  pixelPointer[3], pixelPointer[0]);
        }
        pixelPointer += 4;
    }
}
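For completeness, the setup the answer deliberately omits might look roughly like this (a sketch; image is whatever UIImage you want to inspect, and the names match the variables the snippet assumes):
size_t width  = CGImageGetWidth(image.CGImage);
size_t height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef rgbColourSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context; // assigned by the CGBitmapContextCreate call above

// ...run the snippet above...

// Matching clean-up once you're done reading pixels:
CGContextRelease(context);
CGColorSpaceRelease(rgbColourSpace);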

Detect most black pixel on an image - objective-c iOS

I have an image! It's been so long since I've done pixel detection. I remember you have to convert the pixels to an array somehow, and then use the width of the image to work out when a row of pixels ends and the next one begins... ahh, lots of complex stuff, haha! Anyway, I now have no clue how to do this anymore, but I need to detect the x and y coordinates of the left-most darkest pixel of my image named "image1". Any good starting places?
Go to your bookstore and find a book called "iOS Developer's Cookbook" by Erica Sadun. Go to page 378 or thereabouts; there are methods for pixel detection there. You can look through the resulting array of RGB values and run a for loop to find the pixel that has the smallest sum of R, G, and B values (each ranges from 0 to 255); that gives you the pixel closest to black.
I can also post the code if needed. But the book is the best source as it gives methods and explanations.
These are mine, with some changes. The method name remains the same. All I changed was the image, which basically comes from an image picker.
-(UInt8 *) createBitmap {
    if (!self.imageCaptured) {
        NSLog(@"Error: There has not been an image captured.");
        return nil;
    }
    //create bitmap for the image
    UIImage *myImage = self.imageCaptured; //image name for test pic
    CGContextRef context = CreateARGBBitmapContext(myImage.size);
    if (context == NULL) return NULL;
    CGRect rect = CGRectMake(0.0f /*start width*/, 0.0f /*start*/, myImage.size.width /*width bound*/, myImage.size.height /*height bound*/); //original
    //CGRect rect = CGRectMake(myImage.size.width/2.0 - 25.0 /*start width*/, myImage.size.height/2.0 - 25.0 /*start*/, myImage.size.width/2.0 + 24.0 /*width bound*/, myImage.size.height/2.0 + 24.0 /*height bound*/); //test rectangle
    CGContextDrawImage(context, rect, myImage.CGImage);
    UInt8 *data = CGBitmapContextGetData(context);
    CGContextRelease(context);
    return data;
}

CGContextRef CreateARGBBitmapContext(CGSize size) {
    //Create new color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    //Allocate memory for bitmap data
    void *bitmapData = malloc(size.width * size.height * 4);
    if (bitmapData == NULL) {
        fprintf(stderr, "Error: memory not allocated\n");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    //Build an 8-bit per channel context
    CGContextRef context = CGBitmapContextCreate(bitmapData, size.width, size.height, 8, size.width * 4, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        fprintf(stderr, "Error: Context not created!");
        free(bitmapData);
        return NULL;
    }
    return context;
}

NSUInteger blueOffset(NSUInteger x, NSUInteger y, NSUInteger w) {
    return y*w*4 + (x*4+3);
}

NSUInteger redOffset(NSUInteger x, NSUInteger y, NSUInteger w) {
    return y*w*4 + (x*4+1);
}
The redOffset function at the bottom gets you the red value in the ARGB (Alpha-Red-Green-Blue) layout. To look at a different channel, change the value added to x*4 in the offset function: 0 finds alpha, 1 (as here) finds red, 2 finds green, and 3 finds blue. This works because the functions simply index into the array produced by the methods above, and the value added to x*4 selects the channel within each four-byte pixel. Essentially, use such functions for the three colors (red, green, and blue) and take their sum for each pixel; whichever pixel has the lowest combined red, green, and blue value is the most black.
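Putting that together, the search loop might look like this (a sketch built on the helpers above; greenOffset is a hypothetical third helper defined the same way with x*4+2):
NSUInteger greenOffset(NSUInteger x, NSUInteger y, NSUInteger w) {
    return y*w*4 + (x*4+2);
}

//Scan column by column so the first match is the left-most darkest pixel.
UInt8 *data = [self createBitmap];
NSUInteger w = (NSUInteger)self.imageCaptured.size.width;
NSUInteger h = (NSUInteger)self.imageCaptured.size.height;
NSUInteger bestX = 0, bestY = 0, bestSum = NSUIntegerMax;
for (NSUInteger x = 0; x < w; x++) {
    for (NSUInteger y = 0; y < h; y++) {
        NSUInteger sum = data[redOffset(x, y, w)]
                       + data[greenOffset(x, y, w)]
                       + data[blueOffset(x, y, w)];
        if (sum < bestSum) { //strict < keeps the left-most pixel on ties
            bestSum = sum;
            bestX = x;
            bestY = y;
        }
    }
}
NSLog(@"Darkest pixel at (%lu, %lu)", (unsigned long)bestX, (unsigned long)bestY);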

NSTextAttachmentCell is a mile high

I'm editing a subset of HTML in an NSTextView[1] and I want to simulate an <hr> tag.
I've figured out that the way to do it is with NSTextAttachment and a custom NSTextAttachmentCell, and have the code all written to insert the attachment and cell. The problem is, there's an enormous amount of blank space below the cell.
This space is not part of the cell itself—if I paint the entire area of the cell red, it's exactly the right size, but the text view is putting the next line of text very far below the red. The amount seems to depend on how much text is above the cell; unfortunately, I'm working with long documents where <hr> tags are crucial, and this causes major problems with the app.
What the heck is going on?
The money parts of my cell subclass:
- (NSRect)cellFrameForTextContainer:(NSTextContainer *)textContainer
               proposedLineFragment:(NSRect)lineFrag
                      glyphPosition:(NSPoint)position
                     characterIndex:(NSUInteger)charIndex {
    lineFrag.size.width = textContainer.containerSize.width;
    lineFrag.size.height = topMargin + TsStyleBaseFontSize *
                           heightFontSizeMultiplier + bottomMargin;
    return lineFrag;
}

- (void)drawWithFrame:(NSRect)cellFrame
               inView:(NSView *)controlView
       characterIndex:(NSUInteger)charIndex
        layoutManager:(NSLayoutManager *)layoutManager {
    NSRect frame = cellFrame;
    frame.size.height -= bottomMargin;
    frame.size.height -= topMargin;
    frame.origin.y += topMargin;
    frame.size.width *= widthPercentage;
    frame.origin.x += (cellFrame.size.width - frame.size.width) / 2;
    [color set];
    NSRectFill(frame);
}
[1] I tried a WebView with isEditable set and the markup it produced was unusably dirty—in particular, I couldn't find a way to wrap text nicely in <p> tags.
To answer Rob Keniger's request for the code that inserts the horizontal rule attachment:
- (void)insertHorizontalRule:(id)sender {
    NSAttributedString *rule = [TsPage newHorizontalRuleAttributedStringWithStylebook:self.book.stylebook];
    NSUInteger loc = self.textView.rangeForUserTextChange.location;
    if (loc == NSNotFound) {
        NSBeep();
        return;
    }
    if (loc > 0 && [self.textView.textStorage.string characterAtIndex:loc - 1] != '\n') {
        NSMutableAttributedString *workspace = rule.mutableCopy;
        [workspace.mutableString insertString:@"\n" atIndex:0];
        rule = workspace;
    }
    if ([self.textView shouldChangeTextInRange:self.textView.rangeForUserTextChange replacementString:rule.string]) {
        [self.textView.textStorage beginEditing];
        [self.textView.textStorage replaceCharactersInRange:self.textView.rangeForUserTextChange withAttributedString:rule];
        [self.textView.textStorage endEditing];
        [self.textView didChangeText];
    }
    [self.textView scrollRangeToVisible:self.textView.rangeForUserTextChange];
    [self reloadPreview:sender];
}
And the method in TsPage that constructs the attachment string:
+ (NSAttributedString *)newHorizontalRuleAttributedStringWithStylebook:(TsStylebook *)stylebook {
    TsHorizontalRuleCell *cell = [[TsHorizontalRuleCell alloc] initTextCell:@"—"];
    cell.widthPercentage = 0.33;
    cell.heightFontSizeMultiplier = 0.25;
    cell.topMargin = 12.0;
    cell.bottomMargin = 12.0;
    cell.color = [NSColor blackColor];
    NSTextAttachment *attachment = [[NSTextAttachment alloc] initWithFileWrapper:nil];
    attachment.attachmentCell = cell;
    cell.attachment = attachment;
    NSAttributedString *attachmentString = [NSAttributedString attributedStringWithAttachment:attachment];
    NSMutableAttributedString *str = [[NSMutableAttributedString alloc] initWithString:@""];
    [str appendAttributedString:attachmentString];
    [str.mutableString appendString:@"\n"];
    return str;
}
Try changing your cell class's cellFrameForTextContainer:proposedLineFragment:glyphPosition:characterIndex: method to return NSZeroPoint for the origin of the cell's frame.
(I have no idea why this should work, but the questioner and I have been live-debugging it, and it actually does.)
Added by questioner: It looks like, even though the rect being returned is described as a "frame", its origin is relative to the line it's in, not to the top of the document. Thus, the return value's origin.y should be set to zero if you want the cell on the same line.
(The origin.x value, on the other hand, does refer to the cell's position within the line, so it should be kept equal to lineFrag.origin.x unless you want to change the cell's horizontal location.)
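Putting the answer and the addendum together, the corrected method might look like this (a sketch reusing the questioner's margin and font-size fields):
- (NSRect)cellFrameForTextContainer:(NSTextContainer *)textContainer
               proposedLineFragment:(NSRect)lineFrag
                      glyphPosition:(NSPoint)position
                     characterIndex:(NSUInteger)charIndex {
    NSRect frame = lineFrag;
    frame.size.width = textContainer.containerSize.width;
    frame.size.height = topMargin + TsStyleBaseFontSize *
                        heightFontSizeMultiplier + bottomMargin;
    frame.origin.x = lineFrag.origin.x; //keep the horizontal position in the line
    frame.origin.y = 0.0;               //origin.y is line-relative, so zero it
    return frame;
}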