Get pixel colour from a webcam - Objective-C

I am trying to get the pixel colour from an image displayed by the webcam. I want to see how the pixel colour changes over time.
My current solution sucks up a lot of CPU. It works and gives me the correct answer, but I am not 100% sure I am doing this correctly or whether I could cut some steps out.
- (IBAction)addFrame:(id)sender
{
    // Get the most recent frame.
    // This must be done in a @synchronized block because the delegate method
    // that sets the most recent frame is not called on the main thread.
    CVImageBufferRef imageBuffer;
    @synchronized (self) {
        imageBuffer = CVBufferRetain(mCurrentImageBuffer);
    }
    if (imageBuffer) {
        // Create an NSImage and add it to the movie.
        // I think I can remove some steps here, but I am not sure where.
        NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
        NSSize n = {320, 160};
        //NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
        NSImage *image = [[[NSImage alloc] initWithSize:n] autorelease];
        [image addRepresentation:imageRep];
        CVBufferRelease(imageBuffer);

        NSBitmapImageRep *raw_img = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
        NSLog(@"image width is %f", [image size].width);
        NSColor *color = [raw_img colorAtX:1279 y:120];
        float colourValue = [color greenComponent] + [color redComponent] + [color blueComponent];
        [graphView setXY:10 andY:200 * colourValue / 3];
        NSLog(@"%0.3f", colourValue);
    }
}
Any help is appreciated and I am happy to try other ideas.
Thanks guys.

There are a couple of ways this could be made more efficient. Take a look at the imageFromSampleBuffer: method in this Tech Q&A, which presents a cleaner way of getting from a CVImageBufferRef to an image (the sample uses a UIImage, but it's practically identical for an NSImage).
You can also pull the pixel values straight out of the CVImageBufferRef without any conversion. Once you have the base address of the buffer, you can calculate the offset of any pixel and just read the values from there.
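For instance, here is a minimal sketch of reading one pixel directly from the buffer. It assumes the capture output delivers 32-bit BGRA frames; check CVPixelBufferGetPixelFormatType if you are not sure, and the coordinates are just example values.

// Sketch: read the pixel at (x, y), assuming a 32-bit BGRA pixel format.
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)imageBuffer;
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t x = 160, y = 80; // example coordinates
uint8_t *pixel = base + y * bytesPerRow + x * 4;
uint8_t blue = pixel[0], green = pixel[1], red = pixel[2];
NSLog(@"r=%u g=%u b=%u", red, green, blue);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);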

CGImageRef not init/alloc correctly

I am currently having problems with CGImageRef.
Whenever I create a CGImageRef and look at it in the debugger view in Xcode, it is nil.
Here's the code:
- (void)mouseMoved:(NSEvent *)theEvent {
    if (self.shoulddrag) {
        NSPoint event_location = [theEvent locationInWindow]; // direct from the docs
        NSPoint local_point = [self convertPoint:event_location fromView:nil]; // direct from the docs
        CGImageRef theImage = (__bridge CGImageRef)(self.image);
        CGImageRef theClippedImage = CGImageCreateWithImageInRect(theImage, CGRectMake(local_point.x, local_point.y, 1, 1));
        NSImage *image = [[NSImage alloc] initWithCGImage:theClippedImage size:NSZeroSize];
        self.pixelView.image = image;
        CGImageRelease(theClippedImage);
    }
}
Everything else seems to be working, though. I can't understand it. Any help would be appreciated.
Note: self.pixelView is an NSImageView instance that has not been overridden in any way.
Very likely local_point is not inside the image. You've converted the point from window coordinates to view coordinates, but those may not be equivalent to the image's coordinates. Test whether pointing at the lower-left corner of your image results in local_point being (0,0).
It's not clear how your view is laid out, but I suspect that what you want to do is subtract the origin of whatever region (possibly a subview) the user is interacting with, relative to self, as in the sketch below.
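As an illustrative sketch only (imageFrame here is hypothetical; substitute the rect your image actually occupies within the view):

// Hypothetical: imageFrame is the region of this view the image is drawn into.
NSRect imageFrame = [self bounds]; // placeholder; use the actual image rect
NSPoint image_point = NSMakePoint(local_point.x - imageFrame.origin.x,
                                  local_point.y - imageFrame.origin.y);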
Alright, I figured it out.
What I was using to create the CGImageRef was:
CGImageRef theImage = (__bridge CGImageRef)(self.image);
Apparently what I should have used is:
CGImageSourceRef theImage = CGImageSourceCreateWithData((CFDataRef)[self.image TIFFRepresentation], NULL);
I guess my problem was that for some reason I thought NSImage and CGImageRef were toll-free bridged.
Apparently, I was wrong.
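Note that CGImageSourceCreateWithData returns a CGImageSourceRef, not a CGImageRef, so one more call is needed before something like CGImageCreateWithImageInRect can be used. A minimal sketch of the full conversion, using standard ImageIO calls:

// No toll-free bridging exists between NSImage and CGImage, so go through ImageIO.
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[self.image TIFFRepresentation], NULL);
CGImageRef theImage = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source);
// ... use theImage with CGImageCreateWithImageInRect, then:
CGImageRelease(theImage);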

How to make a perfect crop (without changing the quality) in Objective-C/Cocoa (OS X)

Is there any way in Objective-C/Cocoa (OS X) to crop an image without changing the quality of the image?
I am very near to a solution, but there are still some differences that I can detect in the color. I can notice it when zooming in on the text. Here is the code I am currently using:
NSImage *target = [[[NSImage alloc] initWithSize:panelRect.size] autorelease];
target.backgroundColor = [NSColor greenColor];

// Start drawing on target
[target lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];

// Draw the portion of the source image on the target image
[source drawInRect:NSMakeRect(0, 0, panelRect.size.width, panelRect.size.height)
          fromRect:NSMakeRect(panelRect.origin.x, source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
         operation:NSCompositeCopy
          fraction:1.0];
[NSGraphicsContext restoreGraphicsState];

// End drawing
[target unlockFocus];

// Create an NSBitmapImageRep
NSBitmapImageRep *bmpImageRep = [[[NSBitmapImageRep alloc] initWithData:[target TIFFRepresentation]] autorelease];

// Add the NSBitmapImageRep to the representation list of the target
[target addRepresentation:bmpImageRep];

// Get the data from the representation
NSData *data = [bmpImageRep representationUsingType:NSJPEGFileType
                                         properties:imageProps];
NSString *filename = [NSString stringWithFormat:@"%@%@.jpg", panelImagePrefix, panelNumber];
NSLog(@"This is the filename: %@", filename);

// Write the data to a file
[data writeToFile:filename atomically:NO];
Here is a zoomed-in comparison of the original and the cropped image:
(Original image - above)
(Cropped image - above)
The difference is hard to see, but if you flick between them, you can notice it. You can use a colour picker to notice the difference as well. For example, the darkest pixel on the bottom row of the image is a different shade.
I also have a solution that works exactly the way I want it in iOS. Here is the code:
- (void)testMethod:(int)page forRect:(CGRect)rect {
    NSString *filePath = @"imageName";
    NSData *data = [HeavyResourceManager dataForPath:filePath]; // this just gets the image as NSData
    UIImage *image = [UIImage imageWithData:data];
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect); // crop to the rect
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:0 orientation:image.imageOrientation];
    CGImageRelease(imageRef);
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectoryPath = [paths objectAtIndex:0];
    [UIImageJPEGRepresentation(result, 1.0) writeToFile:[documentsDirectoryPath stringByAppendingPathComponent:@"output.jpg"] atomically:YES];
}
So, is there a way to crop an image in OS X so that the cropped image does not change at all? Perhaps I have to look into a different library, but I would be surprised if I could not do this with Objective-C...
Note: this is a follow-up question to my previous question here.
Update: I have tried (as per the suggestion) to round the CGRect values to whole numbers, but did not notice a difference. Here is the code I used:
[source drawInRect:NSMakeRect(0, 0, (int)panelRect.size.width, (int)panelRect.size.height)
          fromRect:NSMakeRect((int)panelRect.origin.x, (int)(source.size.height - panelRect.origin.y - panelRect.size.height), (int)panelRect.size.width, (int)panelRect.size.height)
         operation:NSCompositeCopy
          fraction:1.0];
Update: I have tried mazzaroth's code and it works if I save the image as a PNG, but if I try to save it as a JPEG, the image loses quality. So close, but not close enough. Still hoping for a complete answer...
Use CGImageCreateWithImageInRect.
// This chunk of code loads a JPEG image into a CGImage,
// creates a second, cropped image with CGImageCreateWithImageInRect,
// and writes the new cropped image to the desktop.
// Ensure that the x/y origin of the CGRectMake call is smaller than the width or height of the original image.
NSURL *originalImage = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"lockwood" ofType:@"jpg"]];

CGImageRef imageRef = NULL;
CGImageSourceRef loadRef = CGImageSourceCreateWithURL((CFURLRef)originalImage, NULL);
if (loadRef != NULL)
{
    imageRef = CGImageSourceCreateImageAtIndex(loadRef, 0, NULL);
    CFRelease(loadRef); // Release CGImageSource reference
}

CGImageRef croppedImage = CGImageCreateWithImageInRect(imageRef, CGRectMake(200., 200., 100., 100.));

CFURLRef saveUrl = (CFURLRef)[NSURL fileURLWithPath:[@"~/Desktop/lockwood-crop.jpg" stringByExpandingTildeInPath]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(saveUrl, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, croppedImage, nil);

if (!CGImageDestinationFinalize(destination)) {
    NSLog(@"Failed to write image to %@", saveUrl);
}

CFRelease(destination);
CFRelease(imageRef);
CFRelease(croppedImage);
I also made a gist:
https://gist.github.com/4259594
Try changing the drawInRect origin to 0.5, 0.5. Otherwise Quartz will distribute each pixel's color across the four adjacent pixels.
Set the color space of the target image. You might be using a different color space, causing it to look slightly different.
Try the various rendering intents and see which gets the best result: perceptual versus relative colorimetric, etc. There are four options, I think.
You mention that the colors get modified by saving as JPEG versus PNG.
You can specify the compression level when saving to JPEG. Try something like 0.8 or 0.9. You can also save JPEG without compression using 1.0, but then PNG has a distinct advantage. You specify the compression level in the options dictionary for CGImageDestinationAddImage.
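For instance, a sketch of passing a quality value in that options dictionary (0.9 here is just an example value), reusing the destination and croppedImage from the answer above:

// Ask ImageIO for roughly 90% JPEG quality when adding the image.
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9f]
                                                    forKey:(id)kCGImageDestinationLossyCompressionQuality];
CGImageDestinationAddImage(destination, croppedImage, (CFDictionaryRef)options);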
Finally, if nothing here helps, you should open a TSI with DTS; they can certainly provide the guidance you seek.
The usual problem is that cropping sizes are floats, but image pixels are integers.
Cocoa interpolates automatically.
You need to floor, round, or ceil the sizes and coordinates to be sure they are integers.
This may help.
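For example, NSIntegralRect will snap a rect to whole-pixel boundaries (it floors the origin and ceils the size), which is a one-line way to apply this advice to the panelRect from the question:

// Snap the crop rect to integer pixel boundaries before drawing.
NSRect cropRect = NSIntegralRect(panelRect);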
I am stripping EXIF data from JPG files, and I think I caught the reason:
All the losses and changes come from re-compressing the image when saving it to file.
You may notice the change even if you just save the whole image again, unmodified.
What I do is read the original JPG and re-compress it at a quality that produces an equivalent file size.

NSImage losing quality upon writeToFile

Basically, I'm trying to create a program for batch image processing that will resize every image and add a border around the edge (the border will be made up of images as well). I have yet to get to that implementation, and it's beyond the scope of my question, but I mention it because even if I get a great answer here, I may still be taking the wrong approach to get there, and any help in recognizing that would be greatly appreciated.
Question:
Can I take the existing code I have below and modify it to create higher-quality images saved to file than the code currently outputs? I literally spent 10+ hours trying to figure out what I was doing wrong; "secondaryImage" drew the high-quality resized image into the custom view, but everything I tried to do to save the file resulted in an image of substantially lower quality (not so much pixelated, just noticeably more blurry). Finally, I found some code in Apple's "Reducer" example (at the end of ImageReducer.m) that locks focus and gets an NSBitmapImageRep from the current view. This made a substantial increase in image quality; however, the output from Photoshop doing the same thing is a bit clearer. It looks like the image drawn to the view is of the same quality as the one saved to file, and both are below Photoshop's quality for the same image resized to 50%. Is it even possible to get higher-quality resized images than this?
Aside from that, how can I modify the existing code to control the quality of the image saved to file? Can I change the compression and pixel density? I'd appreciate any help with either modifying my code or pointing me toward good examples or tutorials (preferably the latter). Thanks so much!
- (void)drawRect:(NSRect)rect {
    // Getting source image
    NSImage *image = [[NSImage alloc] initWithContentsOfFile:@"/Users/TheUser/Desktop/4.jpg"];

    // Setting NSRect, which is how resizing is done in this example. Is there a better way?
    NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);

    // Sort of used as an offscreen image or palette to do drawing onto; in the future I will use it to group several images into one.
    NSImage *secondaryImage = [[NSImage alloc] initWithSize:halfSizeRect.size];

    [secondaryImage lockFocus];
    [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
    [image drawInRect:halfSizeRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
    [secondaryImage unlockFocus];

    [secondaryImage drawInRect:halfSizeRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];

    // Trying to add image quality options; does this usage even affect the final image?
    // (Note: the original passed size.width for pixelsHigh, which looks like a typo.)
    NSBitmapImageRep *bip = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                                    pixelsWide:secondaryImage.size.width
                                                                    pixelsHigh:secondaryImage.size.height
                                                                 bitsPerSample:8
                                                               samplesPerPixel:4
                                                                      hasAlpha:YES
                                                                      isPlanar:NO
                                                                colorSpaceName:NSDeviceRGBColorSpace
                                                                   bytesPerRow:0
                                                                  bitsPerPixel:0];
    [secondaryImage addRepresentation:bip];

    // Four lines below are from the aforementioned "ImageReducer.m"
    NSSize size = [secondaryImage size];
    [secondaryImage lockFocus];
    NSBitmapImageRep *bitmapImageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect(0, 0, size.width, size.height)];
    [secondaryImage unlockFocus];

    NSDictionary *prop = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
    NSData *outputData = [bitmapImageRep representationUsingType:NSJPEGFileType properties:prop];
    [outputData writeToFile:@"/Users/TheUser/Desktop/4_halfsize.jpg" atomically:NO];

    // Release from memory
    [image release];
    [secondaryImage release];
    [bitmapImageRep release];
    [bip release];
}
I'm not sure why you are round-tripping to and from the screen. That could affect the result, and it's not needed.
You can accomplish all this using CGImage and CGBitmapContext, using the resultant image to draw to the screen if needed. I've used those APIs and had good results (but I do not know how they compare to your current approach).
Another note: Render at a higher quality for the intermediate, then resize and reduce to 8bpc for the version you write. This will not make a significant difference now, but it will (in most cases) once you introduce filtering.
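To make that concrete, here is a minimal sketch of a CGBitmapContext-based half-size resize; sourceImage stands in for a CGImageRef you have loaded elsewhere (e.g. via CGImageSource), so names and the pixel format are assumptions, not the asker's code:

// Sketch: resize a CGImageRef to half size without going through the screen.
// `sourceImage` is assumed to be a CGImageRef loaded elsewhere.
size_t newWidth  = CGImageGetWidth(sourceImage) / 2;
size_t newHeight = CGImageGetHeight(sourceImage) / 2;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, newWidth, newHeight, 8, 0,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
CGContextDrawImage(ctx, CGRectMake(0, 0, newWidth, newHeight), sourceImage);
CGImageRef resizedImage = CGBitmapContextCreateImage(ctx);
// ... hand resizedImage to a CGImageDestination to write it out ...
CGImageRelease(resizedImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);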
Finally, one of those "Aha!" moments! I tried using the same code on a high-quality .tif file, and the resulting image was 8 times smaller (in dimensions) rather than the 50% I'd asked for. When I displayed it without any rescaling, it still wound up 4 times smaller than the original, when it should have displayed at the same height and width. It turned out the way I was taking the NSSize from the imported image was wrong. Previously, it read:
NSRect halfSizeRect = NSMakeRect(0, 0, image.size.width * 0.5, image.size.height * 0.5);
Where it should be:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData: [image TIFFRepresentation]];
NSRect halfSizeRect = NSMakeRect(0, 0, [imageRep pixelsWide]/2, [imageRep pixelsHigh]/2);
Apparently it has something to do with DPI and that jazz, so I needed to get the correct size from the NSBitmapImageRep rather than from image.size. With this change, I was able to save at a quality nearly indistinguishable from Photoshop's.

Where is this leaking? / Why am I having memory issues?

I'm having a bit of trouble with an iPad app I'm creating. There is a simple image sequence/animation of about 80 frames at one point.
The code looks like this (this is a subclass of a UIView subclass):
- (id)init {
    UIImage *theImage = [UIImage imageNamed:@"chart0075.jpg"];

    // Get the frame
    CGRect lgeFrame = CGRectMake(20, 130, theImage.size.width, theImage.size.height);

    // Set the new frame
    CGFloat newHeight = theImage.size.height / 1.65;
    CGFloat newWidth = theImage.size.width / 1.65;
    CGRect smlFrame = CGRectMake(480, 200, newWidth, newHeight);

    self = [super initWithLargeFrame:lgeFrame smallFrame:smlFrame];
    if (self) {
        // Let's add the image as an image view
        theImageView = [[UIImageView alloc] initWithImage:theImage];
        [theImageView setFrame:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)];
        [theImageView setAutoresizingMask:UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight];
        [self addSubview:theImageView];

        // Now we need to make an array of images for the image sequence
        NSMutableArray *imageSeq = [NSMutableArray array];
        for (int i = 1; i < 76; i++) {
            NSString *jpgnumber;
            if (i < 10) {
                jpgnumber = [NSString stringWithFormat:@"000%i", i];
            }
            else {
                jpgnumber = [NSString stringWithFormat:@"00%i", i];
            }
            NSString *imageFile = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"chart%@", jpgnumber] ofType:@"jpg"];
            [imageSeq addObject:[UIImage imageWithContentsOfFile:imageFile]];
        }
        [theImageView setAnimationImages:imageSeq];
        [theImageView setAnimationDuration:1.5];
        [theImageView setAnimationRepeatCount:1];
    }
    return self;
}
Then, on a reverse-pinch gesture, the image is supposed to animate. The first time I do this gesture it takes a few seconds for the animation to start, and sometimes I get a level-1 memory warning and the app crashes.
What's the problem here? Are 80 JPGs too many to keep in memory at once? They're well under 2 MB in total, so surely they shouldn't be filling up the iPad's memory, right?
I've looked at it with the Allocations tool, which suggests that I have about 40 KB in memory at the time of the animation, but this then goes back down to 0 during subsequent animations (although the Allocations tool does confuse me quite a bit).
Does anyone have any idea what's causing this? I can post more code or anything if necessary.
Thanks a lot :)
Your memory usage depends on how big the images are uncompressed: the width times the height times 4 gives the number of bytes each image takes; multiply by the number of images to get the total. For example, if each frame were 1024×768, that would be about 3 MB per frame uncompressed, or roughly 240 MB for 80 frames, regardless of how small the JPG files are on disk.
My guess is you are on the edge of being over, memory-wise.
Run it in Instruments with the VM Tracker instrument to be sure; you should be looking at the dirty resident memory.
WWDC '10 and WWDC '09 both had great content on Instruments and memory-usage analysis.
You are not releasing theImageView:
[theImageView release];

Flipping Quicktime preview & capture

I need to horizontally flip some video I'm previewing and capturing. À la iChat, I have a webcam and want it to appear as though the user is looking in a mirror.
I'm previewing Quicktime video in a QTCaptureView. My capturing is done frame-by-frame (for reasons I won't get into) with something like:
imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: frame]];
image = [[NSImage alloc] initWithSize: [imageRep size]];
[image addRepresentation: imageRep];
[movie addImage: image forDuration: someDuration withAttributes: someAttributes];
Any tips?
Nothing like resurrecting an old question. Anyway, I came here and almost found what I was looking for thanks to Brian Webster, but if anyone is looking for the wholesale solution, try this after setting your class as the delegate of the QTCaptureView instance:
- (CIImage *)view:(QTCaptureView *)view willDisplayImage:(CIImage *)image {
    // Mirror the image across the y axis
    return [image imageByApplyingTransform:CGAffineTransformMakeScale(-1, 1)];
}
You could do this by taking the CIImage you're getting from the capture and running it through a Core Image filter to flip the image around. You would then pass the resulting image into your image rep rather than the original one. The code would look something like:
CIImage *capturedImage = [CIImage imageWithCVImageBuffer:buffer];
NSAffineTransform *flipTransform = [NSAffineTransform transform];
CIFilter *flipFilter;
CIImage *flippedImage;

[flipTransform scaleXBy:-1.0 yBy:1.0]; // horizontal flip
flipFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[flipFilter setValue:flipTransform forKey:@"inputTransform"];
[flipFilter setValue:capturedImage forKey:@"inputImage"];
flippedImage = [flipFilter valueForKey:@"outputImage"];
imageRep = [NSCIImageRep imageRepWithCIImage:flippedImage];
...
Try this!
It will apply filters to the capture view, but not to the output video.
- (IBAction)Vibrance:(id)sender
{
    CIFilter *CIVibrance = [CIFilter filterWithName:@"CIVibrance" keysAndValues:
                            @"inputAmount", [NSNumber numberWithDouble:2.0f],
                            nil];
    mCaptureView.contentFilters = [NSArray arrayWithObject:CIVibrance];
}
By the way, you can apply any of the filters from this reference: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html