Flipping QuickTime preview & capture - objective-c

I need to horizontally flip some video I'm previewing and capturing. A-la iChat, I have a webcam and want it to appear as though the user is looking in a mirror.
I'm previewing QuickTime video in a QTCaptureView. My capturing is done frame-by-frame (for reasons I won't get into) with something like:
imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: frame]];
image = [[NSImage alloc] initWithSize: [imageRep size]];
[image addRepresentation: imageRep];
[movie addImage: image forDuration: someDuration withAttributes: someAttributes];
Any tips?

Nothing like resurrecting an old question. Anyway, I came here and almost found what I was looking for thanks to Brian Webster, but if anyone is looking for the wholesale solution, try this after setting your class as the delegate of the QTCaptureView instance:
- (CIImage *)view:(QTCaptureView *)view willDisplayImage:(CIImage *)image {
//mirror image across y axis
return [image imageByApplyingTransform:CGAffineTransformMakeScale(-1, 1)];
}
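Note that scaling by (-1, 1) also reflects the image's extent into negative x coordinates. If that matters further down your pipeline, one option is to concatenate a translation so the extent stays at the origin; a minimal sketch, assuming the incoming image's extent starts at (0, 0):
- (CIImage *)view:(QTCaptureView *)view willDisplayImage:(CIImage *)image {
    // Mirror across the y axis, then shift right by the width so the
    // resulting extent starts at the origin again.
    CGAffineTransform mirror = CGAffineTransformMakeScale(-1, 1);
    CGAffineTransform shift = CGAffineTransformMakeTranslation([image extent].size.width, 0);
    return [image imageByApplyingTransform:CGAffineTransformConcat(mirror, shift)];
}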

You could do this by taking the CIImage you're getting from the capture and running it through a Core Image filter to flip the image around. You would then pass the resulting image into your image rep rather than the original one. The code would look something like:
CIImage* capturedImage = [CIImage imageWithCVImageBuffer:buffer];
NSAffineTransform* flipTransform = [NSAffineTransform transform];
CIFilter* flipFilter;
CIImage* flippedImage;
[flipTransform scaleXBy:-1.0 yBy:1.0]; //horizontal flip
flipFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[flipFilter setValue:flipTransform forKey:@"inputTransform"];
[flipFilter setValue:capturedImage forKey:@"inputImage"];
flippedImage = [flipFilter valueForKey:@"outputImage"];
imageRep = [NSCIImageRep imageRepWithCIImage:flippedImage];
...

Try this!
It will apply filters to the QTCaptureView, but not to the output video.
- (IBAction)Vibrance:(id)sender
{
CIFilter* CIVibrance = [CIFilter filterWithName:@"CIVibrance" keysAndValues:
@"inputAmount", [NSNumber numberWithDouble:2.0f],
nil];
mCaptureView.contentFilters = [NSArray arrayWithObject:CIVibrance];
}
By the way, you can apply any of the filters from this reference: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
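For the mirroring discussed at the top of this thread, the same contentFilters approach could be used with an affine transform; a minimal sketch, reusing the mCaptureView outlet from the snippet above:
- (void)mirrorPreview
{
    // Horizontal mirror applied to the preview only, not the captured output.
    NSAffineTransform *flip = [NSAffineTransform transform];
    [flip scaleXBy:-1.0 yBy:1.0];

    CIFilter *mirrorFilter = [CIFilter filterWithName:@"CIAffineTransform"];
    [mirrorFilter setValue:flip forKey:@"inputTransform"];

    mCaptureView.contentFilters = [NSArray arrayWithObject:mirrorFilter];
}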

Related

How to generate NSImage with specific resolution and specific size in mm

I am new to Objective-C and Cocoa programming. I am trying to generate an image which will be 128 mm in height and 128 mm in width at 300 DPI resolution.
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(1512, 756)];
In the above line of code, 1512 and 756 are treated as points, so I am not able to convert it to what I need. It is creating an image with a (3024 * 1512) size.
Can you please suggest something...
Thanks in advance.
Here is the code which I tried
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(2268, 2268)];
[image lockFocus];
// Draw Something
[image unlockFocus];
NSString* pathh = @"/Users/abcd/Desktop/Images/1234.bmp";
CGImageRef cgRef = [image CGImageForProposedRect:NULL
context:nil
hints:nil];
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cgRef];
NSSize copySize; // To change to 600 DPI
copySize.width = 600 * [newRep pixelsWide] / 72.0;
copySize.height = 600 * [newRep pixelsHigh] / 72.0;
[newRep setSize:copySize]; //This input is not working
NSData *pngData = [newRep representationUsingType:NSBMPFileType properties:nil];
[pngData writeToFile:pathh atomically:YES];
Your question is somewhat confusing as you state you want a square image (128 x 128mm) and then attempt to construct a rectangular one (1512 x 756pts).
Guessing: it seems you may need to understand the difference between an NSImage and an NSImageRep and how they interact. Read Apple's Cocoa Drawing Guide, in particular the Images section. You may find the subsection Image Size and Resolution helps to set the picture (no pun intended ;-)).
Another area you can read up on is printing - this often requires the generation of 300dpi images.
HTH
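To make that concrete, here is a minimal sketch of one way to get a 128 mm x 128 mm image at 300 DPI: work out the pixel dimensions from the physical size (128 / 25.4 * 300 is roughly 1512 px), draw into an NSBitmapImageRep of that pixel size, and then set its point size to pixels * 72 / DPI so the rep reports the intended resolution. The drawing code reuses the file path from the question; treat both as placeholders.
// 128 mm / 25.4 mm-per-inch * 300 px-per-inch ~= 1512 px per side.
NSInteger pixels = (NSInteger)lround(128.0 / 25.4 * 300.0);

NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:pixels
                  pixelsHigh:pixels
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];

// Point size = pixels * 72 / DPI, so the rep reports 300 DPI.
[rep setSize:NSMakeSize(pixels * 72.0 / 300.0, pixels * 72.0 / 300.0)];

[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
// Draw Something
[NSGraphicsContext restoreGraphicsState];

NSData *bmpData = [rep representationUsingType:NSBMPFileType properties:nil];
[bmpData writeToFile:@"/Users/abcd/Desktop/Images/1234.bmp" atomically:YES];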

Can't apply inputSharpness on UIImage

I'm using a bunch of CIFilter filters in my app to adjust brightness, saturation, etc., and they are working fine. I'm having some issues with inputSharpness: if I touch the sharpness slider, the picture just disappears. Relevant code:
UIImage *aUIImage = [imageView image];
CGImageRef aCGImage = aUIImage.CGImage;
aCIImage = [CIImage imageWithCGImage:aCGImage];
//Create context
context = [CIContext contextWithOptions:nil];
sharpFilter = [CIFilter filterWithName:@"CIAttributeTypeScalar" keysAndValues: @"inputImage", aCIImage, nil];
....
- (IBAction)sharpSliderChanged:(id)sender
{
//Set filter value
[sharpFilter setValue:[NSNumber numberWithFloat:sharpSlider.value] forKey:@"inputSharpness"];
//Convert CIImage to UIImage
outputImage = [sharpFilter outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
newUIImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
//add image to imageView
[imageView setImage:newUIImage];
}
I've read a post with a similar question, where a possible solution was to add a category for the UIImage effect you want to provide. The only difference here is that you should use the inputSharpness parameter of the CISharpenLuminance filter.
Back to your question: it seems from your comments that you have a problem with how you initialize your filter. I took a look at the official documentation, and I would use CISharpenLuminance during the initialization phase instead. It is only available in iOS 6, though.
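To illustrate, a minimal initialization sketch along those lines, keeping the variable names from the question (note that "CIAttributeTypeScalar" is an attribute-type constant rather than a filter name, so the original filterWithName: call returns nil):
// Create the sharpen filter by its real name and feed it the CIImage.
sharpFilter = [CIFilter filterWithName:@"CISharpenLuminance"];
[sharpFilter setValue:aCIImage forKey:@"inputImage"];
// Later, in the slider callback, update only the amount:
[sharpFilter setValue:[NSNumber numberWithFloat:sharpSlider.value] forKey:@"inputSharpness"];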
EDIT
Like I said, if you want to stick with Core Image, the feature you want is available on iOS 6 only. If you want to be compatible with iOS 5, I can recommend a third-party library: Brad Larson's GPUImage.

CIFilter gaussianBlur and boxBlur are shrinking the image - how to avoid the resizing?

I am taking a snapshot of the contents of an NSView, applying a CIFilter, and placing the result back into the view. If the CIFilter is a form of blur, such as CIBoxBlur or CIGaussianBlur, the filtered result is slightly smaller than the original. As I am doing this iteratively, the result becomes increasingly small, which I want to avoid.
The issue is alluded to here, albeit in a slightly different context (Quartz Composer). Apple's FunHouse demo app applies a Gaussian blur without the image shrinking, but I haven't yet worked out how that app does it (it seems to be using OpenGL, which I am not familiar with).
Here is the relevant part of the code (inside an NSView subclass)
NSImage* background = [[NSImage alloc] initWithData:[self dataWithPDFInsideRect:[self bounds]]];
CIContext* context = [[NSGraphicsContext currentContext] CIContext];
CIImage* ciImage = [background ciImage];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"
keysAndValues: kCIInputImageKey, ciImage,
@"inputRadius", [NSNumber numberWithFloat:10.0], nil];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result
fromRect:[result extent]];
NSImage* newBackground = [[NSImage alloc] initWithCGImage:cgImage size:background.size];
If I try a color-changing filter such as CISepiaTone, which is not shifting pixels around, the shrinking does not occur.
I am wondering if there is a quick fix that doesn't involve diving into OpenGL?
They're actually not shrinking the image; they're expanding it (I think by 7 pixels around all the edges), and the default UIView scaling makes it look like it's been shrunk.
Crop your CIImage with:
CIImage *cropped=[output imageByCroppingToRect:CGRectMake(0, 0, view.bounds.size.width*scale, view.bounds.size.height*scale)];
where view is the original NSView that you drew into and scale is your [[UIScreen mainScreen] scale].
You probably want to clamp your image before using the blur:
- (CIImage*)imageByClampingToExtent {
CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
[clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
[clamp setValue:self forKey:@"inputImage"];
return [clamp valueForKey:@"outputImage"];
}
Then blur, and then crop to the original extent. You'll get non-transparent edges this way.
@BBC_Z's solution is correct, although I find it more elegant to crop according to the image rather than to the view.
You can also cut away the useless blurred edges:
// Crop transparent edges from blur
resultImage = [resultImage imageByCroppingToRect:(CGRect){
.origin.x = blurRadius,
.origin.y = blurRadius,
.size.width = originalCIImage.extent.size.width - blurRadius*2,
.size.height = originalCIImage.extent.size.height - blurRadius*2
}];
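Putting the clamp and crop suggestions together, the whole sequence might look roughly like this; a sketch using the ciImage and radius from the question's snippet:
// Clamp (extend edge pixels to infinity), blur, then crop back to the
// original extent so the output keeps the input's size.
CGFloat blurRadius = 10.0;
CGRect originalExtent = [ciImage extent];

CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
[clamp setValue:[NSAffineTransform transform] forKey:@"inputTransform"];
[clamp setValue:ciImage forKey:@"inputImage"];

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:[clamp valueForKey:@"outputImage"] forKey:@"inputImage"];
[blur setValue:[NSNumber numberWithFloat:blurRadius] forKey:@"inputRadius"];

CIImage *result = [[blur valueForKey:@"outputImage"] imageByCroppingToRect:originalExtent];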

Applying CIFilter destroys data from .BMP File

I seem to be tying myself up in knots trying to read into all of the different ways you can represent images in a Cocoa app for OSX.
My app reads in an image, applies CIFilters to it and then saves the output. Until this morning, this worked fine for all of the images that I've thrown at it. However, I've found some BMP files that produce empty, transparent images as soon as I try to apply any CIFilter to them.
One such image is one of the adverts from the Dragon Age 2 loader (I was just testing my app on random images this morning); http://www.johnwordsworth.com/wp-content/uploads/2011/08/hires_en.bmp
Specifically, my code does the following.
1. Load a CIImage using imageWithCGImage: (the same problem occurs with initWithContentsOfURL:).
2. Apply a number of CIFilters to the CIImage, all the while storing the current image in my AIImage container class.
3. Preview the image by adding an NSCIImageRep to an NSImage.
4. Save the image using NSBitmapImageRep / initWithCIImage: and then representationUsingType:.
This process works with 99% of the files I've thrown at it (all JPGs, PNGs, TIFFs so far), just not with certain BMP files. If I skip step 2, the preview and saved image come out OK. However, if I turn step 2 on, the image produced is always blank and transparent.
The code is quite large, but here are what I believe to be the relevant snippets...
AIImage Loading Method
CGImageSourceRef imageSource = CGImageSourceCreateWithURL((CFURLRef)[NSURL fileURLWithPath:imagePath], nil);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, nil);
CFDictionaryRef dictionaryRef = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil);
self.imageProperties = [NSMutableDictionary dictionaryWithDictionary:((NSDictionary *)dictionaryRef)];
self.imageData = [CIImage imageWithCGImage:imageRef];
AIImageResize Method
NSAffineTransform *transform = [NSAffineTransform transform];
[transform scaleXBy:(targetSize.width / sourceRect.size.width) yBy:(targetSize.height / sourceRect.size.height)];
CIFilter *transformFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[transformFilter setValue:transform forKey:@"inputTransform"];
[transformFilter setValue:currentImage forKey:@"inputImage"];
currentImage = [transformFilter valueForKey:@"outputImage"];
aiImage.imageData = currentImage;
CIImagePreview Method
NSCIImageRep *imageRep = [[NSCIImageRep alloc] initWithCIImage:ciImage];
NSImage *nsImage = [[[NSImage alloc] initWithSize:ciImage.extent.size] autorelease];
[nsImage addRepresentation:imageRep];
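For completeness, the save step described in point 4 would presumably look roughly like the following (reconstructed from the description above, not the actual code; outputPath and the output type are placeholders):
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCIImage:ciImage];
NSData *outputData = [bitmapRep representationUsingType:NSPNGFileType properties:nil];
[outputData writeToFile:outputPath atomically:YES];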
Thanks for looking. Any advice would be greatly appreciated.

Get pixel colour from a Webcam

I am trying to get the pixel colour from an image displayed by the webcam. I want to see how the pixel colour changes over time.
My current solution sucks up a lot of CPU. It works and gives me the correct answer, but I am not 100% sure if I am doing this correctly or if I could cut some steps out.
- (IBAction)addFrame:(id)sender
{
// Get the most recent frame
// This must be done in a @synchronized block because the delegate method that sets the most recent frame is not called on the main thread
CVImageBufferRef imageBuffer;
@synchronized (self) {
imageBuffer = CVBufferRetain(mCurrentImageBuffer);
}
if (imageBuffer) {
// Create an NSImage and add it to the movie
// I think I can remove some steps here, but not sure where.
NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
NSSize n = {320,160 };
//NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
NSImage *image = [[[NSImage alloc] initWithSize:n] autorelease];
[image addRepresentation:imageRep];
CVBufferRelease(imageBuffer);
NSBitmapImageRep* raw_img = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSLog(#"image width is %f", [image size].width);
NSColor* color = [raw_img colorAtX:1279 y:120];
float colourValue = [color greenComponent]+ [color redComponent]+ [color blueComponent];
[graphView setXY:10 andY:200*colourValue/3];
NSLog(#"%0.3f", colourValue);
Any help is appreciated and I am happy to try other ideas.
Thanks guys.
There are a couple of ways that this could be made more efficient. Take a look at the imageFromSampleBuffer: method in this Tech Q&A, which presents a cleaner way of getting from a CVImageBufferRef to an image (the sample uses a UIImage, but it's practically identical for an NSImage).
You can also pull the pixel values straight out of the CVImageBufferRef without any conversion. Once you have the base address of the buffer, you can calculate the offset of any pixel and just read the values from there.
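A rough sketch of that direct approach, assuming the buffer is delivered in a 32-bit BGRA pixel format (the actual format depends on how the capture output is configured):
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)imageBuffer;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

// Offset of the pixel of interest: row * bytesPerRow + column * bytesPerPixel.
size_t x = 160, y = 80;
uint8_t *pixel = baseAddress + (y * bytesPerRow) + (x * 4);
float blue  = pixel[0] / 255.0f;
float green = pixel[1] / 255.0f;
float red   = pixel[2] / 255.0f;

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

float colourValue = red + green + blue;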