AVFoundation capture UIImage - objective-c

I'm trying to capture one or more UIImages programmatically using AVFoundation.
I set up the session and input devices and everything, but when I try to find explanations of how to actually take the photos, all I get is muddled information about connections and whatnot.
I couldn't find a single example of actually taking photos and saving them to a UIImage for further processing. All the examples use a constant, kCGImagePropertyExifDictionary, which doesn't seem to exist in the iOS 5 SDK.
Can someone please provide me with code or an explanation, from top to bottom, of how to take an image from the front-facing camera and save it to a UIImage using AVFoundation?
Thanks a lot!

To use kCGImagePropertyExifDictionary, you should #import <ImageIO/ImageIO.h>.
All of the other information you seek is in the AVFoundation Programming Guide - particularly the Media Capture section.
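For the actual capture step, here is a minimal sketch (assuming stillImageOutput is an AVCaptureStillImageOutput you have already added to your configured, running session - that variable name is mine, not from the docs) of taking a still and turning it into a UIImage:
#import <AVFoundation/AVFoundation.h>
#import <ImageIO/ImageIO.h>

// Find the video connection on the still image output.
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in connection.inputPorts) {
        if ([port.mediaType isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) break;
}

// Capture a frame and convert it to a UIImage via its JPEG representation.
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (imageSampleBuffer) {
            NSData *jpegData = [AVCaptureStillImageOutput
                jpegStillImageNSDataRepresentation:imageSampleBuffer];
            UIImage *image = [UIImage imageWithData:jpegData];
            // ...hand image off for further processing here
        }
    }];
To use the front-facing camera, pick the AVCaptureDevice whose position is AVCaptureDevicePositionFront when you create your session input.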

Capture multiple images in single interface using AVFoundation

I should be able to capture multiple images (assume I am capturing a passport for ID proof) using the iOS camera and AVFoundation. I mean to say that I should have one interface to capture them one by one and merge them together. Is this possible in iOS, and if so, are there any samples available for that?
Any help would be appreciated. Thank you.
I can't completely get it. You can take images from the photo stream one by one, you can take one photo and cut out two images from it, or you can use UIImagePickerController to call the native camera - where is the problem, my friend?
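To illustrate the image picker route, a hedged sketch: capturedImages is a hypothetical NSMutableArray property on your view controller (which must adopt UIImagePickerControllerDelegate and UINavigationControllerDelegate), collecting the shots one by one so you can merge them afterwards.
// Present the native camera; call this once per photo to capture.
- (void)presentCamera
{
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}

// Delegate callback: stash each shot, then dismiss the camera.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *shot = [info objectForKey:UIImagePickerControllerOriginalImage];
    [self.capturedImages addObject:shot];
    [picker dismissViewControllerAnimated:YES completion:nil];
}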

iOS app question: video effects

I am trying to piece together a solution to let users take and edit videos in an app. I have seen the 8mm app and am wondering how they did it... and made it so smooth.
At first I was thinking the effects might be a series of PNGs played in sequence like an animated GIF and then placed on top of the real video, but then I am at a loss as to how to merge the images into the video. Also, the app is so smooth that I think it has to be using some low-level Core Media framework, but I am not sure.
Any ideas or advice on where to begin?
Thanks
AVFoundation combined with OpenGL ES 2.0 (with shaders) provides great performance for adding effects to the camera / video in real time (and even better with iOS 5, but I can't say too much due to the NDA).
You should probably read most of the AVFoundation documentation to start with, because there is a lot going on. One method that might be of interest is this one:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
which allows you to work directly with blocks of data representing video frames coming from the camera. You can then modify this data to change the video, for example by adding additional content or pictures on top of the video frame. You can use OpenGL ES to do this processing.
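For example, here is a hedged sketch of implementing that delegate method, assuming you have configured an AVCaptureVideoDataOutput with a BGRA pixel format and set your object as its sample buffer delegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Lock the pixel buffer so we can read (or modify) the raw frame data.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    // ...upload baseAddress (width x height BGRA pixels) to an OpenGL ES
    // texture and run your shaders, or draw extra content on the frame here.

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}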

360 degree video in MPMoviePlayerController

I am trying to develop an iPhone application which needs to show a 360 degree video and rotate the video as per the phone's movement. How can I do this? Is it possible with a normal MPMoviePlayerController?
I don't think you can do this with a normal MPMoviePlayerController, but there are several libraries out there to achieve this. Have a look here:
PanoramaGL
Panorama 360
They work with OpenGL and you can embed them in your Objective-C code.
EDIT:
As @Mangesh Vyas kindly pointed out, those are intended for use with fixed images only. However, they might be a suitable starting point for embedding video as well if you modify the code accordingly. They already handle direction, the accelerometer etc., so you don't have to implement all that yourself.

iOS: compare a slice of an image to a library of options

I'm basically trying to work out how to take a slice of an image - say, a screenshot of an iPhone home screen - slice out the first icon, and compare it to a fixed array of images in a library. Any help on where to start?
I'm no iPhone programmer, but I might be able to suggest a few things:
The SURF feature detection implemented in OpenCV should help you with this.
There is a nice article on using OpenCV in Objective-C code.
A quick & dirty way might be to use the difference blend mode, which returns the difference between the first image (top) and the second image (bottom). If there is no difference, the result is completely black - so the more black pixels in the difference result, the more similar the compared images are likely to be.
I'm not an iOS developer, so I don't know if there is an image library that ships with the SDK or if there's a free/open-source library for basic image processing. Still, this should be trivial to implement:
e.g.
- (int)differenceBetween:(int)topPixel and:(int)bottomPixel
{
    return abs(topPixel - bottomPixel);
}
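Expanding that into something you could actually run on two UIImages - a hedged sketch, assuming both images have the same dimensions and forcing them into an RGBA bitmap layout:
#import <UIKit/UIKit.h>

// Returns the average per-channel difference (0 = identical images).
static double averageDifference(UIImage *top, UIImage *bottom)
{
    size_t w = CGImageGetWidth(top.CGImage);
    size_t h = CGImageGetHeight(top.CGImage);
    size_t byteCount = w * h * 4;

    uint8_t *topData = calloc(byteCount, 1);
    uint8_t *botData = calloc(byteCount, 1);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();

    // Render both images into raw RGBA buffers.
    CGContextRef topCtx = CGBitmapContextCreate(topData, w, h, 8, w * 4, cs,
        (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextRef botCtx = CGBitmapContextCreate(botData, w, h, 8, w * 4, cs,
        (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(topCtx, CGRectMake(0, 0, w, h), top.CGImage);
    CGContextDrawImage(botCtx, CGRectMake(0, 0, w, h), bottom.CGImage);

    // Sum the per-byte differences, exactly like the difference blend mode.
    unsigned long long total = 0;
    for (size_t i = 0; i < byteCount; i++) {
        total += abs(topData[i] - botData[i]);
    }

    CGContextRelease(topCtx);
    CGContextRelease(botCtx);
    CGColorSpaceRelease(cs);
    free(topData);
    free(botData);
    return (double)total / byteCount;
}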
HTH
This may not help you with taking a screenshot of the iOS home screen... But these articles show how to take snapshots from within a UIKit application:
https://developer.apple.com/library/prerelease/ios/#qa/qa1703/_index.html
https://developer.apple.com/library/prerelease/ios/#qa/qa1714/_index.html
Perhaps you could instruct the user to press the Home and Power buttons together to take a screenshot and store it in the camera roll, then load that screenshot into your app for processing.
Hope this helps!

iPhone Objective-C image manipulation

I am looking for a way to, in Objective-C, create a PNG from several smaller PNGs based on how the user sets things up. Is this possible using existing Apple classes, or do I need to use a 3rd party library? If 3rd party code is needed, can anyone recommend a good library? The simpler the better - simple filters (such as darkening/lightening the image) would be nice but not required.
Here is some pseudo-code, to give you a better idea of what I am looking for:
image = [myImageLibrary imageWithHeight:1024 width:768];
[image addImage:#"background.png" atX:0 andY:0 withRotation:0];
[image addImage:#"image2.png" atX:100 andY:200 withRotation:90];
[image saveAtLocation:#"output.png"];
In output.png we would see image2.png placed on top of background.png and rotated 90 degrees.
P.S. - I am sorry if this seems to be a duplicate of another question, I just have not found an answer that works for what I am trying to do.
Have you read the "Creating and Drawing Images" section of the Drawing and Printing Guide for iOS and the UIImage Class Reference docs?
What you're after is perfectly possible - with a well built class you could pretty much use that pseudo code as-is.
As a starter for ten, you could:
Create your own graphics context via UIGraphicsBeginImageContext.
Draw into that via the drawAtPoint: method of the UIImage class
Grab the resultant image via UIGraphicsGetImageFromCurrentImageContext (which returns a UIImage), then write its data out, e.g. as a PNG via UIImagePNGRepresentation.
In terms of steps 1 and 3, see the UIKit Function Reference for more info. Additionally, the imageWithCGImage:scale:orientation: method of the UIImage class may prove useful for performing transformations, etc. as a part of step 2.
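Putting those steps together, here is one way your pseudo-code might translate into real UIKit / Core Graphics calls - a sketch only; the 90-degree rotation is done with a context transform, and writing to the temporary directory is just for illustration:
// 1. Create a 768x1024 image context to compose into.
UIGraphicsBeginImageContext(CGSizeMake(768, 1024));
CGContextRef ctx = UIGraphicsGetCurrentContext();

// 2. Draw the background at the origin.
[[UIImage imageNamed:@"background.png"] drawAtPoint:CGPointZero];

// ...then the second image at (100, 200), rotated 90 degrees.
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, 100, 200);
CGContextRotateCTM(ctx, M_PI_2);
[[UIImage imageNamed:@"image2.png"] drawAtPoint:CGPointZero];
CGContextRestoreGState(ctx);

// 3. Grab the composed image and write it out as a PNG.
UIImage *output = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSString *path = [NSTemporaryDirectory()
    stringByAppendingPathComponent:@"output.png"];
[UIImagePNGRepresentation(output) writeToFile:path atomically:YES];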
You'll want to look at CGContextDrawImage to draw your images, using a custom bitmap context, and then save it out using UIGraphicsGetImageFromCurrentImageContext(). The rotation can be done by applying CGAffineTransforms to your CGContext.
More information on Core Graphics here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html