I am using AVFoundation to take pictures instead of UIImagePicker because of how customizable the user interface can be. When I use it, pictures are saved with the same aspect ratio as the iPhone's video feed. What I want is for the pictures to be saved in the same aspect ratio as normal photos.
The way I am currently approaching this is to overlay a black bar over the excess preview area and then crop the photo after saving it as an image.
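For reference, the crop step I do afterwards looks roughly like this (a minimal sketch; it assumes a 4:3 target ratio and that the captured CGImage is wider than that, so only the width gets trimmed):

// Hypothetical helper: crop a captured image to a 4:3 photo aspect ratio.
// Assumes the source is wider than 4:3 (video-feed shaped), so only the
// width is trimmed, centred horizontally.
- (UIImage *)cropImageToPhotoAspectRatio:(UIImage *)image
{
    CGImageRef cgImage = image.CGImage;
    CGFloat width  = CGImageGetWidth(cgImage);
    CGFloat height = CGImageGetHeight(cgImage);

    CGFloat targetWidth = height * (4.0 / 3.0);
    CGRect cropRect = CGRectMake((width - targetWidth) / 2.0, 0.0,
                                 targetWidth, height);

    CGImageRef croppedRef = CGImageCreateWithImageInRect(cgImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}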
However, this feels very crude. I assume that using AVFoundation to take photos is common, so I must be missing something!
I have used this example code, and I have read through the AVFoundation documentation, but I can only assume that I am missing a function. I have also read through similar questions that describe how I might go about cropping images, but that isn't really my concern.
On the other hand, if there is no standard way to do this, please do let me know so that I can stop worrying that I am approaching it in a convoluted way.
Also, I am using Objective-C, so if answers contain code, could you please use the same language?
Related
I'm building an iPhone photo app. I don't know how to achieve this kind of effect on an image:
When the user drags a finger across the arrow, I want the image to be distorted accordingly. How can I achieve this? Is there any framework that makes this process simple?
Thank you.
No clue about iOS, but the look reminds me of a thin-plate spline warp. It's quite easy to implement in OpenGL; a quick Google search for example code returns plenty of hits.
From the sound of it, I think you're looking for the CoreImage framework. Look into the various distortion effects, in particular CIBumpDistortionLinear.
I don't have any code, so check out this tutorial and read up on CoreImage.
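As a rough idea, applying that filter to a UIImage looks something like the sketch below; the center/radius/scale/angle values are placeholders you would drive from the user's drag gesture.

- (UIImage *)bumpDistortedImageFromImage:(UIImage *)image
{
    // Wrap the UIImage in a CIImage so Core Image can work on it.
    CIImage *input = [CIImage imageWithCGImage:image.CGImage];

    // Placeholder parameter values; tie these to the drag gesture.
    CIFilter *filter = [CIFilter filterWithName:@"CIBumpDistortionLinear"];
    [filter setValue:input forKey:@"inputImage"];
    [filter setValue:[CIVector vectorWithX:150 Y:150] forKey:@"inputCenter"];
    [filter setValue:[NSNumber numberWithFloat:200.0f] forKey:@"inputRadius"];
    [filter setValue:[NSNumber numberWithFloat:0.5f] forKey:@"inputScale"];
    [filter setValue:[NSNumber numberWithFloat:0.0f] forKey:@"inputAngle"];

    // Render the filtered result back into a UIImage.
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:filter.outputImage
                                       fromRect:[input extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}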
So I am working on an app that will be taking pictures. I am using OpenCV to do things like detect that people are present in the picture. However, what I would like to be able to do is detect whether the picture is "good".
I am not sure exactly how to approach it but some ideas are:
I want it to either detect whether a picture follows the rule of thirds; if it does not, it could adjust the camera (see the sketch below).
Or to have different types of pictures, so that it sometimes takes a picture of a group and other times individual shots.
I think it may be easiest to take the picture and then try to crop it to meet one of these requirements? But I was wondering if there is any API or something that would be helpful for this?
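For the rule-of-thirds idea mentioned above, the check I have in mind is plain geometry, something like this sketch (subjectRect is assumed to come from the OpenCV detection step, and the tolerance is arbitrary):

// Sketch: does a detected subject sit near one of the four rule-of-thirds
// intersection points? subjectRect would come from the OpenCV detection.
static BOOL SubjectFollowsRuleOfThirds(CGRect subjectRect, CGSize imageSize, CGFloat tolerance)
{
    CGPoint center = CGPointMake(CGRectGetMidX(subjectRect), CGRectGetMidY(subjectRect));

    CGFloat xs[2] = { imageSize.width  / 3.0, 2.0 * imageSize.width  / 3.0 };
    CGFloat ys[2] = { imageSize.height / 3.0, 2.0 * imageSize.height / 3.0 };

    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            CGFloat dx = center.x - xs[i];
            CGFloat dy = center.y - ys[j];
            if (dx * dx + dy * dy <= tolerance * tolerance) {
                return YES;
            }
        }
    }
    return NO;
}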
In iOS, I'd like to have a series of items in "space", similar to the way Time Machine works. The "space" would be navigated by a scroll-bar-like feature on the side of the page. So if the person scrolls up, it would essentially zoom in, and objects that were further away would come closer to the reference point. If one zooms out, those objects would fade into the back and whatever is behind the frame of reference would come into view. Kind of like this.
I'm open to a variety of solutions. I imagine there's a relatively easy solution in OpenGL; I just don't know where to begin.
Check out Nick Lockwood's iCarousel on GitHub. It's a very good component. The example code he provides uses a custom carousel style very much like what you describe. You should get there with just a few tweaks.
As you said, it is relatively easy to accomplish what you ask in OpenGL (ES); however, it may not be equally easy to explain to someone who is not confident with OpenGL :)
First of all, I would suggest taking a look at The Red Book, the reference guide to OpenGL, or at the OpenGL Wiki.
To begin, you could do some practice using GLUT; it will help you gain confidence with OpenGL by providing a high-level API that lets you skip the boring side of setting up an OpenGL context and go directly to the drawing part.
OpenGL ES is a subset of OpenGL, so it essentially has the same structure. Once you understand how to use OpenGL, it shouldn't be difficult to use OpenGL ES. Of course, Apple's documentation is a very important resource.
Once you know OpenGL reasonably well, you should be able to easily see how your program should be structured.
You could, for example, keep your viewpoint fixed and translate the world (or vice versa). There is not (of course) a universal solution, especially because the only thing that matters is the final result.
Another solution (maybe equally good, depending on your needs) would be to simply scale images (representing the objects of your world) up and down to simulate movement through the scene.
For example, you could use an array to store all of your images and a slider to increase or decrease the size of the current image. Once the image becomes too large for the display, you could gradually decrease its alpha so that the image behind it slowly appears. Take a look at the UIImageView reference; it contains all the APIs you need for this.
This approach loses true 3-dimensionality, but it's probably a simpler and faster solution than learning OpenGL.
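A minimal sketch of that slider-driven scale/fade idea (self.imageViews is just a placeholder for an array of stacked UIImageViews, front-most first; the zoom and fade thresholds are arbitrary):

// Sketch of the slider-driven zoom described above.
- (IBAction)sliderChanged:(UISlider *)slider
{
    // Map the slider (0..1) onto a zoom factor for the front image.
    CGFloat zoom = 1.0 + slider.value * 4.0;
    CGFloat depth = 1.0;   // each deeper image starts out smaller

    for (UIImageView *view in self.imageViews) {
        CGFloat scale = zoom * depth;
        view.transform = CGAffineTransformMakeScale(scale, scale);

        // Once an image grows well past the display, fade it out so the
        // image behind it gradually appears.
        view.alpha = (scale > 2.0) ? MAX(0.0, 3.0 - scale) : 1.0;

        depth /= 1.5;
    }
}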
I'm basically trying to work out how to take a slice of an image, say a screenshot of an iPhone home screen, slice out the first icon, and compare it against a fixed set of images in a library. Any help on where to start?
I'm no iPhone programmer, but I might be able to suggest a few things:
The SURF feature detection implemented in OpenCV should help you with this.
There is a nice article on using OpenCV in Objective-C code.
A quick & dirty way might be to use the difference blend mode, which should return the difference between the 1st image (top) and the 2nd image (bottom). If there is no difference, the result will be completely black. So, the more black pixels in the difference result, potentially, the more similarities between the compared images.
I'm not an iOS developer, so I don't know if there is an image library that ships with the SDK or if there's a free/open-source library for basic image processing. Still, this should be trivial to implement:
e.g.
// Per-channel difference between corresponding pixels of the two images
- (int)differenceBetweenTopPixel:(int)topPixel bottomPixel:(int)bottomPixel
{
    return abs(topPixel - bottomPixel);
}
Note: Syntax might not be correct :)
HTH
This may not help you with taking a screenshot of the iOS home screen... But these articles show how to take snapshots from within a UIKit application:
https://developer.apple.com/library/prerelease/ios/#qa/qa1703/_index.html
https://developer.apple.com/library/prerelease/ios/#qa/qa1714/_index.html
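If I remember those Q&As correctly, the gist is rendering the window's layer into an image context, roughly:

#import <QuartzCore/QuartzCore.h>

// Render the key window's layer into a bitmap context and read it back
// as a UIImage (the approach those Q&As describe).
- (UIImage *)snapshotOfKeyWindow
{
    UIWindow *window = [[UIApplication sharedApplication] keyWindow];

    UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, 0.0);
    [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return snapshot;
}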
Perhaps you could instruct the user to press the Home and Power buttons to take a screenshot, which is saved to the camera roll, and then load that screenshot into your app for processing.
Hope this helps!
I am writing a simple video-messenger-like application, and therefore I need to get frames at some compromise size that fits into the available bandwidth while keeping the captured image undistorted.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I would just like to get "raw bitmaps".)
The problem is that for these new iSight cameras I am getting huge frames.
Luckily, these classes for capturing raw frames (QTCaptureVideoPreviewOutput) provide the setPixelBufferAttributes method, which lets me specify what kind of frames I would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched or shrunk) and, most likely, one with the wrong proportions.
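To illustrate, the attributes I am setting look roughly like this (a sketch; 640x480 is just a guess at a size the camera might support, and videoPreviewOutput is the QTCaptureVideoPreviewOutput instance already in use):

// A sketch: ask the output for frames of a specific size and pixel format.
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:640], (id)kCVPixelBufferWidthKey,
    [NSNumber numberWithInt:480], (id)kCVPixelBufferHeightKey,
    [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
    nil];
[videoPreviewOutput setPixelBufferAttributes:attributes];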
I have been searching through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve the camera's native frame sizes". Well, there is nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings and have the user try them to see what works? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
For testing my application I am using a UVC-compliant camera, not an iSight. I assume not every user is using an iSight either, and I am sure that even different iSight models have different frame dimensions.
Or maybe I should switch the camera to its default mode, capture a few frames, see what sizes it generates, and at least then I will have the right proportions? This feels like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. What approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes. More likely you are just setting how QTCaptureVideoPreviewOutput processes the frames. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
We also use the sample buffer to get the frame size. So, I would not say it's a hack.
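For example, the native size can be read straight off the frames in the delegate callback (a sketch):

// Sketch: read the native frame size off the frames the camera delivers.
- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    size_t width  = CVPixelBufferGetWidth(videoFrame);
    size_t height = CVPixelBufferGetHeight(videoFrame);
    NSLog(@"Camera delivered a %zux%zu frame", width, height);
    // ... use width/height to pick a proportional target size ...
}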
A more natural way would be to make your own QuickTime component that implements your custom encoding algorithm. In that case, QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. That would be the proper, but also the harder, way.