Best way to play video frame by frame - zoomed - objective-c

I need some advice on playing video frame by frame...
Right now I shoot a video and extract all of the frames using MPMoviePlayerController's thumbnailImageAtTime: for each frame.
The video could be zoomed as well. I am zooming by extracting the frames as mentioned above, then resizing and cropping them.
This would be great except that thumbnailImageAtTime: seems to be very slow. My videos will be less than 30 seconds long...most of the time only a few seconds...and it takes about 20 seconds on an iPhone 4S to grab 60 frames. If you think this should be faster I can post the full code I am using, but it is pretty straightforward; roughly, it is the loop sketched below. I am performing it on a background thread so the UI is not affected.
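(A simplified sketch only; 'player' and 'frameRate' here are placeholders, not my exact code.)
#import <MediaPlayer/MediaPlayer.h>
- (NSArray *)extractFramesFromPlayer:(MPMoviePlayerController *)player frameRate:(double)frameRate {
    NSMutableArray *frames = [NSMutableArray array];
    NSTimeInterval step = 1.0 / frameRate; // e.g. 1/30 s between frames
    for (NSTimeInterval t = 0; t < player.duration; t += step) {
        // MPMovieTimeOptionExact is what makes this accurate...and slow.
        UIImage *frame = [player thumbnailImageAtTime:t timeOption:MPMovieTimeOptionExact];
        if (frame) [frames addObject:frame];
    }
    return frames;
}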
I have been looking at AVFoundation to grab the frames, but have read that its frame extraction is not exact, and I need all 30 fps.
I am really looking for advice on the best way to do this. I need to be able to use a slider and buttons to move backwards and forwards frame by frame, as well as jump to a specific frame. As I said, the video might be digitally zoomed as well.
Should I not extract frames and instead just use the video file and move from frame to frame? If so, what is the best way to do this, because MPMoviePlayerController doesn't seem to let me move to an exact frame easily? Also, if I just use the video file, what is the best way to zoom? Can I go through each frame of an asset, resize and crop it, then save it back to the video file? Is this the best way? Can I achieve everything I want to do using AVFoundation?
I have been trying things for about a week now, and I do have everything working by extracting the frames with MPMoviePlayerController...the speed is just unacceptable. If I could extract the frames very quickly, that solution would be the best in my opinion. I might mention I only have to extract the frames once, not each time the user clicks on the video...if that makes a difference.
I hope this all makes sense and sorry for rambling. Any help would be much appreciated!

After a bit of research I have gone with AVFoundation to play the video frame by frame rather than extracting the frames. It works great.
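In case it helps anyone else, the core of it is AVPlayerItem's stepping API plus zero-tolerance seeking. This is only a minimal sketch: self.player and self.playerItem are assumed properties set up from the recorded file's AVURLAsset, and error handling plus my zoom/crop are left out.
#import <AVFoundation/AVFoundation.h>
- (void)stepByFrames:(NSInteger)count {
    // The next/previous buttons call this with +1 / -1.
    BOOL canStep = (count > 0) ? self.playerItem.canStepForward : self.playerItem.canStepBackward;
    if (canStep) [self.playerItem stepByCount:count];
}
- (void)seekToFrame:(int64_t)frameIndex {
    // The slider calls this; zero tolerance makes the seek land on the exact
    // frame rather than on a nearby keyframe.
    AVAssetTrack *track = [[self.playerItem.asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    int32_t fps = (int32_t)(track.nominalFrameRate + 0.5f); // roughly 30 for my videos
    [self.player seekToTime:CMTimeMake(frameIndex, fps)
            toleranceBefore:kCMTimeZero
             toleranceAfter:kCMTimeZero];
}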

Related

How to change aspect ratio of photos taken using AVFoundation?

I am using AVFoundation to take pictures instead of UIImagePickerController because of how customizable the user interface presented to the user can be. When I use it, the pictures are saved with the same aspect ratio as the iPhone's video feed. What I want is for the pictures to be saved in the same aspect ratio as normal photos.
The way I am currently approaching this is to overlay a black bar over the excess preview area and then just crop the photo after saving it as an image.
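The crop step is roughly this (a simplified sketch, not my exact code: the 4:3 target ratio and the center crop are just what I happen to want, and it assumes the image orientation is already 'up'):
#import <UIKit/UIKit.h>
- (UIImage *)cropImageToFourByThree:(UIImage *)captured {
    CGFloat targetRatio = 4.0 / 3.0;
    CGSize size = captured.size;
    CGRect cropRect;
    if (size.width / size.height > targetRatio) {
        // Too wide for 4:3 -- trim the sides.
        CGFloat newWidth = size.height * targetRatio;
        cropRect = CGRectMake((size.width - newWidth) / 2.0, 0.0, newWidth, size.height);
    } else {
        // Too tall -- trim the top and bottom.
        CGFloat newHeight = size.width / targetRatio;
        cropRect = CGRectMake(0.0, (size.height - newHeight) / 2.0, size.width, newHeight);
    }
    CGImageRef croppedRef = CGImageCreateWithImageInRect(captured.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:captured.scale
                                     orientation:captured.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}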
However, this feels very crude. I assume it is common to use AVFoundation as a way of taking photos, so I must be missing something!
I have used this example code, and I have read through the AVFoundation documentation, but I can only assume that I am missing a function. I have also read through similar questions that describe how I might go about cropping images, but that isn't really my concern.
On the other hand, if there is no standard way to do this, please do let me know so that I can stop worrying that I am approaching it in a convoluted way.
Also, I am using Objective-C so if answers contain code, please could you use the same language?

How do I process video frames in HTML5 quickly?

I am testing HTML5's video API. The plan is to have a video play with an effect, like making it black and white. I have a <video> element and a <canvas> working together using a buffer: I take the current video frame and copy it to the scratch buffer, where I can process it. The problem is the rate at which it runs.
HTML5's video API has the 'timeupdate' event. I tried using this to have the handler process frames, once for every frame, but it fires at a slower rate than the video plays.
Any ideas to speed up processing frames?
You can get much more frequent redraws by using requestAnimationFrame to determine when to update your canvas, rather than relying on timeupdate, which only fires every 200-250ms. That's definitely not enough for frame-accurate animation. requestAnimationFrame will fire at most every 16ms (approximately 60fps), but the browser will throttle it as necessary and keep it in sync with the display's repaint cycle. It's pretty much exactly what you want for this sort of thing.
Even with higher frame rates, processing video frames with a 2D canvas is going to be pretty slow. For one thing, you're processing every pixel sequentially in the CPU, running Javascript. The other problem is that you're copying around a lot of memory. There's no way to directly access pixels in a video element. Instead, you have to copy the whole frame into a canvas first. Then, you have to call getImageData, which not only copies the whole frame a second time, but it also has to allocate the whole block of memory again, since it creates a new ImageData every time. Would be nice if you could copy into an existing buffer, but you can't.
It turns out you can do extremely fast image processing with WebGL. I've written a library called Seriously.js for exactly this purpose. Check out the wiki for a FAQ and tutorial. There's a Hue/Saturation plugin you can use - just drop the saturation to -1 to get your video to grayscale.
The code will look something like this:
var composition = new Seriously();
var effect = composition.effect('hue-saturation');
var target = composition.target('#mycanvas');
effect.source = '#myvideo';
effect.saturation = -1;
target.source = effect;
composition.go();
The big downside of using WebGL is that not every browser or computer will support it - Internet Explorer is out, as is any machine with old or weird video drivers. Most mobile browsers don't support it. You can get good stats on it here and here. But you can get very high frame rates on pretty large videos, even with much more complex effects.
(There is also a small issue with a browser bug that, oddly enough, shows up in both Chrome and Firefox. Your canvas will often be one frame behind the video, which is only an issue if the video is paused, and is most egregious if you're skipping around. The only workaround seems to be to keep forcing updates, even if your video is paused, which is less efficient. Please feel free to vote those tickets up so they get some attention.)

animation in game application in Android

I am working on image animation. I have 200 transparent PNG images which I am trying to show one by one over a background image.
Can you tell me the best way to do it? The images should change in such a way that it appears a cartoon is running.
thanx
pavan
Did you end up finding a solution for this? I would be interested in knowing. If you do not need to animate quicker than every 300ms, then this may work for you; see my post:
See: 750 frame transparent PNG animation in ImageView at 23fps
Also, using AnimationDrawable could be an option if you can split your cartoon up into small pieces, 40-50 frames at a time, and then play them one after the other.
I'm still looking for a better solution to this, so I would be interested to see another way.

iOS app question: video effects

I am trying to piece together a solution to let users take and edit videos in an app. I have seen the 8mm app and am wondering how they did it... and made it so smooth.
At first I was thinking the effects might have been a series of PNGs played in sequence, like an animated GIF, and then placed on top of the real video, but I am at a loss as to how they merge the images with the video. Also, the app is so smooth that I think it has to be using some low-level Core Media framework, but I am not sure.
Any ideas or advice on where to begin?
Thanks
AVFoundation combined with OpenGL ES 2.0 (with shaders) provides great performance for adding effects to the camera feed or video in real time (and it gets even better with iOS 5, but I can't say too much due to the NDA).
You should probably read most of the AVFoundation documentation to start with, because there is a lot going on. One method that might be of interest is this one:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
which allows you to work directly with blocks of data representing video frames coming from the camera. You can then modify this data to change the video, for example by adding additional content or pictures on top of the video frame. You can use OpenGL ES to do this processing.
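To give a rough idea of the plumbing (a sketch only: the session, the queue, and the BGRA pixel format are assumptions you will want to adapt, and the class is assumed to adopt AVCaptureVideoDataOutputSampleBufferDelegate):
#import <AVFoundation/AVFoundation.h>
- (void)attachVideoOutputToSession:(AVCaptureSession *)session {
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // BGRA is convenient for uploading frames to an OpenGL ES texture.
    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("video.frames", NULL)];
    if ([session canAddOutput:videoOutput]) {
        [session addOutput:videoOutput];
    }
}
// Called once per captured frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Hand pixelBuffer to an OpenGL ES texture, run the effect shader, then
    // display it and/or append the result to an AVAssetWriter to save the movie.
}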

QTKit capture: what frame size to use?

I am writing a simple video messenger-like application, and therefore I need to get frames of some compromise size so they fit into the available bandwidth while still keeping the captured image undistorted.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I just want "raw bitmaps".)
The problem is that with these new iSight cameras I am getting huge frames.
Luckily, the classes for capturing raw frames (such as QTCaptureVideoPreviewOutput) provide the setPixelBufferAttributes method, which lets me specify what kind of frames I would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched or shrunk), and most likely a non-proportional one.
I have been searching through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve the camera's native frame sizes". Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings, and the user has to try them to see what works for him? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually generate?
For testing my application I am using a UVC-compliant camera, but not an iSight. I assume not every user is using an iSight either, and I am sure that even different iSight models have different frame dimensions.
Or maybe I should switch the camera to its default mode, capture a few frames, see what sizes it generates, and at least then I will have the right proportions? This looks like a real hack and doesn't seem natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. What approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera into a different mode by setting pixel buffer attributes; more likely you are setting how the frames are processed in the QTCaptureVideoPreviewOutput. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
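For example, something along these lines when setting up the decompressed output (the 640x480 size and the BGRA format are just illustrative values; QTKit will decompress and scale the frames to match whatever attributes you ask for):
#import <QTKit/QTKit.h>
- (void)addDecompressedOutputToSession:(QTCaptureSession *)session {
    QTCaptureDecompressedVideoOutput *output = [[QTCaptureDecompressedVideoOutput alloc] init];
    NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:640], (id)kCVPixelBufferWidthKey,
        [NSNumber numberWithInt:480], (id)kCVPixelBufferHeightKey,
        [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
        nil];
    [output setPixelBufferAttributes:attributes];
    [output setDelegate:self];
    NSError *error = nil;
    [session addOutput:output error:&error];
}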
We also use the sample buffer to get the frame size. So, I would not say it's a hack.
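Concretely, in the didOutputVideoFrame callback (with a decompressed output and the usual pixel formats, the image buffer is a CVPixelBuffer, so the cast below is safe):
- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection {
    // Read the dimensions the camera is actually delivering and use them to
    // pick sensible proportions for the encoder.
    size_t width = CVPixelBufferGetWidth((CVPixelBufferRef)videoFrame);
    size_t height = CVPixelBufferGetHeight((CVPixelBufferRef)videoFrame);
    NSLog(@"camera is delivering %zux%zu frames", width, height);
}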
A more natural way would be to make your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. That would be the proper way, but also a hard one.