I'm using the OS X QTKit sample code from here: http://bit.ly/mAaHGI
I'd like to crop the video, both on the screen and the saved file, to simulate different aspect ratios. What is the best way to do this?
It's a bit more involved than just calling a crop method, but Core Video allows you to manipulate the video stream. You can find the Core Video Programming Guide here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/CoreVideo/CVProg_Intro/CVProg_Intro.html
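For example, assuming the sample delivers decompressed frames through a QTCaptureDecompressedVideoOutput delegate callback (an assumption on my part; adapt it to however the linked sample actually hands you frames), cropping each frame for on-screen display could look roughly like this sketch:

    - (void)captureOutput:(QTCaptureOutput *)captureOutput
      didOutputVideoFrame:(CVImageBufferRef)videoFrame
         withSampleBuffer:(QTSampleBuffer *)sampleBuffer
           fromConnection:(QTCaptureConnection *)connection
    {
        // Wrap the decoded frame in a CIImage so it can be cropped without copying pixels.
        CIImage *frame = [CIImage imageWithCVImageBuffer:videoFrame];

        // Example: crop a centered 16:9 band out of the incoming frame.
        CGRect extent = [frame extent];
        CGFloat croppedHeight = extent.size.width * 9.0 / 16.0;
        CGRect cropRect = CGRectMake(extent.origin.x,
                                     extent.origin.y + (extent.size.height - croppedHeight) / 2.0,
                                     extent.size.width,
                                     croppedHeight);
        CIImage *cropped = [frame imageByCroppingToRect:cropRect];

        // Hand `cropped` to whatever draws your preview. Cropping the saved file is a
        // separate step: you re-encode the cropped frames instead of the originals.
    }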
I am programming a media player using VLCKit. I want to grab a preview picture (thumbnail) of the video. How can I do that with VLCKit, or maybe with another tool?
P.S. I've already tried AVFoundation and QTKit, but they didn't work; they complain about the video format (.mkv).
You want to use VLCKit's thumbnailer class. It does everything for you.
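Roughly like this (a sketch only; the wrapper class name is mine, and the exact delegate method names can vary between VLCKit versions, so check the headers of the version you ship):

    #import <VLCKit/VLCKit.h>

    @interface MyThumbnailer : NSObject <VLCMediaThumbnailerDelegate>
    @property (nonatomic, strong) VLCMediaThumbnailer *thumbnailer; // keep a strong reference
    @end

    @implementation MyThumbnailer

    - (void)fetchPreviewForURL:(NSURL *)url
    {
        VLCMedia *media = [VLCMedia mediaWithURL:url]; // .mkv is fine here
        self.thumbnailer = [VLCMediaThumbnailer thumbnailerWithMedia:media andDelegate:self];
        [self.thumbnailer fetchThumbnail];
    }

    - (void)mediaThumbnailer:(VLCMediaThumbnailer *)mediaThumbnailer
          didFinishThumbnail:(CGImageRef)thumbnail
    {
        // Wrap the CGImageRef in a UIImage/NSImage and show or save it.
    }

    - (void)mediaThumbnailerDidTimeOut:(VLCMediaThumbnailer *)mediaThumbnailer
    {
        // Decoding took too long; retry or fall back to a placeholder image.
    }

    @end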
I wrote code that loops through a video feed of a computer screen and recognizes certain PNG images by looping through pixels. I get 60fps with 250% CPU usage (1280x800 video feed). The code is a blend of Objective-C and C++.
I'm trying to find a faster alternative. Can Core Image detect instances of an image within another image and give me the pixel location? If not, is OpenCV fast enough to do that kind of processing at 60fps?
If Core Image and OpenCV aren't the correct tools, is there another tool that would be better suited?
(I haven't found any documentation showing that Core Image can do what I need, so I am trying to get an OpenCV demo working to benchmark.)
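(For reference, the core of the OpenCV demo I'm benchmarking is just template matching. A minimal Objective-C++ sketch, with my own helper name and the assumption that the frame and the PNG are already in cv::Mat form, would be:)

    #import <opencv2/opencv.hpp>

    // Find the best match for `templ` (the PNG) inside `frame` (the screen capture).
    static cv::Point FindTemplate(const cv::Mat &frame, const cv::Mat &templ, double *score)
    {
        cv::Mat result;
        cv::matchTemplate(frame, templ, result, cv::TM_CCOEFF_NORMED);

        double minVal, maxVal;
        cv::Point minLoc, maxLoc;
        cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

        if (score) *score = maxVal;  // close to 1.0 means a near-exact match
        return maxLoc;               // top-left corner of the best match, in pixels
    }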
I am trying to develop an iPhone application which needs to show a 360-degree video and rotate the video as the phone moves. How can I do this? Is it possible to do this with a normal MPMoviePlayerController?
I don't think you can do this with a normal MPMoviePlayerController, but there are several libraries out there to achieve this. Have a look here:
PanoramaGL
Panorama 360
They work with OpenGL and you can embed them in your Objective-C code.
EDIT:
As @Mangesh Vyas kindly pointed out, those are intended for use with fixed images only. However, they might be a suitable starting point for embedding video as well, if you modify the code accordingly. They already handle the direction, accelerometer, etc., so you don't have to implement all of that yourself.
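For the motion-tracking part specifically, a bare-bones sketch with CMMotionManager looks something like the following; the panoramaView call is a placeholder for whichever camera/viewport API the library you pick actually exposes:

    #import <CoreMotion/CoreMotion.h>

    - (void)startTrackingMotion
    {
        // Assumes a CMMotionManager property on this controller.
        self.motionManager = [[CMMotionManager alloc] init];
        self.motionManager.deviceMotionUpdateInterval = 1.0 / 60.0;

        [self.motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue]
                                                withHandler:^(CMDeviceMotion *motion, NSError *error) {
            if (!motion) return;
            double yaw   = motion.attitude.yaw;   // rotation around the vertical axis, radians
            double pitch = motion.attitude.pitch; // tilt up/down, radians

            // Placeholder: map the device attitude onto the panorama's viewing angle.
            [self.panoramaView setHorizontalAngle:yaw verticalAngle:pitch];
        }];
    }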
I am working on a project where I would like to open a video (on a Mac) with QTKit. That part I can do no problem, but as I am playing it, I would like to edit or modify the video on the fly using OpenGL.
From what I understand, I should be able to intercept the frames and change them before it hits the display, but no matter what I do, I cannot seem to do so.
It sounds like you should have a look at Core Video and the display link mechanism.
You can basically get a callback on a high priority thread with the decoded frame in a CVImageBuffer and do whatever you like with it (including packing it up as a texture for OpenGL processing and display).
Apple provides documentation and demo code snippets on the developer sites.
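As a rough outline only (not a complete sample): create a display link, register a callback, and in the callback turn the most recent decoded frame into an OpenGL texture. The class name, how _latestFrame gets filled in (a QuickTime visual context, a capture output, ...) and how you draw the texture are all placeholders for your own pipeline:

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGL/OpenGL.h>

    static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink,
                                          const CVTimeStamp *now,
                                          const CVTimeStamp *outputTime,
                                          CVOptionFlags flagsIn,
                                          CVOptionFlags *flagsOut,
                                          void *context)
    {
        // Runs on a high-priority background thread.
        // Non-ARC cast shown; use __bridge under ARC. MyMovieView is your NSOpenGLView subclass.
        MyMovieView *view = (MyMovieView *)context;
        [view renderFrameForTime:outputTime];
        return kCVReturnSuccess;
    }

    - (void)startDisplayLink
    {
        CVDisplayLinkCreateWithActiveCGDisplays(&_displayLink);
        CVDisplayLinkSetOutputCallback(_displayLink, &MyDisplayLinkCallback, (void *)self);
        CVDisplayLinkStart(_displayLink);
    }

    - (void)renderFrameForTime:(const CVTimeStamp *)outputTime
    {
        // _latestFrame is the newest decoded CVImageBufferRef; _textureCache is a
        // CVOpenGLTextureCacheRef created once for your OpenGL context.
        CVOpenGLTextureRef texture = NULL;
        CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache,
                                                   _latestFrame, NULL, &texture);
        // Bind CVOpenGLTextureGetName(texture)/CVOpenGLTextureGetTarget(texture),
        // run your processing and draw, then clean up.
        CVOpenGLTextureRelease(texture);
        CVOpenGLTextureCacheFlush(_textureCache, 0);
    }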
In my application I have a color video which I want to make as black and white video. Is there any framework that supports this in iOS? If so, how to implement that effect on color video?
Can anyone help in this regard?
A bit late here, but you should check out OpenCV.
It can be compiled as a static library for the iPhone and can convert video (frame by frame) to grayscale or two-color.
You can start here. There are other resources for doing this floating around.
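The per-frame conversion itself is a single call. A minimal Objective-C++ sketch is below; getting each video frame's pixels into and out of a cv::Mat (and back on screen or into a file) is the iOS-specific part you still have to wire up:

    #import <opencv2/opencv.hpp>

    // Convert one BGRA video frame to grayscale, then back to BGRA so the rest
    // of the pipeline still sees a 4-channel image.
    static cv::Mat ConvertFrameToGrayscale(const cv::Mat &bgraFrame)
    {
        cv::Mat gray, grayBGRA;
        cv::cvtColor(bgraFrame, gray, cv::COLOR_BGRA2GRAY);
        cv::cvtColor(gray, grayBGRA, cv::COLOR_GRAY2BGRA);
        return grayBGRA;
    }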
Hi Lakshmi, keep in mind that a video clip is made up of at least 24 frames per second, so you would need a tool like a movie editor that lets you step through the video frame by frame and edit each frame.
See the open-source framework GPUImage: https://github.com/BradLarson/GPUImage/
The GPUImageGrayscaleFilter is what you want.
The framework includes sample code, so you can easily learn how to use GPUImageGrayscaleFilter in your app.
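A short sketch of the whole chain, filtering a movie for display and optionally writing the result to a new file (the URLs, the GPUImageView outlet and the output size are placeholders):

    #import "GPUImage.h"

    GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:inputMovieURL];
    GPUImageGrayscaleFilter *grayscaleFilter = [[GPUImageGrayscaleFilter alloc] init];
    [movieFile addTarget:grayscaleFilter];

    // On-screen preview: a GPUImageView already placed in your view hierarchy.
    [grayscaleFilter addTarget:filterView];

    // Optionally also save the filtered video to a new file.
    GPUImageMovieWriter *movieWriter =
        [[GPUImageMovieWriter alloc] initWithMovieURL:outputMovieURL
                                                 size:CGSizeMake(640.0, 480.0)];
    [grayscaleFilter addTarget:movieWriter];

    [movieWriter startRecording];
    [movieFile startProcessing];
    // When processing finishes, call [movieWriter finishRecording].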