I have seen a sample that uses a video tag to send image data from the webcam into a WebGL texture. It needs to create a video tag, and on every frame it has to check for a new frame and update the texture.
That doesn't sound very efficient. I'm curious whether we can just use the stream from getUserMedia directly as the texture source, or whether there is any other way to let a shader access the webcam as a texture without creating a video tag.
Or is that not possible?
Yes, there are plenty of examples: https://www.chromeexperiments.com/webcam-input,webgl?page=0.
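In practice, the common pattern still uses a video element as the decoder, but that element never has to be attached to the DOM. Here is a minimal sketch, assuming an existing WebGL context gl and an already-created and bound 2D texture; the variable names and the use of the newer navigator.mediaDevices API are my assumptions, not taken from the linked page:

    // Minimal sketch: the video element only decodes the stream; it is never added to the DOM.
    var video = document.createElement('video');
    video.autoplay = true;
    video.muted = true;

    navigator.mediaDevices.getUserMedia({ video: true })
      .then(function (stream) {
        video.srcObject = stream;          // feed the webcam stream straight into the element
        return video.play();
      })
      .then(function () {
        requestAnimationFrame(update);
      });

    function update() {
      // WebGL accepts a video element directly as a texture source,
      // so each frame is just a texImage2D upload of the current webcam frame.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
      // ... draw the scene with the texture here ...
      requestAnimationFrame(update);
    }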
I have some code that uses CreateJS/EaselJS to create a MovieClip that contains a Tween that contains an mp4 video. MovieClip has a method called gotoAndPlay that you can use to move the playhead to a certain frame number. When I use this method to change the play position, the other tweens work, but the Tween that contains the mp4 movie does not: it results in a blank video tag on the page, except for the first play-through of the clip. Once the mp4 video has been played, it won't play again if the playhead is moved to it through gotoAndPlay. Any ideas on how to fix this, or what might be going wrong?
In ActionScript animations, FLV movies can be locked to the timeline. But in HTML5 Canvas animations, MP4 movies are not really fully-fledged "Animate" objects. They look the same for the most part, but the integration is not as tight as it was in Flash.
Since the videos exist outside of the Canvas, you'll need to use jQuery or JavaScript to address them. This can be done by using the Code Snippets in the HTML5 Canvas - Components - Video folder.
As an advance warning, "seeking" to different locations in an MP4 video the way you describe is not as reliable as it was in Flash. Browsers like Internet Explorer don't handle seeking well and will likely crash. If frame-by-frame accuracy is important, you may get the best visual results by avoiding the video component and converting your movie to an actual MovieClip in Animate CC, though this will increase your file size significantly.
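If you do stay with the video component, a rough sketch of the "address the video with JavaScript" idea above might look like the following. The frame number, the exportRoot name and the "myVideo" element id are hypothetical and depend on how your document is published; they are not from the original answer.

    // Hypothetical sketch: restart the underlying <video> element whenever the
    // timeline is sent back to the frame that holds it, since the Canvas timeline
    // does not control the MP4 by itself.
    function goToVideoSection() {
      exportRoot.gotoAndPlay(30);                    // move the Canvas timeline as usual
      var vid = document.getElementById('myVideo');  // the DOM element behind the video component
      if (vid) {
        vid.currentTime = 0;                         // rewind so it can play a second time
        vid.play();                                  // restart playback manually
      }
    }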
I am using the HTML5 Canvas to render video, but the rendering is taking a huge amount of CPU. I am using GtkLauncher (with WebKit 1.8.0) to render the video on the Canvas.
Can someone please shed some light on this? Is video rendering on the Canvas not efficient for embedded systems?
I would also like to know whether there is a way, with the HTML5 video tag, to find out the video's frame rate before I actually start to render the data onto the Canvas. I need this because I would have to set the timer (used for drawing the video frames) to that same frame rate.
Thanks and Regards,
Souvik
Most likely the video rendering is not hardware accelerated and needs to
- decode the video in software
- resize the frames in software
You did not give any system details, so this is just a guess. By poking around in the browser internals you can dig out the truth.
The video frame rate cannot be known beforehand, and in theory it can vary within a single source. However, if you host the file yourself, you can pre-extract this information using tools like ffmpeg and transfer the number in a side band (e.g. using AJAX / JSON).
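As a rough sketch of that idea (the /video-meta.json URL, its { "fps": 25 } shape and the element ids are assumptions; the number itself would be extracted offline with ffmpeg/ffprobe):

    // Fetch the pre-extracted frame rate and drive the canvas drawing timer with it.
    var video = document.getElementById('source-video');
    var canvas = document.getElementById('target-canvas');
    var ctx = canvas.getContext('2d');

    fetch('/video-meta.json')                         // side-band delivery of the frame rate
      .then(function (response) { return response.json(); })
      .then(function (meta) {
        video.play();
        setInterval(function () {
          // Draw the current video frame at the source's own rate
          // instead of guessing a timer interval.
          ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        }, 1000 / meta.fps);
      });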
I am working on a project where I would like to open a video (on a Mac) with QTKit. That part I can do without a problem, but as the video is playing, I would like to edit or modify it on the fly using OpenGL.
From what I understand, I should be able to intercept the frames and change them before they hit the display, but no matter what I do, I cannot seem to manage it.
It sounds like you should have a look at Core Video and the display link mechanism.
You can basically get a callback on a high priority thread with the decoded frame in a CVImageBuffer and do whatever you like with it (including packing it up as a texture for OpenGL processing and display).
Apple provides documentation and demo code snippets on the developer sites.
I'm using the OSX QTKit sample code from here: http://bit.ly/mAaHGI
I'd like to crop the video, both on the screen and the saved file, to simulate different aspect ratios. What is the best way to do this?
It's a bit more involved than just calling a crop method, but Core Video allows you to manipulate the video stream. You can find the Core Video Programming Guide here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/CoreVideo/CVProg_Intro/CVProg_Intro.html
I am writing a simple video-messenger-like application, so I need to get frames of some compromise size: small enough to fit into the available bandwidth, but without distorting the captured image.
To retrieve frames I am using the QTCaptureVideoPreviewOutput class, and I am successfully getting frames in the didOutputVideoFrame callback. (I need raw frames, mostly because I am using a custom encoder, so I just want to get "raw bitmaps".)
The problem is that for these new iSight cameras I am getting literally huge frames.
Luckily, the class for capturing raw frames (QTCaptureVideoPreviewOutput) provides the setPixelBufferAttributes method, which lets me specify what kind of frames I would like to get. If I am lucky enough to guess a frame size that the camera supports, I can specify it and QTKit will switch the camera into that mode. If I am unlucky, I get a blurred image (because it was stretched or shrunk) and, most likely, one with the wrong proportions.
I have been searching through lists.apple.com and stackoverflow.com, and the answer is "Apple currently does not provide functionality to retrieve the camera's native frame sizes". Well, nothing I can do about that.
Maybe I should offer the most common frame sizes in the settings and let the user try them to see what works? But what are these common frame sizes? Where could I get a list of the frame dimensions that UVC cameras usually produce?
For testing my application I am using a UVC-compliant camera, not an iSight. I assume not every user has an iSight either, and I am sure that even different iSight models produce different frame dimensions.
Or maybe I should switch the camera to its default mode, generate a few frames, see what sizes it produces, so that I at least have the right proportions? That looks like a real hack and doesn't feel natural. And the image is most likely going to be blurred again.
Could you please help me: how have you dealt with this issue? I am sure I am not the first one to face it. What approach would you choose?
Thank you,
James
You are right, the iSight camera produces huge frames. However, I doubt you can switch the camera to a different mode by setting pixel buffer attributes; more likely you are just setting how the frames are processed in the QTCaptureVideoPreviewOutput. Take a look at QTCaptureDecompressedVideoOutput if you have not done so yet.
We also use the sample buffer to get the frame size, so I would not say it's a hack.
A more natural way would be to write your own QuickTime component that implements your custom encoding algorithm. In that case QuickTime would be able to use it inside QTCaptureMovieFileOutput during the capture session. That would be the proper, but also the harder, way.