What technologies should I use to produce a WebM live stream from a series of in-memory bitmaps?

Boss handed me a challenge that is a bit out of my usual ballpark, and I am having trouble identifying which technologies/projects I should use. (I don't mind, I asked for something 'new'. :)
Job: Build a .NET server-side process that can pick up a bitmap from a buffer 10 times per second and produce/serve a 10fps video stream for display in a modern HTML5-enabled browser.
What Lego blocks should I be looking for here?
Dave

You'll want to use FFmpeg. Here's the basic flow:
Your App -> FFmpeg STDIN -> VP8 or VP9 video wrapped in WebM
If you're streaming these images in, probably the easiest approach is to decode each bitmap into raw RGB or RGBA pixels and write each frame to FFmpeg's STDIN. You will have to inspect the first bitmap to determine the size and pixel format, then launch the FFmpeg child process with matching parameters. When you're done, close the pipe and FFmpeg will finish up your output file. If you want, you can even redirect FFmpeg's STDOUT to somewhere like blob storage on S3.
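Here is a minimal sketch of that STDIN approach from .NET. It assumes ffmpeg is on the PATH and that your frames arrive as raw RGB24 byte arrays; GetNextFrame, the frame count, and the output name are placeholders for whatever your buffer actually provides:

using System;
using System.Diagnostics;

class FfmpegPipe
{
    static void Main()
    {
        int width = 640, height = 480; // in practice, read these from the first bitmap

        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // Read raw RGB24 frames from stdin at 10 fps, encode to VP8 in WebM.
            Arguments = "-f rawvideo -pix_fmt rgb24 " +
                        $"-s {width}x{height} -r 10 -i - " +
                        "-c:v libvpx -b:v 1M output.webm",
            UseShellExecute = false,
            RedirectStandardInput = true
        };

        using var ffmpeg = Process.Start(psi);
        var stdin = ffmpeg.StandardInput.BaseStream;

        // Placeholder loop: write each raw frame (width * height * 3 bytes).
        for (int i = 0; i < 100; i++)
        {
            byte[] frame = GetNextFrame(width, height);
            stdin.Write(frame, 0, frame.Length);
        }

        stdin.Close();     // closing stdin tells FFmpeg to finalize the output
        ffmpeg.WaitForExit();
    }

    // Hypothetical stand-in for "pick up a bitmap from a buffer".
    static byte[] GetNextFrame(int width, int height) => new byte[width * height * 3];
}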
If all the images are uploaded at once and then you create the video, it's even easier. Just make a list of the files in-order and execute FFmpeg. When FFmpeg is done, you should have a video.
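For that batch case, a single command along these lines should work (the file pattern, bitrate, and output name are just examples). -framerate sets the input rate for the image sequence, and libvpx produces VP8:

ffmpeg -framerate 10 -i frame_%04d.bmp -c:v libvpx -b:v 1M output.webm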
One additional bit of information that will help you understand how to build an FFmpeg command line: WebM is a container format. It doesn't do anything but keep track of how many video streams, audio streams, and subtitle streams there are, which codecs they use, metadata (like thumbnail images), etc. WebM is basically Matroska (.mkv), but with some features disabled to make adopting the WebM standard easier for browser makers. Inside WebM you'll want at least one video stream; VP8 and VP9 are the codecs to use there, and both are widely supported in modern browsers. If you want to add audio, Opus is a standard codec you can use.
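To make the container/codec split concrete, here is an example command that re-encodes an existing file into WebM with one VP9 video stream and one Opus audio stream (input name and bitrate are illustrative; swap libvpx-vp9 for libvpx if you want VP8):

ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 1M -c:a libopus output.webm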
Some resources to get you started:
FFmpeg Documentation (https://ffmpeg.org/documentation.html)
Converting raw images to video (https://superuser.com/a/469517/48624)
VP8 Encoding (http://trac.ffmpeg.org/wiki/Encode/VP8)
FFmpeg Binaries for Windows (https://ffmpeg.zeranoe.com/builds/)

Related

how to get Dash segments of .mp4 video file

I have an MP4 video file which I need to load on my page. I am using MSE for that, but I don't know how I can get my video in segments with .m4s extensions, with an initialization segment (header.m4s) as the parent segment holding all the information about my video file. Please help.
I believe that if a video is embedded on the website, it can be downloaded.
The only thing you could do is make it difficult for download.
This might be helpful. It says using a flash video is a good option to make downloading videos a bit difficult. Never used it but you could give it a try.
To protect the video, you should probably not try to artificially obfuscate the video loading. MPEG-DASH supports encrypted MP4 video via Common Encryption (CENC); that could be something to look into.
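As for the segmenting itself (separate from the encryption question), FFmpeg's dash muxer can produce an initialization segment, the .m4s media segments, and the MPD manifest in one go. A minimal invocation, assuming the input streams are already in DASH-compatible codecs so they can be copied without re-encoding, looks like:

ffmpeg -i input.mp4 -c copy -f dash manifest.mpd

GPAC's MP4Box -dash is the other commonly used tool for this job.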

Scheme to play video file in own container format on Mac OS X

I am planning to write an application (C/C++/Objective-C) that will play media files in my own (private) container format. The files will contain: multiple video streams encoded by a video codec (like XviD or H.264; it is supposed that components capable of decoding the video formats are present in the system); multiple audio streams in some compressed formats (it is supposed that decoding will be performed by a system component or by my own code).
So it seems it is required to implement the following scheme:
1) Implement a container demuxer (possibly in the form of a media handler component).
2) Pass video frames to a video decoder component, and mix the decompressed frames (using some of my own rules).
3) Pass audio data to an audio decoder component, or decompress the audio with my own code, and mix the decoded audio data.
4) Render video frames to a window.
5) Pass audio data to a selected audio board.
Could anybody provide tips for any of the above stages, that is: toolkits I should use; useful samples; maybe names of functions to be used; maybe improvements to the scheme,....
I know I am quite late, so you might not need this anymore, but I just wanted to mention that the right way to do it is to write a QuickTime component.
Although it is pretty old school, it's the same mechanism Apple uses to support new formats and codecs.
Look at the Perian project as an orientation point.
Best

Webcam capture and convert to avi

I'm working on a project where I'm trying to capture a webcam and use a codec to save the file to the hard disk, but I can't find a program for it.
It would be nice if the program were controllable from the outside, but that's not necessary.
(It has to capture the audio too.)
vlc can do the recording and conversion directly if given the proper command-line options (that is not trivial, but it is reasonably well documented).
There is also the library libvlc that you can use to do anything vlc does; I have only used it for playback, but I suppose capturing and saving to a file should not be too difficult.
You didn't mention your platform, but both vlc and libvlc run on Windows/Linux/OS X, so that shouldn't be a big problem.
This is, for example, a vlc command line I use to start recording from my webcam:
vlc v4l2:// :v4l2-dev=/dev/video0 :v4l2-width=320 :v4l2-height=240 --sout "#transcode{vcodec=x264,acodec=mpga,vb=800,ab=128}:standard{access=file,dst=capture.avi}"
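If you go the libvlc route from code, here is a minimal playback sketch using the LibVLCSharp .NET binding (my assumption; the original answer used the C API, which works the same way conceptually, and the file path is made up):

using System;
using LibVLCSharp.Shared;

class Player
{
    static void Main()
    {
        Core.Initialize();   // locate the native libvlc libraries
        using var libvlc = new LibVLC();
        using var media = new Media(libvlc, new Uri("file:///tmp/capture.avi"));
        using var player = new MediaPlayer(media);
        player.Play();
        Console.ReadKey();   // keep the process alive while playback runs
    }
}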

How to compress Video data while taking video from camera?

Is there any way to compress the video data while taking video from the camera? There is a huge difference in data size between video taken with the camera and video from the photo library. I want to reduce memory usage while taking video from the camera. Is there any way?
Thanks
I filed a bug report with Apple on this matter; you could do the same, since it seems the more reports developers file, the faster they fix things up.
No matter what videoQuality level you set on the UIImagePickerController, it always defaults to High when recording from the camera. Videos chosen from the user library respect your choice and compress really well with the hardware H.264 encoder present on the 3GS and up.
You can use FFmpeg to get video directly from the camera, compress it, and store it to a file.
FFmpeg is also a standalone console application, and it doesn't need any other DLLs.
Of course, this isn't Objective-C, but it can be very useful in your case.

Streaming encoded video in Adobe AIR Application

I am developing a desktop application in Adobe AIR that will be used to stream the user's camera video to a Wowza media server. I want to encode the video on the fly, meaning transmit H.264-encoded video instead of the default Flash Player encoding, for quality purposes. Is there any way to do this?
Waiting for the help from people around,
Rick
H.264 encoding is usually done in native code (C or C++) because it is a CPU-intensive set of algorithms. The source code for x264 can give you an idea of the code required, but it is a tough read if you start from scratch. Here is a book to get you started, or you can read the original AVC standard if you suffer from insomnia.
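For a sense of what the encoder side involves before wiring it into AIR, this is roughly what a standalone x264 command line looks like (the input file name is made up; the x264 CLI reads raw YUV or Y4M input):

x264 --preset veryfast --bitrate 800 -o output.264 input.y4m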