How to get DASH segments of an .mp4 video file - html5-video

I have an MP4 video file that I need to load on my page. I am using MSE (Media Source Extensions) for that, but I don't know how to get my video into segments with the .m4s extension, with a header.m4s as the parent (initialization) segment that stores all the information about my video file. Please help.

I believe that if a video is embedded on a website, it can be downloaded.
The only thing you can do is make it difficult to download.
This might be helpful. It says that using Flash video is a good option for making video downloads a bit more difficult. I've never used it, but you could give it a try.

To protect the video, you should probably not try to artificially obfuscate the video loading. MPEG-DASH supports encrypted MP4 video and Common Encryption (CENC); that could be something worth looking into.
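For the actual segmenting, one common route is GPAC's MP4Box, which can split an MP4 into an initialization segment plus numbered .m4s media segments and write the accompanying .mpd manifest. A rough sketch, assuming MP4Box is installed (the segment duration in milliseconds and the name prefix are just examples):

    MP4Box -dash 4000 -rap -segment-name segment_ -out manifest.mpd input.mp4

The initialization segment (the "header" segment the question describes) holds the codec and track metadata, while the .m4s files hold the media data; with MSE you append the initialization segment to the SourceBuffer first, then the media segments in order.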

Related

What technologies should I use to produce a WebM live stream from a series of in-memory bitmaps?

My boss handed me a challenge that is a bit out of my usual ballpark, and I am having trouble identifying which technologies/projects I should use. (I don't mind; I asked for something 'new'. :)
Job: Build a .NET server-side process that can pick up a bitmap from a buffer 10 times per second and produce/serve a 10 fps video stream for display in a modern HTML5-enabled browser.
What Lego blocks should I be looking for here?
Dave
You'll want to use FFmpeg. Here's the basic flow:
Your App -> FFmpeg STDIN -> VP8 or VP9 video wrapped in WebM
If you're streaming in these images, probably the easiest thing to do is decode each bitmap into raw RGB or RGBA, and then write each frame to FFmpeg's STDIN. You will have to read the first bitmap to determine the size and color information, then execute the FFmpeg child process with the correct parameters. When you're done, close the pipe and FFmpeg will finish up your output file. If you want, you can even redirect FFmpeg's STDOUT to somewhere like blob storage on S3.
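As a sketch of that pipe, the FFmpeg invocation could look like this, assuming 640x480 RGBA frames at 10 fps (substitute whatever the first bitmap tells you):

    ffmpeg -f rawvideo -pixel_format rgba -video_size 640x480 -framerate 10 -i - -c:v libvpx -b:v 1M -f webm output.webm

Here "-i -" tells FFmpeg to read the raw frames from STDIN.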
If all the images are uploaded at once and then you create the video, it's even easier. Just make a list of the files in-order and execute FFmpeg. When FFmpeg is done, you should have a video.
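In that case, assuming the files follow a numbered pattern like frame_0001.bmp, the whole job is a one-liner:

    ffmpeg -framerate 10 -i frame_%04d.bmp -c:v libvpx -b:v 1M output.webm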
One additional bit of information that will help you understand how to build an FFmpeg command line: WebM is a container format. It doesn't do anything but keep track of how many video and audio streams there are, which codecs those streams use, subtitle streams, metadata (like thumbnail images), etc. WebM is basically Matroska (.mkv), but with some features disabled to make adopting the WebM standard easier for browser makers. Inside WebM, you'll want at least one video stream. VP8 and VP9 are the most widely compatible codecs. If you want to add audio, Opus is a standard codec you can use.
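To tie the container talk to a command line: muxing an Opus audio track into an already-encoded WebM video might look like this (the file names are made up for illustration):

    ffmpeg -i silent.webm -i narration.wav -c:v copy -c:a libopus output.webm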
Some resources to get you started:
FFmpeg Documentation (https://ffmpeg.org/documentation.html)
Converting raw images to video (https://superuser.com/a/469517/48624)
VP8 Encoding (http://trac.ffmpeg.org/wiki/Encode/VP8)
FFmpeg Binaries for Windows (https://ffmpeg.zeranoe.com/builds/)

MPMoviePlayerController H.264 and Multiple Audio Streams

I am trying to find some information on how to support multiple audio streams from a single H.264/MPEG4 video file.
So far I have found very little information when googling, so I was wondering if anybody has any information that may shed some light.
I would like to display the video then have a choice of which audio stream to play from the H.264 format.
Anybody?
MPMoviePlayer cannot be used to play a movie with multiple audio streams.
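If you can pre-process the file instead, one workaround is to remux a copy that contains only the desired audio stream, for example with FFmpeg's -map option (here keeping the first video stream and the second audio stream; the stream indices are illustrative):

    ffmpeg -i input.mp4 -map 0:v:0 -map 0:a:1 -c copy output.mp4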

How to compress Video data while taking video from camera?

Is there any way to compress the video data while taking it from the camera? There is a huge difference in data size between video taken with the camera and video from the photo library. I want to save some memory while taking video from the camera. Is there any way?
Thanks
I filed a bug report with Apple on this matter; you could do the same, as it seems the more reports they get from developers, the faster they fix things.
No matter what videoQuality level you set on the UIImagePickerController, it always defaults to High when recording from the camera. Videos chosen from the user's library respect your choice and compress really well with the hardware H.264 encoder present on the 3GS and up.
You can use FFmpeg to capture video directly from the camera, compress it, and store it in a file.
FFmpeg is also available as a standalone console application that doesn't need any other DLLs.
Of course, this isn't Objective-C, but it can be very useful in your case.
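As a sketch of the compression step with FFmpeg (the file names are placeholders, and the CRF value is illustrative; raise it for smaller files at lower quality):

    ffmpeg -i captured.mov -c:v libx264 -crf 28 -preset fast -c:a aac -b:a 96k compressed.mp4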

Streaming encoded video in Adobe AIR Application

I am developing a desktop application in Adobe AIR that will be used to stream the user's camera video to a Wowza media server. I want to encode the video on the fly, meaning transmit H.264-encoded video instead of the default Flash Player encoding, for quality purposes. Is there any way to do this?
Hoping for some help from the people here,
Rick
H.264 encoding is usually done in native code (C or C++) because it is a CPU-intensive set of algorithms. The source code for x264 can give you an idea of the code required, but it is a tough read if you start from scratch. Here is a book to get you started, or you can read the original AVC standard if you suffer from insomnia.
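If you just want a feel for what the encoder does before diving into the source, the x264 command-line tool will encode a raw Y4M clip directly (the preset and bitrate here are arbitrary):

    x264 --preset veryfast --bitrate 1000 -o output.264 input.y4m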

How to programmatically test for audio sync

I have a multimedia application that, among other things, converts video using FFmpeg. Video conversion being the pain that it is, I have in my test suites some tests that check our ability to convert various video formats, with an emphasis on sample videos known not to work.
A common problem we've noticed from users is that some videos end up with their audio desynched after being processed, and I am looking for a way to check this in my tests.
Extracting the audio portion of the resulting videos is not a problem.
My best idea so far would be to find the offset of the first non-silence at both the beginning and the end, and compare those offsets between the two videos, but I'm hoping someone smart has a better idea.
The application language/environment is Java, but since this is for testing, I'm free to use any toolset.
The basic problem is likely that the video and audio are different lengths. Extract the audio and test its length against the video length. If they are significantly different (more than maybe 0.05 s; I'm not really sure what is detectable as "off"), then there's a problem.
To fix it, re-encode the audio to match the video length, and then put the audio and video back into a container format.
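As a sketch of that length check using FFmpeg's tools (the stream indices and the repair strategy below are one option among several):

    # report the video and audio stream durations, in seconds
    ffprobe -v error -select_streams v:0 -show_entries stream=duration -of csv=p=0 converted.mp4
    ffprobe -v error -select_streams a:0 -show_entries stream=duration -of csv=p=0 converted.mp4
    # one way to force the audio to the video's length: pad it with silence, then trim
    ffmpeg -i converted.mp4 -c:v copy -af apad -shortest fixed.mp4

If the two durations differ by more than your threshold, flag the conversion as suspect.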