H.264 trimming/downsizing and streaming with Apache

I am doing some research on how to do two things: trim and stream H.264 video.
What does it take to trim an MPEG-4 H.264 video to 30 seconds and downsize it to 480p? I am assuming I would need to find a third-party library that does H.264 encoding. Doing a quick Google search, the only thing I find is VideoLan.org, but I cannot find their commercial license. Are there other options folks know of?
How does streaming of H.264 to an HTML5 player work? I know that with Flash, one file format requires the whole file to be downloaded before it will play, while the other format allows streaming but requires a Flash server. I am going to be using Apache to serve up the videos on the intranet; how does one go about streaming them with Apache?

1) You can use FFmpeg:
ffmpeg -i in.mp4 -s 720x480 -t 30 out.mp4
-s resizes the output and -t keeps only the first 30 seconds.
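If the target is specifically 480p rather than a fixed 720x480 frame, a scale filter that preserves the source aspect ratio may be closer to what is asked; a minimal sketch, assuming a reasonably recent FFmpeg built with libx264 (in.mp4 and out.mp4 are placeholder names):
# Trim to the first 30 seconds and scale the height to 480 pixels while keeping
# the aspect ratio (-2 picks a matching even width); video is re-encoded with libx264
ffmpeg -i in.mp4 -t 30 -vf scale=-2:480 -c:v libx264 -c:a copy out.mp4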
2) For HTTP streaming, if the moov atom (which contains the video headers and seek information) is present at the start of the video, the video will start playing as soon as it has buffered a few seconds; it does not wait for the whole file to download. Forward seeking is possible through byte-range headers in HTTP. To put the moov atom at the beginning, use qt-faststart, which comes with FFmpeg:
qt-faststart in.mp4 out.mp4
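Newer FFmpeg builds can also place the moov atom at the front directly, so the separate qt-faststart pass is optional; a hedged sketch, assuming a build that supports -movflags:
# Remux without re-encoding and move the moov atom to the start of the file
ffmpeg -i in.mp4 -c copy -movflags +faststart out.mp4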

Related

How to play live FLV stream?

I am capturing video from a webcam on my PC and converting it on the fly to FLV (using ffmpeg).
As a result I have a continuously growing .FLV file.
And now I would like to play it as a live stream.
I was trying VLC, but it plays the file no longer than the duration read from the file at initialization.
What player can I use to play FLV live?
I am working on Ubuntu 16.04.
Thank you in advance for your answers!
You cannot play live FLV directly, but there is a tricky protocol, popular among Chinese live-streaming platforms, called "http-flv" that plays live FLV within an HTTP framework.
Why http-flv?
Latency for HLS / DASH is long, about 10 to 20+ seconds.
Http-flv reduces the end-to-end latency to ~5 seconds. It can be played in browsers with MSE support.
How does it work?
FLV is a simple container that "supports" file-based progressive streaming, because a client can fetch a partial byte range of an FLV video and still play it (for MP4, you would need metadata such as the moov atom for playback).
On the file server, host a growing FLV file and remove the "Content-Length" HTTP response header, so that when a client requests the file it does not know the response body size; it keeps the connection open and receives video segments until the connection ends.
On the client side, use flv.js to fetch only the latest segments of the FLV file and perform the playback.
There are a number of other tricks needed to make the pipeline work.
There are a lot of resources online you can play around with. Here are some references:
https://github.com/Bilibili/flv.js/
https://github.com/winshining/nginx-http-flv-module
A blog about how to achieve this: https://www.yanxurui.cc/posts/server/2017-11-25-http-flv/
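To check the "no Content-Length" behaviour described above, one can inspect the response headers of the live endpoint; a small hedged sketch, where the URL is an assumed local nginx-http-flv-module mount:
# Dump only the response headers of the live endpoint and stop after 2 seconds;
# a correct setup shows "Transfer-Encoding: chunked" and no "Content-Length"
curl -s -D - -o /dev/null --max-time 2 http://localhost:8080/live/mystream.flv \
  | grep -iE "transfer-encoding|content-length"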

Why so many partial content requests in Firefox when streaming mp4 video on Apache?

Edit: Turns out this is actually a Firefox bug.
I have several videos on my Apache 2.2 server encoded with ffmpeg using -movflags faststart, and they stream fine. However, seeking past the buffered point takes an extraordinary amount of time with Firefox (about 30 seconds or more to buffer), whereas Chrome has no problem at all.
Chrome shows one network request for the mp4 with partial content, but Firefox always shows hundreds of 206 Partial Content requests in succession when playing the mp4.
Most interesting is how there is one large request after all the small ones. This is the point where the video actually begins playing, and it transferred 26 MB out of 1.3 MB? I am not sure what is going on here.
Can anyone make sense of this? Compare what I am getting in output to this mp4 file here. It doesn't happen on that file.
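For debugging, it can help to reproduce a partial-content request by hand and confirm that Apache answers it correctly; a hedged sketch with curl, where the URL is a placeholder for one of the mp4 files:
# Check that the server advertises byte-range support ("Accept-Ranges: bytes")
curl -sI http://myserver/video.mp4 | grep -i accept-ranges
# Request only the first kilobyte; a range-capable server answers 206 Partial Content
curl -s -o /dev/null -w "%{http_code}\n" -H "Range: bytes=0-1023" http://myserver/video.mp4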

Streaming video from multiple cameras to html5 player

I'm trying to figure out a way of having a server with a camera (or multiple cameras) connected via USB (FireWire, whatever...) that then streams the video to users.
The idea so far is to have a Red5 server which streams the camera feed as an H.264 stream, and have an HTML5 player like VideoJS with Flash fallback play the video. Looking at the browser support chart at http://en.wikipedia.org/wiki/HTML5_video#Browser_support I can see I would also need WebM and/or Ogg streams.
Any suggestions on how to do this? Is it possible to route the stream via some (preferably .NET) web application and recode the video on the fly? Although I'm guessing that would take some powerful hardware :) Is there another media server which supports all three formats?
Thank you for your ideas.
You can use an IceCast server. Convert the camera's output to Ogg via ffmpeg2theora and pipe it into IceCast via oggfwd, then let an HTML5 <video> element play from the IceCast server. This worked for me in Firefox.
E.g.
# Tune DVB-T receiver into channel
(tzap -c channels-4.conf -r "TV Rijnmond" > /dev/null 2>&1 &)
# Convert DVB-T output into Ogg and pipe into IceCast
ffmpeg2theora --no-skeleton -f mpegts -a 0 -v 5 -x 320 -y 240 -o /dev/stdout /dev/dvb/adapter0/dvr0 2>/tmp/dvb-ffmpeg.txt | oggfwd 127.0.0.1 8000 w8woord /cam3.ogg > /tmp/dvb-oggfwd.txt 2>&1
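Since the question is about USB cameras rather than a DVB-T tuner, a similar pipeline can be sketched with ffmpeg reading a V4L2 device and feeding oggfwd; the device path, IceCast password and mount point below are illustrative assumptions:
# Grab a USB webcam, encode to Theora in an Ogg container and forward to IceCast
ffmpeg -f v4l2 -i /dev/video0 -c:v libtheora -q:v 5 -s 320x240 -f ogg - \
  | oggfwd 127.0.0.1 8000 hackme /webcam.ogg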

Red5 publish issue

I'm publishing a stream on Red5 using the microphone with client-side AS3 code, but it does not publish a good stream, whereas the same thing on FMS creates a perfect stream.
I need to understand what the issue is when publishing on Red5.
Read the Red5 documentation for that. And of course there are differences between the performance of the two servers. However, if you want to improve the quality of the stream, you can use FFmpeg or Xuggler with Red5 to encode the streams.
Because you do not say what your encoder is, it is hard to give a clear answer. If you are using Adobe's FMLE to create the stream that goes to your FMS server, it is the FMLE that explains why you get good video and audio encoding out-of-the-box.
I have never tried to use FMLE with Red5, so I cannot tell you if it works, but I doubt it works out-of-the-box. It probably can work with a bit of tweaking on both the client and server side.
To use your own encoder, what you do is capture two streams using ffmpeg; a great example of how to do that is on Stack Overflow here.
Once you are capturing, you can use ffmpeg to send the combined audio and video streams to a file, or you can send them directly to your Red5 server. A simplified version of the ffmpeg command, showing how to map two streams to a single RTMP output, is shown below:
ffmpeg -i video_stream -i audio_stream -map 0:0 -map 1:0 -f flv rtmp://my.red5.server:1935/live/mystream
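A more concrete hedged variant for Linux, capturing a webcam with V4L2 and a microphone with ALSA and publishing straight to Red5 (the device names and stream name are assumptions):
# Capture video from the webcam and audio from the default ALSA device, encode
# to H.264/AAC and publish the combined stream to the Red5 RTMP endpoint
ffmpeg -f v4l2 -i /dev/video0 -f alsa -i default \
       -map 0:v -map 1:a -c:v libx264 -preset veryfast -c:a aac \
       -f flv rtmp://my.red5.server:1935/live/mystream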

Check if file has a video stream

I'm on Mac OS X (Objective-C) and I'm looking for a way to determine whether a file has a video stream; more specifically, a video stream that can be decoded by FFmpeg. I could probably put something together in Objective-C to see if a file has a QuickTime-compatible video stream, but that's not enough. I could try MediaInfo, but I don't know which files it can open. Another option would be running FFmpeg and grep to see if there's a video stream, but that is relatively slow, so I looked at FFmpeg's source code to see how they detect it and couldn't even find out in which file to start.
Here is a tutorial on using libavformat and libavcodec (both part of FFmpeg) to get video stream info.
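As a quicker alternative to scraping FFmpeg's console output with grep, ffprobe (which ships with FFmpeg) can report the stream types directly without decoding; a minimal hedged sketch, where the file name is a placeholder:
# Prints "video" once per video stream FFmpeg recognises in the file;
# an empty result means no decodable video stream was found
ffprobe -v error -select_streams v -show_entries stream=codec_type -of csv=p=0 myfile.mov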