Streaming video from multiple cameras to html5 player - html5-video

I'm trying to figure out a way to have a server with a camera (or multiple cameras) connected via USB (FireWire, whatever...) that then streams the video to users.
The idea so far is to have a Red5 server which streams the camera feed as an H.264 stream, and an HTML5 player like Video.js with Flash fallback to play the video. Looking at the browser support chart at http://en.wikipedia.org/wiki/HTML5_video#Browser_support, I can see I would also need WebM and/or Ogg streams.
Any suggestions on how to do this? Is it possible to route the stream via some (preferably .NET) web application and re-encode the video on the fly? Although I'm guessing that would take some powerful hardware :) Is there another media server which supports all three formats?
Thank you for your ideas

You can use an IceCast server. Convert the camera's output to Ogg via ffmpeg2theora and pipe it into IceCast via oggfwd, then let the HTML5 <video> element play from the IceCast server. This worked for me in Firefox.
E.g.
# Tune DVB-T receiver into channel
(tzap -c channels-4.conf -r "TV Rijnmond" > /dev/null 2>&1 &)
# Convert DVB-T output into Ogg and pipe into IceCast
ffmpeg2theora --no-skeleton -f mpegts -a 0 -v 5 -x 320 -y 240 -o /dev/stdout /dev/dvb/adapter0/dvr0 2>/tmp/dvb-ffmpeg.txt | oggfwd 127.0.0.1 8000 w8woord /cam3.ogg > /tmp/dvb-oggfwd.txt 2>&1
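Since the question is about USB cameras rather than DVB-T, here is a minimal sketch of the same pipeline using plain ffmpeg to encode Theora/Ogg. A generated test pattern stands in for the camera; the device path /dev/video0, the IceCast password, and the mount point are assumptions you would adapt.

```shell
# Encode a 5-second Theora/Ogg clip from a test pattern; for a real USB
# camera, replace the lavfi input with: -f v4l2 -i /dev/video0
ffmpeg -y -f lavfi -i testsrc=duration=5:size=320x240:rate=15 \
       -codec:v libtheora -q:v 5 -f ogg webcam-test.ogg

# To stream instead of writing a file, pipe the Ogg output to oggfwd:
# ffmpeg ... -f ogg - | oggfwd 127.0.0.1 8000 yourpassword /cam1.ogg
```

This requires an ffmpeg build with libtheora enabled; check `ffmpeg -codecs | grep theora` if unsure.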

Related

How to SSH into a VPS and play the video?

I set up apache2 on my VPS, started it, and uploaded test.mp4 into /var/www/html.
ffplay http://vps_ip/test.mp4
I can watch test.mp4 that way, but maybe there is another way to play it:
1. ssh root@vps_ip
2. mv /var/www/html/test.mp4 /tmp/test.mp4
3. ffplay /tmp/test.mp4
This fails with:
Could not initialize SDL - No available video device
(Did you set the DISPLAY variable?)
libsdl2-dev and SDL are both installed on my VPS.
How can I run ffplay after logging in over SSH?
To show a graphical application over SSH, you use X forwarding. However, X-forwarding a video player means the full uncompressed, decoded video comes down the pipe, which probably won't work well (assuming that it even plays at all).
Download the video and play it on your local machine instead.
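Concretely, assuming the file is at /tmp/test.mp4 on the VPS as in the question (vps_ip is a placeholder for your server's address):

```shell
# Copy the file from the VPS over SSH, then play it with local hardware
scp root@vps_ip:/tmp/test.mp4 ./test.mp4
ffplay ./test.mp4
```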

RTSP to HTTP MJPEG transcoding embedded in website

I have a phone which can display HTTP MJPEG streams, and I would like to get this working. I have a camera here which only sends out an RTSP stream. I can convert this with VLC to an HTTP MJPEG stream, but my phone needs it embedded into a website,
like this: http://88.53.197.250/axis-cgi/mjpg/video.cgi?resolution=320x240
The VLC transcoding, however, just sends out the bare HTTP stream.
Is there any way to embed this correctly so that I can display it on the screen? I've googled a lot but couldn't find a solution.
Thank you very much.
I would like to use SUSE Linux to do that.
This is the command I use for converting RTSP to MJPEG with vlc:
vlc.exe -vvv -Idummy hereYourVideoSource --sout "#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:8080/}" --run-time=hereYourTimeOutValue vlc://quit
Replace hereYourVideoSource with your RTSP source, and set hereYourTimeOutValue if you want to enable a processing timeout. Note that the --sout chain must be passed as a single quoted argument.
In this sample I use port 8080 on localhost, you can change it to another port. The request to get this mjpeg should be:
http://127.0.0.1:8080/
or:
http://localhost:8080/
In html you get the mjpeg using img tag:
<img src="http://localhost:8080/" />
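As an alternative to VLC, ffmpeg can also serve an MJPEG stream over HTTP directly using its mpjpeg muxer and built-in HTTP server. This is a sketch; the RTSP URL and the port are placeholders:

```shell
# Pull the RTSP stream, re-encode to MJPEG, and serve it as
# multipart/x-mixed-replace on port 8080 via ffmpeg's HTTP listener
ffmpeg -i rtsp://camera-address/stream \
       -c:v mjpeg -q:v 5 -f mpjpeg -listen 1 http://0.0.0.0:8080/
```

The same `<img src="http://host:8080/" />` embedding then works against this endpoint.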
Hope it helps. Good luck.

Convert RTSP stream to virtual web camera

I am trying to use an RTSP stream from an IP camera as a video input source for various applications on Windows (e.g. Skype, Zoom, Microsoft Teams, etc.).
The only solution I have found so far is "webcam 7", an application that fetches an RTSP stream and creates a virtual webcam driver that registers in the system as a webcam, which any application can then use. Unfortunately, this application is often unstable and can crash randomly.
Are there any alternative/better ways for achieving this?
Create your own DirectShow video capture filter (there are lots of examples; this is a great one) and handle the RTSP stream inside it. This way you can implement the stability yourself.
I know this is a bit of an old question, but you can also have a look at vlc2vcam, which looks promising.
Try the Moonware Universal Source Filter from http://netcamstudio.com. The only drawback is that it creates a single "composite" device that sends both video and audio, and Skype can only see the video (I think most applications behave the same).
If I find an easy way to split that stream I will post it here.
You can easily do it on Ubuntu, Debian, Raspbian, and Ubuntu under the Windows Subsystem for Linux using the following method.
Install the required packages, v4l2loopback-dkms and ffmpeg:
sudo apt install v4l2loopback-dkms
sudo apt install ffmpeg
Emulate a video device:
sudo modprobe v4l2loopback card_label="Webcam Stream Name" exclusive_caps=1
Stream from the RTSP URI to the created virtual device:
ffmpeg -stream_loop -1 -re -i rtsp://uri -vcodec rawvideo -threads 0 -f v4l2 /dev/video0
You can replace the '0' at the end of /dev/video0 with the number of an available, playable video device.
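To verify that the loopback device is up and receiving frames, you can use the v4l2-ctl tool (from the v4l-utils package, assuming it is installed):

```shell
# List video devices; the card_label set above should appear in the output
v4l2-ctl --list-devices

# Play the virtual device locally to confirm frames are arriving
ffplay /dev/video0
```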

H.264 trimming/downsizing and streaming with Apache

I am doing some research on how to do two things: trim and stream H.264 video.
What does it take to trim an MPEG-4 H.264 video to 30 seconds and downsize it to 480p? I am assuming I would need to find a third-party library that does H.264 encoding; a quick Google search only turns up VideoLAN.org, but I cannot find their commercial license. Are there other options folks know of?
How does streaming H.264 to HTML5 work? I know that with Flash, one file format requires the whole file to be downloaded before it will play, while the other allows streaming but requires a Flash server. I am going to be using Apache to serve the videos on the intranet; how does one go about streaming them with Apache?
1) You can use FFmpeg :
ffmpeg -i in.mp4 -s 720x480 -t 30 out.mp4
-s resizes the video and -t keeps only the first 30 seconds.
2) For HTTP streaming: if the moov atom (which contains the video headers and seek information) is present at the start of the file, the video starts playing as soon as a few seconds are buffered; it does not wait for the whole file to download. Forward seeking is possible through HTTP byte-range requests. To put the moov atom at the beginning, use qt-faststart, which comes with FFmpeg:
qt-faststart in.mp4 out.mp4
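The trim, resize, and moov relocation can also be combined in a single ffmpeg invocation via the -movflags +faststart option. In this sketch a generated test clip stands in for in.mp4:

```shell
# Generate a 60-second, 720p sample clip to stand in for in.mp4
ffmpeg -y -f lavfi -i testsrc=duration=60:size=1280x720:rate=30 in.mp4

# Trim to 30 seconds, scale to 480p (-2 keeps the width divisible by 2),
# and relocate the moov atom to the front for progressive playback
ffmpeg -y -i in.mp4 -t 30 -vf scale=-2:480 -movflags +faststart out.mp4
```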

Red5 publish issue

I am publishing a stream on Red5 using the microphone, with client-side AS3 code, but it does not publish a good stream. Doing the same thing on FMS creates a perfect stream.
I need to understand what the issue is when publishing on Red5.
Read the Red5 documentation for that. And of course there are performance differences between the two servers. However, if you want to improve the stream quality, you can use FFmpeg or Xuggler with Red5 to encode the streams.
Because you are not saying what your encoder is, it is hard to give a clear answer. If you are using Adobe's FMLE to create the stream that goes to your FMS server, it is the FMLE that explains why you get good video and audio encoding out of the box.
I have never tried to use FMLE with Red5, so I cannot tell you whether it works, but I doubt it works out of the box. It can probably be made to work with a bit of tweaking on both the client and server side.
To use your own encoder, you capture two streams using ffmpeg; a great example of how to do that is on Stack Overflow here.
Once you are capturing, you can use ffmpeg to send the combined audio and video streams to a file, or send them directly to your Red5 server. A simplified version of the ffmpeg command, showing how to map two streams into a single RTMP output:
ffmpeg -i video_stream -i audio_stream -map 0:0 -map 1:0 -f flv rtmp://my.red5.server:1935/live/mystream
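For completeness, here is a hedged sketch of capturing video and audio on Linux and publishing to Red5 in one command; the device names, codec choices, and server URL are assumptions to adapt:

```shell
# Capture webcam (V4L2) and microphone (ALSA), encode to H.264 + AAC,
# and publish the combined stream to the Red5 RTMP endpoint
ffmpeg -f v4l2 -i /dev/video0 -f alsa -i default \
       -c:v libx264 -preset veryfast -c:a aac \
       -f flv rtmp://my.red5.server:1935/live/mystream
```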