Convert RTSP stream to virtual web camera

I am trying to use an RTSP stream from an IP camera as a video input source for various applications on Windows (e.g. Skype, Zoom, Microsoft Teams).
The only solution I have found so far is "webcam 7", an application that fetches an RTSP stream and creates a virtual webcam driver that registers in the system as a webcam, which any application can then use. Unfortunately, this application often becomes unstable and can crash randomly.
Are there any alternative/better ways of achieving this?

Create your own DirectShow video capture filter (there are lots of examples - this is a great one) and handle the RTSP stream inside it. That way you can take care of stability yourself.

I know this is a bit of an old question, but you can also have a look at vlc2vcam, which looks promising.

Try the Moonware Universal Source Filter from http://netcamstudio.com. The only drawback is that it creates a single "composite" device that sends both video and audio, and Skype can only see the video (I think most applications behave the same way).
If I find an easy way to split that stream, I will post it here.

You can easily do this on Ubuntu, Debian, Raspbian, and Ubuntu under the Windows Subsystem for Linux using the following method.
Install the required packages, v4l2loopback-dkms and ffmpeg:
sudo apt install v4l2loopback-dkms
sudo apt install ffmpeg
Emulate a video device:
sudo modprobe v4l2loopback card_label="Webcam Stream Name" exclusive_caps=1
Stream from the RTSP URI to the created virtual device:
ffmpeg -stream_loop -1 -re -i rtsp://uri -vcodec rawvideo -threads 0 -f v4l2 /dev/video0
You can replace the '0' at the end of /dev/video0 with the number of whichever video device is available and playable.
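If you are not sure which /dev/videoN node the loopback module created, you can list the devices first (a quick check; v4l2-ctl comes from the v4l-utils package):
# Show all video devices and their names (the loopback device carries the card_label set above)
v4l2-ctl --list-devices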

webcam cannot be accessed by WSL

I'm very new to WSL. I want to run some Python code in an Ubuntu shell on my Windows 10 PC. This code needs access to the webcam, but it seems that the webcam is not opened properly. I have checked online and found several posts from 1-2 years ago saying that the integrated webcam cannot be accessed by WSL. Is there any update or trick that makes the webcam usable in WSL?
Many thanks!
You still can't access the webcam directly in WSL, but there are several ways to access its video stream from within WSL. The most obvious one is converting your webcam into a web-based stream, e.g. an RTSP or MPEG stream produced with FFmpeg (example), which gives you an almost similar experience.
Also, there are many applications that can turn your webcam into an IP-camera stream; you then use the stream URL instead of the camera index in OpenCV or any other application, as sketched below.
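As a hedged sketch of that workaround (the device name "Integrated Camera" is an assumption - list your own devices first): capture the webcam on the Windows side with ffmpeg's dshow input, send it over the network, and open the URL from inside WSL:
# On Windows: list DirectShow capture devices to find your webcam's exact name
ffmpeg -list_devices true -f dshow -i dummy
# On Windows: grab the webcam and send it as MPEG-TS over UDP to localhost
ffmpeg -f dshow -i video="Integrated Camera" -f mpegts udp://127.0.0.1:1234
# Inside WSL: open the stream by URL instead of a camera index
ffplay udp://127.0.0.1:1234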

How to SSH into a VPS and play the video?

I set up Apache2 on a VPS, started it, and uploaded test.mp4 to /var/www/html.
ffplay http://vps_ip/test.mp4
That lets me watch test.mp4, but maybe there is another way to play it:
1. ssh root@vps_ip
2. mv /var/www/html/test.mp4 /tmp/test.mp4
3. ffplay /tmp/test.mp4
This fails with:
Could not initialize SDL - No available video device
(Did you set the DISPLAY variable?)
libsdl2-dev and SDL are both installed on my VPS.
How can I play the file with ffplay after logging in over SSH?
To show a graphical application over SSH, you use X forwarding. However, X forwarding a video player means the full uncompressed video comes down the pipe, which probably won't work well (assuming it plays at all).
Download the video and play it on your local machine instead.
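For example (a hedged sketch - substitute your own host and paths):
# Recommended: copy the file to your local machine and play it there
scp root@vps_ip:/tmp/test.mp4 .
ffplay test.mp4
# Or, if you insist on remote playback: connect with X forwarding, then run ffplay
ssh -X root@vps_ip
ffplay /tmp/test.mp4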

Using FFMPEG in an OS X application

So I've been wanting to make a real-time live streaming application. Essentially, the application would send the microphone feed from an Xcode application to a website where it can be viewed in real time. Is FFMPEG the best solution for this? How would I go about doing it? If that's too broad, then how do I use the FFMPEG framework in an OS X Objective-C application?
To directly address your questions:
(1) Is FFMPEG the best solution for this?
It depends. When setting up a live streaming environment you will likely stumble upon FFMPEG, VLC and GStreamer, which are the options you have for simply streaming video/audio. Therefore, yes, FFMPEG can be used as part of the solution. Please look into the following question: DIY Video Streaming Server
(2) How would I go about doing this?
Your requirement is to make a live streaming application which sends the mic input onto the web. This includes the following steps:
(0) Your Xcode application will need to provide a method to start this process. You don't necessarily need to integrate a framework to achieve this.
(1) Streaming / Restreaming
Use FFMPEG or VLC to grab your device and stream it locally:
ffmpeg -i audioDevice -acodec libfaac -ar 44100 -ab 48k -f rtp rtp://host:port
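On OS X specifically, the grab would typically go through FFMPEG's avfoundation input. A minimal sketch, assuming your microphone is audio device 0 (check with the list command first) and an AAC-capable FFMPEG build:
# List the available avfoundation devices (video and audio)
ffmpeg -f avfoundation -list_devices true -i ""
# Grab audio device 0 (":0" means no video, audio index 0) and send it out as RTP
ffmpeg -f avfoundation -i ":0" -acodec aac -ar 44100 -ab 48k -f rtp rtp://host:port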
(2) Segmenting for HTTP Live Streaming*
Use a segmenter such as: mediastreamsegmenter (Apple), livehttp (VLC) or segment (FFMPEG) to prepare your stream for web delivery:
vlc -vvv -I dummy <SOURCEADDRESS> --sout='#transcode{acodec=libfaac,ab=48}:std{access=livehttp{seglen=10,delsegs=false,numsegs=10,index=/path/to/your/index/prog_index.m3u8,index-url=YourUrl/fileSequence######.ts},mux=ts{use-key-frames},dst=/path/to/your/ts/files/fileSequence######.ts}'
*You could also simply use VLC to grab your audio device with qtsound (see this question) and prepare it for streaming with livehttp in one step.
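For completeness, a hedged sketch of that one-step variant, reusing the exact --sout chain from above with qtsound:// as the input (paths and the URL remain placeholders):
vlc -vvv -I dummy qtsound:// --sout='#transcode{acodec=libfaac,ab=48}:std{access=livehttp{seglen=10,delsegs=false,numsegs=10,index=/path/to/your/index/prog_index.m3u8,index-url=YourUrl/fileSequence######.ts},mux=ts{use-key-frames},dst=/path/to/your/ts/files/fileSequence######.ts}'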
(3) HTML 5 Delivery
Publish your stream
<audio>
<source src="YOUR_PATH/playlist.m3u8" />
</audio>
(3) If that's too broad, then how do I use the FFMPEG framework in an OS X objective-c application?
Either use an external wrapper framework to access FFMPEG functionality and consult the tutorials for those frameworks, or wrap the command-line invocations with NSTask in Objective-C and simply launch those tasks from your application - as in this question.
Another way would be to use VLCKit, which offers VLC functionality in a framework for Objective-C (VLCKit wiki). However, when tackling streaming challenges I prefer to work with the actual commands instead of pushing another framework layer in between, which may be missing some options.
I hope this points you in the right direction. There are multiple ways to solve this; it's a broad question, hence this broad answer.

CRTMP Server RTSP play to Symbian RealPlayer

CRTMP Server is a great tool... NAT traversal when the client is behind a router works great.
...
Tested on Android 2.2, 2.3 and 4.1: RTSP streaming is OK (RTMP Flash is also OK).
But RTSP in RealPlayer (Helix DNA 10.0 on S60) always shows 'cannot play media' or 'cannot connect' (the connection is definitely established - checked with Wireshark).
...
(It is a programming-related problem, because I am willing to explore the CRTMP code to implement a solution.)
A BBC RTSP channel (with Wowza behind it) displays fine in the Symbian RealPlayer, but restreaming that source:
ffmpeg -i rtsp://[bbc_channel_address] -c copy -f rtsp rtsp://[crtmp_server_addr]:8554/ch
... works for Android, but not for RealPlayer on S60.
Does anybody have a clue about the reason?
Having RTSP working is not enough. Different phones have different requirements in terms of A/V codec quality, and the content you want to deliver may be of too high quality for that device. This assumption fits well with what you said (working on Android, but not in RealPlayer).
You can ask these questions on the crtmpserver mailing list. Consult http://rtmpd.com/resources/ for details about the mailing list.
Edit:
Or the codecs you are trying to push from crtmpserver towards the phone may not be supported at all, let alone hitting the maximum quality limit.
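If unsupported codecs turn out to be the problem, one thing to try (a hedged sketch, not a verified fix) is transcoding the source down to the 3GPP-era codecs that old S60 RealPlayer builds typically handle - QCIF H.263 video plus AMR-NB audio. This assumes an FFMPEG build with libopencore-amrnb:
# Transcode to H.263 at QCIF (H.263 only accepts a few fixed frame sizes) with AMR-NB audio
ffmpeg -i rtsp://[bbc_channel_address] -c:v h263 -s 176x144 -r 15 -b:v 128k -c:a libopencore_amrnb -ar 8000 -ac 1 -b:a 12.2k -f rtsp rtsp://[crtmp_server_addr]:8554/ch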

Streaming video from multiple cameras to an HTML5 player

I'm trying to figure out a way of having a server with a camera (or multiple cameras) connected via USB (FireWire, whatever...) that then streams the video to users.
The idea so far is to have a Red5 server which streams the camera feed as an H.264 stream and have an HTML5 player like VideoJS with Flash fallback play the video. Looking at the browser support chart at http://en.wikipedia.org/wiki/HTML5_video#Browser_support, I can see I would also need WebM and/or Ogg streams.
Any suggestions on how to do this? Is it possible to route the stream via some (preferably .NET) web application and re-encode the video on the fly? Although I'm guessing that would take some powerful hardware :) Is there another media server which supports all three formats?
Thank you for your ideas
You can use an IceCast server. Convert the camera's output to Ogg via ffmpeg2theora and pipe it into IceCast via oggfwd, then let an HTML5 <video> element play from the IceCast server. This worked for me with Firefox.
E.g.
# Tune DVB-T receiver into channel
(tzap -c channels-4.conf -r "TV Rijnmond" > /dev/null 2>&1 &)
# Convert DVB-T output into Ogg and pipe into IceCast
ffmpeg2theora --no-skeleton -f mpegts -a 0 -v 5 -x 320 -y 240 -o /dev/stdout /dev/dvb/adapter0/dvr0 2>/tmp/dvb-ffmpeg.txt | oggfwd 127.0.0.1 8000 w8woord /cam3.ogg > /tmp/dvb-oggfwd.txt 2>&1
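Before wiring up the HTML5 player, it may be worth confirming that the mountpoint actually serves (a quick check; host, port and mount name follow the example above):
# Play the IceCast mountpoint directly to verify the pipeline end-to-end
ffplay http://127.0.0.1:8000/cam3.ogg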