SFU with the Kinesis Video Streams WebRTC JavaScript SDK

I built a React.js and Node.js app, inspired by the KVS example.
It works for a few participants: the Master streams his webcam video and audio via WebRTC, and receives every viewer's webcam video. But we realized it won't be suitable for 50 people, because the Master's CPU and network usage grow with each viewer connection.
For every viewer, a peerConnection is created with the Master. We'd rather have the Master send his stream only once, to a server that then forwards it to the viewers.
We would like to go for an SFU solution; would that be possible with the JavaScript SDK?
For the C SDK it is advised to use PutMedia and GStreamer; is there an equivalent in JavaScript?
I saw that mediasoup could handle the SFU part; could it be used with Kinesis?
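For reference, the current per-viewer fan-out looks roughly like the sketch below (a simplification; `iceServers` and the signaling exchange stand in for the KVS signaling setup):

```javascript
// Simplified sketch of the current mesh: one RTCPeerConnection per viewer,
// so the Master re-sends (and may re-encode) the same tracks once per viewer.
// `iceServers` and the signaling exchange are placeholders for the KVS setup.
const masterStream = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true,
});

const viewerConnections = new Map();

function onViewerJoined(viewerId, iceServers) {
  const pc = new RTCPeerConnection({ iceServers });
  masterStream.getTracks().forEach((track) => pc.addTrack(track, masterStream));
  viewerConnections.set(viewerId, pc);
  // ...exchange SDP offer/answer and ICE candidates over the signaling channel...
  return pc;
}
```

With an SFU, only the first upload would remain on the Master; the per-viewer connections would terminate at the server instead.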

Related

Stream live video from Raspberry Pi Camera to Android App

I have multiple Raspberry Pi devices with the native camera in my home and office (PUBLISHERS). The publishers (Pis) are on a local network behind a firewall/router and connected to the internet.
I have an EC2 webserver (BROKER). It is publicly accessible over a public IP Address.
I have an Android App on my phone. It has internet connectivity through a 4G Network. (SUBSCRIBER/CONSUMER/CLIENT)
I am trying to view the live feed of each of the Raspberry Pi cameras in my Android app. The problem is more conceptual than technical: I am unable to decide on the right approach and the most efficient way to achieve this in terms of cost and latency.
Approaches I have figured out based on my research:
Approach 1:
1. Stream the camera over RTSP/RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and saves it to AWS S3
3. Have a middleware that transcodes the RTSP stream and saves it in a format accessible to the mobile app via an S3 URL
Approach 2:
1. Stream the camera over RTSP/RTMP on the Pi device via raspivid/ffmpeg
2. Have code on the Pi device that reads the RTSP stream and pushes it to a remote frame-gathering (ImageZMQ) server. EC2 can be used here.
3. Have a middleware that transcodes the frames back to an RTSP stream and saves it in a format on S3 that is accessible to the mobile app via a publicly accessible S3 URL
Approach 3:
1. Stream the camera in WebRTC by launching a web browser.
2. Send the stream to a media server like Kurento. EC2 can be used here.
3. Generate a unique, publicly accessible WebRTC URL for each stream
4. Access the WebRTC video via the mobile app
Approach 4:
1. Stream the camera over RTSP/RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream via an Amazon Kinesis client installed on the devices.
3. Publish the Kinesis stream to the AWS cloud
4. Have a Lambda transcode it and store it in S3
5. Have the mobile app access the video stream via a publicly accessible S3 URL
Approach 5 (fairly complex, involving STUN/TURN servers to bypass NAT):
1. Stream the camera over RTSP/RTMP on the Pi device via raspivid/ffmpeg
2. Grab the stream and send it to a media server like GStreamer. EC2 can be used here.
3. Use a live555 proxy or the nginx RTMP module. EC2 can be used here.
4. Generate a unique, publicly accessible link for each device, all running on the same port
5. Have the mobile app access the video stream via the device link
I am open to any video format as long as I am not using any third-party commercial solution like Wowza, Ant Media, Dataplicity, or AWS Kinesis. The most important constraint I have is that all my devices are headless and I can only access them via SSH. As such, I excluded any option that involves manual setup or interacting with the desktop interface of the PUBLISHERS (Pis). I can create scripts to automate all of this.
The end goal is to have public URLs for each of the Raspberry Pi cams, all running on the same socket/port number, like this:
rtsp://cam1-frontdesk.mycompany.com:554/
rtsp://cam2-backoffice.mycompany.com:554/
rtsp://cam3-home.mycompany.com:554/
rtsp://cam4-club.mycompany.com:554/
Basically, with raspivid/ffmpeg you have a simple IP camera, so any architecture applicable to that case would work for you. As an example, take a look at this architecture where you install Nimble Streamer on your AWS machine, process the stream there, and get a URL for playback (HLS or any other suitable protocol). That URL can be played in any hardware/software player of your choice and can be inserted into any web player as well.
So it's your Approach 3, with HLS instead of WebRTC.
Which solution is appropriate depends mostly on whether you're viewing the video in a native application (e.g. VLC) and what you mean by "live": typical "live streaming" uses HLS, which adds at least 5 and often closer to 30 seconds of latency as it downloads and plays sequences of short video files.
If you can tolerate the latency, HLS is the simplest solution.
If you want something real-time (< 0.300 seconds of latency) and are viewing the video via a native app, RTSP is the simplest solution.
If you would like something real-time and would like to view it in the web browser, Broadway.js, Media Source Extensions (MSE), and WebRTC are the three available solutions. Broadway.js is limited to H.264 Baseline, and only performs decently with GPU-accelerated canvas support -- not supported on all browsers. MSE is likewise not supported on all browsers. WebRTC has the best support, but is also the most complex of the three.
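As a rough illustration of the MSE option (a sketch only: the codec string and WebSocket transport are placeholders, and a real player must queue `appendBuffer` calls instead of dropping chunks):

```javascript
// Feed fragmented-MP4 chunks arriving over a WebSocket into a <video>
// element via Media Source Extensions. Codec string and URL are examples.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  const sourceBuffer = mediaSource.addSourceBuffer(
    'video/mp4; codecs="avc1.42E01E"' // must match the actual stream
  );
  const ws = new WebSocket('wss://example.com/stream'); // placeholder URL
  ws.binaryType = 'arraybuffer';
  ws.onmessage = (event) => {
    // Simplification: real code buffers chunks while updating is true.
    if (!sourceBuffer.updating) {
      sourceBuffer.appendBuffer(event.data);
    }
  };
});
```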
For real-time video from a Raspberry Pi that works in any browser, take a look at Alohacam.io (full disclosure: I am the author).

Is it possible to save a video stream between two peers in webrtc in the server, realtime?

Suppose I have 2 peers exchanging video with WebRTC. Now I need both of the streams to be saved as video files on a central server. Is it possible to do this in real time? (Storing/uploading the video from the peers is not an option.)
I thought of making a 3-node WebRTC connection, with the 3rd node being the server. This way, I can screen-record the 3rd node's stream or save it some other way. But I am not sure about the reliability/feasibility of this implementation.
This is for a mobile application, and I would avoid any method that involves uploading/saving.
PS: I'm using Agora.io for the video conferencing.
In my opinion, you can do it like the record demo: https://webrtc.github.io/samples/src/content/getusermedia/record/.
Record each stream to blobs and push them to your server over a WebSocket, then convert the blobs into a WebM file on the server (or just play them back in a video element).
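A minimal sketch of that approach (the WebSocket URL is a placeholder; the server is expected to concatenate the chunks into a WebM file):

```javascript
// Record a MediaStream (e.g. from getUserMedia) with MediaRecorder and
// push each chunk to the server over a WebSocket as it becomes available.
const ws = new WebSocket('wss://example.com/record'); // placeholder URL

function recordStream(stream) {
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      ws.send(event.data); // server appends chunks into one .webm file
    }
  };
  recorder.start(1000); // emit a blob roughly every second
  return recorder;
}
```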
Agora doesn't offer on-premise recording out of the box, but they do provide the code for you to launch your own on-premise recording using your own server. Agora has the code and instructions to deploy on GitHub: https://github.com/AgoraIO/Basic-Recording
The way it works: once you have set up the Agora Recording SDK, the client triggers the recording to start, via user interaction (a button tap) or some other event (i.e. peer-joined or stream-subscribed). This triggers the recording service to join the channel and record the streams. The service outputs the video file once recording has stopped.
You need a WebRTC media server.
WebRTC media servers make it possible to support more complex scenarios. They are servers that act as WebRTC clients but run on the server side: termination points for the media where we'd like to take action. Popular tasks done on WebRTC media servers include:
- Group calling
- Recording
- Broadcast and live streaming
- Gateway to other networks/protocols
- Server-side machine learning
- Cloud rendering (gaming or 3D)
The adventurous and strong-hearted will go and develop their own WebRTC media server. Most would pick a commercial service or an open source one. For the latter, check out these tips for choosing a WebRTC open source media server framework.
In many cases, the thing developers are looking for is support for group calling, something that almost always requires a media server. In that case, you need to decide if you'd go with the classic (and now somewhat old) MCU mixing model or with the more accepted and modern SFU routing model. You will also need to think a lot about the sizing of your WebRTC media server.
For recording WebRTC sessions, you can either do that on the client side or the server side. In both cases you'll be needing a server, but what that server is and how it works will be very different in each case.
If it is broadcasting you're after, then you need to think about the broadcast size of your WebRTC session.
Link: https://bloggeek.me/webrtc-server/

Streaming webcam and mic inputs through browser

Short version:
I need an in-browser solution to deliver the webcam and mic streams to a server.
Long version:
I'm trying to create a live streaming application. So far I've only managed to figure out this workflow:
Client creates stream (some transcoder is probably required here)
Client sends (publishes?) the stream to the server (basically hosts an RTMP/other stream that should be accessible by my server)
Server transcodes, transrates, etc. and publishes the stream to a CDN
Viewers watch published stream
Ideally, I'd like a browser-based solution that requires minimal setup from the client's end (a Flash plugin download might be acceptable) and streams the webcam and mic inputs to the server. I'm either unaware of the precise keywords or am looking for the wrong thing, but I can't find an apt solution.
Solutions that involve using ffmpeg or vlc to publish a stream aren't really what I'm looking for, since they require additional download and setup, and aren't restricted to just webcam and mic inputs. WebRTC probably won't serve the same quality but if all else fails, I think it can get the job done, at least for some browsers.
I'm using Ubuntu for development and have just activated a trial license for Wowza streaming server and cloud.
Are ffmpeg/VLC et al. the only way out? Or is there something that can do the job in a single browser tab?
If you go the RTMP way, Adobe Flash Player supports H.264 encoding directly. Since you mentioned Wowza you can find an example and complete source code (including the fla) in the examples directory. There's also a demo here. There are many other open-source Flash capture plugins.
You can also use the aforementioned Flash recorder without Wowza. In this case you'll need an RTMP server, a notable example being the Nginx RTMP module, which supports recording (to FLV) and also offers callbacks that allow you to launch transcoding once the recording is done.
With WebRTC you can record (getUserMedia, MediaStreamRecorder) small media chunks and send them to the server, where they will get concatenated, or you can use the peer-to-peer communication features of WebRTC (RTCPeerConnection). For a detailed overview see my answer here.
In both cases you'll have issues with devices/browsers that don't support Flash or WebRTC, e.g. iPhones and Safari. Plus, getUserMedia doesn't capture the same format across all browsers: Firefox records audio and video in WebM, while Chrome records audio in WAV and video in WebM.
For mobile devices you'll probably have to write apps.
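One way to cope with those format differences at runtime is to probe what the current browser can actually record before picking a server-side pipeline. A sketch, assuming `stream` came from getUserMedia and that the browser supports MediaRecorder at all:

```javascript
// Probe container/codec support from most to least preferred.
const candidates = [
  'video/webm;codecs=vp9,opus',
  'video/webm;codecs=vp8,opus',
  'video/webm',
  'video/mp4',
];
const mimeType = candidates.find((t) => MediaRecorder.isTypeSupported(t));
if (!mimeType) {
  throw new Error('No supported recording format in this browser');
}
// `stream` is assumed to come from navigator.mediaDevices.getUserMedia().
const recorder = new MediaRecorder(stream, { mimeType });
```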

Is it possible to use WebRTC to stream video from Server to Client?

In WebRTC, I always see implementations of peer-to-peer and how to get video streaming from one client to another client. What about server-to-client?
Is it possible for WebRTC to stream a video file from server to client?
(I am thinking about using WebRTC Native C++ API to create my own server application to connect to the current implementation on chrome or firefox browser client application.)
OK, if it is possible, will it be faster than many current video streaming services?
Yes it is possible as the server can be one of the peers in that peer-to-peer session.
If you respect the protocols and send the video in SRTP packets using VP8, the browser will play it. To help you build these components on other applications or servers, you can check this page and this project as a guide.
Now, comparing WebRTC with other streaming services... it will depend on several variables like the codec or the protocol. But, for instance, comparing WebRTC (SRTP over UDP with the VP8 codec) against Flash (RTMP over TCP with the H264 codec), I would say that WebRTC wins.
The player will be Flash Player against the native <video> tag.
The transport would be TCP against UDP.
But of course, everything depends on what you are sending to the client.
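A sketch of the server-as-a-peer idea in Node.js, assuming the node-webrtc (`wrtc`) package (my choice for illustration, not something prescribed above) and a signaling channel of your own:

```javascript
// The server answers a browser's offer like any other WebRTC peer.
// `wrtc` (node-webrtc) is one of several native WebRTC bindings for Node.js;
// how the SDP travels between browser and server (signaling) is up to you.
const { RTCPeerConnection } = require('wrtc');

async function handleBrowserOffer(offerSdp, sendAnswerToBrowser) {
  const pc = new RTCPeerConnection();

  // A real server would attach a media source before answering, e.g. a
  // video track fed from a file or capture pipeline (node-webrtc exposes
  // a nonstandard RTCVideoSource for pushing raw frames).

  await pc.setRemoteDescription({ type: 'offer', sdp: offerSdp });
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  sendAnswerToBrowser(pc.localDescription.sdp); // deliver via your signaling
  return pc;
}
```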
I have written some apps and plugins using the native WebRTC API, and there isn't a lot of information out there yet, but here are a few useful resources to get you started:
QT Example: http://research.edm.uhasselt.be/jori/qtwebrtc
Native to Browser example: http://sourcey.com/webrtc-native-to-browser-video-streaming-example/
I started with the WebRTC Native C++ to Browser Video Streaming Example, but it no longer builds with the current WebRTC native code.
Then I made modifications, merging into a standalone process:
- management of the peerConnection (the peerconnection_server)
- access to Video4Linux capture (the peerconnection_client)
Removing the browser-to-native-client stream gives a simple way to reach a Video4Linux device from a WebRTC browser; it is available on GitHub as webrtc-streamer.
Live Demo
We are attempting to replace MJPEGs with WebRTC in our server software and have a prototype module for doing this, using a smattering of components tied to the OpenWebRTC project. It has been an absolute bear to do, and we have frequent ICE negotiation errors (even over a simple LAN), but it mostly works.
We also built a prototype with the Google WebRTC module, but it had many dependencies. I find it easier to work with the OpenWebRTC modules, because Google's stuff is so tightly tied to general peer-to-peer scenarios in the browser.
I compiled the following from scratch:
libnice 0.1.14
gstreamer-sctp-1.0
usrsctp
Then I had to interact with libnice a bit directly to gather candidates, and also write out the SDP files by hand. But the amount of control -- being able to control the source of the pipeline -- makes it worthwhile. The resulting pipeline (with two clients off one server source) is below:
Of course. I'm writing a program using the native WebRTC API which can join a conference as a peer and record both video and audio.
See: How to stream audio from browser to WebRTC native C++ application
You can definitely stream media from a native app.
I'm sure you can use dummy_audio_file to stream audio from a local file, and you can find a way to access the video streaming progress on your own.
Yes, it is. We have developed a load-test tool to publish and play streams for Ant Media Server. This tool can broadcast a media file. We used the same native WebRTC library that is used in Ant Media Server.
Sure it's possible; you can convert a live stream to WebRTC, for example:
OBS/FFmpeg ---RTMP---> Server ---WebRTC--> Chrome/Client
In this scenario you get ultra-low-latency live streaming, about 600~800ms, by playing the live stream over WebRTC. Please take a look at this demo.

RTSP streaming on iOS 6 using Wowza Server

Does anyone know about RTSP streaming using Wowza Server?
I want to play it in an MPMoviePlayerController on iOS 6, but it reports not enough buffer to keep playback going. My web service URLs work fine, since I have also checked them in a browser, but I can't find anything about RTSP streaming.
Does anyone have any tutorials about RTSP streaming on iPhone using Wowza Server?
RTSP streaming is not possible on iPhone, iPad and iPod touch. Please refer to the following link:
http://www.wowza.com/forums/content.php?62
MPMoviePlayer only supports HTTP Live Streaming. To have RTSP working on iPhone, you must implement the client on iOS yourself.
The live555 library implements this for you, but you must integrate it with your code. Decoding of the stream must also be implemented in software, by you or a 3rd-party library.
Wowza supports re-streaming the content as HLS. If this is your Wowza server, there are easy instructions on the www.wowza.com site; if it's not, perhaps they are already streaming HLS.
If you have to use RTSP, there are several good players available on the App Store, or you can build your own using live555 as mentioned, or one of our open or closed source frameworks.
https://github.com/mooncatventures-group/AVDemoPlay2L