How do I connect a WebRTC content provider to the Janus-Gateway streaming plug-in?

In the GStreamer streaming test example
(https://janus.conf.meetecho.com/streamingtest.html)
a GStreamer pipeline sends to udpsink host=127.0.0.1 port=5004, which is then broadcast via WebRTC by Janus.
How is it possible to send a webcam stream from another user, through his browser's getUserMedia(), to Janus-Gateway for broadcasting?
Do I have to configure a pipeline for that as well, and what would it look like?
I have installed Janus and I am able to run all the demos.

There is an rtp_forward request you can make against the VideoRoom plugin, which forwards the RTP from a publisher in that room to the Streaming plug-in or to any other IP.
It was added here:
https://github.com/meetecho/janus-gateway/pull/255
Note that the request is rtp_forward, not rtp_listen, and you also have to pass in the secret.
(This solution needs a browser, but I marked it as the right solution since it works for me this way, and scaling to more users is also possible like this.)
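For reference, a rough sketch of what that flow could look like with janus.js is below. The room number, ports, and secret are placeholders, and the exact rtp_forward field names can differ between Janus versions, so check the VideoRoom plugin documentation for your build:

var videoroom; // plugin handle from janus.attach({ plugin: "janus.plugin.videoroom", ... })

// 1) Publish the webcam into a VideoRoom room.
function publishWebcam() {
  videoroom.send({ message: { request: "join", ptype: "publisher", room: 1234 } });
  videoroom.createOffer({
    media: { audioSend: true, videoSend: true, audioRecv: false, videoRecv: false },
    success: function (jsep) {
      videoroom.send({ message: { request: "publish", audio: true, video: true }, jsep: jsep });
    }
  });
}

// 2) Ask the plugin to forward that publisher's RTP to an RTP mountpoint of the
//    Streaming plug-in (e.g. the ports used by the GStreamer demo mountpoint).
function forwardToStreaming(publisherId) {
  videoroom.send({
    message: {
      request: "rtp_forward",
      room: 1234,
      publisher_id: publisherId,
      host: "127.0.0.1",
      audio_port: 5002,
      video_port: 5004,
      secret: "adminpwd" // the secret configured for the room
    }
  });
}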

Related

Using WebRTC for audio broadcast

I'm trying to stream a microphone/audio feed to multiple clients.
The broadcaster is a screenless Raspberry Pi, so I can't open a web browser and click on "share microphone".
The clients will be using their smartphones to listen.
The latency must be very low.
I did not find any WebRTC demo that worked. All of them are either p2p, or the scalable broadcasting demo from Muaz Khan only works for the initiator, not the clients.
I came across Janus (although I didn't really understand what exactly it does), but I don't get how to install and configure it.
Is there any way to easily share the microphone's output via WebRTC? Something like Apache hosting a simple website that serves the microphone audio?
Thanks for all the ideas on how to solve it!
Is there any way to easily share the microphone's output via WebRTC?
No. There's nothing easy or simple about WebRTC.
The broadcaster is a screenless Raspberry Pi, so I can't open a web browser and click on "share microphone"
This is the simplest option: running a browser. Are you sure you need to manually allow it to access the audio device?
In the past, I've used a flag on Chromium to get around this problem. I don't remember exactly what that flag was, but looking at the list, it might have been...
--use-fake-ui-for-media-stream
You might also be able to use --enable-kiosk-mode.
At a minimum, if you were to open the browser interactively and enable access, that page would get automatic access in the future.
I did not find any WebRTC Demo that worked. All of them are either p2p
WebRTC is peer-to-peer, but remember that the "server" can be one of those "peers".
Finally, you can look into using GStreamer, but don't expect anything quick and easy. https://github.com/centricular/gstwebrtc-demos
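If you do go the headless-browser route, here is a minimal sketch of what the page loaded on the Pi's browser could do: grab the microphone and attach it to a peer connection. The signalling (Janus, your own WebSocket, etc.) is omitted, and the STUN server is just an example:

// Grab the mic and offer it over a PeerConnection; exchange of the
// offer/answer with your SFU or gateway is up to your setup.
async function startBroadcast() {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { echoCancellation: false, noiseSuppression: false }, // keep processing light for low latency
    video: false
  });

  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });
  stream.getAudioTracks().forEach(track => pc.addTrack(track, stream));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // send pc.localDescription to the server here and apply its answer
  // with pc.setRemoteDescription(...)
}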

How to turn a webcam into an RTSP stream

I have a product that can analyze video after being given an RTSP URL.
I would like to use a webcam as the source and feed my product the webcam's RTSP stream.
How can I do that?
It will depend on the webcam you are using - most support RTSP, but many do not publish the interface to access the stream, as they are designed to be used with the webcam's own companion app.
There are some web resources which list the RTSP URLs for common webcams. You may find it hard to find an exact match as new versions of webcams roll out, but they should give you a feel for how to access a vendor's camera if you have a specific webcam you are testing against. Some examples (at the time of writing):
https://www.getscw.com/decoding/rtsp
https://soleratec.com/get-support/rtsp/
If you can't find the info for the camera you are using, and you have the companion app, you can also use a network sniffer tool like Wireshark (https://www.wireshark.org) and search the traffic for the 'rtsp://' pattern.
If you just need to test your app and have access to a Raspberry Pi with a camera module, you can also use that to generate an RTSP stream. There are several approaches for this, but one I have found reliable is the v4l2rtspserver server:
https://github.com/mpromonet/v4l2rtspserver
There are specific instructions for setting it up on the Pi (https://github.com/mpromonet/v4l2rtspserver/wiki/Setup-on-Pi), and you can verify it is working using VLC on a laptop, etc., before testing it in your specific application.
There are also a small number of test RTSP URLs available on the web - the most reliable seems to be the one provided by Wowza at this link (again, valid at the time of writing):
https://www.wowza.com/html/mobile.html

OpenTok and File Sharing

I am building a video chat website using OpenTok. I have the video and text chat working (still working on the screen sharing), but I was wondering if anyone could point me in the right direction regarding file sharing?
I would like both parties to be able to send files to each other, but not really sure how to go about it. Would it be possible to use Peer5?
There are several ways to get the peers to send files to each other.
A first way is to upload the file to your server or to some cloud storage service. Then share the link to the other peers via OpenTok's Signaling API (which is, presumably, an abstraction over WebRTC's DataChannels). This solution is simple, but not peer-to-peer.
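As a rough illustration of this first option (the "file-link" event type and the URL are made up for the example, and session is the object you get from OT.initSession()), sharing a link over the Signaling API could look like this:

// Sender: tell the other participants where the uploaded file lives.
session.signal({ type: "file-link", data: "https://example.com/uploads/report.pdf" }, function (error) {
  if (error) console.error("signal failed:", error.message);
});

// Receiver: listen for that signal type and grab the URL.
session.on("signal:file-link", function (event) {
  console.log("peer shared a file:", event.data);
});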
Another solution is to, again, upload the file to a server and share the link to the other peers, but this time have the peers download the file via Peer5's Downloader. The Peer5 Downloader uses a coordination server to figure out which peers are available to help with the download. If no peers are available, the download will fall back to the HTTP server. This of course only makes sense if the file is being shared with several peers at the same time. In 1-to-1 communications it is pointless.
The previous solution is P2P only in the download part; the user still has to upload the file to a server. Another way, which would be P2P all the way, is to cut the file into chunks and send them over the OpenTok Signaling API. It is a complicated process, but there are several tutorials about it. The tutorials use the WebRTC DataChannel, but it is reasonable to assume that they could be adapted to the Signaling API (a rough sketch follows the links):
https://bloggeek.me/send-file-webrtc-data-api/
http://www.html5rocks.com/en/tutorials/webrtc/datachannels/#build-a-file-sharing-application
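Here is a sketch of the chunk-and-signal idea adapted to the Signaling API. Signal payloads are small strings, so the file is base64-encoded and sliced; the chunk size, the "file-chunk" type, and the payload limit assumed here are illustrative, not taken from the OpenTok docs:

var CHUNK_SIZE = 6000; // stay under the signal payload size limit

function sendFile(session, file) {
  var reader = new FileReader();
  reader.onload = function () {
    var base64 = reader.result.split(",")[1]; // strip the "data:...;base64," prefix
    var total = Math.ceil(base64.length / CHUNK_SIZE);
    for (let i = 0; i < total; i++) {
      session.signal({
        type: "file-chunk",
        data: JSON.stringify({
          name: file.name,
          index: i,
          total: total,
          payload: base64.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE)
        })
      }, function (error) {
        if (error) console.error("chunk " + i + " failed:", error.message);
      });
    }
  };
  reader.readAsDataURL(file);
}

// Receiver: collect chunks and rebuild the file once all have arrived.
var incoming = {};
session.on("signal:file-chunk", function (event) {
  var chunk = JSON.parse(event.data);
  var entry = incoming[chunk.name] || (incoming[chunk.name] = { parts: [], received: 0 });
  entry.parts[chunk.index] = chunk.payload;
  if (++entry.received === chunk.total) {
    var bytes = atob(entry.parts.join(""));
    console.log("received " + chunk.name + " (" + bytes.length + " bytes)");
    // turn the binary string back into a Blob / download link from here
  }
});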
An interesting open-source example of a file sharing app using WebRTC is Sharefest, made by the folks at Peer5. You can use it for inspiration if you are inclined to build such a system.
As a side note, OpenTok seems to be considering building a starter kit with sample code showing how to integrate OpenTok with Peer5 in a file sharing application. I do not know how such an implementation will work, but I presume it is some variation of my second suggestion here. It could be worth keeping an eye on.

WebRTC -- can getUserMedia use local stream?

I want WebRTC to encode and play an H.264 (NAL) stream from a local file.
In the WebRTC tutorials, getUserMedia is used to get a local camera connected to the system; I don't know whether getUserMedia supports
capturing a local stream file such as an H.264 stream.
If it doesn't work that way, maybe I should modify the WebRTC source code (I'm studying it).
Here is the question: if I change the WebRTC code, how can I integrate the new code into the browser? By making it a plugin?
Firefox supports an extension to the <video> element that you can use to do this.
First, set the source of a video element:
v1.src = "file:///...";
Then you can call the (currently prefixed) mozCaptureStream or mozCaptureStreamUntilEnded function to get a MediaStream.
stream = v1.mozCaptureStream();
The proposed specification.
Note however that you need to ensure that the file is same origin with respect to the page. The same origin rules for file:/// are probably going to cause issues. Otherwise your MediaStream isn't going to be accessible to you. One way to ensure that is not to set the location directly, but to load the file using an <input type="file"> element.
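For example, a small sketch of that approach (the element IDs and the pc peer connection are placeholders following this answer's snippets, and captureStream may still be prefixed depending on the browser):

// Let the user pick the file, play it in the <video> element, and capture
// a MediaStream from it instead of setting a file:/// URL directly.
const fileInput = document.querySelector('input[type="file"]');
const v1 = document.getElementById("v1");

fileInput.addEventListener("change", function () {
  const file = fileInput.files[0];
  v1.src = URL.createObjectURL(file); // avoids the file:/// same-origin problem
  v1.play();

  const stream = v1.mozCaptureStream ? v1.mozCaptureStream() : v1.captureStream();
  // hand the stream to your RTCPeerConnection, e.g.:
  // stream.getTracks().forEach(function (track) { pc.addTrack(track, stream); });
});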
As noted in other answers, Firefox currently only supports the baseline profile of H.264.
First, you are right: getUserMedia will not work for you. However, there are a couple of options.
Hack a stream together using RTCDataChannel, breaking up the media stream, delivering each packet, and then handling it on the client side.
Take a look at this demo for prerecorded media streams. I do not believe that H.264 is addressed, but it could help you on your way (probably for Firefox only).
Use some sort of native WebRTC breaker/endpoint to stream the file. I know specifically that others (including myself) have streamed H.264 to Firefox through the Janus-Gateway.
A couple of asides:
Firefox only supports the baseline profile when streaming H.264 over a WebRTC PeerConnection.
Chrome does not support H.264 for WebRTC at all.
Are you trying to have getUserMedia return an H.264-encoded stream?
In that case, today it is only possible with Firefox, under a specific environment (the Cisco H.264 plugin installed), and only for the baseline profile.
Chrome promised in November to add this capability, but there is no timeline that I know of; expect Q2 2015 at the earliest.
Using our (Temasys) commercial plugin you will soon be able to do that in IE and Safari.
Those are the only options on the client side I can think of. On the server side you can use whatever you want to transcode, including Janus, Kurento, PowerMedia, Licode/Lynckia, ...
Note: using other means like the DataChannel or WebSockets is fine for transferring files, but it would greatly reduce the user experience, as you would not have the recovery (and security) mechanisms included in SRTP and DTLS, and you would also miss the media-specific enhancements that are in WebRTC, like jitter buffers, NetEQ, etc.

Creating a WebRTC receiver

I am new to WebRTC and trying to figure out how to create a program outside a browser which receives a WebRTC audio stream and outputs it on speakers.
Are there any WebRTC libraries for Java or C#?
That receiver will be running on a Linux machine.
--
I've been thinking about using getUserMedia() to access the microphone. But then:
In what format will such a stream be transmitted?
Let's say I use WebRTC2SIP and build a Java endpoint using JSIP,
or I just use a socket and send the stream over HTTP.
What audio format will I get on the receiver side? So far I have read WebRTC does compress the stream somehow.
I guess there are two ways for you:
Build the whole WebRTC voice engine for Android/iOS or Mac etc., and just use the API provided by the VoE (voice engine).
Build standalone NS/VAD/AECM/AGC modules and use them in your project. For example, if you build a standalone NS (noise suppression) module for Android, you use AudioRecord (Java layer, Android side) to record sound from the mic, do the noise-suppression processing on that data (JNI layer, WebRTC side), and finally play back the processed data using AudioTrack (Java layer, Android side).
EDIT:
For the second option, the format is raw PCM data.
Check out the working audio demo and code at demo.easyrtc.com.
The code is all open source and can be checked out at https://github.com/priologic/easyrtc
You can look for any known issues around EasyRTC at our forum at
https://groups.google.com/forum/#!forum/easyrtc
Also check out our main site at easyrtc.com.