Is it possible to add or remove an audio track to a stream after the stream is connected to the peerConnection?
I was able to do it before (in getUserMedia),
but it doesn't work after the stream is connected to the peerConnection.
Thank you
That is possible with the addTrack and removeTrack APIs. See https://webrtc.github.io/samples/src/content/peerconnection/upgrade/ for an example (and note that the argument to removeTrack is an RTCRtpSender, not a track).
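For reference, a minimal sketch of that flow, assuming pc is your existing RTCPeerConnection and stream is the MediaStream you are already sending:

let micSender; // keep the RTCRtpSender so the track can be removed later
navigator.mediaDevices.getUserMedia({ audio: true }).then((micStream) => {
  // addTrack returns the RTCRtpSender for the newly added track
  micSender = pc.addTrack(micStream.getAudioTracks()[0], stream);
});
// later, to stop sending that audio:
pc.removeTrack(micSender);

Both calls fire negotiationneeded on the connection, so you have to repeat the offer/answer exchange afterwards, which is what the linked sample does.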
I want to capture the screen (or a canvas) with RecordRTC and send it to a TokBox session as a stream, instead of a stream from the camera, microphone or screen share.
What I want is for subscribers to get a stream that is the recording of a canvas of the other peer (the publisher). Is there a way to do it?
Thanks
This blog post details how you can publish a custom MediaStream into an OpenTok Session: https://tokbox.com/blog/camera-filters-in-opentok-for-web/
It's not officially supported just yet, so you have to do a bit of a hack.
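The hack in that post boils down to roughly the following, assuming an OpenTok.js version recent enough to accept a MediaStreamTrack as videoSource and a session that is already connected ('publisher-div' is just a placeholder element id):

const canvasStream = document.querySelector('canvas').captureStream(30); // 30 fps, arbitrary
const publisher = OT.initPublisher('publisher-div', {
  videoSource: canvasStream.getVideoTracks()[0], // custom video track instead of the camera
  audioSource: null                              // or pass a microphone track if you need audio
});
session.publish(publisher);

Subscribers then receive the canvas content as an ordinary video stream, so nothing changes on their side.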
As the title says, is there any method I can use to play multiple videos continuously using a simple RTMP client (my RTMP server is Wowza)? Here is what I'm thinking:
When the first video is about to finish, open a new thread to send a new createStream command and a new play command, collect the incoming RTMP video packets into a buffer list, and when the first video finishes, play the RTMP video from the buffer list.
Is this approach viable, or are there other recommended methods to achieve it? Any suggestion will be appreciated, thanks!
While the functionality is not built in, Wowza does have a module called StreamPublisher that allows you to implement a server-side playlist. The source code for the module is available on GitHub. A scheduled playlist of multiple VOD files is streamed as a live stream, similar to a TV broadcast.
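Roughly, you drop the module into your Wowza application and describe the playlist in a SMIL file; the snippet below is only a sketch from memory of the module's documentation, so double-check the attribute names against the README on GitHub:

<smil>
    <head></head>
    <body>
        <stream name="Stream1"></stream>
        <playlist name="pl1" playOnStream="Stream1" repeat="false" scheduled="2016-01-01 00:00:00">
            <video src="mp4:video1.mp4" start="0" length="-1"/>
            <video src="mp4:video2.mp4" start="0" length="-1"/>
        </playlist>
    </body>
</smil>

The VOD files then play back-to-back on the live stream "Stream1", which your RTMP client plays like any other live stream, so no client-side buffering trick is needed.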
I am able to stream video with Kurento using WebRTC. I need to implement a multi-party audio conference using the MCU feature of Kurento Media Server, so that the audio coming from all clients is merged and the combined audio is sent back to all clients in an efficient manner using WebRTC.
If that works, each client needs only two connections (one to send and one to receive); otherwise we would need a peer connection from every client to every other client using WebRTC, and it is not feasible to establish peer connections to all clients.
Please suggest any sample code that implements an MCU for audio using Kurento Media Server, or guide me on how to implement it.
I'm afraid there's no ready-made code for that in Kurento. There is the Composite media element, but that is usually for audio AND video. It combines the streams into a single stream matrix of the required size; with more than about 9 streams you may run into performance problems. If you only want to process audio, it can surely handle many more than 9 streams. To use only audio, just connect the AUDIO stream to the HubPort.
EDIT 1
The code to generate the media elements needed, and the correct way to establish an audio-only connection, is as follows.
// pipeline is an existing MediaPipeline created from the KurentoClient
WebRtcEndpoint webrtc = new WebRtcEndpoint.Builder(pipeline).build();
Composite composite = new Composite.Builder(pipeline).build();
HubPort hubport = new HubPort.Builder(composite).build();
// Route only the audio coming from the endpoint into the composite hub
webrtc.connect(hubport, MediaType.AUDIO);
Please note that this connection is from the WebRtcEndpoint to the HubPort. If you need it to be bidirectional, you'll need to connect in the other direction as well:
hubport.connect(webrtc, MediaType.AUDIO);
I'm streaming both RTMP and HLS (the latter for iOS and Android). With RTMP, video.js displays the correct currentTime. As I see it, currentTime should be relative to when the stream started, not to when the client started viewing it. But with the HLS stream, currentTime is counted from when the client started the stream and not from when the stream started (same result using any player on Android or iOS, or VLC).
Using ffprobe on my HLS stream I get the correct values, i.e. when the stream started, which makes me believe that I should start looking at the client for a solution, but I'm far from sure.
So please point me in the right direction to solve this problem.
I.e. is it the nature of HLS that it doesn't give me the correct currentTime? But then it's weird that ffprobe gives me the correct answer.
I can't find anything in the video.js code about how to get any other timecode.
Is it my server that generates the wrong SMPTE timecode for HLS, while ffprobe uses another way to get the correct currentTime?
Anyway, I'm mostly curious; I have a workaround: by initially counting the fragments already used I get within a 5-second ballpark, i.e. good enough for my case.
Thanks for any help or input.
BR David
RTMP and HLS work in different ways.
RTMP is always streaming, and when you subscribe you join the already-running stream, so its start time is when the stream started.
HLS works differently. When you subscribe to an HLS stream, it creates a stream for you. So the current time is counted from when that HLS stream was started, which means when you subscribed and the HLS stream was created.
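If you need the absolute start time on the client anyway, one rough approach (a sketch, not tied to video.js internals) is to fetch the media playlist yourself and estimate the elapsed time from EXT-X-MEDIA-SEQUENCE and EXT-X-TARGETDURATION, which is essentially the fragment-counting workaround mentioned in the question:

// Rough estimate of how long an HLS live stream has been running,
// assuming segment durations are close to EXT-X-TARGETDURATION.
async function estimateStreamElapsedSeconds(playlistUrl) {
  const text = await (await fetch(playlistUrl)).text();
  const target = parseFloat((text.match(/#EXT-X-TARGETDURATION:(\d+(\.\d+)?)/) || [])[1] || '0');
  const mediaSeq = parseInt((text.match(/#EXT-X-MEDIA-SEQUENCE:(\d+)/) || [])[1] || '0', 10);
  const segmentCount = (text.match(/#EXTINF:/g) || []).length;
  // Segments already rotated out of the window (mediaSeq) plus segments still listed,
  // each roughly targetDuration seconds long.
  return (mediaSeq + segmentCount) * target;
}

Subtracting that value from the current wall-clock time gives a ballpark figure for when the stream started.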
I am just concerned with audio, and I'd like to send two audio streams to be synced on the receiver's side.
Now, for audio and video, the local stream can be obtained via
getUserMedia({'audio':true, 'video':constraints}, onUserMediaSuccess,
onUserMediaError);
Assuming I have two microphones, how do I get access to the two audio streams, and further, synchronize them at the receiver?
I think it will be possible soon.
var audio1 = new AudioStreamTrack(constraints1);
var audio2 = new AudioStreamTrack(constraints2);
var stream = new MediaStream([audio1, audio2]);
navigator.getUserMedia(stream, successCb, errorCb);
You'll be able to grab both audio/video tracks and combine them into a single stream.
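In the meantime, a sketch of what already works with today's standard APIs, assuming pc is an existing RTCPeerConnection and the browser exposes both microphones as separate audio inputs:

// Grab the first two microphones and send both tracks over one connection
async function sendTwoMicrophones(pc) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const micIds = devices.filter(d => d.kind === 'audioinput').map(d => d.deviceId);
  const [mic1, mic2] = await Promise.all(micIds.slice(0, 2).map(id =>
    navigator.mediaDevices.getUserMedia({ audio: { deviceId: { exact: id } } })
  ));
  // Combine both audio tracks into a single MediaStream
  const combined = new MediaStream([...mic1.getAudioTracks(), ...mic2.getAudioTracks()]);
  combined.getTracks().forEach(track => pc.addTrack(track, combined));
  return combined;
}

Tracks that share a MediaStream end up in the same synchronization context, so the receiving browser will try to align them, although sample-exact sync is not guaranteed.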