Does the concurrent stream limitation of the Agora RTC SDK depend on video quality, or does it also apply to audio-only? - agora.io

In the Agora FAQ I read that one channel supports 17 concurrent streams. Does this limit also apply when the channel contains only audio or low-quality video streams?
As far as I can see, in Agora SDK 4.x streams are replaced by tracks. Does that change anything about the limitation?
Thanks in advance,
László

The shift from streams to tracks in the Agora Web SDK 4.x is to give developers greater control over the individual tracks (audio, video) instead of the high-level stream object (which contains the tracks). This does not have any effect on the number of users within the channel.
To scale beyond the 17-user limitation, there are a few different approaches. The recommended way on the web is to use multiple client objects to subscribe to multiple channels. To ensure there is no duplication of video streams, limit each user to broadcasting into a single channel. When you initialize each client, make sure you add the event listeners before joining the channel.
Something to note: when you have more than 17 streams/videos playing at one time it can be very CPU/GPU intensive, so you might want to use dual-stream mode to publish a high-quality and a lower-quality stream.
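To make that concrete, here is a minimal sketch using the Agora Web SDK 4.x (the agora-rtc-sdk-ng package): one client broadcasts into a single channel with dual-stream mode enabled, while additional clients join other channels as audience and subscribe to the low-quality stream. The app ID, channel names and player element IDs are placeholders, and token and error handling are omitted, so treat it as an outline rather than a drop-in implementation.

import AgoraRTC, { IAgoraRTCClient } from "agora-rtc-sdk-ng";

const APP_ID = "<your-app-id>"; // placeholder

// One client per channel; event listeners are attached *before* join().
async function createViewerClient(channel: string): Promise<IAgoraRTCClient> {
  const client = AgoraRTC.createClient({ mode: "live", codec: "vp8" });
  await client.setClientRole("audience");

  client.on("user-published", async (user, mediaType) => {
    await client.subscribe(user, mediaType);
    if (mediaType === "video") {
      // Request the low-quality stream to keep CPU/GPU load down
      // (only has an effect if the publisher enabled dual-stream mode).
      await client.setRemoteVideoStreamType(user.uid, 1); // 1 = low stream
      user.videoTrack?.play(`player-${user.uid}`);        // hypothetical container id
    } else {
      user.audioTrack?.play();
    }
  });

  await client.join(APP_ID, channel, null, null);
  return client;
}

async function start() {
  // Broadcast into one channel only, publishing a high- and a low-quality stream.
  const broadcaster = AgoraRTC.createClient({ mode: "live", codec: "vp8" });
  await broadcaster.setClientRole("host");
  await broadcaster.enableDualStream();
  await broadcaster.join(APP_ID, "room-1", null, null);
  const [mic, cam] = await Promise.all([
    AgoraRTC.createMicrophoneAudioTrack(),
    AgoraRTC.createCameraVideoTrack(),
  ]);
  await broadcaster.publish([mic, cam]);

  // Subscribe to the other channels with separate client objects.
  await Promise.all(["room-2", "room-3"].map(createViewerClient));
}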

Related

Playing a live stream from a media server in an Android application

My setup is as follows:
OBS Studio to create the video feed
Ant Media Server to distribute the stream
Now I'm building an app that will display this stream. I'm currently using ExoPlayer, but I'm having a hard time getting it to work for both RTMP and HLS. I read somewhere that I could embed a web player in my app; would that be easier? Here is my code for ExoPlayer:
// RTMP URL served by Ant Media Server
String url = "rtmp://192.168.1.244/WebRTCApp/379358104902020985845622";

// Track selector with adaptive selection (ExoPlayer 2.x API);
// bandwidthMeter is currently unused with the no-arg AdaptiveTrackSelection.Factory
BandwidthMeter bandwidthMeter = new DefaultBandwidthMeter();
TrackSelection.Factory videoTrackSelectionFactory =
        new AdaptiveTrackSelection.Factory();
TrackSelector trackSelector =
        new DefaultTrackSelector(videoTrackSelectionFactory);

// Create the player and bind it to the view
SimpleExoPlayer player = ExoPlayerFactory.newSimpleInstance(this, trackSelector);
PlayerView playerView = findViewById(R.id.simple_player);
playerView.setPlayer(player);

// Create the RTMP media source (requires the ExoPlayer RTMP extension)
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory();
MediaSource videoSource = new ExtractorMediaSource.Factory(rtmpDataSourceFactory)
        .createMediaSource(Uri.parse(url));

// Start playback
player.prepare(videoSource);
player.setPlayWhenReady(true);
Any help on this would be much appreciated.
Most online video streaming uses Adaptive Bit Rate (ABR) streaming protocols to deliver the video, mainly HLS and DASH these days.
Most media players, like ExoPlayer, support these protocols well, although they are complex and evolving protocols, so there are always edge cases.
Many video conferencing applications use WebRTC, which is a real-time-optimised protocol - the usual approach is to use a WebRTC client for this type of stream.
The difference between the two approaches from a streaming latency point of view, at a very high level, is:
ABR protocols prioritise quality and avoiding interruptions, and buffer enough of the video to try to guarantee uninterrupted playback. They are usually aimed at movie and live video streaming services. Even for low-latency implementations the latency is measured in multiple seconds or more.
WebRTC prioritises latency and sacrifices quality if necessary. It is typically aimed at real-time-sensitive applications like video conferencing, where it is important not to fall behind the discussion even if it means a temporary video glitch or a brief interruption in video. Latency is usually sub-second.
Ant Media Server comes from the WebRTC side, although recent versions support HLS/CMAF and Low Latency DASH (these are still generally higher latency than WebRTC, as noted above).
For your service, if you are able to use a DASH or HLS stream you may find it an easier path with ExoPlayer. If you look at the demo app, for example, you will see DASH and HLS streams but no RTMP ones. You can easily extend or modify the demo app to play your own HLS or DASH stream, and this is often an easy way to start - look at the sample material in assets/media.exolist.json and add your own URL:
https://github.com/google/ExoPlayer/blob/aeb306a164911aa1491b46c2db4da0d329c83c65/docs/demo-application.md
However, ExoPlayer should also support RTMP via an extension if this is your preferred route - there is a specific extension that allows this:
https://github.com/google/ExoPlayer/blob/0ba317b1337eaa789f05dd6c5241246478a3d1e5/extensions/rtmp/README.md
In theory you simply need to add this dependency to your application - as the extension's README puts it: "if your application is using DefaultDataSource or DefaultDataSourceFactory, adding support for RTMP streams is as simple as adding a dependency to the RTMP extension".
It would be worth checking the issues list in this repository for any recent issues and/or workarounds.

How do audio and video in a WebRTC peer connection stay in sync?

How do audio and video in a WebRTC peer connection stay in sync? I am using an API which publishes audio and video (I assume as one peer connection) to a media server. The audio can occasionally go out of sync by up to 200 ms. I am attributing this to the possibility that the audio and video are separate streams, which would account for why the sync can be out.
In addition to Sean's answer:
The WebRTC player in browsers has a very low tolerance for timestamp differences between arriving audio and video samples. Your audio and video streams must be aligned (interleaved) precisely, i.e. the timestamp of the last audio sample received from the network should be within roughly ±200 ms of the timestamp of the last video frame received from the network. Otherwise the WebRTC player will stop using NTP timestamps and will play the streams individually. This is because the WebRTC player tries to keep latency to a minimum; I'm not sure it's a good decision from the WebRTC team. If your bandwidth is not sufficient, or if the live encoder produces streams that are not timestamp-aligned, then you will get out-of-sync playback. In my opinion, the WebRTC player could have a setting for whether to use that tolerance value or always play in sync, using NTP timestamps, at the expense of latency.
RTP/RTCP (which WebRTC uses) traditionally uses the RTCP Sender Report. That allows each SSRC stream to be synced on an NTP timestamp. Browsers do use them today, so things should work.
Are you doing any protocol bridging or anything that could be RTP only? What Media Server are you using?
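If you want to sanity-check the sync on the receiving side, you can poll getStats() on the peer connection and compare the audio and video inbound stats. A rough sketch is below; the estimatedPlayoutTimestamp field is in the WebRTC stats spec but not implemented in every browser, so treat this as a debugging aid under that assumption, not a precise measurement.

// Rough diagnostic: estimate audio/video playout skew from inbound-rtp stats.
async function estimateAvSkewMs(pc: RTCPeerConnection): Promise<number | null> {
  const stats = await pc.getStats();
  let audioPlayout: number | undefined;
  let videoPlayout: number | undefined;

  stats.forEach((report: any) => {
    if (report.type === "inbound-rtp" && report.estimatedPlayoutTimestamp !== undefined) {
      if (report.kind === "audio") audioPlayout = report.estimatedPlayoutTimestamp;
      if (report.kind === "video") videoPlayout = report.estimatedPlayoutTimestamp;
    }
  });

  if (audioPlayout === undefined || videoPlayout === undefined) return null;
  return audioPlayout - videoPlayout; // positive means audio is playing ahead of video
}

// Example usage while debugging a lip-sync problem:
// setInterval(async () => console.log("A/V skew (ms):", await estimateAvSkewMs(pc)), 1000);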

Stream HTML5 camera output

Does anyone know how to stream HTML5 camera output to other users?
If that's possible, should I use sockets and stream images to the users, or some other technology?
Is there a video tutorial where I can take a look at how it's done?
Many thanks.
The two most common approaches now are most likely:
Stream from the source to a server, and allow users to connect to the server to stream to their devices, typically using some form of Adaptive Bit Rate (ABR) streaming protocol - ABR basically creates multiple bit rate versions of your content and chunks them, so the client can choose the next chunk at the best bit rate for the device and current network conditions.
Stream peer to peer, or via a conferencing hub, using WebRTC.
In general, the latter is more focused towards real time: any delay should be below the threshold that would interfere with audio and video conferences, usually less than 200 ms for audio, for example. To achieve this it may have to sacrifice quality sometimes, especially video quality.
There are some good WebRTC samples available online (here at the time of writing): https://webrtc.github.io/samples/
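For the WebRTC route, the core of it is just getUserMedia plus an RTCPeerConnection; the sketch below shows the broadcasting side. The sendToPeer callback stands in for whatever signalling channel (WebSocket, HTTP, a conferencing service) you use to exchange the offer/answer and ICE candidates - that part is application-specific and not shown here.

// Capture the camera and offer it to a remote peer (signalling is stubbed out).
async function startCameraBroadcast(sendToPeer: (msg: unknown) => void) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Attach the captured camera/microphone tracks to the connection.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Forward ICE candidates to the remote peer via your signalling channel.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToPeer({ type: "candidate", candidate: event.candidate });
  };

  // Create and send the SDP offer; the viewer answers the same way.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ type: "offer", sdp: pc.localDescription });

  return pc;
}

The server-based ABR route is usually handled by pushing the same captured stream to a media server and letting the server repackage it as HLS/DASH, rather than anything you hand-code in the browser.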

WebRTC video and photo at the same time

I'm working on an application that transmits video in low quality using WebRTC. Periodically I want to send a single high-resolution frame from the same camera.
When I try to acquire another stream using getUserMedia I get the same low-quality one, and when I try to pass constraints to force a higher resolution the operation fails with an OverconstrainedError (even though it works fine when there is no other stream).
Is it even possible to have several streams with different parameters from the same device at the same time? Or is it possible to acquire a high-resolution image without requesting a new stream?
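For reference, this is roughly the kind of second capture attempt described above: asking the same camera for a higher resolution while a low-quality stream is already open. The device id and resolutions are placeholders; exact constraints like these are the ones that can fail with an OverconstrainedError when the device cannot be reconfigured.

// Sketch of the failing high-resolution request described above (placeholders throughout).
async function tryHighResGrab(deviceId: string): Promise<void> {
  const highRes = await navigator.mediaDevices.getUserMedia({
    video: {
      deviceId: { exact: deviceId },
      width: { exact: 1920 },  // "exact" constraints can raise OverconstrainedError
      height: { exact: 1080 }, // while another stream from the same device is open
    },
  });
  // ...draw a single frame to a canvas for the snapshot, then release the camera...
  highRes.getTracks().forEach((t) => t.stop());
}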

Symbian/S60 audio playback rate

I would like to control the playback rate of a song while it is playing. Basically I want to make it play a little faster or slower when I tell it to do so.
Also, is it possible to play back two different tracks at the same time? Imagine a recording with the instruments in one track and the vocals in a different track. One of these tracks should then be able to change its playback rate in "realtime".
Is this possible on Symbian/S60?
It's possible, but you would have to:
Convert the audio data into PCM, if it is not already in this format
Process this PCM stream in the application, in order to change its playback rate
Render the audio via CMdaAudioOutputStream or CMMFDevSound (or QAudioOutput, if you are using Qt)
In other words, the platform itself does not provide any APIs for changing the audio playback rate - your application would need to process the audio stream directly.
As for playing multiple tracks together, depending on the device, the audio subsystem may let you play two or more streams simultaneously using either of the above APIs. The problem you may have, however, is that they are unlikely to be synchronised. Your app would probably therefore have to mix all of the individual tracks into one stream before rendering.
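The rendering APIs named above are Symbian C++, but the processing step itself is plain sample arithmetic. Purely to illustrate the idea (shown here in TypeScript for brevity, not as Symbian code): a naive linear-interpolation resampler for the rate change, and a clipping mixer for combining two tracks into the single stream you would render. Note that simple resampling also shifts the pitch; keeping pitch constant while changing speed needs a proper time-stretching algorithm.

// Change the playback rate of mono 16-bit PCM by naive linear-interpolation resampling.
// rate > 1 plays faster (fewer output samples at the same sample rate).
function changeRate(pcm: Int16Array, rate: number): Int16Array {
  const out = new Int16Array(Math.floor(pcm.length / rate));
  for (let i = 0; i < out.length; i++) {
    const pos = i * rate;
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, pcm.length - 1);
    const frac = pos - i0;
    out[i] = Math.round(pcm[i0] * (1 - frac) + pcm[i1] * frac);
  }
  return out;
}

// Mix two PCM tracks into a single stream, clipping to the 16-bit range,
// so one output stream can be rendered (and the tracks stay in sync by construction).
function mixTracks(a: Int16Array, b: Int16Array): Int16Array {
  const out = new Int16Array(Math.max(a.length, b.length));
  for (let i = 0; i < out.length; i++) {
    const sum = (a[i] ?? 0) + (b[i] ?? 0);
    out[i] = Math.max(-32768, Math.min(32767, sum));
  }
  return out;
}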