Agora live streaming quality configuration

I've been working on a Flutter-based project where the scope is to build live streaming functionality. I have used the Flutter agora-rtc plugin for this, with
video dimensions of 1080x1920 in the configuration.
It works decently on a high-speed connection, but when the speed drops the video starts to stutter and lag badly.
Can you suggest dimensions and a video configuration that will work at 1 Mbps with decent quality?

The internet speed you mentioned is not the only factor here; you also need to consider the number of simultaneous streams that will be played. Generally, a live stream requires more bandwidth than a regular video call. You can refer to this guide, which can help you select a configuration based on your internet speed: https://docs.agora.io/en/All/faq/video_profile
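As a rough illustration of that bandwidth budgeting (the bitrates below are approximations of the values in Agora's profile table, not official figures, and the helper itself is hypothetical), something like this could pick a target profile for a given uplink:

interface VideoProfile {
  width: number;
  height: number;
  frameRate: number;
  bitrateKbps: number; // approximate live-broadcast bitrate for this profile
}

// Approximate values, roughly following Agora's published profile table.
const PROFILES: VideoProfile[] = [
  { width: 320, height: 240, frameRate: 15, bitrateKbps: 200 },
  { width: 640, height: 360, frameRate: 15, bitrateKbps: 400 },
  { width: 640, height: 480, frameRate: 15, bitrateKbps: 500 },
  { width: 1280, height: 720, frameRate: 15, bitrateKbps: 1130 },
  { width: 1920, height: 1080, frameRate: 15, bitrateKbps: 2080 },
];

// Pick the largest profile whose bitrate fits the uplink after reserving
// headroom for audio and dividing by the number of streams being published.
function pickProfile(uplinkKbps: number, streams = 1, audioKbps = 50): VideoProfile {
  const budget = uplinkKbps * 0.8 - audioKbps; // keep ~20% headroom for congestion
  const perStream = budget / streams;
  const fitting = PROFILES.filter(p => p.bitrateKbps <= perStream);
  return fitting.length > 0 ? fitting[fitting.length - 1] : PROFILES[0];
}

// A ~1 Mbps uplink with one outgoing stream lands around 640x360 at 15 fps.
console.log(pickProfile(1000));

Whatever the exact numbers, the point is that 1080x1920 needs far more than 1 Mbps, so dropping to something around 640x360 at 15 fps is the realistic option for that connection.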

Related

How to set priority to Audio over Video in WebRTC

Is there a way to give audio higher priority than video in WebRTC?
I am working on a Cordova-based mobile app that uses the Chrome WebView. My requirement is that audio quality should not degrade under low bandwidth. Currently I observe that video and audio both freeze, or sometimes the audio is not played on the receiver side, when running on low bandwidth.
I found this question, but no solution was provided there.
I also found a "priority" parameter in RTCRtpSender, but I'm not sure how to use it.
I also found a long discussion about QoS priority here, but I still don't have a solution.
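For context, this is roughly how that priority parameter would be exercised through RTCRtpSender.setParameters, combined with capping the video bitrate so audio keeps headroom. This is a hedged sketch: browser support varies and some implementations ignore the priority field entirely.

// Cap the video sender's bitrate and mark the audio sender as high priority,
// so the congestion controller is nudged towards preserving audio. Best-effort only.
async function favourAudio(pc: RTCPeerConnection, maxVideoKbps = 300): Promise<void> {
  for (const sender of pc.getSenders()) {
    if (!sender.track) continue;
    const params = sender.getParameters();
    if (!params.encodings || params.encodings.length === 0) {
      params.encodings = [{}]; // some browsers return an empty list before negotiation
    }
    if (sender.track.kind === 'video') {
      params.encodings[0].maxBitrate = maxVideoKbps * 1000; // bits per second
    } else if (sender.track.kind === 'audio') {
      params.encodings[0].priority = 'high'; // ignored by browsers that don't implement it
    }
    await sender.setParameters(params);
  }
}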

Frame by frame decode using Media Source Extension

I've been digging through the Media Source Extension examples on the internet and haven't quite figured out a way to adapt them to my needs.
I'm looking to take a locally cached MP4/WebM video (with 100% keyframes and a 1:1 ratio of clusters/atoms to keyframes), decode/display its frames non-sequentially (i.e. frame 10, 400, 2, 100, etc.), and be able to render these non-sequential frames on demand at rates from 0-60 fps. The simple non-MSE approach using the currentTime property fails due to the latency between setting this property and getting a frame displayed.
I realize this is totally outside normal expectations for video playback, but my application requires this type of non-sequential high speed playback. Ideally I can do this with h264 for GPU acceleration but I realize there could be some platform specific GPU buffers to contend with, though it seems that a zero frame buffer should be possible (see here). I am hoping that MSE can accomplish this non-sequential high framerate low latency playback, but I know I'm asking for a lot.
Questions:
Will appendBuffer accept a single WebM cluster / MP4 Atom made up of a single keyframe, and also be able to decode at a high frequency (60fps)?
Do you think what I'm trying to do is possible in the browser?
Any help, insight, or code suggestions/examples would be much appreciated.
Thanks!
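For what it's worth, the flow I'm describing is essentially the standard MSE append loop, just with one keyframe per fragment. A minimal sketch (the codec string and fragment names are placeholders):

const video = document.querySelector('video')!;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  // Init segment first, then individual single-keyframe fragments in any order.
  await appendOne(sb, await fetchBytes('init.mp4'));
  for (const name of ['frame-0010.m4s', 'frame-0400.m4s', 'frame-0002.m4s']) {
    await appendOne(sb, await fetchBytes(name));
  }
});

function appendOne(sb: SourceBuffer, data: ArrayBuffer): Promise<void> {
  return new Promise<void>(resolve => {
    sb.addEventListener('updateend', () => resolve(), { once: true });
    sb.appendBuffer(data);
  });
}

async function fetchBytes(url: string): Promise<ArrayBuffer> {
  const res = await fetch(url);
  return res.arrayBuffer();
}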
Update 4/5/16
I was able to get MSE mostly working with single frame MP4 fragments in Firefox, Edge, and Chrome. However, Chrome seems to be running into the frame buffer issue linked above and I haven't found a way to pre-process a MP4 to invoke this "low delay" mode. Anyone have any clues if it's possible to create such a file with an existing tool like MP4Box?
Firefox and Edge decode/display the individual frames immediately with very little latency, but of course something breaks once I load this video into a Three.js WebGL project (no video output, no errors). I'm ignoring this for now as I'd much rather have things working on Chrome as I'll be targeting Android as well.
I was able to get this working pretty well. The key was getting Chrome to enter its "low delay" mode by muxing a specially crafted MP4 file using modified mp4box sources. I added one line in movie_fragments.c so it read:
if (movie->moov->mvex->mehd && movie->moov->mvex->mehd->fragment_duration) {
    trex->track->Header->duration = 0;
    Media_SetDuration(trex->track);
    movie->moov->mvex->mehd->fragment_duration = 0; /* added line: zero the MEHD fragment duration */
}
Now every MP4 created will have the MEHD fragment duration set to 0 which causes Chrome to process it as a live stream.
I still have one remaining issue related to the timestampOffset property, which in combination with the FPS set in the media fragments controls the playback speed. Since I'm looking to control the FPS directly, I don't want any added delay from the MSE playback engine. I'll post a separate question here to address that.
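For illustration, the interaction looks roughly like this (a sketch that assumes each single-frame fragment's internal timestamps start at zero, so timestampOffset alone decides where the frame lands on the timeline):

// Place each single-frame fragment at an explicit presentation time, so the
// display rate is decoupled from whatever FPS was muxed into the fragments.
function appendFrameAt(sb: SourceBuffer, frag: ArrayBuffer, seconds: number): Promise<void> {
  return new Promise<void>(resolve => {
    sb.timestampOffset = seconds; // shifts the timestamps of the next appended data
    sb.addEventListener('updateend', () => resolve(), { once: true });
    sb.appendBuffer(frag);
  });
}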
Thanks,
Dustin

WebRTC - Peerconnection constraints

I've been working on a WebRTC videoconferencing app which is working great, taking into account the current state of WebRTC.
However, I have been exploring the possibility of adding constraints to the video and audio streams being sent over the PeerConnection,
more specifically to improve the performance of the video.
When videoconferencing on old (slow) laptops, we noticed that the quality of the image is really high but the frames per second are low; the stream is choppy.
As for audio quality, we would rate it 8.5 for Chrome but only 5.5 to 6 for Firefox.
I am not really interested in applying constraints to getUserMedia, since this stream is shown to the user as well and we don't want to change anything about this local output (unless there isn't another way).
I have found a lot of information in the W3C drafts on MediaStreams and WebRTC itself.
These define certain constraints like default fps, minfps, minwidth and minheight of the image. There is also a lot of information on webrtc.org, such as codec selection, etc.
But these settings can only be made "under the hood"; it seems they cannot be set from the RTCPeerConnection API level?
Certain examples on the net manipulate the SDP strings in the offer/answer part of the WebRTC handshake; is this the way to go?
TL;DR: How do I apply, and what is the best way to apply, constraints in WebRTC such as minfps, maxfps, default fps, minwidth, maxwidth, image DPI, video and audio bandwidth, audio sample rate (kHz), or any other way to improve the performance or quality of the stream(s)?
Big thanks in advance!
Right now, most of those can't be set in Firefox or Chrome. A few can be adjusted (with care/pain) in the SDP, but even if there's an SDP option defined for something it doesn't mean that the browsers look at it.
Both Mozilla and Google are looking to improve CPU overload detection and reaction (reduce frame size dynamically, etc). Right now, this effectively isn't being done. Upcoming releases of FF (FF24) will adapt to the capture resolution (as a maximum), but we don't have constraints for that yet, just about:config prefs (see media.*). That would allow you to set a different default resolution for Firefox.
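To give one concrete example of the SDP route (a fragile sketch, not a recommendation: it edits the SDP as plain text, and some browsers may reject a modified description), a b=AS line can be inserted under the video m-section to cap the video bandwidth:

// Insert a "b=AS" (application-specific maximum bandwidth, in kbps) line
// into the video m-section of an SDP string.
function capVideoBandwidth(sdp: string, kbps: number): string {
  const lines = sdp.split('\r\n');
  const out: string[] = [];
  let inVideo = false;
  for (const line of lines) {
    out.push(line);
    if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
    // Add the bandwidth line right after the c= line of the video section.
    if (inVideo && line.startsWith('c=')) out.push(`b=AS:${kbps}`);
  }
  return out.join('\r\n');
}

// Usage (illustrative): munge the local offer before applying it.
async function createCappedOffer(pc: RTCPeerConnection, kbps: number): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription({ type: 'offer', sdp: capVideoBandwidth(offer.sdp!, kbps) });
}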

Streaming IP Camera solutions that do not require a computer?

I want to embed a video stream into my web page, which is part of our own cloud based software. The video should be low-latency (like video conferencing), and it would be preferable, but not required, for it to include audio. I am comfortable serving streaming binary data from the server-side, and embedding it into the page using HTML5 video.
What I am not comfortable with is the ability to capture the video data to begin with. The client does not already have a solution in place, and is looking to us for assistance. The video would be routed through our server equipment, and not be an embedded piece that connects directly to the video source.
Using a USB or built-in camera on the computer is a known quantity for us. What I would like more information about is stand-alone cameras.
Some models of cameras have their own API documentation (example). It would seem from what I am reading that a manufacturer would typically have their own API which they repeat on many or all of their models, and that each manufacturer would be different in their API. However, I have only done surface reading and hope to gain more knowledge from someone who has already researched this, or perhaps even had first hand experience.
Do stand-alone cameras generally include an API? (Wouldn't this be a common requirement, so that security software can work with multiple lines of cameras?) Or if not an API, how is the data retrieved from the on-board web server? Is it usually Flash-based? Perhaps there is a reusable video stream I could capture from there? Or is the stream formatting usually diverse?
What would I run into when trying to get the server-side to capture that data?
How does latency on a stand-alone device compare with a USB camera solution?
Do you have tips on picking out a stand-alone camera that would be a good fit for streaming through a server?
I am experienced at using JavaScript (both HTML5 and Node.JS), Perl and Java.
Each camera manufacturer has their own take on the access points; generally you should be able to ask for a snapshot or an MJPEG stream, but it can vary. Take a look at this entry on CodeProject; it tackles two common methodologies. Here's another one targeted at Foscam specifically.
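As a small Node.js illustration of the snapshot style of access (the URL, query parameters, and credentials below are made up; every manufacturer's path differs, which is exactly the fragmentation mentioned above):

import * as http from 'http';
import * as fs from 'fs';

// Pull a single JPEG snapshot from a camera's HTTP endpoint and save it to disk.
function grabSnapshot(url: string, outFile: string): Promise<void> {
  return new Promise((resolve, reject) => {
    http.get(url, res => {
      if (res.statusCode !== 200) {
        reject(new Error(`camera returned HTTP ${res.statusCode}`));
        return;
      }
      const file = fs.createWriteStream(outFile);
      res.pipe(file);
      file.on('finish', () => resolve());
      file.on('error', reject);
    }).on('error', reject);
  });
}

// Example with a made-up snapshot path; check the camera's own API documentation.
grabSnapshot('http://192.168.1.20/snapshot.cgi?user=admin&pwd=secret', 'frame.jpg')
  .catch(err => console.error(err));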
Get a good NAS; I suggest Synology. Check out their long list of supported IP web cams. You can connect them with a hub, a router, or whatever you wish. It's not a "computer" as in a "tower", but it does many computer jobs, and it can stay on while your computer is off or away and do things like video feeds, torrents, backups, etc.
I'm not an expert on all the features, so I don't know how to get it to broadcast without recording, but even if it does record, at least it's separate. Synology is a popular brand and there are a lot of authorized and unauthorized plugins for it. Check them out and see if one suits you.

Windows 8 low latency video streaming

My game is based on Flash and uses RTMP to deliver live video to players. Video should be streamed from single location to many clients, not between clients.
It's an essential requirement that the end-to-end video stream have very low latency, less than 0.5 s.
With many tweaks on the server and client, I was able to achieve approximately 0.2 s latency with RTMP and Adobe Live Media Encoder over the loopback network interface.
Now the problem is to port the project to a Windows 8 Store app. Natively, Windows 8 offers Smooth Streaming extensions for IIS, plus http://playerframework.codeplex.com/ for the player, plus a video encoder compatible with Live Smooth Streaming. As for the encoder, so far I have only tested Microsoft Expression Encoder 4, which supports Live Smooth Streaming.
Despite using the msRealTime property on the player side, the latency is huge and I was unable to get it below 6-10 seconds by tweaking the encoder. Different sources state that [Live] Smooth Streaming is not a choice for low-latency video streaming scenarios, and it seems that with Expression Encoder 4 it's impossible to achieve low latency with any combination of settings. There are hardware video encoders that support Smooth Streaming, like ones from Envivio or Digital Rapids; however:
They are expensive
I'm not at all sure whether they can significantly improve latency on the encoder side compared to Expression Encoder
Even if they can eliminate the encoder's latency, can the rest of the Smooth Streaming pipeline (the IIS side) support the required speed?
Questions:
What technology could be used to stream to Win8 clients with subsecond latency, if any?
Do you know of players compatible with Win8, or easily portable to Win8, that support RTMP?
Addition: the live broadcast of Build 2012 uses RTMP and Smooth Streaming in Desktop mode; in Metro mode, it uses RTMP and Flash Player for Metro.
I can confirm that Smooth Streaming will not be your technology of choice here. Under the very best scenario with perfect conditions, the best you're going to get is a few seconds (absolute minimum latency would be the chunk length itself, even if everything else had 0 latency.)
I think RTSP/RTMP, or something similar using UDP, is most likely your best bet. I would be looking at video conferencing technologies more than wide-audience streaming technologies. If I remember correctly, there are a few .NET components out there that handle RTSP/H.264 and are meant for video conferencing; if I can find them later I will post here.