Outbound resolution is low when I publish Screen + Camera over WebRTC - webrtc

When I publish a WebRTC stream using Screen + Camera, the outbound resolution is low. How can I increase the outbound resolution?

When you are using Screen + Camera, you need to use a Canvas to compose and send the stream. There are some advantages/disadvantages to using a Canvas. Here are some recommendations for increasing the outbound resolution:
Use a computer with a better CPU and more RAM.
Ant Media Server draws the canvas at 15 FPS, which is not a particularly high value. The outbound resolution can increase if the FPS value is lowered.
[BONUS]
The pros of using Canvas are as follows:
it supports hardware acceleration;
it can manipulate each and every pixel individually.
The cons are as follows:
it overloads the processor and RAM;
its garbage collector is limited and provides no option to clear memory;
it requires manual handling of object events;
it doesn’t run well at high resolutions;
it requires each element to be rendered separately.
PS: You can check the canvas examples.
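For illustration, here is a minimal sketch of the canvas-based Screen + Camera approach (the canvas size, the overlay position, and the 15 FPS capture rate are assumptions; the resulting stream is what you would hand to your WebRTC publish code):

// Sketch: compose screen + camera onto a canvas and capture it as a single stream.
async function buildScreenPlusCameraStream() {
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const camera = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const screenVideo = document.createElement("video");
  screenVideo.srcObject = screen;
  await screenVideo.play();

  const cameraVideo = document.createElement("video");
  cameraVideo.srcObject = camera;
  await cameraVideo.play();

  // A larger canvas means a higher outbound resolution, but also more CPU/RAM load.
  const canvas = document.createElement("canvas");
  canvas.width = 1920;
  canvas.height = 1080;
  const ctx = canvas.getContext("2d");

  (function draw() {
    ctx.drawImage(screenVideo, 0, 0, canvas.width, canvas.height);
    // Camera as a small overlay in the bottom-right corner.
    ctx.drawImage(cameraVideo, canvas.width - 320, canvas.height - 240, 320, 240);
    requestAnimationFrame(draw);
  })();

  // Capture the canvas at 15 FPS and add the microphone track.
  const outStream = canvas.captureStream(15);
  camera.getAudioTracks().forEach(track => outStream.addTrack(track));
  return outStream;  // add its tracks to an RTCPeerConnection to publish
}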

Related

WebRTC Bandwidth / resolution / frame rate control

I want to know why, when I transmit a 320x240 video normally, my uplink traffic defaults to about 1.5 Mbps;
when I modify the SDP bandwidth limit to, for example, 500 kbps, my video width and height are still 320x240 and the frame rate is not reduced;
so what exactly is being reduced in the uplink traffic?
WebRTC uses so-called "lossy perceptual video compression." That is, the video is compressed into bit streams of various bandwidths... in your case 1.5 Mbps and 0.5 Mbps. It's like JPEG's quality parameter: in JPEG, adjusting that parameter changes the size of the image file. In video compression, instead of a quality parameter you request a bitrate.
When a lower-bitrate video stream is decompressed, it's a less faithful representation of the original. If you know what to look for, you can see various compression artifacts in the decompressed imagery: blockiness, "mosquitoes" around objects, and so forth.
Streaming video and DVD video programs (cinema) use high bandwidth to minimize these effects below the threshold of perception at 1080p or 4K resolution.
In your SIF (320x240) resolution case, your decoded 0.5 Mbps video has more artifacts in it than your 1.5 Mbps video. But because the resolution is relatively low, it will take some looking to find those artifacts. If they don't annoy you or your users, you can conclude that 0.5 Mbps is fine for your application. Long experience suggests that you should succeed just fine with that bitrate and resolution. You can even try 250 kbps.
Reducing the frame rate doesn't proportionally save bandwidth; most compressed video frames represent differences from the previous frame.
Lower bitrates are better for mobile devices; they save power and your users' data plans.
If you want to see exaggerated compression artifacts and what they look like, set the bitrate down to 125kbps or lower.
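If you would rather request a bitrate explicitly than munge the SDP, here is a minimal sketch using RTCRtpSender.setParameters (the 500 kbps value is just an example):

// Cap the outbound video bitrate on an existing RTCPeerConnection.
async function capVideoBitrate(pc, maxBitrateBps) {
  const sender = pc.getSenders().find(s => s.track && s.track.kind === "video");
  if (!sender) return;
  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) {
    params.encodings = [{}];
  }
  params.encodings[0].maxBitrate = maxBitrateBps;
  await sender.setParameters(params);
}

// e.g. capVideoBitrate(pc, 500000);  // ~0.5 Mbps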

WebRTC Datachannel for high bandwidth application

I want to send unidirectional streaming data over a WebRTC data channel, and I am looking for the best configuration options (high bandwidth, low latency/jitter) and for others' experience with expected bitrates in this kind of application.
My test program sends chunks of 2 KB, with bufferedAmountLowThreshold set to 2 KB; the event callback calls send again until bufferedAmount exceeds 16 KB. Using this in Chrome, I achieve ~135 Mbit/s on a LAN and ~20 Mbit/s from/to a remote connection that has a 100 Mbit/s WAN connection on both ends.
What is the limiting factor here?
How can I see if the data is truly going peer to peer directly, or whether a TURN server is used?
My ultimate application will use the google-webrtc library on Android - I'm only using JS for prototyping. Can I set options in the library to increase throughput that I cannot set through the official JS APIs?
There are many variables that impact throughput, and it also depends heavily on how you've measured it. But I'll list a couple of things I have adjusted to increase the throughput of WebRTC data channels.
Disclaimer: I have not made these adjustments for libwebrtc but for my own WebRTC data channel library called RAWRTC, which btw also compiles for Android. However, both use the same SCTP library underneath, both use some OpenSSL-ish library and UDP sockets, so all of this should be applicable to libwebrtc.
Note that WebRTC data channel implementations using usrsctp are usually CPU bound when executed on the same machine, so keep that in mind when testing. With RAWRTC's default settings, I'm able to achieve ~520 Mbit/s on my i7 5820k. From my own tests, both Chrom(e|ium) and Firefox were able to achieve ~350 Mbit/s with default settings.
Alright, so let's dive into adjustments...
UDP Send/Receive Buffer Size
The default send/receive buffer of UDP sockets on Linux is quite small. If you can, you may want to increase it.
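This is an operating-system knob (SO_RCVBUF / SO_SNDBUF, governed on Linux by the net.core.rmem_max / net.core.wmem_max sysctls); libwebrtc and RAWRTC set it on their own sockets, but as a rough illustration of what adjusting it looks like from an application, here is a Node.js sketch (the 4 MiB value is just an example):

const dgram = require("dgram");

const socket = dgram.createSocket("udp4");
socket.bind(0, () => {
  // SO_RCVBUF / SO_SNDBUF; the kernel may clamp these to net.core.rmem_max / wmem_max.
  socket.setRecvBufferSize(4 * 1024 * 1024);
  socket.setSendBufferSize(4 * 1024 * 1024);
  console.log("recv:", socket.getRecvBufferSize(), "send:", socket.getSendBufferSize());
});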
DTLS Cipher Suites
Most Android devices have ARM processors without hardware AES support. ChaCha20 usually performs better in software and thus you may want to prefer it.
(This is what RAWRTC negotiates by default, so I have not included it in the end results.)
SCTP Send/Receive Buffer Size
The default send/receive window size of usrsctp, the SCTP stack used by libwebrtc, is 256 KiB which is way too small to achieve high throughput with moderate delay. The theoretical maximum throughput is limited by mbits = (window / (rtt_ms / 1000)) / 131072. So, with the default window of window=262144 and a fairly moderate RTT of rtt_ms=20, you will end up with a theoretical maximum of 100 Mbit/s.
The practical maximum is below that... actually, way lower than the theoretical maximum (see my test results). This may be a bug in the usrsctp stack (see sctplab/usrsctp#245).
The buffer size has been increased in Firefox (see bug 1051685) but not in libwebrtc used by Chrom(e|ium).
Release Builds
Optimisation level 3 makes a difference (duh!).
Message Size
You probably want to send 256 KiB sized messages.
Unless you need to support Chrome < ??? (sorry, I currently don't know where it landed...), then the maximum message size is 64 KiB (see issue 7774).
Unless you also need to support Firefox < 56, in which case the maximum message size is 16 KiB (see bug 979417).
It also depends on how much you send before you pause sending (i.e. the buffer's high water mark), and when you continue sending after the buffer has been drained (i.e. the buffer's low water mark). My tests have shown that targeting a high water mark of 1 MiB and setting a low water mark of 256 KiB results in adequate throughput.
This reduces the amount of API calls and can increase throughput.
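A minimal sketch of that high/low water mark scheme on a browser data channel, using the 256 KiB messages and the 1 MiB / 256 KiB marks mentioned above (the getNextChunk helper is hypothetical and stands in for whatever produces your data):

function pumpData(channel, getNextChunk) {
  const HIGH_WATER_MARK = 1024 * 1024;   // stop queueing above 1 MiB buffered
  const LOW_WATER_MARK = 256 * 1024;     // resume once the buffer drains below 256 KiB
  const CHUNK_SIZE = 256 * 1024;         // 256 KiB messages

  channel.bufferedAmountLowThreshold = LOW_WATER_MARK;

  function send() {
    channel.onbufferedamountlow = null;
    // Keep sending until we hit the high water mark or run out of data.
    while (channel.bufferedAmount < HIGH_WATER_MARK) {
      const chunk = getNextChunk(CHUNK_SIZE);  // ArrayBuffer, or null when done
      if (!chunk) return;
      channel.send(chunk);
    }
    // Wait for the buffer to drain to the low water mark, then continue.
    channel.onbufferedamountlow = send;
  }

  send();
}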
End Results
Using optimisation level 3 with default settings on RAWRTC brought me up to ~600 Mbit/s.
Based on that, increasing the SCTP and UDP buffer sizes to 4 MiB brought me up further to ~700 Mbit/s, with one CPU core at 100% load.
However, I believe there is still room for improvement, but it's unlikely to be low-hanging fruit.
How can I see if the data is truly going peer to peer directly, or whether a TURN server is used?
Open about:webrtc in Firefox or chrome://webrtc-internals in Chrom(e|ium) and look for the chosen ICE candidate pair. Or use Wireshark.
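You can also check programmatically via getStats; a minimal sketch (field names follow the current WebRTC statistics spec):

async function usesTurnRelay(pc) {
  const stats = await pc.getStats();
  let pair = null;
  stats.forEach(report => {
    if (report.type === "transport" && report.selectedCandidatePairId) {
      pair = stats.get(report.selectedCandidatePairId);
    }
  });
  if (!pair) return null;  // no candidate pair selected (yet)
  const local = stats.get(pair.localCandidateId);
  const remote = stats.get(pair.remoteCandidateId);
  // candidateType "relay" means traffic is going through a TURN server.
  return local.candidateType === "relay" || remote.candidateType === "relay";
}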

Frame by frame decode using Media Source Extension

I've been digging through the Media Source Extension examples on the internet and haven't quite figured out a way to adapt them to my needs.
I'm looking to take a locally cached MP4/WebM video (with 100% keyframes and a 1:1 ratio of clusters/atoms to keyframes) and decode/display its frames non-sequentially (i.e. frame 10, 400, 2, 100, etc.), and to be able to render these non-sequential frames on demand at rates from 0-60 fps. The simple non-MSE approach using the currentTime property fails due to the latency between setting this property and getting a frame displayed.
I realize this is totally outside normal expectations for video playback, but my application requires this type of non-sequential high speed playback. Ideally I can do this with h264 for GPU acceleration but I realize there could be some platform specific GPU buffers to contend with, though it seems that a zero frame buffer should be possible (see here). I am hoping that MSE can accomplish this non-sequential high framerate low latency playback, but I know I'm asking for a lot.
Questions:
Will appendBuffer accept a single WebM cluster / MP4 Atom made up of a single keyframe, and also be able to decode at a high frequency (60fps)?
Do you think what I'm trying to do is possible in the browser?
Any help, insight, or code suggestions/examples would be much appreciated.
Thanks!
Update 4/5/16
I was able to get MSE mostly working with single frame MP4 fragments in Firefox, Edge, and Chrome. However, Chrome seems to be running into the frame buffer issue linked above and I haven't found a way to pre-process a MP4 to invoke this "low delay" mode. Anyone have any clues if it's possible to create such a file with an existing tool like MP4Box?
Firefox and Edge decode/display the individual frames immediately with very little latency, but of course something breaks once I load this video into a Three.js WebGL project (no video output, no errors). I'm ignoring this for now as I'd much rather have things working on Chrome as I'll be targeting Android as well.
I was able to get this working pretty well. The key was getting Chrome to enter its "low delay" mode by muxing a specially crafted MP4 file using modified mp4box sources. I added one line in movie_fragments.c so it read:
if (movie->moov->mvex->mehd && movie->moov->mvex->mehd->fragment_duration) {
    trex->track->Header->duration = 0;
    Media_SetDuration(trex->track);
    movie->moov->mvex->mehd->fragment_duration = 0;   /* added line: zero out the MEHD fragment duration */
}
Now every MP4 created will have the MEHD fragment duration set to 0 which causes Chrome to process it as a live stream.
I still have one remaining issue related to the timestampOffset property, which in combination with the FPS set in the media fragments controls the playback speed. Since I'm looking to control the FPS directly, I don't want any added delay from the MSE playback engine. I'll post a separate question here to address that.
Thanks,
Dustin
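For context, here is a minimal sketch of the MSE side described above (the codec string, the fragment file names, and the fetch/wait helpers are assumptions; the real trick is the mp4box change shown earlier):

const video = document.querySelector("video");
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  sb.mode = "sequence";  // play fragments in the order they are appended

  // Initialization segment first (ftyp + moov), then one moof+mdat fragment per keyframe.
  sb.appendBuffer(await fetchBuffer("init.mp4"));
  await onceUpdateEnd(sb);

  // Append frames in any order, e.g. 10, 400, 2, 100...
  for (const frameIndex of [10, 400, 2, 100]) {
    sb.appendBuffer(await fetchBuffer("frame-" + frameIndex + ".m4s"));
    await onceUpdateEnd(sb);
  }
});

async function fetchBuffer(url) {
  const response = await fetch(url);
  return response.arrayBuffer();
}

function onceUpdateEnd(sourceBuffer) {
  return new Promise(resolve =>
    sourceBuffer.addEventListener("updateend", resolve, { once: true }));
}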

WebRTC - Peerconnection constraints

I've been working on a WebRTC videoconferencing app which is working great, taking into account the current state of WebRTC.
However, I have been exploring the possibilities of adding constraints to the video and audio streams being sent over the PeerConnection.
More specifically, in improving the performance of the video.
When videoconferencing on old (slow) laptops, we noticed that the quality of the image is really high but the frames per second are low. The stream is choppy.
As for the audio quality, we give it an 8.5 for Chrome but only a 5.5 to 6 for Firefox.
I am not really interested in applying constraints to getUserMedia, since this stream is shown to the user as well and we don't want to change anything about this local output. (Unless there isn't another way.)
I have found a lot of information in the W3C drafts about MediaStreams and WebRTC itself.
These define certain constraints like default fps, minfps, minwidth and minheight of the image. On webrtc.org there is also a lot of information available, such as choosing a codec, etc.
But these settings can only be made "under the hood". It seems these settings cannot be addressed from RTCPeerConnection API level?
Certain examples on the net manipulate the SDP strings in the Offer / Answer part of the WebRTC handshake; is this the way to go?
TL;DR: How, and what is the best way, to apply constraints on WebRTC like minfps, maxfps, default fps, minwidth, maxwidth, dpi of the image, bandwidth of video and audio, audio kHz, and any other way to improve the performance or quality of the stream(s)?
Big thanks in advance!
Right now, most of those can't be set in Firefox or Chrome. A few can be adjusted (with care/pain) in the SDP, but even if there's an SDP option defined for something it doesn't mean that the browsers look at it.
Both Mozilla and Google are looking to improve CPU overload detection and reaction (reduce frame size dynamically, etc). Right now, this effectively isn't being done. Upcoming releases of FF (FF24) will adapt to the capture resolution (as a maximum), but we don't have constraints for that yet, just about:config prefs (see media.*). That would allow you to set a different default resolution for Firefox.
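For completeness, the SDP-manipulation approach mentioned in the question looks roughly like this (a sketch only; the b=AS value is an example, and browsers differ in whether and how they honour it):

// Insert a b=AS (application-specific bandwidth, in kbps) line into the video m-section.
function limitVideoBandwidth(sdp, kbps) {
  const lines = sdp.split("\r\n");
  const out = [];
  let inVideoSection = false;
  for (const line of lines) {
    out.push(line);
    if (line.startsWith("m=")) {
      inVideoSection = line.startsWith("m=video");
    }
    // Bandwidth lines belong right after the c= line of the m-section.
    if (inVideoSection && line.startsWith("c=")) {
      out.push("b=AS:" + kbps);
    }
  }
  return out.join("\r\n");
}

// Usage during the handshake:
// const answer = await pc.createAnswer();
// await pc.setLocalDescription({ type: "answer", sdp: limitVideoBandwidth(answer.sdp, 500) });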

Windows 8 low latency video streaming

My game is based on Flash and uses RTMP to deliver live video to players. Video should be streamed from a single location to many clients, not between clients.
It's an essential requirement that the end-to-end video stream have very low latency, less than 0.5 s.
Using many tweaks on the server and client, I was able to achieve approx. 0.2 s latency with RTMP and Adobe Live Media Encoder over the loopback network interface.
Now the problem is to port the project to a Windows 8 store app. Natively, Windows 8 offers Smooth Streaming extensions for IIS, plus http://playerframework.codeplex.com/ for the player, plus a video encoder compatible with Live Smooth Streaming. As for the encoder, so far I have only tested Microsoft Expression Encoder 4, which supports Live Smooth Streaming.
Despite using the msRealTime property on the player side, the latency is huge and I was unable to get it below 6-10 seconds by tweaking the encoder. Different sources state that [Live] Smooth Streaming is not a choice for low-latency video streaming scenarios, and it seems that with Expression Encoder 4 it's impossible to achieve low latency with any combination of settings. There are hardware video encoders which support Smooth Streaming, like the ones from Envivio or Digital Rapids; however:
They are expensive
I'm not sure at all whether they can significantly improve latency on the encoder side, compared to Expression Encoder
Even if they can eliminate the encoder's latency, can the rest of the Smooth Streaming pipeline (the IIS side) support the required speed?
Questions:
What technology could be used to stream to Win8 clients with subsecond latency, if any?
Do you know of players compatible with Win8, or easily portable to Win8, that support RTMP?
Addendum: the live broadcast of Build 2012 uses RTMP and Smooth Streaming in Desktop mode. In Metro mode, it uses RTMP and Flash Player for Metro.
I can confirm that Smooth Streaming will not be your technology of choice here. Under the very best scenario with perfect conditions, the best you're going to get is a few seconds (absolute minimum latency would be the chunk length itself, even if everything else had 0 latency.)
I think most likely RTSP/RTMP or something similar using UDP is your best bet. I would be looking at Video Conferencing technologies more than wide audience streaming technologies. If I remember correctly there are a few .NET components out there to handle RTSP H.264 meant for video conferencing - if I can find them later I will post here.