WebRTC probing drops the transfer bitrate - webrtc

I am currently using WebRTC to stream a game. It's a custom WebRTC implementation inside the game engine.
Both the client and the server easily support 100+ Mbps upload. Currently, I have locked the max bitrate to 80 Mbps, which both sides can sustain.
The issue happens when WebRTC probes the connection speed: the bitrate drops to 7-8 Mbps and then slowly climbs back up to 80 Mbps,
only to drop again the next time a probe happens.
I have linked a video below of the issue.
https://drive.google.com/file/d/1coI3rrGVf4OAFnt2oeSCx0zFJznvfyQv/view?usp=sharing
What could the issue be and is there any solution to fix it?
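If the custom stack wraps Google's libwebrtc (an assumption; the question doesn't say what the engine's implementation is built on), one commonly suggested mitigation is to pin the bandwidth estimator's range via PeerConnectionInterface::SetBitrate, so a failed probe cannot drag the estimate all the way down. A minimal sketch, with the 20/40 Mbps values as assumptions to tune rather than recommendations:

```cpp
// A minimal sketch, assuming a libwebrtc-based stack and an
// already-created PeerConnectionInterface `pc`.
#include "api/peer_connection_interface.h"

void PinBitrateRange(
    rtc::scoped_refptr<webrtc::PeerConnectionInterface> pc) {
  webrtc::BitrateSettings settings;
  settings.min_bitrate_bps = 20'000'000;    // assumed floor: 20 Mbps
  settings.start_bitrate_bps = 40'000'000;  // assumed initial estimate
  settings.max_bitrate_bps = 80'000'000;    // the 80 Mbps cap from the question
  webrtc::RTCError err = pc->SetBitrate(settings);
  if (!err.ok()) {
    // Can fail if called before the connection is up; retry after ICE completes.
  }
}
```

Be aware that a high floor effectively disables congestion backoff, so it only makes sense on links you control end to end.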

Related

WebRTC: Using own server to counteract quality problems?

I'm in the process of creating a WebRTC 1-on-1 video chat.
One of my users told me that a competitor of mine (who also offers a WebRTC video chat) handles bad connections (pixelated video and choppy sound) by asking the user whether they will allow the connection to go through the competitor's own server instead of P2P.
What might be the reason they offer to route it through their own server? Are they using a different system (not WebRTC) in that case?
I thought nothing could be better than P2P, so I don't understand what my competitor is doing when they offer such a workaround.
Thank you for any insights.
Using a server could give you a better experience for a few reasons.
If you are sending to multiple viewers, you have to share your bandwidth between every receiver; for example, five viewers at 2 Mbps each costs you 10 Mbps of upload P2P. When you switch to a server, you only have to upload once, and the server distributes the video to the receivers.
The network route to the server could be better than the P2P path. If you run your server in something like AWS, the two users may each have a better network path to the server than to each other.
The server could be doing signal processing on the backend. You have things like SVC and simulcast, where the sender uploads multiple 'quality levels' to the server and the server decides which one to distribute to each viewer; see the sketch after this list.
Unlikely (but possible): I have seen demos where some companies use machine learning to improve video. I have never done it myself, though!
These are all done via WebRTC and are common use cases. My guess would be that your competitor is using an open-source SFU/MCU; many projects exist that cover these use cases.
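To make the simulcast point concrete, here is a minimal sketch of a sender offering three quality layers, assuming a libwebrtc-based C++ stack (any WebRTC implementation with transceiver support exposes an equivalent; the rid names and scale factors are arbitrary):

```cpp
// A minimal sketch: offer three simulcast layers so an SFU can pick
// a suitable one per viewer. Assumes libwebrtc, an existing
// PeerConnectionInterface `pc`, and a captured video track `track`.
#include "api/media_stream_interface.h"
#include "api/peer_connection_interface.h"

void AddSimulcastVideo(
    rtc::scoped_refptr<webrtc::PeerConnectionInterface> pc,
    rtc::scoped_refptr<webrtc::VideoTrackInterface> track) {
  webrtc::RtpTransceiverInit init;
  init.direction = webrtc::RtpTransceiverDirection::kSendOnly;

  webrtc::RtpEncodingParameters low, mid, high;
  low.rid = "l";
  low.scale_resolution_down_by = 4.0;   // quarter resolution
  mid.rid = "m";
  mid.scale_resolution_down_by = 2.0;   // half resolution
  high.rid = "h";                       // full resolution
  init.send_encodings = {low, mid, high};

  auto result = pc->AddTransceiver(track, init);
  // result.ok() is false if the stack rejects simulcast for the codec.
}
```

The SFU then forwards one layer per viewer based on each downlink, which is exactly why the sender's upload no longer has to be shared between receivers.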

Real Time screen grabbing and streaming with libav-tools

For my school project I have to stream a screen grab from one station (the server) to another (the client) in real time, both running Linux (Ubuntu).
I'm using libav-tools (avconv as the encoder on the server side and avplay as the player on the client side).
avconv uses the x11grab input format to capture the screen.
My problem: avconv needs a few seconds before it outputs encoded video, and that wait is too long for real time.
I've tried streaming to localhost to rule out network influence; avconv still seems responsible for the long wait.
Also, streaming from a video file is much faster, almost immediate.
The project is implemented in C++ and executes avconv in a fork.
Any suggestions as to shortening the procedure?
This is most likely due to internal buffering. There is often a buffer which is way too big by default, because zero delay is not the primary concern of most software; it is more concerned with bad connections and that sort of problem, which is what buffers are for.
See https://libav.org/avconv.html and search for "nobuffer", "-analyzeduration", "-rtbufsize", "-max_delay", "-fpsprobesize", or "rtmp_buffer" (if you use RTMP), among others, and try your luck.
There will always be a noticeable delay, especially if you use an encoding like H.264 for transfer, but it does not need to be several seconds in a controlled environment. You should be able to bring it down to fractions of a second.
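Since the project already launches avconv from a C++ fork, a minimal sketch of wiring some of those options into the exec call could look like this; the exact flags and values (-fflags nobuffer, -analyzeduration 0, -probesize 32, the x264 zerolatency tune) are assumptions to experiment with, and availability varies by avconv/ffmpeg version:

```cpp
// A minimal sketch: fork and exec avconv with buffering and probing
// turned down. Values are starting points, not a verified recipe.
#include <sys/types.h>
#include <unistd.h>

pid_t SpawnGrabber() {
  pid_t pid = fork();
  if (pid == 0) {  // child: replace with avconv
    execlp("avconv", "avconv",
           "-fflags", "nobuffer",    // don't buffer input packets
           "-analyzeduration", "0",  // skip the long stream analysis
           "-probesize", "32",       // probe as little input as allowed
           "-f", "x11grab",
           "-video_size", "1280x720",
           "-i", ":0.0",
           "-c:v", "libx264",
           "-preset", "ultrafast",   // cheapest x264 settings
           "-tune", "zerolatency",   // disable encoder lookahead
           "-f", "mpegts", "udp://127.0.0.1:1234",
           static_cast<char*>(nullptr));
    _exit(127);  // only reached if exec failed
  }
  return pid;  // parent: waitpid() on this when shutting down
}
```

The player side buffers too, so the same hunt through avplay's probing and buffering options is worth doing on the client.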

To conserve iPhone power, but allow data transfer over TCP/IP?

What do I do to conserve iPhone power but still allow data transfer over TCP/IP?
I need to receive a constant stream of data all the time, but I don't want to kill the battery in 4 hours by keeping the device from sleeping.
thx
In a word, you cannot do that: you cannot transfer a constant stream of data over TCP/IP in the background. Once the user closes your app, Apple restricts your app's access to resources. This is Apple's way of conserving power, so you need not worry about power yourself.
I think this old question of mine would help you - iOS Background downloads when the app is not active
You might be able to reduce power a bit by sending or requesting data in the largest chunks consistent with smooth operation of your particular application. Larger data bursts may allow the radios to idle for longer periods between transfers, and allowing the Wi-Fi and cellular radios to turn off greatly reduces power consumption.
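As a minimal sketch of that batching idea (C++ here only for consistency with the rest of this page; every name below is illustrative, not any platform's real API):

```cpp
// Queue small messages and send them as one periodic burst instead of
// a continuous trickle, so the radio can power down between flushes.
#include <string>
#include <vector>

class BatchingSender {
 public:
  // Queue data instead of sending it immediately.
  void Queue(std::string payload) {
    pending_.push_back(std::move(payload));
  }

  // Call from a timer every N seconds: one large burst, then silence.
  void Flush() {
    std::string burst;
    for (const auto& p : pending_) burst += p;
    // A real app would write `burst` to its socket here.
    pending_.clear();
  }

 private:
  std::vector<std::string> pending_;
};
```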

Do ping requests put a load on a server?

I have a lot of clients (around 4000).
Each client pings my server every 2 seconds.
Can these ping requests put a load on the server and slow it down?
How can I monitor this load?
Right now the server responds slowly, but the processor is almost idle and free memory looks fine.
I'm running Apache on Ubuntu.
Assuming you mean a UDP/ICMP ping just to see if the host is alive, 4000 hosts probably isn't much load, and it's fairly easy to calculate. CPU- and memory-wise, ping is handled by your kernel and should be optimized not to take many resources, so you need to look at network resources. The most critical point is if you have a half-duplex link: because all of your hosts are chatty, you'll cause a lot of collisions and retransmissions (and dropped pings). If the links are all full duplex, let's calculate the actual amount of bandwidth required at the server.
4000 clients / 2 seconds = 2000 pings per second.
Each ping is 74 bytes on the wire (32 bytes data + 8 bytes ICMP header + 20 bytes IP header + 14 bytes Ethernet). You might have some additional overhead if you use VLAN tagging or UDP-based pings.
If we assume the pings are evenly distributed in time, that's 2000 pings per second × 74 bytes = 148,000 bytes per second.
Multiply by 8 to get bits: 1,184,000 bps, or about 1.2 Mbps.
On a 100 Mbps LAN, this would be about 1.2% utilization just for the pings.
If this is a LAN environment, I'd say this is basically no load at all; if it's going across a T1, it's an immense amount of load. So you should run the same calculation for each network link that may be a bottleneck.
Lastly, if you're not using ICMP pings to check the host but have an application-level ping, you will have all the overhead of whatever protocol you are using, the ping will need to go all the way up the protocol stack, and your application needs to respond. Again, this could be a very minimal load or it could be immense, depending on the implementation details and the network speed. If the host is idle, I doubt this is a problem for you.
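For reference, the same back-of-the-envelope calculation as a runnable sketch, with the client count, interval, and frame size as the assumptions to adjust:

```cpp
// Bandwidth consumed by periodic pings, per the estimate above.
#include <cstdio>

int main() {
  const double clients = 4000.0;
  const double interval_s = 2.0;    // one ping per client every 2 s
  const double frame_bytes = 74.0;  // 32 data + 8 ICMP + 20 IP + 14 Ethernet
  const double link_bps = 100e6;    // 100 Mbps LAN

  const double pings_per_s = clients / interval_s;     // 2000
  const double bps = pings_per_s * frame_bytes * 8.0;  // 1,184,000
  std::printf("%.0f pings/s = %.2f Mbps (%.2f%% of the link)\n",
              pings_per_s, bps / 1e6, 100.0 * bps / link_bps);
  return 0;
}
```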
Yes, they can. An individual ping does not cost much, but 4000 of them every 2 seconds certainly take up bandwidth and a nominal amount of CPU.
If you want to monitor this, you might use either tcpdump or wireshark, or perhaps set up a firewall rule and monitor the number of packets it matches.
The other problem, apart from bandwidth, is the CPU. If each ping is passed up to the CPU for processing, thousands of them can put a load on any CPU. It's worth monitoring, but as you said yours is almost idle, it will probably cope. Worth keeping in mind, though.
Depending on the clients, ping packets can be different sizes: the payload could be just "aaaaaaaaa", but some may be "thequickbrownfoxjumpedoverthelazydog", which obviously adds further bandwidth requirements.

Adjust red5's SOSample quality level?

Is there any way to adjust the quality level at which RED5's SOSample records a webcam stream? I just installed it on a remote server, and it's recording in an awful quality.
The FLV will be recorded at the quality level set within your client application; see these docs specifically (via Adobe):
http://www.adobe.com/livedocs/fms/2/docs/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00000548.html
http://livedocs.adobe.com/flex/3/langref/flash/media/Camera.html#setQuality()
Another factor is your bandwidth: if data is dropped or lost in transit to the server, it will not be recorded.