How many people can WebRTC support simultaneously with mesh topology?

So my question is: how many people can WebRTC support simultaneously in a mesh topology, for audio only, video only, and audio + video?
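(For intuition about where the ceiling comes from: in an n-peer mesh there are n(n-1)/2 links, and every participant uploads its own stream n-1 times, so uplink bandwidth is usually the binding constraint. A rough back-of-the-envelope sketch; the per-stream bitrates below are assumptions on my part, ballpark Opus/VP8 figures, not measurements:)

```typescript
// Rough mesh-capacity estimate. The bitrates are assumed typical
// values (Opus voice, VP8 at modest resolution), not measured ones.
const AUDIO_KBPS = 40;    // ~Opus voice stream
const VIDEO_KBPS = 1500;  // ~VP8 at 480p-720p

function meshUploadKbps(peers: number, streamKbps: number): number {
  // In a full mesh, each peer uploads its stream once per remote peer.
  return (peers - 1) * streamKbps;
}

for (const peers of [4, 6, 8, 10]) {
  console.log(
    `${peers} peers: audio-only upload ${meshUploadKbps(peers, AUDIO_KBPS)} kbps, ` +
      `audio+video upload ${meshUploadKbps(peers, AUDIO_KBPS + VIDEO_KBPS)} kbps`
  );
}
// On a typical ~5 Mbps uplink, audio+video mesh runs out of headroom
// around 4-6 peers; audio-only can stretch considerably further.
```

With those assumed bitrates and a ~5 Mbps uplink, you land at roughly 4-6 peers for audio+video and a few dozen for audio only, which matches the rules of thumb commonly quoted for mesh.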

Related

Is WebRTC sufficient for peer to peer video calling

I'm building an application with peer-to-peer video calling. So far, I only know WebRTC. Is this sufficient for P2P video calling across the globe if I just have the simplest TURN server(s)? By sufficient I mean calls as smooth as a normal video-calling service like Google Meet or Zoom. If not, what else should I do to ensure smooth video calling?
For P2P calls with a few participants, WebRTC should absolutely be sufficient. WebRTC has evolved so much in the past decade that it's not unreasonable to estimate that most video applications that are not Zoom are built on it.
There are lots of tutorials about building WebRTC apps from scratch (here's one on DEV, and I appreciate everything Karl Stolley writes).
The only question is whether you need to build the WebRTC logic from scratch. Jitsi is a good open-source option. There are also solutions with free tiers like Twilio, Agora, or Daily (full disclosure: where I work).
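If you do go from scratch, the core of a P2P call is just an RTCPeerConnection configured with your STUN/TURN servers. A minimal sketch; the TURN URL and credentials are placeholders, and the signaling channel is left out:

```typescript
// Minimal peer setup. The TURN entry is a placeholder - point it at
// your own deployment (e.g. coturn) with real credentials.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },
    { urls: "turn:turn.example.com:3478", username: "user", credential: "secret" },
  ],
});

// Capture local audio+video and send it to the remote peer.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
stream.getTracks().forEach((track) => pc.addTrack(track, stream));

// Render whatever the remote side sends.
pc.ontrack = ({ streams: [remote] }) => {
  (document.querySelector("video#remote") as HTMLVideoElement).srcObject = remote;
};

// The offer/answer exchange and ICE candidates still travel over your
// own signaling channel (WebSocket, HTTP, etc.) - not shown here.
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
```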
Good luck!

audio+video processing module in kurento

Kurento has lots of examples of writing kms-filter modules that process video frames, but none that show how to process audio+video. Additionally, kurento-module-scaffold.sh seems to only generate module projects that receive a cv::Mat or a GstVideoFrame.
Kurento itself says: "As a differential feature, Kurento Media Server also provides advanced media processing capabilities involving computer vision, video indexing, augmented reality and speech analysis."
so it seems like it should be possible, but I can't see any way to get at the audio in their API.
So my question is: can this be done? If so, how?
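Not a full answer, but one thing the client API does expose: connect() between media elements takes an optional media type, so a source's audio and video can be routed through different elements. A sketch with the Node kurento-client, using only stock elements (the KMS URL and recorder path are placeholders); whether a scaffolded custom module can receive the audio branch as raw buffers is exactly the open question here:

```typescript
import kurentoClient from "kurento-client";

// Placeholder KMS address - point at your own Kurento Media Server.
const client = await kurentoClient("ws://localhost:8888/kurento");
const pipeline = await client.create("MediaPipeline");

const webrtc = await pipeline.create("WebRtcEndpoint");
const videoFilter = await pipeline.create("FaceOverlayFilter"); // stock video filter
const audioTap = await pipeline.create("RecorderEndpoint", {
  uri: "file:///tmp/audio-tap.webm", // placeholder path
  mediaProfile: "WEBM_AUDIO_ONLY",   // record only the audio branch
});

// connect() accepts a media type, so the same source can feed its
// video into one element and its audio into another.
await webrtc.connect(videoFilter, "VIDEO");
await videoFilter.connect(webrtc, "VIDEO");
await webrtc.connect(audioTap, "AUDIO");
await audioTap.record();
```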

Streaming IP Camera solutions that do not require a computer?

I want to embed a video stream into my web page, which is part of our own cloud based software. The video should be low-latency (like video conferencing), and it would be preferable, but not required, for it to include audio. I am comfortable serving streaming binary data from the server-side, and embedding it into the page using HTML5 video.
What I am not comfortable with is capturing the video data to begin with. The client does not already have a solution in place, and is looking to us for assistance. The video would be routed through our server equipment, not through an embedded piece that connects directly to the video source.
Using a USB or built-in camera on a computer is a known quantity for us. What I would like more information about is stand-alone cameras.
Some models of cameras have their own API documentation (example). From what I am reading, it seems a manufacturer typically has its own API, repeated across many or all of its models, and that each manufacturer's API is different. However, I have only done surface reading and hope to gain more knowledge from someone who has already researched this, or perhaps even has first-hand experience.
Do stand-alone cameras generally include an API? (Wouldn't this be a common requirement, so that security software can work with multiple lines of cameras?) Or if not an API, how is the data retrieved from the on-board webserver? Is it usually Flash-based? Perhaps there is a reusable video stream I could capture from there? Or is the stream formatting usually diverse?
What would I run into when trying to get the server-side to capture that data?
How does latency on a stand-alone device compare with a USB camera solution?
Do you have tips on picking out a stand-alone camera that would be a good fit for streaming through a server?
I am experienced with JavaScript (both in the browser and Node.js), Perl, and Java.
Each camera manufacturer has their own take on access points; generally you should be able to ask for a snapshot or an MJPEG stream, but it varies. Take a look at this entry on CodeProject; it tackles two common methodologies. Here's another one targeted at Foscam specifically.
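To make the MJPEG route concrete: many cameras expose an HTTP endpoint serving multipart/x-mixed-replace JPEG frames, which a Node server can proxy straight through to the page. A sketch; the camera URL is a placeholder, since the exact path varies by manufacturer (e.g. /video.mjpg, /mjpg/video.mjpg, /videostream.cgi):

```typescript
import http from "node:http";

// Placeholder camera endpoint - consult your camera's API docs.
const CAMERA_URL = "http://192.168.1.64/mjpg/video.mjpg";

// Proxy the camera's multipart JPEG stream to the browser; an
// <img src="/camera"> tag can then render it with no client-side code.
http
  .createServer((req, res) => {
    if (req.url !== "/camera") {
      res.writeHead(404).end();
      return;
    }
    http
      .get(CAMERA_URL, (cam) => {
        // Pass through multipart/x-mixed-replace so the browser keeps
        // replacing the image as new frames arrive.
        res.writeHead(cam.statusCode ?? 502, {
          "content-type": cam.headers["content-type"] ?? "multipart/x-mixed-replace",
        });
        cam.pipe(res);
      })
      .on("error", () => res.writeHead(502).end());
  })
  .listen(8080);
```

Latency on this path is typically a bit higher than a local USB camera (the camera encodes, then you hop over the network twice), but MJPEG avoids the multi-second buffering that chunked HTTP streaming introduces.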
Get a good NAS; I suggest Synology. Check out their long list of supported IP web cams. You can connect them with a hub or a router or whatever you wish. It's not a "computer" as in "tower", but it does many computer jobs, and it can stay on while your computer is off or away and handle things like video feeds, torrents, backups, etc.
I'm not an expert on all the features, so I don't know how to get it to broadcast without recording, but even if it does record, at least it's separate. Synology is a popular brand and there are a lot of authorized and unauthorized plugins for it. Check them out and see if one suits you.

Windows 8 low latency video streaming

My game is based on Flash and uses RTMP to deliver live video to players. Video should be streamed from a single location to many clients, not between clients.
It is an essential requirement that the end-to-end video stream have very low latency, less than 0.5 s.
Using many tweaks on the server and client, I was able to achieve approx. 0.2 s latency with RTMP and Adobe Live Media Encoder over the loopback network interface.
Now the problem is porting the project to a Windows 8 Store app. Natively, Windows 8 offers the Smooth Streaming extensions for IIS, plus http://playerframework.codeplex.com/ for the player, plus a video encoder compatible with Live Smooth Streaming. As for the encoder, so far I have only tested Microsoft Expression Encoder 4, which supports Live Smooth Streaming.
Despite using the msRealTime property on the player side, the latency is huge, and I was unable to get it below 6-10 seconds by tweaking the encoder. Various sources state that [Live] Smooth Streaming is not a choice for low-latency video streaming scenarios, and it seems that with Expression Encoder 4 it's impossible to achieve low latency with any combination of settings. There are hardware video encoders that support Smooth Streaming, like the ones from Envivio or Digital Rapids, however:
They are expensive
I'm not at all sure they can significantly improve latency on the encoder side, compared to Expression Encoder
Even if they can eliminate the encoder's share of the latency, can the rest of the Smooth Streaming pipeline (the IIS side) keep up with the required speed?
Questions:
What technology could be used to stream to Win8 clients with subsecond latency, if any?
Do you know of players compatible with Win8, or easily portable to Win8, that support RTMP?
Addition: the live broadcast of Build 2012 used RTMP and Smooth Streaming in Desktop mode, and RTMP with Flash Player for Metro in Metro mode.
I can confirm that Smooth Streaming will not be your technology of choice here. Under the very best scenario with perfect conditions, the best you're going to get is a few seconds (the absolute minimum latency is the chunk length itself, even if everything else added zero latency).
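To put rough numbers on that: assume a ~2 s fragment length (a common Smooth Streaming default) and a player that buffers a few fragments before playback starts. These are assumed defaults on my part, not measurements, but they line up with the 6-10 s you're seeing:

```typescript
// Back-of-the-envelope latency floor for chunked HTTP streaming.
// Assumed values: ~2 s fragments and a player that buffers 3
// fragments before it begins playback.
const chunkSeconds = 2;
const bufferedChunks = 3;
const encodeAndNetworkSeconds = 0.5; // optimistic everything-else budget

const latencyFloor = chunkSeconds * bufferedChunks + encodeAndNetworkSeconds;
console.log(`~${latencyFloor}s end-to-end at best`); // ~6.5s - nowhere near 0.5s
```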
I think RTSP/RTMP or something similar using UDP is most likely your best bet. I would look at video-conferencing technologies more than wide-audience streaming technologies. If I remember correctly, there are a few .NET components out there that handle RTSP H.264 for video conferencing; if I can find them later, I will post them here.

Is it possible to access video data on both cameras on an iPad 2 at the same time?

Could we get two live video streams at the same time? Or should I switch cameras to get two streams?
According to Apple, accessing both cameras at once is not supported. If you have access to the Apple Developer Forums, see this post.
Also, BTW, switching streams is quite slow in my experience and would be no substitute for accessing both cameras at once.