WebRTC: Using own server to counteract quality problems?

I'm in the process of creating a WebRTC 1-on-1 video chat.
I was told by one of my users that a competitor of mine (who also offers a WebRTC video chat) asks users, when the connection gets bad (pixelated video and choppy sound), whether they will allow the connection to go through the competitor's own server instead of P2P.
What might be the reason they offer to route the call through their own server? Are they using a different system (i.e., not WebRTC) in that case?
I thought that nothing could be better than P2P, so I don't understand why my competitor offers such a workaround when the connection degrades.
Thank you for any insights.

Using a server could give you a better experience for a few reasons.
If you are sending to multiple viewers, you have to share your upload bandwidth between all of the receivers. For example, with a 1 Mbps uplink and four receivers, each peer gets roughly 250 kbps over P2P. When you switch to a server, you only have to upload once, and the server distributes the video to the receivers.
The network route to the server could be better than the P2P path. If you run your server in something like AWS, the two users may each have a better network path to the server than to each other.
The server could be doing signal processing on the backend. You have things like SVC and simulcast, where the sender uploads multiple 'quality levels' to the server, and the server decides which one to distribute to each receiver (see the sketch below).
Unlikely (but possible): I have seen demos where some companies use machine learning to improve video. I have never done it myself, though!
These are all done via WebRTC and are common use cases. My guess would be that your competitor is using an open-source SFU/MCU. Many projects exist that cover these use cases.
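As a rough illustration of the simulcast idea mentioned above, here is a minimal browser-side sketch in TypeScript. The rid names and bitrates are arbitrary placeholder values, and the SFU it talks to is assumed, not shown:

    // Offer three simulcast layers of one camera track to an SFU.
    // The SFU (not shown) picks which layer to forward to each viewer.
    async function startSimulcast(): Promise<RTCPeerConnection> {
      const pc = new RTCPeerConnection();
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });

      pc.addTransceiver(stream.getVideoTracks()[0], {
        direction: "sendonly",
        sendEncodings: [
          { rid: "q", scaleResolutionDownBy: 4, maxBitrate: 150_000 }, // quarter resolution
          { rid: "h", scaleResolutionDownBy: 2, maxBitrate: 500_000 }, // half resolution
          { rid: "f", maxBitrate: 1_500_000 },                         // full resolution
        ],
      });
      // Signaling (offer/answer exchange with the SFU) would follow here.
      return pc;
    }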

Related

Edit SQL Requests in Transit

I am trying to update a legacy system's SQL solution to use the cloud.
The solution today involves a customer's Windows SQL Server installed on site; various machines are then configured to connect to that IP address / port / server name. When they connect, the machines set up any tables that are missing and regularly send their data. Data rates are low for an individual machine: roughly one write request every 10 seconds (it varies a lot), with no more than 2-3 KB of information in each write request.
Moving this to the cloud is tricky, mostly because the machines do not have unique identifiers. The good news is that we have the legacy machines connected to an IoT gateway (just think RPi) that knows a unique machineId. Furthermore, the IOTG is a full-fledged computer, but not too powerful of one, and its disk is an SD card.
[Image: New and Old Network Layout]
So far I have had a few things fall on their face.
1) Setting the machine to think the DB's IP/port is that of the IoT gateway, setting up an Express server on the IOTG to listen, then injecting the unique ID into the queries that I'd proxy up to the cloud (a raw-TCP sketch of this idea follows the list). I may have had a bug, but for some reason I couldn't even see the requests coming in on the port. Even if I could, I'd still have to figure out how to decode them. Shouldn't I at least be able to see these requests coming in?
2) Started looking into SQLite. The idea was to have SQLite listen on the port as an actual DB, then have a process on the IOTG query data out of SQLite, append a unique ID, and send it to the cloud. Unfortunately, SQLite does not listen on a port.
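For what it's worth, here is a sketch of the proxy idea from attempt 1. SQL Server clients speak the TDS protocol over raw TCP rather than HTTP, which would explain why an Express (HTTP) server never sees a parseable request. A plain TCP relay with Node's built-in net module can at least observe the bytes (the upstream host below is a placeholder); actually rewriting queries would additionally require parsing TDS framing:

    import * as net from "net";

    const UPSTREAM_HOST = "cloud-sql.example.com"; // placeholder for the cloud SQL server
    const UPSTREAM_PORT = 1433;                    // default SQL Server port

    const server = net.createServer((client) => {
      const upstream = net.connect(UPSTREAM_PORT, UPSTREAM_HOST);
      client.pipe(upstream);  // machine -> SQL Server (raw TDS bytes)
      upstream.pipe(client);  // SQL Server -> machine
      client.on("data", (chunk) => {
        // The requests are visible here as raw TDS packets, not HTTP.
        console.log(`saw ${chunk.length} bytes from the machine`);
      });
      client.on("error", () => upstream.destroy());
      upstream.on("error", () => client.destroy());
    });

    server.listen(1433, () => console.log("TDS relay listening on 1433"));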
I am starting to look at just installing a whole SQL Server instance on the device, but I'd really like to avoid that. I'm pretty sure it's fairly large, and heavy writing to disk is not advisable for a small embedded system like the one I'm running.
Generally, my questions boil down to:
1) Should I be able to see SQL queries in an Express server?
2) Should I be using a different technology? I failed to find a more SQL-specific proxy.
3) Am I correct in thinking that the SQLite path is dead? Even if I could find a way to attach it to a port, there is still not going to be any sort of response from SQLite when the clients try to make a connection.
4) Am I wrong to fear a local server? Diving into some documentation for making Express work with DBs gets me to https://www.microsoft.com/en-us/sql-server/developer-get-started/node/ubuntu/ which suggests 4 GB of memory; we're working with 0.5 GB.
Any other thoughts on how to approach this would be great.

Best technology for building race simulation application

I am trying to do something new, something I have never done before, and I am looking for advice or a pointer in the right direction on how to choose the technology. I am trying to build a race simulation app that will have thousands of IoT devices streaming data into a central platform. While I understand that I can use some sort of IoT hub from a cloud provider, what technology do I choose for storing the data?
An example is an online indoor biking app. There are apps where you can connect your indoor bike online and take part in a simulated race. For my project I am trying to build something similar. Do I use a NoSQL DB in this scenario? What technology will allow an application like this to scale well, since it could be millions of devices around the world in a "simulated" race? I am not worried about the front end and things like that, but about the backend: the IoT hub, storing data, and presenting in real time.
At this point it is important to understand what kind of data your IoT devices will stream, and at what rate. This has a significant impact on your question.
That is, if it's just location information and some other small data sent, let's say, once a second, then even tens of thousands of devices amount to only tens of thousands of small rows per second. This is not a big load of information, and any standard database, like MySQL, will be able to deal with it. You will of course need a multi-threaded server (or several) capable of handling many requests in parallel; a sketch follows below.
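Here is a minimal sketch of that first case, assuming the mysql2 Node client and a hypothetical samples table of small once-per-second telemetry rows (the connection details are placeholders):

    import mysql from "mysql2/promise";

    // A pool lets one server process handle many devices' writes in parallel.
    const pool = mysql.createPool({
      host: "db.example.com", // placeholder
      user: "race",
      password: "secret",
      database: "telemetry",
      connectionLimit: 50,
    });

    // One small row per device per second is well within MySQL's comfort zone.
    export async function recordSample(
      deviceId: string,
      lat: number,
      lon: number,
      speedKph: number,
    ): Promise<void> {
      await pool.execute(
        "INSERT INTO samples (device_id, ts, lat, lon, speed_kph) VALUES (?, NOW(), ?, ?, ?)",
        [deviceId, lat, lon, speedKph],
      );
    }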
If your IoT devices will stream HD video, then you're looking at a completely different solution: a much stronger server capable of handling a lot of streams in parallel, significant bandwidth requirements from your hosting company, and storage space for all the videos. In this case you would store the streams as files (if you need them later on), and you wouldn't need any special database either.
In any case, once you reach millions of users, you'll be able to scale most modern databases and servers, e.g. via MySQL's replication capability. For example, take a look at how Wikipedia relies on MySQL: https://www.mysql.com/why-mysql/case-studies/mysql-cs-wikipedia.html
So I wouldn't worry about the database at this stage; instead, make sure that the design of your system is in accordance with the type of data and the rate at which it is streamed.
Hope this gives you a pointer.

Stress testing a desktop app system

If I want to stress test a 'classic' client-server (desktop app <-> LAN <-> database server) Windows Forms desktop application to see how it performs when many concurrent PC users are using it, how should I go about it? I want to simulate many PC users concurrently going through a workflow, to see whether it all stands up and at what point the system degrades unacceptably. I've looked at many test tools, but they all seem to be skewed toward testing functionality or web app performance, which is quite different.
Clearly having many actual people on actual PCs is not practical, and lots of virtual machines on a few PCs is not representative either. 'Cloud' computing (EC2, Azure, etc.) looks promising, but the documentation and pricing information all seem to be skewed toward mobile apps or web servers, which again is not the same (though that could just be presentation, so I remain open to the idea). I need to be able to virtualise a small LAN of many client machines running the application, plus a database server.
Can anyone suggest how to do this, or recommend something?
TIA
IMHO the real question is: do you really need to do performance testing in your case? Consider this: where is your business and functional logic?
Performance testing of desktop applications is an oxymoron in itself. A desktop application is made to be used by one person at a time, so if getting a response takes 5 seconds, it will take (pretty much) 5 seconds no matter how many users are clicking the button. The only real shared backend here is the DB, and databases by design support serious concurrent load; you can stress that directly (see the sketch below). If that is not enough, just set up a cluster.
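To make that concrete, here is a minimal sketch of hammering the database directly from many simulated users, assuming the node mssql client; the connection string, table name, and load numbers are all placeholders:

    import sql from "mssql";

    const CONCURRENT_USERS = 200; // placeholder load level
    const ITERATIONS = 100;

    // Replay a representative query from the app's workflow and time it.
    async function simulateUser(pool: sql.ConnectionPool, id: number): Promise<void> {
      for (let i = 0; i < ITERATIONS; i++) {
        const t0 = Date.now();
        await pool.request().query("SELECT TOP 50 * FROM Orders ORDER BY CreatedAt DESC");
        console.log(`user ${id}, iteration ${i}: ${Date.now() - t0} ms`);
      }
    }

    async function main(): Promise<void> {
      const pool = await sql.connect("Server=lan-db;Database=App;User Id=test;Password=test");
      // All simulated users run concurrently against the shared pool.
      await Promise.all(
        Array.from({ length: CONCURRENT_USERS }, (_, id) => simulateUser(pool, id)),
      );
      await pool.close();
    }

    main().catch(console.error);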

Can we make 2 or more USB Internet Modems work on one PC?

I'm working on a project that requires connecting two or more USB Internet modems to a single PC and making them all work together at the same time. I want to know whether I can use two USB Internet modems on one PC and have them work simultaneously.
Thank you.
Combining multiple Internet or WAN connections is definitely possible, although there are lots of caveats. Google '3G Multihoming' for some more background.
There are actually relatively cheap (approx. $50) routers which will support load balancing over multiple WAN ports; look for 'Dual WAN' or 'Multi-homing' in the tech specs.
You can also 'trick' Windows into using more than one Internet connection; see the links below.
However, unless you also use a service on the server side to help split the load, you need to be aware of some limitations:
If your use case has lots of requests that can run in parallel, as many web page downloads do, then this technique should work well (see the sketch after this list).
If you are downloading a single large file in one request, then it will likely not make much difference, although if the setup is smart enough to move all other requests to another connection, it may help a little.
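As a rough illustration of the parallel-requests case, a program can spread downloads across two links by binding each outgoing request to a different local interface. This is a sketch in Node/TypeScript; the two addresses are placeholders for the IPs the modems assign:

    import http from "http";

    const MODEM_IPS = ["192.168.8.100", "192.168.9.100"]; // placeholder local addresses

    // Bind the outgoing socket to a specific local interface so the OS
    // routes this request over the corresponding modem.
    function fetchVia(url: string, localAddress: string): Promise<number> {
      return new Promise((resolve, reject) => {
        http.get(url, { localAddress }, (res) => {
          let bytes = 0;
          res.on("data", (chunk) => (bytes += chunk.length));
          res.on("end", () => resolve(bytes));
        }).on("error", reject);
      });
    }

    async function main(): Promise<void> {
      const urls = ["http://example.com/a", "http://example.com/b"]; // placeholders
      const sizes = await Promise.all(
        urls.map((u, i) => fetchVia(u, MODEM_IPS[i % MODEM_IPS.length])),
      );
      console.log(sizes); // each download went out over a different link
    }

    main().catch(console.error);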
Some links for Windows dual connection use (note I have not tested these personally):
http://www.wikihow.com/Combine-Two-Internet-Connections (be wary of the 'double speed' comment)
http://www.techkhoji.com/how-to-combine-multiple-internet-connections/ (see 'Method 2')

Microsoft Access Testing

I have inherited an Access database that has linked SQL tables. I need to test the network traffic that is caused by the execution of the DB, and to ascertain which parts of the system cause the most network traffic and are therefore the slowest.
I am not an Access guru, so I've struggled with what was suggested, which is: have Task Manager open at the Networking tab,
then step through the app, looking for where there is a significant rise in network traffic. But this seems rather unreliable and time-consuming.
Does anyone have any ideas on how I can achieve my goal in Access?
If you really need to analyze the network traffic, then you should probably get to know Wireshark well enough to do a capture that is filtered on the traffic between the client and the SQL server; for example, a capture filter such as "host 192.0.2.10 and tcp port 1433", where 192.0.2.10 stands in for your SQL server's address and 1433 is SQL Server's default port.