Identifiers for WebRTC's Statistics API - webrtc

Could anyone tell me which identifiers in WebRTC Statistics API are directly related to the quality of the experiences users have during connections?

This depends on the type of session. A video call where many participants collaborate has different needs than an audio call where one person talks and the others mainly listen.
In general, the statistics that impact perceived quality are packetsLost, jitter, currentRoundTripTime, framesDropped, and pliCount.
You should also consider that the bandwidth estimators adapt the bandwidth (and so the quality) based on the feedback from the other party.
If you search for "Quality of experience estimators for WebRTC" you'll find studies that use the above statistics to estimate the QoE.
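If it helps to make those identifiers concrete, here is a rough sketch of how they might be folded into a single score. The packetsLost, jitter, and framesDropped inputs mirror fields on the inbound-rtp stats, and currentRoundTripTime comes from the candidate-pair stats, but the qoe_score function, its weights, and its caps are invented for illustration; published QoE estimators are considerably more sophisticated.

```python
# Rough sketch of a QoE score built from WebRTC statistics. The input
# names mirror fields on the inbound-rtp and candidate-pair stats objects;
# the weights and caps below are invented for illustration.

def qoe_score(packets_lost, packets_received, jitter_s, rtt_s,
              frames_dropped, frames_decoded):
    """Return a rough 0-100 score; higher means better perceived quality."""
    loss_ratio = packets_lost / max(packets_lost + packets_received, 1)
    drop_ratio = frames_dropped / max(frames_decoded, 1)
    score = 100.0
    score -= 40.0 * min(loss_ratio / 0.05, 1.0)  # 5% packet loss is already bad
    score -= 25.0 * min(jitter_s / 0.05, 1.0)    # cap the jitter penalty at 50 ms
    score -= 20.0 * min(rtt_s / 0.4, 1.0)        # cap the RTT penalty at 400 ms
    score -= 15.0 * min(drop_ratio / 0.1, 1.0)   # cap at 10% dropped frames
    return max(score, 0.0)

# A healthy call: almost no loss, low jitter and RTT.
healthy = qoe_score(0, 1000, jitter_s=0.005, rtt_s=0.05,
                    frames_dropped=2, frames_decoded=500)
```

In the browser these fields arrive asynchronously via getStats(); a real estimator would also weight them differently for an audio-only session versus a multi-party video call, as noted above.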

Related

OpenMDAO v/s modeFrontier comparisons for optimization capabilities and application scaling

I realize that this might not be the best platform to ask this, but I think this would be best unbiased one to put my question in.
How would you compare OpenMDAO v/s modeFrontier with regard to their optimization capabilities, application scaling, and overall software development? Which one would you pick and why?
If you know of any resources or links, please share them.
The most fundamental technical difference is that OpenMDAO can pass both data and derivative information between components. This means that if you want to use gradient-based optimization and have access to at least some tools that provide derivative information, OpenMDAO will have far more effective overall capabilities. This is especially important when doing optimization with high-cost analysis tools (e.g. partial differential equation solvers --- CFD, FEA). In those situations making use of derivatives offers between a 100x and 10000x speedup.
One other difference is that OpenMDAO is designed to run natively on a distributed memory compute cluster. Industrial frameworks can submit jobs to remote clusters and query for the results, but OpenMDAO itself can run on the cluster and has a direct and internal MPI based distributed memory capability. This is critical to it being able to efficiently handle derivatives of those expensive PDE solvers. To the best of my knowledge, OpenMDAO is unique in this regard. This is a low level technical detail that most users never need to directly understand, but the consequence is that if you want to do any kind of high fidelity coupled optimizations (aero-structural, aero-propulsive, aero-thermal) with more than one PDE solver in the loop then OpenMDAO's architecture is going to be by far the most effective.
However, OpenMDAO does not offer a GUI. It does not have the same level of data tracking and visualization tools. Also, I know that modeFrontier offers the ability to split a single model up across multiple computers distributed across an organization. modeFrontier, along with other tools like ModelCenter and Isight, all offer this kind of smooth user experience and code-free interaction that many find valuable.
Honestly, I'm not sure a direct comparison is really warranted. I think if you have an organization that invests in a commercial integration tool like modeFrontier, then you can still use OpenMDAO to create tightly coupled integrated optimizations which you can then include as boxes inside your overall integration framework.
You certainly can use OpenMDAO as a complete integration framework, and it has some advantages in that area related to derivatives and execution in distributed memory environments. But you don't have to, and it certainly does not have to be an exclusive decision.
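The derivative point above can be illustrated with a toy counting sketch (plain Python, not OpenMDAO's API): a finite-difference gradient re-runs the model once per design variable, while an analytic gradient needs no extra model runs. With an expensive PDE solve behind each run, that gap is where the claimed speedups come from.

```python
# Toy illustration (not OpenMDAO code) of why analytic derivatives matter:
# a finite-difference gradient needs one extra model run per design
# variable, an analytic gradient needs none. Counting calls to model()
# stands in for the cost of an expensive PDE solve (CFD, FEA, ...).

calls = 0

def model(x):
    global calls
    calls += 1
    return sum(xi ** 2 for xi in x)   # stand-in for an expensive analysis

def grad_fd(x, h=1e-6):
    """Forward-difference gradient: len(x) + 1 model evaluations."""
    f0 = model(x)
    return [(model(x[:i] + [x[i] + h] + x[i + 1:]) - f0) / h
            for i in range(len(x))]

def grad_analytic(x):
    """Hand-derived gradient of sum(x_i^2): zero extra model evaluations."""
    return [2.0 * xi for xi in x]

x = [1.0] * 100
calls = 0
grad_fd(x)
fd_calls = calls        # 101 evaluations for 100 design variables
calls = 0
grad_analytic(x)
analytic_calls = calls  # 0 evaluations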

Performance benchmark between boost graph and tigerGraph, Amazon Neptune, etc

This might be a controversial topic, but I am concerned about the performance of boost graph vs commercial software such as TigerGraph, since we need to choose one.
I am inclined to choose Boost, but I am concerned whether, performance-wise, Boost is good enough.
Disregarding anything around persistence and management, I am concerned with Boost Graph's core algorithm performance.
If it is good enough, we can build our application logic on top of it without worry.
Also, I found the benchmarks below from the LDBC Social Network Benchmark.
LDBC benchmark
It seems that TuGraph is the fastest...
Is LDBC's benchmark authoritative in the realm of graph analysis software?
Thank you
I would say that any benchmark request is a controversial topic as they tend to represent a singular workload, which may or may not be representative of your workload. Additionally, performance is only one of the aspects you should look at as each option is built to target different workloads and offers different features:
Boost is a library, not a database, so anything around persistence and management would fall on the application to manage.
TigerGraph is an analytics platform that is focused on running real-time graph analytics, such as deep link analysis.
Amazon Neptune is a fully managed service focused on highly concurrent transactional graph workloads.
All three have strong capabilities and will perform well when used in the manner intended. I'd suggest you figure out which option best matches the type of workload you are looking to run, the type of support you need, and the amount of operational work you are willing to take on; that should make the choice more straightforward.
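The "library, not a database" distinction can be sketched in a few lines; Python stands in here for Boost's C++ BGL, and the graph itself is invented. The point is that the application owns the in-memory graph, calls algorithms in-process, and handles persistence itself:

```python
# Sketch of the "library, not a database" point: the application builds
# the graph in memory, runs algorithms in-process, and is responsible for
# persistence itself. Python stands in here for Boost's C++ BGL.

import heapq
import json

graph = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}  # made-up weights

def dijkstra(g, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in g[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

shortest = dijkstra(graph, "a")   # runs in-process, no server round-trip
snapshot = json.dumps(graph)      # persistence is the application's job
```

With a database like TigerGraph or Neptune, the storage, durability, and concurrency around that last line are what you are paying for.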

Compare SDN Mininet results to traditional network results

My topic is: Comparative performance analysis of SDN-based networks and traditional networks. So I decided to use Mininet and already know how to perform some tests. However, I am wondering which tests would be better to choose (throughput, jitter, packet delivery ratio, latency, end-to-end packet delay, etc.) and how/where I can actually run tests for a traditional network? NS2? What would be your suggestions? Maybe any useful links/tutorials?
Many thanks,
You should first use the same simulator to simulate both network types. Traditional and SDN networks are almost the same; the main differences are the management view and the flexibility.
You need first to:
Set your goals for the study. Why are you performing it? Has someone done this before? Search Google Scholar and check.
If some people have done this, then think about the metrics or objectives they were missing, and how to highlight them.
A good start for SDN research is always this paper (http://ieeexplore.ieee.org/document/6994333/).
Feel free to comment and let us know more in case this is not sufficient. I'm doing my PhD in SDN, so I would like to help and exchange knowledge.
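Whichever simulator you pick, the metrics listed in the question are typically derived from send/receive timestamps in a packet trace, whether that trace comes from Mininet, an NS-2/NS-3 trace file, or iperf logs. A sketch with invented timestamps:

```python
# Sketch of how packet delivery ratio, mean latency, and jitter are
# usually derived from a trace of send/receive timestamps. The packet
# IDs and times below are invented for illustration.

def metrics(sent, received):
    """sent: {pkt_id: t_sent}; received: {pkt_id: t_recv}, both in seconds."""
    delays = [received[p] - sent[p] for p in received]
    pdr = len(received) / len(sent)                # packet delivery ratio
    latency = sum(delays) / len(delays)            # mean end-to-end delay
    jitter = sum(abs(delays[i] - delays[i - 1])    # mean delay variation
                 for i in range(1, len(delays))) / max(len(delays) - 1, 1)
    return pdr, latency, jitter

sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
received = {1: 0.05, 2: 0.16, 3: 0.25}             # packet 4 was lost
pdr, latency, jitter = metrics(sent, received)
```

Computing every metric from one shared trace keeps the SDN and traditional runs directly comparable.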

Optimization algorithms optimizing an existing system connections

I am currently working on an existing infrastructure with about 1000 customer sites connected to about 5 different hubs. A customer site may connect to one or two hubs to ensure reliability, but each customer site is connected to at least one hub. I want to determine whether the current system is the best, or whether it can be optimised to give better connections from customer sites to hubs, to help improve connectivity and reliability. Can you suggest good optimisation algorithms to look into? Thank you.
Sounds like you're doing some variation of the Facility Location Problem.
This is a well-known problem, and while there are algorithms that can solve for the global optimum (Dijkstra's Algorithm, or other variants of Dynamic Programming), they do not scale well (i.e. you run into the curse of dimensionality). You could try this, but 1000 already sounds pretty big (it depends on your exact problem formulation, though).
I'd recommend taking a look at the Coursera MOOC Discrete Optimization. You don't have to take the whole course, but in the "Assignments" section of the video lectures, the instructor also explains a variant of the Facility Location Problem and some possible approaches to think about; once you've decided which one you want to use, you can look deeper into that particular approach.
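As a starting point before any of the heavier approaches, a nearest-hub greedy assignment makes a useful baseline to compare against. This sketch is illustrative only: the coordinates, the Euclidean cost, and the two-hub redundancy rule are all invented, and a real study would use actual link costs or latencies.

```python
# Baseline sketch for the hub-assignment variant of facility location:
# connect each customer site to its nearest hub, plus the second-nearest
# as a backup for redundancy. Coordinates and costs are invented.

def assign(sites, hubs):
    """sites/hubs: {name: (x, y)}. Returns {site: [primary_hub, backup_hub]}."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    plan = {}
    for s, sp in sites.items():
        ranked = sorted(hubs, key=lambda h: dist(sp, hubs[h]))
        plan[s] = ranked[:2]   # nearest hub plus one backup
    return plan

sites = {"s1": (0, 0), "s2": (9, 1), "s3": (5, 5)}
hubs = {"h1": (1, 0), "h2": (8, 0), "h3": (5, 6)}
plan = assign(sites, hubs)
```

The total cost of such a plan gives you a concrete number to beat with local search or the more principled approaches from the course.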

Commercial uses for grid computing?

I keep hearing from associates about grid computing which, from what I can gather, is highly distributed stuff along the lines of SETI@home.
Is anyone working on these sort of systems for business use? My interest is in figuring out if there's a commercial reason for starting software development in this field.
Rendering Farms such as Pixar
Model Evaluation e.g. weather, financials, military
Architectural Engineering e.g. earthquakes.
To list a few.
Grid computing is really only needed if you have a lot of WORK that needs to be done, like folding proteins, otherwise a simple server farm will likely be plenty.
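The "lots of independent WORK" case can be sketched as a plain task pool: the tasks share no state, so they scale out across however many nodes you have. Here a thread pool on one machine stands in for grid nodes, and score_protein is an invented stand-in for one expensive, independent task:

```python
# Sketch of the workload shape grid computing targets: many independent
# tasks with no shared state. A thread pool on one machine stands in for
# grid nodes; score_protein is an invented placeholder computation.

from concurrent.futures import ThreadPoolExecutor

def score_protein(seq):
    return sum(ord(c) for c in seq) % 97   # placeholder computation

def run_grid(tasks, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(score_protein, tasks))   # order is preserved

results = run_grid(["AAA", "AB", "QQQQ"])
```

If your tasks instead need tight communication or shared state, a conventional server farm (or a single big machine) is often the simpler fit, which is the point above.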
Obviously Google is a major user of grid computing; its entire search service relies on it, as do many of its other services.
Engines such as BigTable are based on using lots of nodes for storage and computation. These are commercially very useful because they're a good alternative to a small number of big servers, providing better redundancy and cost effective scaling.
The downside is that the software is fiendishly difficult to write, but Google seem to manage that one ok :)
So anything which requires big storage and/or lots of computation.
I used to work for these guys. Grid computing is used all over. Anyone who makes computer chips uses them to test designs before getting physical silicon cut. Financial websites use grids to calculate if you qualify for that loan. These days they are starting to replace big iron in a lot of places, as they tend to be cheaper to maintain over the long term.