CoTURN Usage Statistics - webrtc

I am still a bit new to the WebRTC world and trying to find my way through. I have successfully set up CoTURN and have been able to route calls from behind a firewall with it. Now I am wondering if it is possible to somehow inspect and possibly visualize CoTURN's usage statistics. I would love to know how many users are utilizing the server at any given time, what the bandwidth and CPU usage is, etc. I saw details on how to optimize bandwidth and CPU usage in the official docs, but I haven't found any info on actually monitoring the usage. Any help would be highly appreciated.

If you want to monitor standard usage statistics like CPU usage, load and bandwidth, start with what's available for your infrastructure. For example, on AWS you could use CloudWatch, while in a generic Linux deployment you could export the usage stats with Prometheus and present them in Grafana.
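As a minimal sketch of the Prometheus route (assuming the psutil and prometheus_client Python packages; in practice node_exporter already exposes host metrics like these, so this only illustrates the idea, and the port and metric names are made up), a tiny exporter running next to coturn could look like:

# Minimal sketch: expose host CPU and network counters for Prometheus to scrape
import time
import psutil
from prometheus_client import Gauge, start_http_server

cpu_gauge = Gauge("host_cpu_percent", "Overall CPU utilisation in percent")
tx_gauge = Gauge("host_net_bytes_sent", "Bytes sent since boot")
rx_gauge = Gauge("host_net_bytes_recv", "Bytes received since boot")

start_http_server(9200)  # Prometheus then scrapes http://<host>:9200/metrics

while True:
    cpu_gauge.set(psutil.cpu_percent())
    net = psutil.net_io_counters()
    tx_gauge.set(net.bytes_sent)
    rx_gauge.set(net.bytes_recv)
    time.sleep(5)

Grafana then just charts those series.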
For the coturn/TURN-specific statistics, coturn can store some metrics in Redis; this is described in https://github.com/coturn/coturn/blob/master/turndb/schema.stats.redis
Total traffic information is also reported when the allocation is deleted. The keys are
"turn/user/<username>/allocation/<id>/total_traffic" or "turn/user/<username>/allocation/<id>/total_traffic/peer".
Applications interested in the total amount of traffic per allocation can subscribe to these events as:
psubscribe turn/realm/*/user/*/allocation/*/total_traffic
psubscribe turn/realm/*/user/*/allocation/*/total_traffic/peer
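A minimal sketch of such a subscriber (assuming coturn is pointed at a Redis stats database, e.g. via redis-statsdb in turnserver.conf, the redis-py package is installed, and host/port below are placeholders):

# Minimal sketch: listen for coturn's total_traffic events published to Redis
import redis

r = redis.Redis(host="localhost", port=6379, db=0)  # same Redis as redis-statsdb
p = r.pubsub()
p.psubscribe("turn/realm/*/user/*/allocation/*/total_traffic",
             "turn/realm/*/user/*/allocation/*/total_traffic/peer")

for msg in p.listen():
    if msg["type"] != "pmessage":
        continue
    # the channel name encodes realm, user and allocation id;
    # the payload carries the traffic counters reported by coturn
    print(msg["channel"].decode(), msg["data"].decode())

From there you can aggregate per user or per realm and feed the numbers into whatever dashboard you already use.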

Related

How to handle resource limits for apache in kubernetes

I'm trying to deploy a scalable web application on Google Cloud.
I have a Kubernetes deployment which creates multiple replicas of Apache+PHP pods. These have CPU/memory requests and limits set.
Let's say the memory limit per replica is 2 GB. How do I properly configure Apache to respect this limit?
I can cap the maximum process count and/or the maximum memory per process to prevent memory overflow (so the replicas will not be killed because of OOM). But this creates a new problem: these settings also limit the maximum number of requests a replica can handle. In the case of a DDoS attack (or just more traffic) the bottleneck could be the maximum process limit, not the memory/CPU limit. I think this could happen pretty often, as these limits are set for the worst-case scenario, not based on average traffic.
I want to configure the autoscaler so that it creates additional replicas in such an event, not only when CPU/memory usage is near the limit.
How do I properly solve this problem? Thanks for help!
I would recommend doing the following instead of trying to configure Apache to limit itself internally:
Enforce resource limits on pods, i.e. let them OOM (but see NOTE*).
Define an autoscaling metric for your deployment based on your load.
Set up a namespace-wide ResourceQuota. This enforces an aggregate limit on the resources pods in that namespace can use.
This way you can let your Apache+PHP pods handle as many requests as possible until they OOM, at which point they respawn and join the pool again, which is fine* (because hopefully they're stateless), and at no point does your overall resource utilization exceed the resource limits (quotas) enforced on the namespace.
* NOTE: This is only true if you're not doing fancy stuff like websockets or stream-based HTTP, in which case an OOMing Apache instance takes down the other clients that are holding an open socket to it. If you want, you can always enforce limits on Apache in terms of the number of threads/processes it runs anyway, but it's best not to unless you have a solid need for it. With this kind of setup, no matter what you do, you'll not be able to evade DDoS attacks of large magnitude. You're either going to have broken sockets (in the case of OOM) or request timeouts (not enough threads to handle requests). You'd need far more sophisticated networking/filtering gear to prevent "good" traffic from taking a hit.
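If you do end up capping Apache explicitly, the usual rule of thumb is to derive MaxRequestWorkers (mpm_prefork) from the container's memory limit and the average per-process footprint. A rough sketch in Python (the cgroup path shown is the cgroup v1 one, and the per-process and headroom numbers are made-up examples):

# Rough sketch: derive an mpm_prefork MaxRequestWorkers value from the pod's memory limit
CGROUP_LIMIT = "/sys/fs/cgroup/memory/memory.limit_in_bytes"  # cgroup v2 uses /sys/fs/cgroup/memory.max

def suggested_max_request_workers(avg_proc_mb=60, headroom_mb=256):
    with open(CGROUP_LIMIT) as f:
        limit_mb = int(f.read()) / (1024 * 1024)
    # leave headroom for the parent process, PHP opcache, OS buffers, etc.
    return max(1, int((limit_mb - headroom_mb) // avg_proc_mb))

print(suggested_max_request_workers())  # e.g. 29 for a 2 GB limit with the example numbers above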

scalability of azure cloud queue

In the current project we use 8 worker-role machines side by side that actually work a little differently than Azure may expect.
Short outline of the system:
each worker starts up to 8 processes that connect to the Cloud Queue and process messages
each process accesses three different cloud queues to collect messages for different purposes (delta recognition, backup, metadata)
each message leads to a WCF call to an ERP system to gather information, and the retrieved response is finally added to a Redis cache
This approach was chosen over many smaller machines for cost and performance reasons. While 24 one-core machines would manage about 400 calls/s to the ERP system, 8 four-core machines with 8 processes each do over 800 calls/s.
Now to the question: when we increased the machine count further to push performance to 1200 calls/s, we experienced outages of Cloud Queue. At the same moment in time, 80% of the machines' processes stop processing messages.
Here we have two problems:
Remote debugging is not possible for these processes, but it was possible to use dile to get some information out.
We use the GetMessages method of Cloud Queue to get up to 4 messages from the queue. Cloud Queue always answers with 0 messages. Reconnecting to the cloud queue does not help.
Restarting the workers does help, but shortly leads to the same problem.
Are we hitting the natural end of scalability of Cloud Queue and should switch to Service Bus?
Update:
I have not been able to fully understand the problem; I described it in the natural borders of Cloud Queue.
To summarize:
The count of TCP connections was impressive. Actually too impressive (multiple hundreds).
Going back to the original memory size let the system operate normally again.
In my experience I have been able to get better raw performance out of Azure Cloud Queues than Service Bus, but Service Bus has better enterprise features (reliability, topics, etc.). Azure Cloud Queue should handle up to 2K messages/second per queue.
https://azure.microsoft.com/en-us/documentation/articles/storage-scalability-targets/
You can also try partitioning across multiple queues if there is some natural partition key.
Make sure that your processes don't have some sort of thread deadlock that is the real culprit. You can test this by connecting to the queue when it appears hung and trying to pull messages from it. If that works, it is your process, not the queue.
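As a sketch of that test (written against the current azure-storage-queue Python SDK rather than the older CloudQueue/GetMessages API from the question; the connection string and queue name are placeholders):

# Sketch: independently check whether the queue still hands out messages
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<storage-connection-string>", "work-items")
props = queue.get_queue_properties()
print("approximate queue depth:", props.approximate_message_count)

# peek does not change message visibility, so it won't disturb the real workers
for msg in queue.peek_messages(max_messages=4):
    print("peeked message:", msg.id)

If the depth is non-zero and peeking returns messages while your workers still see 0, the problem is in the worker processes, not the queue.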
Also take a look at this to set up some other monitors:
https://azure.microsoft.com/en-us/documentation/articles/storage-monitor-storage-account/
It took some time to solve this issue:
First, a summary of how the storage account was used:
We used the blob storage once a day pretty heavily.
The "normal" diagonistics that Azure provides out of the box also used the same storage account.
Some controlling processes used small tables to store and read information once an hour for ca. 20 minutes
There may be up to 800 calls/s that try to increase a number to count calls to an ERP system.
When we recognized that the storage account was under heavy load, we split it up.
Now there are three physical storage accounts having 2 queues.
The original one still takes up to 800 calls/s for incrementing counters.
Diagnostics are still on the original one.
Controlling information has also been moved.
The system has now been running for 2 weeks, working like a charm. There are several things we learned from this:
No, the infrastructure is "not just there" and it doesn't scale endlessly.
Even though we thought we didn't use "that much", summed up we used it quite heavily and in an uncontrolled way.
There are no "best practices" anywhere on the net that tell the complete story. Especially when starting to work with the storage account, a guide from MS would be quite helpful.
Exception handling in storage is quite bad. Even if the storage account is overused, I would expect some kind of exception and not just zero messages returned without any surrounding information.
Read the complete story here: natural borders of cloud storage scalability
UPDATE:
Scalability is influenced by many factors. You may be interested in Azure Service Bus: Massive count of listeners and senders to be aware of some more pitfalls.

For shipping logs from an app server, which to use: Logstash forwarder, Flume or Fluentd?

Logstash forwarder is light, but between logstash-forwarder and Logstash there is latency over the network [if I am using Logstash forwarder on one machine and sending logs to Logstash on another machine].
Flume/Flume-NG: CPU utilisation is high for the same amount of data (for example, for 2 MB it's around 20 percent).
Fluentd: doesn't use Java, it's based on CRuby, but its CPU utilisation also peaks at around 30 percent.
For our use case we do not want to add significant load on the production boxes just to forward logs, and if I use Logstash I will be introducing a new single point of failure, so I am pretty confused about which one to choose.
Interesting performance statistics.
From my experience, logstash-forwarder is fairly lightweight and encryption/compression is very helpful. This indeed might cause some latency. Is that an important factor for you? I guess the latency is smaller than 2-3 seconds. I think that in many log management use cases, real time is not a strong requirement.
At the end of the day, all these agents need to collect data from apps/files, package them and ship them over the network. This takes some cycles but in most cases, these are 2%-4% of the resources a normal server would have.
Have a look at rsyslog, which has many configuration options for how often it batches and ships logs. You can run it in a Docker container and limit resources more strictly on rsyslog or on any of the other agents (https://hub.docker.com/r/logzio/logzio-rsyslog-shipper/).
Another option would be to post logs directly from your app server with bulk HTTP POSTs by writing your own code. It's something most open-source stacks like ELK can ingest, and it's something we recommend using at http://logz.io
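As a rough sketch of that approach (the endpoint and index name are placeholders, Elasticsearch's _bulk API is used as the example ingest target, and the requests package is assumed):

# Sketch: batch log lines and ship them with a single bulk HTTP POST
import json
import requests

def ship_batch(lines, endpoint="http://logs.example.com:9200/_bulk", index="app-logs"):
    body = ""
    for line in lines:
        body += json.dumps({"index": {"_index": index}}) + "\n"
        body += json.dumps({"message": line}) + "\n"
    resp = requests.post(endpoint, data=body,
                         headers={"Content-Type": "application/x-ndjson"}, timeout=10)
    resp.raise_for_status()

ship_batch(["user login ok", "payment failed: timeout"])

Batching like this keeps the per-request overhead low, but you have to handle retries and buffering yourself, which is exactly what the agents above give you for free.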

ElastiCache Redis spikes

I'm using ElastiCache Redis and storing small pieces of data (~5-10 MB) in it. Everything works perfectly for a while and then suddenly it responds a lot slower than usual (like 2000 ms instead of 100 ms). Most of the actions I'm doing are simple: select a single entry from Redis and provide it to the client. I noticed this problem only in benchmarks, not in real usage.
According to Google and StackOverflow it can be related to Redis persistence, but I found that persistence is disabled in the group options of ElastiCache.
I used redis-stat to monitor stuff in Redis, and it seems like there are regular system CPU usage spikes every n minutes.
Does anyone know what kind of thing can cause such a problem?

How can I monitor the bandwidth of an object in the Amazon S3 service?

How can I programmatically monitor the bandwidth of an object in the AWS S3 service? I would like to do this to prevent excessive bandwidth usage by clients who are using our services and costing us more than we can afford. We would like to limit bandwidth to 1 TB per object.
The detailed usage reports are just per bucket, not per object.
What you could do is enable logging and parse the logs once an hour or so. It's certainly not instant, but it would prevent people from going way over your usage limits.
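A rough sketch of that log-parsing approach (assumes server access logging is already enabled into a separate logging bucket, the boto3 package is installed, the bucket/prefix names are placeholders, and real parsing should be more defensive than this):

# Sketch: sum "Bytes Sent" per object key from S3 server access logs
import shlex
from collections import defaultdict
import boto3

s3 = boto3.client("s3")
LOG_BUCKET, LOG_PREFIX = "my-logging-bucket", "access-logs/"  # placeholders

bytes_per_key = defaultdict(int)
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=LOG_BUCKET, Prefix=LOG_PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=LOG_BUCKET, Key=obj["Key"])["Body"].read().decode()
        for line in body.splitlines():
            parts = shlex.split(line)
            # the bracketed timestamp splits into two tokens, so the object
            # key is field 8 and "Bytes Sent" is field 12 of the access log line
            key, sent = parts[8], parts[12]
            if sent != "-":
                bytes_per_key[key] += int(sent)

over_limit = [k for k, v in bytes_per_key.items() if v > 1024 ** 4]  # keys past ~1 TB
print(over_limit)

Once a key shows up over the limit you could, for example, switch the object to private and serve it only through signed URLs.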
Also, s3stat is a good option up to a point. Once you start doing more than ~ 50 million requests per month, they have trouble crunching the data.