How can I measure instantaneous bandwidth using ns-3?

I have looked into FlowMon, which counts packets over the entire simulation interval. Can I specify shorter intervals, and many such intervals? Are there other options for measuring instantaneous bandwidth (or at least bandwidth averaged over short intervals like 1 ms) in ns-3?

You could still use FlowMon for this. FlowMon accumulates packet counts as the simulation progresses, so at different points in time you can read values such as txBytes and do some calculations with them.
Take a look at ns-3 event scheduling and try something like:
Simulator::Schedule(Seconds(1), &StatsCalculationCallback);
where StatsCalculationCallback is the function in which you calculate the difference between the current value and the previous one, giving you the bandwidth for that short interval. At the end of StatsCalculationCallback you also need to reschedule it for the next interval, as in the sketch below.
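For example, a minimal sketch of such a periodic callback (assuming a FlowMonitor installed via FlowMonitorHelper and stored in a global; the 1 ms interval and the variable names are illustrative):

#include "ns3/core-module.h"
#include "ns3/flow-monitor-module.h"
#include <iostream>
#include <map>

using namespace ns3;

static Ptr<FlowMonitor> g_monitor;               // set after FlowMonitorHelper::InstallAll()
static std::map<FlowId, uint64_t> g_lastTxBytes; // txBytes seen at the previous call

static void StatsCalculationCallback()
{
  for (auto const &kv : g_monitor->GetFlowStats())
  {
    // Bytes transmitted since the previous call, i.e. within the last 1 ms window.
    uint64_t delta = kv.second.txBytes - g_lastTxBytes[kv.first];
    g_lastTxBytes[kv.first] = kv.second.txBytes;
    std::cout << Simulator::Now().GetSeconds() << "s flow " << kv.first
              << ": " << delta * 8.0 / 0.001 << " bit/s" << std::endl;
  }
  Simulator::Schedule(MilliSeconds(1), &StatsCalculationCallback); // reschedule
}

Schedule the first call once before Simulator::Run(), e.g. Simulator::Schedule(MilliSeconds(1), &StatsCalculationCallback);.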

Related

AWS Cloudwatch ELB monitoring active connections

I would like to monitor the maximum number of active connections that my ApplicationELB is managing over a 5-minute period.
The ApplicationELB publishes a metric called ActiveConnectionCount. The documentation describes this in part as:
The total number of concurrent TCP connections active from clients to the load balancer and from the load balancer to targets.
And further states:
The most useful statistic is Sum.
I believe that Sum would total all the active connections reported within the time frame. E.g., if the ELB is maintaining 10 connections and reports this number every second, the Sum over a 5-minute period would be 3000. This is not what I want. Furthermore, when I use SUM over a 5-minute period I'm getting 20k or so -- far more than the number of real concurrent connections, which are at most a few hundred.
If I aggregate using Maximum then the number reported by AWS is zero (!?).
If I aggregate using Average then the number appears to be reasonable (ranging from 80 - 200), but it is also wildly inaccurate: it almost inversely correlates with new connections and response time. That is, during times of the day when response time and the number of new connections are low, average active connections is higher.
So, I guess, here are my questions:
(1) How can I see the maximum number of concurrent connections between the ELB and clients/app servers? (Ideally, I could separate these two, but it doesn't look like the ELB does that.)
Less important, but I'm curious:
(2) Why does MAXIMUM yield zero, while AVERAGE yields 80-200?
(3) Why does the documentation say that SUM should be used?
Thanks for any help / insight!
How can I see the maximum number of concurrent connections between the ELB and clients/app servers? (Ideally, I could separate these two, but it doesn't look like the ELB does that.)
Why does MAXIMUM yield zero, while AVERAGE yields 80-200?
As you said, the ELB does not do that. Among the metrics you can also see something called "SampleCount", which is the number of samples taken during a period of time, by default 1 minute. If we could somehow access the counts in these samples, we could get a minimum and maximum sample. For whatever reason, that is currently broken or not implemented, and Minimum/Maximum show as 0. Therefore, the most useful statistic, in my opinion at least, is Average, which takes the Sum (of the counts) and divides it by the SampleCount; for example, a Sum of 6,000 over a SampleCount of 60 yields an Average of 100 connections.
Why does the documentation say that SUM should be used?
Good question, because if you think about it, it doesn't make much sense: it's just a sum of the counts in all samples, which doesn't give you much information.

Collect statistics on current traffic with Bro

I want to collect statistics on traffic every 10 seconds, and the only tool I have found is the connection_state_remove event:
event connection_state_remove(c: connection)
	{
	# Stream id, key, and observation are three separate arguments.
	SumStats::observe("traffic", [$str="all"], [$num=c$orig$num_bytes_ip]);
	}
How do I deal with connections that have not been removed by the end of that period? How do I get statistics from them?
The events you're processing are independent of the time interval at which the SumStats framework reports statistics. First, you need to define what exactly are the statistics you care about — for example, you may want to count the number of connections for which Bro completes processing in a given time interval. Second, you need to define the time interval (in your case, 10 seconds) and how to process the statistical observations in the SumStats framework. This latter part is missing in your snippet: you're only making an observation but not telling the framework what to do with it.
The examples in the SumStats documentation are very close to what you're looking for.

Waiting time in SUMO

I am using SUMO for traffic signal control and want to optimize the signal phases to reduce some objectives. During the process, I use the traci module to read the state of the traffic junction. The confusing part is traci.lane.getWaitingTime.
I don't know how the waiting time is calculated, and after observing the output with two detectors, I think it is too large.
Can someone explain how the waiting time is calculated in SUMO?
The waiting time essentially counts the number of seconds a vehicle has a speed of less than 0.1 m/s. In the case of traci.lane this means it is the number of (nearly) standing vehicles multiplied by the step length (since traci.lane returns the values for the last step), as the sketch below illustrates.
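Here is an illustrative re-implementation of that rule (a sketch of the described behavior, not SUMO's actual code; the function name and parameters are assumptions):

#include <vector>

// A vehicle counts as waiting if its speed in the last step was below 0.1 m/s.
double LaneWaitingTime(const std::vector<double>& lastStepSpeeds, double stepLengthSeconds)
{
    int halting = 0;
    for (double v : lastStepSpeeds)
    {
        if (v < 0.1)
            ++halting;
    }
    return halting * stepLengthSeconds; // e.g. 5 standing vehicles * 1 s step = 5 s
}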

Calculate % Processor Utilization in Redis

Using the INFO CPU command on Redis, I get the following values back (among other values):
used_cpu_sys:688.80
used_cpu_user:622.75
Based on my understanding, these values indicate the CPU time (expressed in seconds) accumulated since the launch of the Redis instance, as reported by the getrusage() call (source).
What I need to do is calculate the % CPU utilization based on these values. I looked extensively for an approach to do so but unfortunately couldn't find a way.
So my questions are:
Can we actually calculate the % CPU utilization based on these 2 values? If the answer is yes, then I would appreciate some pointers in that direction.
Do we need some extra data points for this calculation? If the answer is yes, I would appreciate if someone can tell me what those data points would be.
P.S. If this question should belong to Server Fault, please let me know and I will post it there (I wasn't 100% sure if it belongs here or there).
You need to read the values twice, calculate the delta of sys + user, and divide by the wall-clock time elapsed between the two reads; multiplied by 100, that gives you the CPU usage in % (of one core) for that duration, as in the sketch below.
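A minimal sketch of the calculation (it assumes you have already issued INFO CPU twice through a Redis client and parsed out used_cpu_sys and used_cpu_user; the names and the second reading are illustrative):

#include <iostream>

// CPU seconds consumed by the Redis process, parsed from an INFO CPU reply.
struct CpuSample
{
    double usedCpuSys;  // used_cpu_sys
    double usedCpuUser; // used_cpu_user
};

// Percentage of one core used between two samples taken elapsedSeconds apart.
double CpuPercent(const CpuSample& prev, const CpuSample& curr, double elapsedSeconds)
{
    double delta = (curr.usedCpuSys + curr.usedCpuUser)
                 - (prev.usedCpuSys + prev.usedCpuUser);
    return 100.0 * delta / elapsedSeconds;
}

int main()
{
    CpuSample first = {688.80, 622.75};  // values from the question
    CpuSample second = {689.20, 623.10}; // hypothetical reading 10 s later
    std::cout << CpuPercent(first, second, 10.0) << "%" << std::endl; // 7.5%
}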

Measure upload and download speed on iPhone

I would like to measure the upload and download speed of data on the iPhone. Is there any API available to achieve this? Is it correct to measure it by dividing the total bytes received by the time the response took?
Yes, it is correct to measure total bytes / time taken; that is exactly what the speed is. If you want to show the download speed continuously, you might want to take an average over a recent sample, e.g. the last 500 bytes and the time it took to download those particular ones.
To do this you could keep an NSMutableData buffer that you empty every 2 seconds. Then [buffer length] / 2 tells you how many bytes per second you had during those 2 seconds. When you empty the buffer, of course append its contents to the data you are downloading.
There is no direct API to know the speed.
Dividing total data received/sent by total time will only give you the average speed. Speed tends to vary a lot over time, so if you want a more accurate value, do the speed calculation based on sampling:
(Data transferred in 1 minute) / (60 seconds) ---> use this only if you need greater accuracy in the speed calculation. The sampling duration can be changed based on the level of accuracy required.
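The sampling idea in both answers is language-agnostic; here is a minimal sketch of it (shown in C++ for brevity; SpeedSampler and the window handling are illustrative, not an iOS API):

#include <chrono>
#include <cstddef>
#include <iostream>

// Accumulates received byte counts and reports the average speed once per window.
class SpeedSampler
{
public:
    explicit SpeedSampler(double windowSeconds) : window_(windowSeconds) {}

    // Call this for every chunk of data received.
    void AddBytes(std::size_t n)
    {
        using Clock = std::chrono::steady_clock;
        if (bytes_ == 0)
            start_ = Clock::now(); // first chunk of a new window
        bytes_ += n;
        double elapsed = std::chrono::duration<double>(Clock::now() - start_).count();
        if (elapsed >= window_)
        {
            std::cout << bytes_ / elapsed << " bytes/s over the last "
                      << elapsed << " s" << std::endl;
            bytes_ = 0; // start the next sampling window
        }
    }

private:
    double window_;
    std::size_t bytes_ = 0;
    std::chrono::steady_clock::time_point start_;
};

With a 60-second window this matches the one-minute sampling above; shrink the window for a more responsive display.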