I am trying to find any element referring to IncomingBitrate in a webrtc dump file.
Where can I find the incoming bitrate in webrtc-internals?
Also, how can I calculate the incoming bitrate from WebRTC stats?
In webrtc-internals check the active connection -- it's printed in bold. Usually it is Conn-Audio-1-0. There are two fields, bytesSent and bytesReceived, which allow you to calculate the bitrate. Also check the constraints + stats demo for an actual example: https://webrtc.github.io/samples/src/content/peerconnection/constraints/
In getStats, iterate over the reports until you find one of type googCandidatePair with .stat('googActiveConnection') === 'true'. That gives you the same information as webrtc-internals. If you want per-track/stream values, reports of type ssrc have bytesSent or bytesReceived, depending on the direction.
Then calculate the bitrate by dividing the difference in bytes sent/received between two getStats calls by the time elapsed between them.
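A minimal polling sketch along those lines (assuming pc is your RTCPeerConnection and using the legacy callback-based getStats that exposes the goog-prefixed reports mentioned above):

// Poll every second and compute the incoming bitrate of the active candidate pair.
// Assumes `pc` (an RTCPeerConnection) is already in scope.
let lastBytesReceived = 0;
let lastTimestamp = 0;
setInterval(() => {
  pc.getStats((response) => {
    response.result().forEach((report) => {
      if (report.type === 'googCandidatePair' &&
          report.stat('googActiveConnection') === 'true') {
        const bytes = Number(report.stat('bytesReceived'));
        const now = Date.now();
        if (lastTimestamp > 0) {
          const bitsPerSecond =
            8 * (bytes - lastBytesReceived) / ((now - lastTimestamp) / 1000);
          console.log('incoming bitrate:', Math.round(bitsPerSecond), 'bit/s');
        }
        lastBytesReceived = bytes;
        lastTimestamp = now;
      }
    });
  });
}, 1000);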
Related
I am using two MSP430F5529 boards with the TRF7970A BoosterPack. I'm making one module work in Special Direct Mode (per SLOA214, SDM is used to transmit data) and the other module work in Direct Mode 1 (DM1) to receive the data transmitted by module one. But I'm not able to receive any data.
Below is my TX code.
Mifare_SDM_config();
Mifare_SDM_Enter();
Mifare_SDM_Transmit((unsigned int*)tx_buff, 10, 1); // here 1 is the parity bit
Mifare_SDM_Exit();
And my receiver code:
// Entering DM1
Mifare_DM1_Enter();
Mifare_DM1_Recieve(rx_buff, rx_len, 1); // here 1 is the parity bit
Mifare_DM1_Exit();
Am I missing anything?
When I execute the ovs-dpctl show command, I get:
$ ovs-dpctl show
system#ovs-system:
lookups: hit:37994604 missed:218759 lost:0
flows: 5
masks: hit:39862430 total:5 hit/pkt:1.04
port 0: ovs-system (internal)
port 1: vbr0 (internal)
port 2: gre_sys (gre)
port 3: net2
I found this explanation in the ovs-dpctl documentation:
[-s | --statistics] show [dp...]
Prints a summary of configured datapaths, including their datapath numbers and a list of ports connected to each datapath. (The local port is identified as port 0.) If -s or --statistics is specified, then packet and byte counters are also printed for each port.
The datapath numbers consists of flow stats and mega flow mask stats.
The "lookups" row displays three stats related to flow lookup triggered by processing incoming packets in the datapath. "hit" displays number of packets matches existing flows. "missed" displays the number of packets not matching any existing flow and require user space processing. "lost" displays number of packets destined for user space process but subsequently dropped before reaching userspace. The sum of "hit" and "miss" equals to the total number of packets datapath processed.
The "flows" row displays the number of flows in datapath.
The "masks" row displays the mega flow mask stats. This row is omitted for datapath not implementing mega flow. "hit" displays the total number of masks visited for matching incoming packets. "total" displays number of masks in the datapath. "hit/pkt" displays the average number of masks visited per packet; the ratio between "hit" and total number of packets processed by the datapath.
If one or more datapaths are specified, information on only those datapaths are displayed. Otherwise, ovs-dpctl displays information about all configured datapaths.
My questions are:
1. Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
2. If the total number of incoming packets is equal to (lookups.hit + lookups.missed), why is the value of masks.hit (39862430) greater than lookups.hit (37994604) + lookups.missed (218759)?
3. Why is the masks.hit/pkt ratio greater than 1? What would a reasonable value or range be?
Is the total number of incoming packets equal to (lookups.hit + lookups.missed)?
Yes. (Plus lookups.lost except that I see that's zero for you.)
If the total number of incoming packets is equal to (lookups.hit + lookups.missed), why is the value of masks.hit (39862430) greater than lookups.hit (37994604) + lookups.missed (218759)?
masks.hit is the number of hash table lookups that were executed to
process all of the packets that were processed. A given packet might
require up to masks.total lookups.
Why is the masks.hit/pkt ratio greater than 1? What would a reasonable value or range be?
The ratio cannot be less than 1.00, because that would mean that processing a packet didn't require even a single lookup. In this dump it works out to masks.hit / (lookups.hit + lookups.missed) = 39862430 / (37994604 + 218759) ≈ 1.04. A ratio of 1.04 is very good, because it means that most packets were processed with only a single lookup. Higher ratios are worse.
by Ben Pfaff (blp#ovn.org)
I want to get and historize queue metrics for "Enqueued", "Dequeued" and "Size" (terminology formerly encountered on ActiveMQ).
The moving charts provided in the management plugin are not enough for the monitoring that I need to do.
So with RabbitMQ, I'm getting data from https://rabbitmq-server:15672/api/queues/myvhost
This returns JSON; for a queue, I can obtain real-life production data like:
"messages":0, // for "Size"
"message_stats":{
"deliver_get":171528, // for "Dequeued"
"ack":162348,
"redeliver":9513,
"deliver_no_ack":0,
"deliver":171528,
"get":0,
"publish":51293 // for "Enqueued"
(...)
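For historizing, a minimal polling sketch could look like the following (Node 18+ with global fetch; the queue name, credentials and one-minute interval are illustrative placeholders, not my actual setup). It logs the per-interval deltas of publish ("Enqueued") and deliver_get ("Dequeued"), plus messages ("Size"):

// Poll the per-queue management API and log per-interval deltas.
const url = 'https://rabbitmq-server:15672/api/queues/myvhost/myqueue';
const auth = 'Basic ' + Buffer.from('guest:guest').toString('base64');

let last: { publish: number; deliverGet: number } | null = null;

async function sample() {
  const res = await fetch(url, { headers: { Authorization: auth } });
  const q = await res.json();
  const publish = q.message_stats?.publish ?? 0;         // "Enqueued"
  const deliverGet = q.message_stats?.deliver_get ?? 0;  // "Dequeued"
  if (last) {
    console.log('enqueued:', publish - last.publish,
                'dequeued:', deliverGet - last.deliverGet,
                'size:', q.messages);                    // "Size"
  }
  last = { publish, deliverGet };
}

setInterval(sample, 60000); // sample once per minute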
I'm particularly surprised by the publish counter:
Its value can even decrease between two measurements taken a couple of minutes apart! (See the sample chart around 17:00.)
As you can see from my data, deliver_get is significantly larger than publish.
https://my-rabbitmq:15672/doc/stats.html doesn't give many details that could explain what I'm actually seeing.
Also, the message_stats object that I obtain is missing some counters, like confirm and return, which could be related to enqueuing.
Are there relationships between these metrics? (Something like deliver_get + messages = redeliver + publish, but that one doesn't work with my figures.)
Is there more detailed documentation about these metrics?
I am stuck at a point where I am configuring the DCM module; the current parameter I am trying to configure is DcmTimStrP2AdjustServer.
The requirement is P2CAN_SERVER_MAX = 25 ms and P2STARCAN_SERVER_MAX = 5000 ms.
Is DcmDspSessionP2ServerMax the same as P2CAN_SERVER_MAX? And if it is,
what is the need for DcmTimStrP2AdjustServer, and how do I find the best value for it? (The values should all be a multiple of DcmTaskTime, which I find logical.)
DcmTaskTime = 5ms;
I am following AUTOSAR 4.0.3 and am using the ETAS tool to configure the parameters.
To fulfill your requirement, you need to configure DcmDspSessionP2ServerMax and DcmDspSessionP2StarServerMax respectively for each session control in the DcmDspSessionRows at Dcm/DcmConfigSet/DcmDsp/DcmDspSession/.
i.e.
DcmDspSessionP2ServerMax 25
DcmDspSessionP2StarServerMax 5000
There is no DcmTimStrP2AdjustServer, but I guess you're referring to DcmTimStrP2ServerAdjust instead. DcmTimStrP2ServerAdjust and DcmTimStrP2StarServerAdjust should be configured as a multiple of your DcmTaskTime (5 ms in your case, so e.g. 5 ms, 10 ms, 15 ms, ... is applicable) and are used to safeguard that the response is available on the bus before the P2 or P2* timeout is triggered. In your case you may want to set these values to the same values as in the DcmDspSessionRows if there is no other specification given, because the chosen timeout values there are already multiples of your DcmTaskTime:
DcmTimStrP2ServerAdjust 25
DcmTimStrP2StarServerAdjust 5000
The adjust value is an internal value used to account for the delay between the Dcm transmit request and the message actually being on the bus.
The definition of P2ServerMax and P2*ServerMax and their corresponding Adjust values is the same:
This parameter is used to guarantee that the diagnostic response is available on the bus before reaching P2 by adjusting the current DcmDspSessionP2ServerMax. This parameter mainly represents the software architecture dependent communication delay between the time the transmission is initiated by DCM and the time when the message is actually transmitted to the bus
I'm building an embedded device with a couple of sensors. The device will 'stream' digital data from these sensors over Bluetooth or USB.
Most of the communication will be from the embedded device to the host. The host will infrequently send control messages, e.g. to control the gain.
Since the physical and data link layers are taken care of, I'm looking for a simple message protocol that will make it easy to develop user applications to process/display data on the host computer. Does anyone have any suggestions?
A simple text protocol may be the best for this application.
Use the communication channel as a bi-directional serial pipe.
The device can stream sensor values in ASCII (text) format, separated by commas, with each set separated by the newline character. The rate is preferably set by the host.
For example,
21204,32014    (newline character '\n', 0x0A, at the end of each line)
21203,32014
21202,32011
....
This makes it easier to test, to stream the values to a file, import in to a spreadsheet etc.
Similarly, commands to the device are also best done in text.
SET GAIN_1 2    (sent by host)
OK              (reply from device)
SET GAIN_2 4    (sent by host)
OK              (reply from device)
SET GAIN_9 2    (sent by host)
ERROR           (reply from device if it does not understand)
SET RATE 500    (set the sensor dump rate to every 500 ms)
OK
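As a rough illustration of how little host-side code such a text protocol needs, here is a small sketch (the function names are made up for this example; the Bluetooth/USB serial transport itself is left out):

// Parse one comma-separated sample line, e.g. "21204,32014".
// Returns null if any field is not a number.
function parseSampleLine(line: string): number[] | null {
  const values = line.trim().split(',').map(Number);
  return values.some(Number.isNaN) ? null : values;
}

// Format a control command for the device; the device replies with "OK" or "ERROR".
function formatCommand(name: string, value: number): string {
  return `SET ${name} ${value}\n`;
}

console.log(parseSampleLine('21204,32014')); // [21204, 32014]
console.log(formatCommand('RATE', 500));     // "SET RATE 500\n"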