TraceSources can't detect packet drops in queue (ns-3)

I've been trying to run a simple TCP congestion analysis script that I wrote.
The topology is a two-source dumbbell topology with a bottleneck in the center, something like:
n2            n4
  \          /
   n0-----n1
  /          \
n3            n5
The dashed line has a bandwidth of 100 Mbps (1 ms delay) and the solid lines have a bandwidth of 60 Mbps (2.5 ms delay).
The bottleneck has a DropTail queue in place with a maximum size of 2084 packets.
This configuration leads to congestion, with the congestion window getting reduced at around 1 s, as shown in the file streams cwndDropTail_1/0.txt.
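For reference, the bottleneck link is configured roughly along these lines (variable names here are illustrative placeholders, not the exact ones from tcp_ftp_n.cc):

// Sketch of the n0-n1 bottleneck: 100 Mbps, 1 ms delay, DropTail queue of 2084 packets.
// Names such as bottleneckLink, n0 and n1 are placeholders.
PointToPointHelper bottleneckLink;
bottleneckLink.SetDeviceAttribute("DataRate", StringValue("100Mbps"));
bottleneckLink.SetChannelAttribute("Delay", StringValue("1ms"));
bottleneckLink.SetQueue("ns3::DropTailQueue<Packet>",
                        "MaxSize", QueueSizeValue(QueueSize("2084p")));
NetDeviceContainer bottleneckDevices = bottleneckLink.Install(n0, n1);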
Now, I want to plot the number of packets dropped vs. time on the bottleneck link (n0-n1).
For this I used TraceSources, as shown in lines 62-70 of tcp_ftp_n.cc, like so:
...
Config::ConnectWithoutContext("/NodeList/*/DeviceList/*/$ns3::PointToPointNetDevice/TxQueue/Drop", MakeBoundCallback(&RxDrop, dropped_stream));
Config::ConnectWithoutContext("/NodeList/*/DeviceList/*/$ns3::PointToPointNetDevice/TxQueue/DropAfterDequeue", MakeBoundCallback(&RxDrop, dropped_stream));
Config::ConnectWithoutContext("/NodeList/*/DeviceList/*/$ns3::PointToPointNetDevice/TxQueue/DropBeforeEnqueue", MakeBoundCallback(&RxDrop, dropped_stream));
Config::ConnectWithoutContext("/NodeList/*/DeviceList/*/$ns3::PointToPointNetDevice/PhyRxDrop", MakeBoundCallback(&RxDrop, dropped_stream));
Config::ConnectWithoutContext("/NodeList/*/DeviceList/*/$ns3::PointToPointNetDevice/PhyTxDrop", MakeBoundCallback(&RxDrop, dropped_stream));
But none of these trigger the callback RxDrop, which writes into the file stream droppedPacketTrace.txt, even though I can see the queue getting completely filled and the congestion window being reduced at the source.
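For reference, RxDrop is bound to the output stream and has the usual drop-trace signature; its body (abridged here to a sketch) just writes the drop time to droppedPacketTrace.txt:

static void
RxDrop(Ptr<OutputStreamWrapper> stream, Ptr<const Packet> p)
{
  // Record the simulation time of every dropped packet (abridged sketch).
  *stream->GetStream() << Simulator::Now().GetSeconds() << std::endl;
}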
What am I doing wrong here?
I did read a thread on the internet which said that when a queue is full, packets are no longer "dropped" from the queue but simply never enqueued. Does this have something to do with my problem, and if so, how should I proceed?
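If that is the case, would attaching to the queue object directly, instead of through the Config path, make any difference? Something like the sketch below is what I have in mind (bottleneckDevices is a placeholder name for the NetDeviceContainer of the n0-n1 link, not a variable from tcp_ftp_n.cc):

// Sketch only: hook the drop traces of the bottleneck device's TxQueue directly.
Ptr<PointToPointNetDevice> dev =
    DynamicCast<PointToPointNetDevice>(bottleneckDevices.Get(0));
Ptr<Queue<Packet>> txQueue = dev->GetQueue();
txQueue->TraceConnectWithoutContext("Drop", MakeBoundCallback(&RxDrop, dropped_stream));
txQueue->TraceConnectWithoutContext("DropBeforeEnqueue", MakeBoundCallback(&RxDrop, dropped_stream));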
I have tried to be as detailed as possible, but please let me know if you need any extra insight into the setup. I am attaching all the required files.
Link to the code file, tcp_ftp_n.cc:
https://drive.google.com/file/d/15DvhYWEDNvpnZqAAgjM9Tq6oUeVsgw8X/view?usp=sharing
Note: I am running ns-3-dev.

Related

STM32 Ethernet UDP Problems

Hello everyone,
I am communicating using the lwIP library on the STM32F746 processor. I placed the Ethernet buffers in a dedicated area of RAM and protected it with the MPU, then I disabled the buffering feature. My code works as I want, but after 10-15 minutes the communication is cut off. My grounding, my connections, everything looks fine.
I left the MPU protection at its default (0x30000000).
I wrote the following in the flash.ld file:
. = ABSOLUTE(0x20010000);   /* ETH DMA Rx descriptors */
*(.RxDecripSection)
. = ABSOLUTE(0x20010080);   /* ETH DMA Tx descriptors */
*(.TxDecripSection)
. = ABSOLUTE(0x20010100);   /* Rx buffer array */
*(.RxArraySection)
. = ABSOLUTE(0x200118D0);   /* Tx buffer array */
*(.TxArraySection)
I set the memory regions in the ethernetif.c file, but the result is still the same: my connection drops after about 10 minutes.
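The placement in ethernetif.c is the usual attribute-based one, roughly like this (quoted from memory from the GCC variant of the CubeMX template, so treat it as a sketch):

#if defined ( __GNUC__ )
/* ETH DMA descriptors and buffers placed into the sections referenced in flash.ld */
ETH_DMADescTypeDef DMARxDscrTab[ETH_RXBUFNB] __attribute__((section(".RxDecripSection")));
ETH_DMADescTypeDef DMATxDscrTab[ETH_TXBUFNB] __attribute__((section(".TxDecripSection")));
uint8_t Rx_Buff[ETH_RXBUFNB][ETH_RX_BUF_SIZE] __attribute__((section(".RxArraySection")));
uint8_t Tx_Buff[ETH_TXBUFNB][ETH_TX_BUF_SIZE] __attribute__((section(".TxArraySection")));
#endif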

GNURadio Companion and OFDM TX and RX in single Graph

I am following this GitHub example to understand OFDM in GNU Radio Companion. I am able to execute ofdm_tx on its own (64 and 512 FFT points) without any issues, but when I connect the TX and RX in a single flowgraph, I only get a spectrum from ofdm_tx (no output from ofdm_rx, or just a straight line).
My question: each time I close the output spectrum, the tool hangs, and in the background (inside GNU Radio Companion) I observe the following message train (screenshot attached). The same thing is also observed when I run ofdm_rx on its own.
Error messages in the console:
packet_headerparser_b :info: Detected an invalid packet at item 1448.
header_payload_demux :info: parser returned #f
Please guide me in this regard.
Update: by selecting "No" for the Vector Source "Repeat" parameter, the hang issue was sorted out, but I am not able to see the spectrum anymore.

How to interpret the RabbitMQ Message stats?

I want to get and historize the queue metrics "Enqueued", "Dequeued" and "Size" (terminology I formerly met on ActiveMQ).
The moving charts provided in the management plugin are not enough for the monitoring that I need to do.
So with RabbitMQ, I'm getting data from https://rabbitmq-server:15672/api/queues/myvhost
This returns JSON; for a queue, I can obtain real-life production data like:
"messages":0, // for "Size"
"message_stats":{
"deliver_get":171528, // for "Dequeued"
"ack":162348,
"redeliver":9513,
"deliver_no_ack":0,
"deliver":171528,
"get":0,
"publish":51293 // for "Enqueued"
(...)
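To make my intent concrete, this is roughly how I plan to historize the values; my working assumption (please correct me if it is wrong) is that the message_stats fields are cumulative counters while messages is an instantaneous gauge. The struct and function names below are just my own illustration:

// Illustration only: one sample of the fields I care about, taken from the API.
struct QueueSample {
    long messages;     // "Size": current queue depth (gauge)
    long publish;      // "Enqueued": cumulative count of published messages
    long deliver_get;  // "Dequeued": cumulative count of delivered/got messages
};

// Count over one sampling interval, assuming the counters only ever grow.
long CountSince(long current, long previous) {
    return current - previous;
}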
I am particularly surprised by the publish counter:
its value can even decrease between two measurements taken a couple of minutes apart! (See the sample chart around 17:00.)
As you can see in my data, deliver_get is significantly larger than publish.
https://my-rabbitmq:15672/doc/stats.html doesn't give a lot of details that could explain what I actually notice.
Also, under the message_stats object that I obtain, I am missing some counters, like confirm and return, which could be related to the enqueuing.
Are there relationships between these metrics (like deliver_get + messages = redeliver + publish, although that one doesn't work with my figures)?
Is there more detailed documentation about these metrics?

Unable to exit while loop in UVM monitor

This might be a silly mistake on my side that I have overlooked, but I'm fairly new to UVM and I tried tinkering with my code for a while before asking this. I'm trying to send a stream of 8-bit data within a packet, using a data-valid/stall protocol, from my UVM driver to the DUT. I'm facing an issue with my input monitor not being able to pick up these transactions that are driven.
I have a while loop with the condition that the valid bit must be high and the stall bit must be low. As long as this condition holds, the monitor needs to pick up the data byte and push it into a queue. I know for a fact that the data is being picked up and pushed into the queue, as I used $display statements along the way. The problem arises once all the data bytes have been received and the valid bit goes low. Ideally, this should cause an exit from the while loop, but it isn't doing so. Any help here would be appreciated. I have attached a snippet of the code below. Thanks in advance.
virtual task main_phase (uvm_phase phase);
$display("Run phase of input monitor");
collect_transfer();
endtask: main_phase
virtual task collect_transfer();
fork
forever begin
wait_for_valid_transaction_cycle();
create_and_populate_pkt();
broadcast_pkt();
#(iP0_vif.cb_iP0_MON);
end
join_none
endtask: collect_transfer
virtual task wait_for_valid_transaction_cycle();
wait(iP0_vif.cb_iP0_MON.ip_valid && ~iP0_vif.cb_iP0_MON.ip_stall);
endtask: wait_for_valid_transaction_cycle
virtual task create_and_populate_pkt();
pkt = Router_seq_item :: type_id :: create("pkt");
pkt.valid = iP0_vif.cb_iP0_MON.ip_valid;
pkt.sop = iP0_vif.cb_iP0_MON.ip_sop;
$display("before data collection");
while(iP0_vif.cb_iP0_MON.ip_valid === `HIGH && iP0_vif.cb_iP0_MON.ip_stall === `LOW) begin
$display("After checking for stall");
pkt.data = iP0_vif.cb_iP0_MON.ip_data;
$display(pkt.data);
pkt.data_q.push_front(pkt.data);
pkt.eop = iP0_vif.cb_iP0_MON.ip_eop;
$display("print check in input monitor # time = %0t", $time);
#(iP0_vif.cb_iP0_MON);
end
$display("before printing input packet from monitor");
Check_for_port_route_and_populate_packet_field(pkt);
print_packet(pkt);
endtask: create_and_populate_pkt
The $display statement "before printing input packet from monitor" is not being displayed.
HIGH is defined as a binary 1 and LOW is defined as a binary 0.
The output of the code in terms of display statements is as below.
before data collection
before checking for stall
After checking for stall
2
print check in input monitor # time = 105
before checking for stall
After checking for stall
1
print check in input monitor # time = 115
before checking for stall
After checking for stall
3
print check in input monitor # time = 125
It's possible that the main phase objection is being dropped elsewhere in your environment. UVM will automatically kill any threads that were spawned during a phase when it ends.
To fix this, do not object to the main phase in your monitor. Objecting to that phase is the responsibility of the threads creating the stimulus. Instead, you should be launching this monitor during the run_phase, which will ensure that your loop is not killed until the end of simulation.
Also, during the shutdown phase, you will want your monitor to object whenever it is currently seeing a packet. This will ensure that simulation doesn't end as soon as stimulus has been sent in, giving your other monitors time to collect responses from the DUT.
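A minimal sketch of that change (same body as above, just launched from run_phase instead of main_phase):

// Launch the collection loop from run_phase so the forever loop in
// collect_transfer() is not killed when the main phase ends.
virtual task run_phase (uvm_phase phase);
  $display("Run phase of input monitor");
  collect_transfer();
endtask: run_phase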

Addressing ECUs directly using ELM 327 dongle and ISO 9141

I have a VW Golf 4, which is quite old and talks KWP 2000 (ISO 9141) on its diagnostic bus. I use an ELM 327-based dongle connected to the OBD-II port of the car.
I am trying to send messages individually to each ECU, so I tried to change the header of the messages:
AT SH 48 XX F1 (I hoped XX would be the ECU ID; 48 is the flag for "use physical addressing"). Any command I issue (e.g. I tried 3E, "tester present") returns NO DATA (I disabled automatic timeouts and set the timeout to its maximum value).
Is there a way to send messages directly to the ECUs? I am not interested in the set of data provided via OBD-II, nor do I want to re-flash the ECUs. At the moment I am just trying to find out which ECUs are available on the bus.
Thanks!
VW uses its own transport protocol, TP 2.0, so you need to initialize the channel with the 0x200 header.
https://jazdw.net/tp20
See above link for more info.