How to use CommMonitor in gem5 [closed] - gem5

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 2 years ago.
I want to record the access details of the cache. I have seen some answers saying that CommMonitor can help, but I could not find any more details.
How to trace the data that is going through caches and DRAM memory in gem5?
Obtaining physical address trace from GEM5
The above are some answers about CommMonitor.
I have some questions about CommMonitor:
First of all, can CommMonitor be used with DerivO3CPU, or only with TimingSimpleCPU? I tried it with DerivO3CPU and there was output, but I seem to recall reading somewhere that it cannot be used with DerivO3CPU.
My understanding of CommMonitor is that it is like a filter: the data flowing through it is recorded. For example, to add a CommMonitor between the l2 and the membus:
system.monitor2 = CommMonitor()
system.monitor2.trace = MemTraceProbe(trace_file = "CT_mon2.trc.gz")
system.monitor2.slave = system.l2.mem_side
system.membus.slave = system.monitor2.master
system.l2.cpu_side = system.tol2bus.master
The output format is:
11500: system.monitor2: Forwarded read request
77000: system.monitor2: Latency: 65500
77000: system.monitor2: Received read response
103000: system.monitor2: Forwarded read request
104000: system.monitor2: Forwarded read request
165000: system.monitor2: Latency: 62000
165000: system.monitor2: Received read response
170000: system.monitor2: Latency: 66000
170000: system.monitor2: Received read response
194500: system.monitor2: Forwarded read request
200500: system.monitor2: Forwarded read request
243000: system.monitor2: Latency: 48500
243000: system.monitor2: Received read response
249000: system.monitor2: Latency: 48500
249000: system.monitor2: Received read response
267500: system.monitor2: Forwarded read request
269500: system.monitor2: Forwarded read request
274000: system.monitor2: Forwarded read request
The generated CT_mon2.trc.gz file is still a binary file after decompression. What should I do to see the data inside? It would be better if I could output the address and the data.
How to use it between l1dcache and cpu?

How to use it between l1dcache and cpu?
Edit gem5/src/cpu/BaseCPU.py as follows, and don't forget to rebuild after any changes you make to the src folder:
You can use debug flags (as shown below) to see what passes through this master-slave interface.
build/X86/gem5.opt --debug-flags=CommMonitor --debug-file=trace.txt.gz configs/learning_gem5/part2/simple_cache.py
Find trace.txt.gz in the m5out folder.
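A config-script alternative is to wire a monitor directly between the CPU's data port and the L1 D-cache, using the same master/slave pattern as the monitor2 example in the question. This is only a sketch: the object names (system.cpu.dcache_port, system.l1d) are assumptions and depend on how your own script names the CPU and caches, and port naming varies between gem5 versions.

```python
# Hypothetical sketch: place a CommMonitor between the CPU data port
# and the L1 D-cache, mirroring the l2/membus wiring above.
# Names like system.cpu.dcache_port and system.l1d are assumptions
# based on the classic memory system and your own config script.
system.monitor1 = CommMonitor()
system.monitor1.trace = MemTraceProbe(trace_file="CT_mon1.trc.gz")

system.monitor1.slave = system.cpu.dcache_port   # CPU side of the monitor
system.l1d.cpu_side = system.monitor1.master     # cache side of the monitor
```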

Related

What is the difference between inbound-rtp & remote-inbound-rtp in the results we get from webrtc getstats?

I have been trying to figure out a way to calculate the following:
Bandwidth, Latency, Current Upload, and Download speed.
And have been confused with the values I am getting for the INBOUND-RTP, OUTBOUND-RTP, & REMOTE-INBOUND-RTP.
In my head, I was thinking of inbound-rtp as a collection of stats for all incoming data, which apparently is wrong, since several stats of that type always stay at zero.
The current setup uses Chrome for the two connecting clients, plus a media server, with the client instances running on "localhost".
The terminology used on MDN is a bit terse, so here's a rephrasing that I hope is helpful to solve your problem! Block quotes taken from MDN & clarified below. For an even terser description, also see the W3C definitions.
outbound-rtp
An RTCOutboundRtpStreamStats object giving statistics about an outbound RTP stream being sent from the RTCPeerConnection.
This stats report is based on your outgoing data stream to your peers. This is the measurement taken from the perspective of just that outbound RTP stream, which is why information that involves your peers (round trip time, jitter, etc.) is missing, because those can only be measured with an understanding of the peer's processing of your stream.
inbound-rtp
Statistics about an inbound RTP stream that's currently in use by this RTCPeerConnection, in an RTCInboundRtpStreamStats object.
By contrast to the Outbound RTP statistics, this stats report contains data about the inbound data stream you are receiving from your peer(s). Notice that if you do not have any connected peers your call to getStats does not include this report type at all.
remote-inbound-rtp
Contains statistics about the remote endpoint's inbound RTP stream; that stream corresponds to the local endpoint's outbound RTP stream. Using this RTCRemoteInboundRtpStreamStats object, you can learn how well the remote peer is receiving data.
This stats report provides details about your outbound RTP stream from the perspective of the remote connection. That is to say, this stats report provides an analysis of your outbound-rtp stream from the perspective of the remote server that is handling your stream on the other side.
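To make the pairing concrete, here is a small sketch (in Python, with made-up illustrative numbers; the field names follow the W3C stats dictionaries) of how a remote-inbound-rtp report lines up with the local outbound-rtp report for the same stream: the two share an `ssrc`, and the remote report is where round-trip time and remote loss figures live.

```python
# Illustrative getStats()-style entries (all values are made up).
stats = [
    {"type": "outbound-rtp", "ssrc": 111, "packetsSent": 5000, "bytesSent": 4_200_000},
    {"type": "inbound-rtp", "ssrc": 222, "packetsReceived": 4800, "jitter": 0.004},
    # The remote peer's view of our ssrc=111 outbound stream:
    {"type": "remote-inbound-rtp", "ssrc": 111, "packetsLost": 12, "roundTripTime": 0.087},
]

def pair_outbound_with_remote(stats):
    """Match each outbound-rtp report with the remote-inbound-rtp
    report for the same SSRC, i.e. the remote peer's view of it."""
    remote = {s["ssrc"]: s for s in stats if s["type"] == "remote-inbound-rtp"}
    return [(s, remote.get(s["ssrc"]))
            for s in stats if s["type"] == "outbound-rtp"]

for local, remote in pair_outbound_with_remote(stats):
    print(local["ssrc"], remote["roundTripTime"], remote["packetsLost"])
```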
I'm on the MDN writing team at Mozilla and happened upon this just now. I've taken some of the information from this conversation and applied it back to the article about RTCStatsType. There's more to improve there still, but I wanted to thank you for that insight!
Always feel free to sign up for an MDN account and update any content you see that's inaccurate or incomplete! Or you can file an issue and we'll see what we can do.

How to decrease latency on Binance API orderbook calls? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 2 years ago.
I am currently attempting to decrease latency when calling the orderbook with the Binance API.
I am getting a ping of ~7 ms, but the orderbook call takes ~200 ms to download. I am using a VM hosted in the same AWS farm that Binance uses, and I am running on a network speed of ~800 Mbps. I do not understand why the orderbook call takes nearly two orders of magnitude longer to receive than it takes to ping the server, when the size of the orderbook is relatively small.
Any help or insight into either the network, or restrictions imposed by Binance would be greatly appreciated.
Important distinction:
Ping: goes to the nearest CDN edge node, which responds to you; you don't get anywhere near a Binance server.
API request: goes to the nearest CDN edge node, gets routed to the Binance server, gets processed, then the response is routed back to you.
Binance servers are hosted on AWS in Tokyo. If you place your host there, the latency will be about 12-15ms (2ms from when Binance sent the response.)
To lower the average latency, you may try to post idempotent requests in parallel, and then use the response that arrives first. It's rude but it gets the job done.
As a random note, client order IDs are reusable, which makes them useless as a method of making orders idempotent. (However, if you use fixed order sizes, you could preemptively lock all the rest of your funds, then hammer Binance with order placements knowing a duplicate order couldn't succeed since you wouldn't have funds.)
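The "post idempotent requests in parallel and take the first response" tip above can be sketched as follows. This is only an illustration: `fetch_orderbook` here is a stand-in that simulates variable latency, not a real HTTP call, and the endpoint hostnames are just labels.

```python
import concurrent.futures
import random
import time

def fetch_orderbook(endpoint):
    """Stand-in for an HTTP GET of the order book; a real version
    would issue the request with requests/aiohttp instead."""
    time.sleep(random.uniform(0.05, 0.2))   # simulated variable latency
    return {"endpoint": endpoint, "bids": [], "asks": []}

endpoints = ["api.binance.com", "api1.binance.com", "api2.binance.com"]

# Fire the same idempotent request at several endpoints in parallel
# and take whichever answer comes back first.
with concurrent.futures.ThreadPoolExecutor(len(endpoints)) as pool:
    futures = [pool.submit(fetch_orderbook, ep) for ep in endpoints]
    first = next(concurrent.futures.as_completed(futures)).result()

print(first["endpoint"] in endpoints)   # True
```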
By now there are multiple API endpoints: api/api1/api2/api3.binance.com.
I pinged the endpoints using cmd, and also used the method provided by #Tiana.
For me, api2 somehow had the best latency, but not really better than api1 or api3.
Only the plain api endpoint had very bad latencies from time to time.
Apparently the endpoints are on different servers, I guess, and the api. endpoint is a bit overused.
So I would suggest using the api2.binance.com endpoint.

RabbitMQ: Does anyone know how can I find the sources that describe RabbitMQ message parameters [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 8 years ago.
I got the contents from the path /api/queues/vhost/name. What do the parameters below mean?
message_stats:
deliver_get
incoming
Here you can find the stats documentation:
http://hg.rabbitmq.com/rabbitmq-management/raw-file/31c1d2668d39/priv/www/doc/stats.html
EDIT:
message_stats objects
publish Count of messages published.
publish_in Count of messages published "in" to an exchange, i.e. not taking account of routing.
publish_out Count of messages published "out" of an exchange, i.e. taking account of routing.
confirm Count of messages confirmed.
deliver Count of messages delivered in acknowledgement mode to consumers.
deliver_noack Count of messages delivered in no-acknowledgement mode to consumers.
get Count of messages delivered in acknowledgement mode in response to basic.get.
get_noack Count of messages delivered in no-acknowledgement mode in response to basic.get.
deliver_get Sum of all four of the above.
redeliver Count of subset of messages in deliver_get which had the redelivered flag set.
return Count of messages returned to publisher as unroutable.
/api/queues/(vhost)/(name)
incoming Detailed message stats (see section above) for publishes from exchanges >into this queue.
deliveries Detailed message stats for deliveries from this queue into channels.
consumer_details List of consumers on this channel, with some details on each.
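The relationship between these counters can be checked directly on the JSON that the management API returns. The sketch below uses made-up numbers and the field names from the stats documentation quoted above; a real call would fetch the queue with, for example, an HTTP GET of /api/queues/%2F/myqueue on the management port with appropriate credentials.

```python
# Illustrative fragment of the JSON returned by
# GET /api/queues/<vhost>/<name> on the management API.
# All numbers are made up; field names follow the stats doc above.
queue = {
    "message_stats": {
        "publish": 1500,
        "deliver": 900,
        "deliver_noack": 100,
        "get": 180,
        "get_noack": 20,
        "deliver_get": 1200,   # deliver + deliver_noack + get + get_noack
        "redeliver": 35,
    }
}

ms = queue["message_stats"]
# deliver_get is the sum of the four delivery counters, as documented:
assert ms["deliver_get"] == ms["deliver"] + ms["deliver_noack"] + ms["get"] + ms["get_noack"]
print(ms["deliver_get"])   # 1200
```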

UDP packet fragmentation

After reading dozens of articles I can't find an answer to a simple question: can a UDP datagram arrive fragmented? I know that it can get fragmented on the way if its size is above 576 bytes or something like that, but will it get merged back together when it arrives?
In other words, if I send a single packet via udp::socket::send_to(), can I assume that if it's not dropped on the way, I'll retrieve it by a single call to udp::socket::async_receive_from()?
The OS network stack will reassemble the fragments and give user space the complete packet. And if one of the fragments gets lost, user space will not receive the rest; it will receive nothing at all.
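This reassembly behaviour is easy to demonstrate over loopback: a datagram larger than a typical 1500-byte Ethernet MTU is still delivered to the application as one datagram by a single receive call.

```python
import socket

# A UDP datagram larger than a typical 1500-byte Ethernet MTU is still
# handed to the application as a single datagram: the receiving kernel
# reassembles any IP fragments before recvfrom() returns.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"x" * 5000                       # larger than an Ethernet MTU
send_sock.sendto(payload, ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(65535)      # one call, whole datagram
print(len(data))                            # 5000
send_sock.close()
recv_sock.close()
```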

Communicate between VB.NET applications across a LAN and WAN [closed]

Closed. This question is off-topic. It is not currently accepting answers. Closed 9 years ago.
I'm looking into writing an application that will run on multiple machines on the network - I haven't started to write any code yet as I'm looking into options at the moment - I'll have each client inform the server of their presence, possibly by updating a SQL table that stores machine info and an "Offline / Online" status field... unless you can think of a better way of doing this?
As well as the client running on each users PC on the network, there will be "operators" running a different application.
What I'd like to be able to do is have the operators send messages to clients, the client then receives this message and displays it in a notification window. The operators application will do a SQL query to get all online clients and then send the notification only to these machines.
I can do the SQL side of things; the part I have no idea how to do is this: how do I have the operator application send notifications to the clients once it has the list of machines to send to?
I'll need to be able to send two strings at once:-
- Notification Title (String & Date.Now to show when the message was sent)
- Notification Message (multiline - no more than 5 lines)
Any help on how to have a vb.net application read two text boxes and send the contents to a remote vb.net application that can then assign those values to variables to be used and passed to a notification popup (I already have the popups working) would be greatly appreciated.
Thanks in advance.
Merick.
Depending on how complicated and exciting you want to make this task, you may want to explore the pub/sub model and handle communication using WCF and a message queue (like MSMQ, Tibco, ActiveMQ, etc.).
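Whatever transport you pick, the "two strings at once" requirement comes down to framing: the receiver must know where the title ends and the message begins. A language-neutral sketch of length-prefixed framing is shown below in Python; the same byte layout works over a VB.NET TcpClient/NetworkStream, and the function names here are just illustrative.

```python
import struct

# Each notification is two length-prefixed UTF-8 strings (title, then
# message), so the receiver can split them unambiguously even when the
# message contains newlines.

def pack_notification(title: str, message: str) -> bytes:
    t, m = title.encode("utf-8"), message.encode("utf-8")
    return struct.pack(">I", len(t)) + t + struct.pack(">I", len(m)) + m

def unpack_notification(data: bytes) -> tuple[str, str]:
    tlen = struct.unpack_from(">I", data, 0)[0]
    title = data[4:4 + tlen].decode("utf-8")
    mlen = struct.unpack_from(">I", data, 4 + tlen)[0]
    message = data[8 + tlen:8 + tlen + mlen].decode("utf-8")
    return title, message

frame = pack_notification("Maintenance 2024-01-01", "Server reboot at 18:00\nSave your work.")
print(unpack_notification(frame))
```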