Fastest rate at which CANopen PDOs are received

Assuming the highest baud rate, what is the highest rate at which PDOs are received?

That depends on the length of the PDO and on how many data bytes you pack into each message. The ratio between transported data and protocol overhead is best when you use the full eight bytes of one CAN message.
If you want high throughput, use all eight bytes of one message.
If you want the highest possible message frequency, use as few data bytes as possible.
A rule of thumb:
Eight bytes of payload result in a CAN message about 100 bits long.
With the maximum baud rate of 1 Mbit/s, you can therefore achieve about 10,000 messages per second.
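As a quick sanity check on that rule of thumb, here is a back-of-the-envelope calculation (a sketch assuming a standard 11-bit-identifier data frame and ignoring bit stuffing, which can add roughly 20% more bits in the worst case):

    def can_frame_bits(data_bytes):
        """Bits in a standard (11-bit ID) CAN data frame, before bit stuffing."""
        # SOF + ID + RTR + IDE + r0 + DLC + CRC + CRC delim. + ACK + EOF + IFS
        overhead = 1 + 11 + 1 + 1 + 1 + 4 + 15 + 1 + 2 + 7 + 3  # = 47 bits
        return overhead + 8 * data_bytes

    BITRATE = 1_000_000  # 1 Mbit/s, the CAN maximum

    for payload in (1, 8):
        bits = can_frame_bits(payload)
        print(f"{payload} data byte(s): {bits} bits/frame -> ~{BITRATE / bits:.0f} frames/s")
    # 1 data byte(s): 55 bits/frame -> ~18182 frames/s
    # 8 data byte(s): 111 bits/frame -> ~9009 frames/s

A full 8-byte frame is therefore on the order of 110 bits before stuffing, which lands close to the "about 100 bits, about 10,000 messages per second" rule of thumb.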

How to send 2MB of data through UDP?

I am using a TMS570LS3137 (DP84640 PHY), trying to program UDP (unicast) using lwIP to send 2 MB of data.
As of now I can send up to 63 KB of data. How do I send 2 MB at a time? UDP supports up to 63 KB per transmission only, but in this link
https://stackoverflow.com/questions/32512345/how-to-send-udp-packets-of-size-greater-than-64-kb#:~:text=So%20it's%20not%20possible%20to,it%20up%20into%20multiple%20datagrams.
they mention that "If you need to send larger messages, you need to break it up into multiple datagrams." How do I proceed with this?
Since UDP uses IP, you're limited to the maximum IP packet size of 64 KiB (the IP total-length field is 16 bits), even with fragmentation. So the hard limit for any UDP payload is 65,535 - 20 (IP header) - 8 (UDP header) = 65,507 bytes.
You need to either:
- chunk your data into multiple datagrams. Since datagrams may arrive out of sending order or even get lost, this requires some kind of protocol or header. That could be as simple as four bytes at the beginning giving the buffer offset the data goes to, or a datagram sequence number. While you're at it, you won't want to rely on IP fragmentation but, depending on the scenario, use either the maximum UDP payload size over plain Ethernet (1500 bytes MTU - 20 bytes IP header - 8 bytes UDP header = 1472 bytes) or a sane maximum that should work all the time (e.g. 1432 bytes);
- use TCP, which can transport arbitrarily sized data and does all the work for you.
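To illustrate the first option, here is a minimal sketch of the offset-header scheme described above, written with plain Python sockets rather than the asker's lwIP/C environment (the 4-byte header and 1432-byte chunk size come from the answer; everything else is illustrative). It tolerates reordering but not loss or duplication, so a real implementation would add acknowledgements or retransmission:

    import socket

    CHUNK = 1432  # conservative payload size suggested above
    HDR = 4       # 4-byte big-endian buffer-offset prefix

    def send_buffer(sock, addr, data):
        """Split data into datagrams, each prefixed with its byte offset."""
        for off in range(0, len(data), CHUNK):
            sock.sendto(off.to_bytes(HDR, "big") + data[off:off + CHUNK], addr)

    def recv_buffer(sock, total):
        """Reassemble by offset; handles reordering, not loss or duplicates."""
        buf = bytearray(total)
        received = 0
        while received < total:
            pkt, _ = sock.recvfrom(HDR + CHUNK)
            off = int.from_bytes(pkt[:HDR], "big")
            payload = pkt[HDR:]
            buf[off:off + len(payload)] = payload
            received += len(payload)
        return bytes(buf)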

High precision queue statistics from RabbitMQ

I need to log with the highest possible precision the rate with which messages enter and leave a particular queue in Rabbit. I know the API already provides publishing and delivering rates, but I am interested in capturing raw incoming and outgoing values in a known period of time, so that I can estimate rates with higher precision and time periods of my choice.
Ideally, I would check on-demand (i.e. on a schedule of my choice) e.g. the current cumulative count of messages that have entered the queue so far ("published" messages), and the current cumulative count of messages consumed ("delivered" messages).
With these types of cumulative counts, I could:
Compute my own deltas of messages entering or exiting the queue, e.g. doing Δ_count = cumulative_count(t) - cumulative_count(t-1)
Compute throughput rates doing throughput = Δ_count / Δ_time
Potentially infer how long messages stay on the queue throughout the day.
The last two would ideally rely on the precise timestamps when those cumulative counts were calculated.
I am trying to solve this problem directly using RabbitMQ’s API, but I’m encountering a problem when doing so. When I calculate the message cumulative count in the queue, I get a number that I don’t expect.
For example consider the screenshot below.
The Δ_message_count between entries 90 and 91 is 1810-1633 = 177. So, as I stated, I suppose that the difference between published and delivered messages should be 177 as well (in particular, 177 more messages published than delivered).
However, when I calculate these differences, I see that the difference is not 177:
Δ of published (incoming) messages: 13417517652009 - 13417517651765 = 244
Δ of delivered (outgoing) messages: 1341751765667 - 1341751765450 = 217
so we get 244 - 217 = 27 messages. This suggests that 177 - 27 = 150 messages are "unaccounted" for.
Why?
I tried taking into account the redelivered messages reported by the API, but they were constant while I ran my tests, suggesting that there were no redelivered messages, so I wouldn't expect them to play a role.
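For completeness, this is roughly how I poll the counters (a sketch against the management plugin's HTTP API; the broker URL, credentials, and queue name are placeholders, and the message_stats fields only appear once the queue has seen some traffic):

    import base64, json, time, urllib.request

    # Placeholder broker details; %2F is the URL-encoded default vhost "/".
    URL = "http://localhost:15672/api/queues/%2F/myqueue"
    AUTH = "Basic " + base64.b64encode(b"guest:guest").decode()

    def sample():
        req = urllib.request.Request(URL, headers={"Authorization": AUTH})
        with urllib.request.urlopen(req) as resp:
            q = json.load(resp)
        stats = q.get("message_stats", {})
        # Cumulative counters since broker start (absent until first activity)
        return time.time(), stats.get("publish", 0), stats.get("deliver_get", 0)

    t0, pub0, dlv0 = sample()
    time.sleep(5)
    t1, pub1, dlv1 = sample()

    dt = t1 - t0
    print(f"in:  {(pub1 - pub0) / dt:.2f} msg/s, out: {(dlv1 - dlv0) / dt:.2f} msg/s")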

What is the average consumption of a GPS app (data-wise)?

I'm currently working on a school project to design a network, and we're asked to assess traffic on the network. In our solution (dealing with taxi drivers), each driver will have a smartphone whose position can be tracked in order to assign them the best possible ride (through Google Maps, for instance).
What would be the size of data sent and received by a single app during one day? (I need a rough estimate, no real need for a precise answer to the closest bit)
Thanks
GPS positions stored compactly, but not compressed, need this number of bytes:
time: 8 (4 bytes is possible, too)
latitude: 4 (if used as integer or float) or 8
longitude: 4 or 8
speed: 2-4 (short: 2; integer: 4)
course: 2-4
So, stored in binary in main memory, one location including the most important attributes will need 20-24 bytes.
If you store them in main memory as single location objects, an additional 16 bytes per object are needed in a simple (Java) solution.
The maximum recording frequency is usually once per second (1/s). Per hour this needs 3600 s * 40 bytes (24 bytes of data plus 16 bytes of object overhead) = 144 KB, so a smartphone easily stores that even in main memory.
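To make the 20-24-byte figure concrete, here is a minimal sketch of such a binary record (the scaling factors for latitude/longitude, speed, and course are my own illustrative choices):

    import struct

    # 8-byte ms timestamp, lat/lon as 1e-7-degree integers (4 bytes each),
    # speed and course scaled by 10 into 2-byte shorts: 20 bytes total.
    FIX = struct.Struct(">qiihh")

    def pack_fix(t_ms, lat_deg, lon_deg, speed_kmh, course_deg):
        return FIX.pack(t_ms, int(lat_deg * 1e7), int(lon_deg * 1e7),
                        int(speed_kmh * 10), int(course_deg * 10))

    record = pack_fix(1_700_000_000_000, 48.858844, 2.294351, 42.5, 181.0)
    print(len(record))  # 20 bytes, the low end of the 20-24 byte estimate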
Not sure if you want to transmit the data:
When transmitting to a server, the data volume will usually grow, depending on the transmit protocol used.
But it mainly depends on how you transmit the data and how often.
If you transmit one position every 5 minutes, you don't have to care, even
when you use a simple solution that transmits 100 times more bytes than necessary.
For your school project, try to transmit no more often than every 5, or better 10, minutes.
Encryption adds a huge overhead.
To save bytes:
- Collect as long as feasible, then transmit at once.
- Favor binary protocols over text-based ones (BSON rather than JSON); see the size comparison below. (This might be out of scope for your school project.)
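For a feel of the difference, here is the same fix as the 20-byte packed record above, serialized as JSON text (field names are illustrative):

    import json

    fix = {"t": 1_700_000_000_000, "lat": 48.858844, "lon": 2.294351,
           "speed": 42.5, "course": 181.0}
    print(len(json.dumps(fix).encode()))  # ~90 bytes vs. 20 bytes packed binary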

Maximum transfer rate isochronous 128B endpoint full speed

The USB specification (Table 5-4) states that, given an isochronous endpoint with a maxPacketSize of 128 bytes, as many as 10 transactions can be done per frame. This gives 128 * 10 * 1000 = 1.28 MB/s of theoretical bandwidth.
At the same time it states
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Isn't that contradictory to the aforementioned table?
I've done some tests and found that only 1 transaction is done per frame on my device. I also found on several web sites that just 1 transaction can be done per frame (ms). Of course I assume the spec is the correct reference, so my question is: what could be the cause of receiving only 1 packet per frame? Am I misunderstanding the spec, and is what I think of as a transaction actually something else?
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Assuming USB full speed, you could still have 10 isochronous 128-byte transactions per frame by using 10 different endpoints.
Table 5-4 seems to leave out the constraints of chapter 5.6.4, "Isochronous Transfer Bus Access Constraints": the 90% rule reduces the maximum number of 128-byte isochronous transactions per frame to nine.
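The arithmetic behind both numbers, as a sketch (the 9-byte per-transaction protocol overhead is the figure Table 5-4 itself assumes; bit stuffing is ignored):

    FRAME_BYTES = 12_000_000 // 8 // 1000  # 1500 bytes per 1 ms full-speed frame
    OVERHEAD = 9     # per-transaction protocol overhead assumed by Table 5-4
    PAYLOAD = 128

    per_txn = PAYLOAD + OVERHEAD              # 137 bytes on the wire per transaction
    print(FRAME_BYTES // per_txn)             # 10, the Table 5-4 figure
    print(int(FRAME_BYTES * 0.9) // per_txn)  # 9, after the 90% periodic-traffic cap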

SUPL MS-Assisted - what measurements are sent

In the MS-assisted case, it is the GPS receiver that sends the measurements for the SLP to calculate the position and respond. I understand that the measurements sent include the Ephemeris, Iono, DGPS, etc., plus the Doppler shift. Please let me know if my understanding is right.
Does the SET send the code it receives (the entire data stream transmitted by the satellites) as-is, or does it split it into the above components before sending?
All the assistance information in SUPL is encapsulated using the RRLP protocol (Radio Resource Location services (LCS) Protocol, for GSM), RRC (Radio Resource Control, for UMTS), TIA-801 (for CDMA 2000), or LPP (LTE Positioning Protocol, for LTE). I'm just looking at the RRLP standard, ETSI TS 101 527. The following part sounds interesting:
A.3.2.5 GPS Measurement Information Element
The purpose of the GPS Measurement Information element is to provide
GPS measurement information from the MS to the SMLC. This information
includes the measurements of code phase and Doppler, which enables the
network-based GPS method where position is computed in the SMLC. The
proposed contents are shown in table A.5 below, and the individual
fields are described subsequently.
In subsequent section it is defined as:
reference frame - optional, 16 bits - the frame number of the last measured burst from the reference BTS modulo 42432
GPS TOW (time of week) - mandatory, 24 bits, unit of 1ms
number of satellites - mandatory, 4 bits
Then for each satellite the following set of data is transmitted:
satellite ID - 6 bits
C/No - 6 bits
Doppler shift - 16 bits, 0.2Hz unit
Whole Chips - 10 bits
Fractional Chips - 10 bits
Multipath Indicator - 2 bits
Pseudorange Multipath Error - 3+3 bits (mantissa/exponent)
I'm not that familiar with GPS operation, so I don't understand all the parameters, but as far as I understand:
C/No is simply a carrier-to-noise (signal-to-noise) ratio
Doppler shift gives the frequency shift for a given satellite, obviously
Whole/Fractional Chips together give the code phase (and thus the satellite distance)
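The widths listed above sum to 6 + 6 + 16 + 10 + 10 + 2 + 6 = 56 bits, i.e. 7 bytes per satellite. As a rough illustration of that bit budget (the real RRLP messages are ASN.1 PER encoded, so this is not the wire format, and the field names are my own):

    # Field widths per satellite, from the RRLP listing above (56 bits total).
    FIELDS = [("sat_id", 6), ("c_no", 6), ("doppler", 16), ("whole_chips", 10),
              ("frac_chips", 10), ("multipath", 2), ("psr_rms_err", 6)]

    def pack_measurement(values):
        """Pack one satellite's fields MSB-first into a 56-bit integer."""
        word = 0
        for name, width in FIELDS:
            assert 0 <= values[name] < (1 << width), f"{name} out of range"
            word = (word << width) | values[name]
        return word

    m = pack_measurement({"sat_id": 12, "c_no": 45, "doppler": 30000,
                          "whole_chips": 512, "frac_chips": 300,
                          "multipath": 0, "psr_rms_err": 5})
    print(m.to_bytes(7, "big").hex())  # 7 bytes per satellite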
My understanding is that things like almanac, ephemeris, Iono, DGPS are all known on the network side. As far as I know those things are transferred from network to MS in MS-based mode.
Hope that helps.
Measurements collected from MS-assisted location requests include:
Satellite ID
code phase - whole chips
code phase - fractional chips
Doppler
Signal strength
Multipath indicator
pseudorange RMS indicator
In addition, the GPS time of measurements is also provided as one value (in milliseconds) for the time which all measurements are valid.
In practice, the required fields that need to be accurate and correct are:
Satellite ID
code phase - whole chips
code phase - fractional chips
Doppler
The code phase values for each satellite are almost always used for the most accurate location calculation. Doppler values can be used to estimate a rough location but aren't usually accurate enough to really contribute to the final solution.
The other values for signal strength, multipath indication, and RMS indicator usually vary in meaning so much between vendors that they don't really provide much benefit for the position calculation. They would normally be used for things like weighting other values so that good satellites count more in the final position.
The network already knows (or should know) the ephemeris and ionospheric model. They are not measurements collected by the handset.