Maximum transfer rate for an isochronous 128-byte endpoint at full speed - USB

The USB specification (Table 5-4) states that, for an isochronous endpoint with a maxPacketSize of 128 bytes, as many as 10 transactions can be done per frame. This gives 128 * 10 * 1000 = 1.28 MB/s of theoretical bandwidth.
At the same time it states:
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Isn't that contradictory with the aforementioned table?
I've done some tests and found that only 1 transaction is done per frame on my device. I also found on several web sites that just 1 transaction can be done per frame (ms). Of course I assume the spec is the correct reference, so my question is: what could be the cause of receiving only 1 packet per frame? Am I misunderstanding the spec, and are what I think are transactions actually something else?

The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Assuming USB full speed, you could still have 10 isochronous 128-byte transactions per frame by using 10 different endpoints.
Table 5-4 seems to omit the calculations from chapter 5.6.4, "Isochronous Transfer Bus Access Constraints". The 90% rule reduces the maximum number of 128-byte isochronous transactions to nine.
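Here is a back-of-the-envelope check of that nine-transaction figure, as a Python sketch. The 1500-byte full-speed frame follows from 12 Mbit/s and 1 ms frames; the 9-byte per-transaction protocol overhead is the figure listed in Table 5-4:

FRAME_BYTES = 12_000_000 // 8 // 1000    # full speed: 12 Mbit/s, 1 ms frames -> 1500 bytes
ISO_OVERHEAD = 9                         # per-transaction protocol overhead (Table 5-4)
PAYLOAD = 128                            # maxPacketSize from the question

budget = int(FRAME_BYTES * 0.90)         # section 5.6.4: at most 90% of a frame for periodic transfers
per_txn = PAYLOAD + ISO_OVERHEAD         # 137 bytes on the wire per transaction
max_txns = budget // per_txn

print(max_txns)                          # 9
print(max_txns * PAYLOAD * 1000)         # 1152000 bytes/s, not the 1280000 implied by Table 5-4

Note that the quoted one-transaction-per-frame rule still caps a single endpoint at 128 * 1000 = 128 kB/s; the nine-transaction budget is a per-frame total for the whole bus.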

Performance issue: High number of roundtrips when fetching data from view

In a cadastral application, several tables are joined in a view, MATRIKELSKEL_SAG, to simplify client calls. Some of the tables use Oracle Spatial data structures, MDSYS.SDO_GEOMETRY.
When querying the view for comparable numbers of rows, we see an order-of-magnitude difference in the number of roundtrips, measured with autotrace in SQL*Plus. In all our measurements, a high number of roundtrips between the Oracle client and the Oracle server is reflected in high response times, as documented below.
The Oracle client is version 19.3.0.0.0 running on Windows Server 2016.
The Oracle server is version 19.15.0.0.0 running on RHEL 7.9.
The SQL*Plus autotrace script used is:
set autotrace traceonly statistics
set serveroutput off
set echo off
set line 200
set array 1000
set verify off
timing start
SELECT * FROM MATRIKELSKEL_SAG WHERE SAGSID=<sagsID>;
timing stop
where sagsID is either 100143041 or 100149899
Measurements
Here are our measurements, call them measure_A and measure_B.
Measure_A: sagsId = 100143041
25118 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
792108 consistent gets
2149 physical reads
528 redo size
65322624 bytes sent via SQL*Net to client
14001426 bytes received via SQL*Net from client
175039 SQL*Net roundtrips to/from client
23098 sorts (memory)
0 sorts (disk)
25118 rows processed
Elapsed: 00:01:07.54
Measure_B: sagsId = 100149899
30021 rows selected.
Statistics
----------------------------------------------------------
180 recursive calls
0 db block gets
324173 consistent gets
2904 physical reads
396 redo size
6000615 bytes sent via SQL*Net to client
2681 bytes received via SQL*Net from client
59 SQL*Net roundtrips to/from client
27988 sorts (memory)
0 sorts (disk)
30021 rows processed
Elapsed: 00:00:03.16
Since the number of rows differs by only ~25% (25118 compared to 30021), we would expect the metrics to differ only in the range of ~25%.
Observation 1
While 65 MB is sent to the SQL*Plus client in measure_A, only 6 MB is sent to the client in measure_B. This may be an indication of an issue.
Observation 2
While measure_B has 59 roundtrips, measure_A has 175039 roundtrips, up by a factor of ~2966. Since arraysize is set to 1000, we would expect 30021/1000 + handshake + overhead roundtrips for measure_B; the 59 roundtrips we see are fine. For measure_A we would expect 25118/1000 + handshake + overhead = ~55, but we see 175039 roundtrips. This is definitely a puzzle.
Observation 3
Despite roughly comparable physical reads and consistent gets, response time is 1m 7s in measure_A compared to 3s in measure_B.
Our questions
Why do we see a factor of ~2966 more roundtrips in measure_A compared to measure_B, when the bytes returned are only up by a factor of ~10?
Why do we see a factor of ~22 longer response time in measure_A compared to measure_B, when the bytes returned are only up by a factor of ~10?
We can provide the definition of the view if needed.
This is probably because of the size (= complexity, i.e. the number of vertices) of the geometries. The more vertices, the more data to send to the client.
You can get a feeling of that by running this query:
select
  sagsid,
  count(*),
  sum(points), min(points), avg(points), median(points), max(points),
  sum(bytes), min(bytes), avg(bytes), median(bytes), max(bytes)
from (
  select sagsid, sdo_util.getnumvertices(geom) as points, vsize(geom) as bytes
  from matrikelskel_sag
  where sagsid in (100143041, 100149899)
)
group by sagsid;
This will return the number of points and the size in bytes of the geometries for each SAGSID.
It should help you understand what is happening and explain your observations.
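As a quick sanity check, here is a rough Python calculation using only the figures from your two autotrace runs above:

bytes_a, rows_a, trips_a = 65_322_624, 25_118, 175_039   # measure_A
bytes_b, rows_b, trips_b = 6_000_615, 30_021, 59         # measure_B

print(bytes_a / rows_a)   # ~2600 bytes per row in measure_A
print(bytes_b / rows_b)   # ~200 bytes per row in measure_B
print(trips_a / rows_a)   # ~7 roundtrips per row in measure_A

Seven roundtrips per row would be consistent with the explanation above: each large geometry takes several roundtrips to transfer, instead of many small rows fitting into one arraysize-1000 fetch.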
As for optimizing the process, there are settings you can use at the SQL*Net layer. See https://docs.oracle.com/en/database/oracle/oracle-database/19/netag/optimizing-performance.html
Other things to check:
- The complexity of the shapes. Depending on the nature of the shapes you are using, you may be able to simplify them, i.e. reduce the number of vertices. If the data is automatically digitized from photos or imagery, the shapes may be overly complex with respect to the needs of your application.
- The number of decimals in the coordinates. If the geometries are automatically digitized, or if they are transformed from another coordinate system, it may be possible to reduce the number of decimals and so make the geometries more compact, with less data to transfer to the client (a toy sketch follows below).
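Here is a toy Python illustration of those last two points; the coordinates and the 6-decimal rounding are made-up examples, and the real reduction would of course be done on the database side:

ring = [(12.5683471928, 55.6760968123),   # made-up, over-precise vertices
        (12.5683471930, 55.6760968125),
        (12.5689012347, 55.6765432198)]

rounded = [(round(x, 6), round(y, 6)) for x, y in ring]   # ~0.1 m precision is often enough
deduped = [p for i, p in enumerate(rounded) if i == 0 or p != rounded[i - 1]]
print(len(ring), "->", len(deduped))      # 3 -> 2: a near-duplicate vertex disappears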

High precision queue statistics from RabbitMQ

I need to log, with the highest possible precision, the rate at which messages enter and leave a particular queue in Rabbit. I know the API already provides publishing and delivering rates, but I am interested in capturing raw incoming and outgoing counts over a known period of time, so that I can estimate rates with higher precision and over time periods of my choice.
Ideally, I would check on-demand (i.e. on a schedule of my choice) e.g. the current cumulative count of messages that have entered the queue so far ("published" messages), and the current cumulative count of messages consumed ("delivered" messages).
With these types of cumulative counts, I could:
- Compute my own deltas of messages entering or exiting the queue, e.g. Δ_count = cumulative_count(t) - cumulative_count(t-1)
- Compute throughput rates: throughput = Δ_count / Δ_time
- Potentially infer how long messages stay in the queue throughout the day.
The last two would ideally rely on the precise timestamps when those cumulative counts were calculated.
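For reference, here is a minimal Python sketch of the kind of poller I have in mind against the management HTTP API (queue name, credentials, and the choice of the deliver_get counter are assumptions on my part; the management plugin must be enabled):

import time
import requests

URL = "http://localhost:15672/api/queues/%2F/my_queue"   # vhost "/" URL-encoded; example queue name
AUTH = ("guest", "guest")

def sample():
    # Cumulative counters from the queue's message_stats, plus a local timestamp.
    stats = requests.get(URL, auth=AUTH).json().get("message_stats", {})
    return time.time(), stats.get("publish", 0), stats.get("deliver_get", 0)

t0, pub0, dlv0 = sample()
time.sleep(10)                                           # polling period of my choice
t1, pub1, dlv1 = sample()

print("publish rate:", (pub1 - pub0) / (t1 - t0))        # messages/s entering the queue
print("deliver rate:", (dlv1 - dlv0) / (t1 - t0))        # messages/s leaving the queue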
I am trying to solve this problem directly using RabbitMQ's API, but I'm encountering a problem when doing so: when I calculate the cumulative message count in the queue, I get a number that I don't expect.
For example consider the screenshot below.
The Δ_message_count between entries 90 and 91 is 1810 - 1633 = 177. So I would expect the difference between published and delivered messages to be 177 as well (in particular, 177 more messages published than delivered).
However, when I calculate these differences, I see that the difference is not 177:
Δ of published (incoming) messages: 13417517652009 - 13417517651765 = 244
Δ of delivered (outgoing) messages: 1341751765667 - 1341751765450 = 217
so we get 244 - 217 = 27 messages. This suggests that there are 177 - 27 = 150 messages "unaccounted" for.
Why?
I tried taking into account the redelivered messages reported by the API, but they stayed constant while I ran my tests, suggesting that there were no redeliveries, so I wouldn't expect them to play a role.

What is the average consumption of a GPS app (data-wise)?

I'm currently working on a school project to design a network, and we're asked to assess the traffic on it. In our solution (dealing with taxi drivers), each driver will have a smartphone that can be used to track his position in order to assign him the best possible ride (through Google Maps, for instance).
What would be the size of data sent and received by a single app during one day? (I need a rough estimate, no real need for a precise answer to the closest bit)
Thanks
GPS positions stored compactly, but not compressed, need this number of bytes:
- time: 8 (4 bytes is possible, too)
- latitude: 4 (if used as integer or float) or 8 (as double)
- longitude: 4 or 8
- speed: 2-4 (short: 2, integer: 4)
- course: 2-4
So stored in binary in main memory, one location including the most important attributes will need 20-24 bytes.
If you store them in main memory as individual location objects, an additional 16 bytes per object is needed in a simple (Java) solution.
The maximum recording frequency is usually once per second (1/s). Per hour this needs 3600 s * 40 bytes = 144 kB (taking ~24 bytes of data plus 16 bytes of object overhead), so a smartphone easily stores that, even in main memory.
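To make those numbers concrete, here is a small Python sketch; the field widths are picked from the list above, and the 16-byte object overhead is the Java figure just mentioned:

import struct

# time (8), latitude (4), longitude (4), speed (4), course (4) -> 24 bytes,
# the upper end of the 20-24 byte range above.
FIX = struct.Struct("<d f f f f")

packed = FIX.pack(1_700_000_000.0, 55.676, 12.568, 12.5, 270.0)   # example values
print(FIX.size)                    # 24 bytes per stored location

# Recording once per second, with 16 bytes of per-object overhead:
print(3600 * (FIX.size + 16))      # 144000 bytes/hour, the "144k" above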
Not sure if you want to transmit the data:
When transmitting it to a server, the data volume will usually grow, depending on the transmission protocol used.
But it mainly depends on how you transmit the data and how often.
If you transmit one position every 5 minutes, you don't have to care, even
when you use a simple solution that transmits 100 times more bytes than necessary.
For your school project, try to transmit no more often than every 5, or better 10, minutes.
Encryption adds a huge overhead.
To save bytes:
- Collect as long as feasible, then transmit all at once.
- Favor binary protocols over text-based ones (BSON is better than JSON). (This might be out of scope for your school project.)

USB 1.1 more bulk bandwidth

I have the following problem:
Microcontroller with USB 1.1, a 32-byte buffer for bulk transfers, and a lot of real-time data to move to a Linux (kernel 2.6) PC.
As far as I understand, the maximum theoretical bandwidth available for bulk transfers in this case is 19 transfers * 32 bytes per frame (1 ms) = 608 kB/s.
The problem for me is that this is still not enough to move the data in real time, and changing to a USB 2.0 uC is not possible ...
Is there anything I can do in software (create a patch for Linux 2.6) in order to get 1 or 2 extra bulk transfers per frame?
Thanks,
George
Since the limit is imposed by the physical USB hardware, there is no way to speed up the transfer short of implementing compression on both sides of the link.
Even then, it is unlikely you will be able to speed up the transfer considerably.
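A quick way to gauge whether compression is even worth pursuing is to try it on representative data, sketched here in Python (on the microcontroller you would need a matching C implementation, which may be costly on such a small device):

import os
import zlib

regular = bytes(range(32)) * 1024            # 32 KiB of highly repetitive sample data
noise = os.urandom(32 * 1024)                # 32 KiB of incompressible data

print(len(regular), len(zlib.compress(regular)))   # repetitive data shrinks dramatically
print(len(noise), len(zlib.compress(noise)))       # random data does not (it may even grow)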

Fastest rate at which CANopen PDOs are received

Assuming the highest baud rate, what is the highest rate at which PDOs are received?
That depends on the length of the PDO, i.e. how much data you pack into each message. The ratio between transported data and protocol overhead is best when you use the full eight bytes of one CAN message.
- If you want high throughput, use all eight bytes of one message.
- If you want the highest possible frequency, use as few data bits as possible.
A rule of thumb:
Eight bytes of payload result in a CAN message about 100 bits long.
With the maximum baud rate of 1 Mbit/s you can achieve about 10000 messages per second.
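A back-of-the-envelope check of that rule of thumb, as a Python sketch; the 47-bit overhead assumes a standard frame with an 11-bit identifier and ignores stuff bits, so treat the result as approximate:

OVERHEAD_BITS = 47          # SOF, ID, control, CRC, ACK, EOF of a standard data frame
PAYLOAD_BYTES = 8

frame_bits = OVERHEAD_BITS + 8 * PAYLOAD_BYTES   # ~111 bits, i.e. "about 100"
baud = 1_000_000                                 # 1 Mbit/s

msgs_per_s = baud / frame_bits
print(round(msgs_per_s))                         # ~9000 messages per second
print(round(msgs_per_s * PAYLOAD_BYTES))         # ~72000 bytes/s of payload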