I am using a TMS570LS3137 (DP83640 PHY) and trying to program UDP (unicast) using lwIP to send 2 MB of data.
As of now I can send up to 63 KB of data. How can I send 2 MB of data at a time? UDP supports only up to ~64 KB per datagram, but in this link
https://stackoverflow.com/questions/32512345/how-to-send-udp-packets-of-size-greater-than-64-kb
they mention "If you need to send larger messages, you need to break it up into multiple datagrams." How do I proceed with this?
Since UDP runs over IP, you're limited to the maximum IP packet size of 64 KiB, even with fragmentation. So the hard limit for any UDP payload is 65,535 - 28 = 65,507 bytes (28 bytes being the 20-byte IP header plus the 8-byte UDP header).
You need to either
chunk the data into multiple datagrams. Since datagrams may arrive out of sending order or even get lost, this requires some kind of protocol or header. That could be as simple as four bytes at the beginning giving the buffer offset the data goes to, or a datagram sequence number. While you're at it, you won't want to rely on IP fragmentation but, depending on the scenario, use either the maximum UDP payload size over plain Ethernet (1500 bytes MTU - 20 bytes IP header - 8 bytes UDP header = 1472 bytes) or a sane maximum that should work all the time (e.g. 1432 bytes). A sketch of this option follows below.
use TCP, which can transport arbitrarily sized data and does all the work for you.
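Here is a minimal sketch of the chunking option using the lwIP raw UDP API. The function name chunk_and_send(), the CHUNK_PAYLOAD constant and the 4-byte big-endian offset header are assumptions made for illustration, not part of lwIP; treat it as a starting point, not a drop-in implementation:

/* Split a large buffer into datagrams of at most 1472 bytes, prefixing each
 * with a 4-byte big-endian buffer offset so the receiver can reassemble the
 * data and detect loss or reordering. Error handling is kept minimal. */
#include <string.h>
#include "lwip/udp.h"
#include "lwip/pbuf.h"
#include "lwip/def.h"      /* lwip_htonl(), LWIP_MIN() */

#define CHUNK_HDR     4u                     /* 4-byte offset header          */
#define CHUNK_PAYLOAD (1472u - CHUNK_HDR)    /* fits in one Ethernet frame    */

err_t chunk_and_send(struct udp_pcb *pcb, const ip_addr_t *dst, u16_t port,
                     const u8_t *buf, u32_t len)
{
    u32_t offset = 0;
    while (offset < len) {
        u16_t payload = (u16_t)LWIP_MIN(len - offset, CHUNK_PAYLOAD);
        struct pbuf *p = pbuf_alloc(PBUF_TRANSPORT,
                                    (u16_t)(CHUNK_HDR + payload), PBUF_RAM);
        if (p == NULL) {
            return ERR_MEM;                  /* out of pbufs: caller may retry */
        }
        u32_t hdr = lwip_htonl(offset);      /* offset of this chunk in buf    */
        memcpy(p->payload, &hdr, CHUNK_HDR);
        memcpy((u8_t *)p->payload + CHUNK_HDR, buf + offset, payload);

        err_t err = udp_sendto(pcb, p, dst, port);
        pbuf_free(p);
        if (err != ERR_OK) {
            return err;
        }
        offset += payload;
    }
    return ERR_OK;
}

The receiver copies each payload to the offset given in the header; detecting lost chunks and requesting retransmission is a protocol you still have to define on top of this.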
In a cadastral application several tables are joined into a view, MATRIKELSKEL_SAG, to ease client calling. Some of the tables use Oracle Spatial data structures (MDSYS.SDO_GEOMETRY).
When calling the view for two cases that return a comparable number of rows, we see an order-of-magnitude difference in the number of roundtrips measured with autotrace in SQL*Plus. In all our measurements, a high number of roundtrips between the Oracle client and the Oracle server is reflected in high response times, as documented below.
The Oracle client is version 19.3.0.0.0 running on Windows Server 2016.
The Oracle server is version 19.15.0.0.0 running on RHEL 7.9.
The SQL*Plus autotrace script used is:
set autotrace traceonly statistics
set serveroutput off
set echo off
set line 200
set array 1000
set verify off
timing start
SELECT * FROM MATRIKELSKEL_SAG WHERE SAGSID=<sagsID>;
timing stop
where sagsID is either 100143041 or 100149899
Measurements
Here are our measurements, call them measure_A and measure_B.
Measure_A: sagsId = 100143041
25118 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
792108 consistent gets
2149 physical reads
528 redo size
65322624 bytes sent via SQL*Net to client
14001426 bytes received via SQL*Net from client
175039 SQL*Net roundtrips to/from client
23098 sorts (memory)
0 sorts (disk)
25118 rows processed
Elapsed: 00:01:07.54
Measure_B: sagsId = 100149899
30021 rows selected.
Statistics
----------------------------------------------------------
180 recursive calls
0 db block gets
324173 consistent gets
2904 physical reads
396 redo size
6000615 bytes sent via SQL*Net to client
2681 bytes received via SQL*Net from client
59 SQL*Net roundtrips to/from client
27988 sorts (memory)
0 sorts (disk)
30021 rows processed
Elapsed: 00:00:03.16
Since the number of rows differs by only ~25% (25118 compared to 30021), we would expect the other metrics to differ only in the range of ~25% as well.
Observation 1
While 65 MB is sent to the SQL*Plus client in measure_A, only 6 MB is sent in measure_B. This may be an indication of an issue.
Observation 2
While measure_B has 59 roundtrips, measure_A has 175039 roundtrips, up by a factor of ~2966. Since arraysize is set to 1000, we would expect roughly 30021/1000 + handshake + overhead roundtrips for measure_B; the observed 59 roundtrips are in line with that. For measure_A we would expect 25118/1000 + handshake + overhead = ~55 roundtrips, but we see 175039. This is definitely a puzzle.
Observation 3
Despite roughly comparable physical reads and consistent gets, the response time is 1m 7s in measure_A compared to 3s in measure_B.
Our questions
Why do we see roundtrips up by a factor of ~2966 in measure_A compared to measure_B, when the number of bytes returned is only up by a factor of ~10?
Why do we see the response time up by a factor of ~22 in measure_A compared to measure_B, when the number of bytes returned is only up by a factor of ~10?
We can provide the definition of the view if needed.
This is probably because of the size (= complexity, i.e. the number of vertices) of the geometries. The more vertices, the more data to send to the client.
You can get a feeling for that by running this query:
select
  sagsid,
  count(*),
  sum(points), min(points), avg(points), median(points), max(points),
  sum(bytes), min(bytes), avg(bytes), median(bytes), max(bytes)
from (
  select sagsid, sdo_util.getnumvertices(geom) as points, vsize(geom) as bytes
  from matrikelskel_sag
  where sagsid in (100143041, 100149899)
)
group by sagsid;
This will return the number of points and the size of the geometries in bytes for each SAGSID.
It should help you understand what is happening and explain your observations.
As for optimizing the transfer, there are settings you can tune at the SQL*Net layer. See https://docs.oracle.com/en/database/oracle/oracle-database/19/netag/optimizing-performance.html
Other things to check:
The complexity of the shapes. Depending on the nature of the shapes you are using, you may be able to simplify them, i.e. reduce the number of vertices. If the data is automatically digitized from photos or imagery, the shapes may be overly complex with respect to the needs of your application.
The number of decimals in the coordinates you use, especially if they are automatically digitized or transformed from another coordinate system. It is possible to reduce those and so make the geometries more compact, with less data to transfer to the client.
When using the redis-cli INFO command you get an output for instantaneous_output_kbps and instantaneous_input_kbps; are those statistics measured in bytes or bits?
It's measured in bytes, even though it is not documented on the redis website.
This is how redis tracks those internally (see server.c, line 954):
trackInstantaneousMetric(STATS_METRIC_NET_INPUT,
                         server.stat_net_input_bytes);
trackInstantaneousMetric(STATS_METRIC_NET_OUTPUT,
                         server.stat_net_output_bytes);
This is tracked in bytes, and trackInstantaneousMetric doesn't manipulate the data in any way; it's basically a moving average over the network I/O, still measured in bytes.
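For illustration, here is a simplified sketch (in the spirit of, but not identical to, the Redis implementation) of how such an "instantaneous" metric can be derived from a growing byte counter; the names inst_metric, metric_sample() and metric_value() are made up for this example:

/* Sample a monotonically growing byte counter periodically, convert each
 * delta to bytes/second, and average the last few samples.
 * The unit stays bytes throughout. */
#include <stdint.h>

#define METRIC_SAMPLES 16

struct inst_metric {
    uint64_t last_sample_value;          /* counter value at last sample  */
    uint64_t last_sample_ms;             /* timestamp of last sample (ms) */
    uint64_t samples[METRIC_SAMPLES];    /* recent bytes/sec samples      */
    int idx;
};

/* Called periodically (e.g. every 100 ms) with the current total byte count. */
void metric_sample(struct inst_metric *m, uint64_t now_ms, uint64_t total_bytes)
{
    uint64_t dt = now_ms - m->last_sample_ms;
    if (dt == 0) return;
    uint64_t bytes_per_sec = (total_bytes - m->last_sample_value) * 1000 / dt;
    m->samples[m->idx] = bytes_per_sec;
    m->idx = (m->idx + 1) % METRIC_SAMPLES;
    m->last_sample_value = total_bytes;
    m->last_sample_ms = now_ms;
}

/* Moving average of the recent samples; still bytes per second. */
uint64_t metric_value(const struct inst_metric *m)
{
    uint64_t sum = 0;
    for (int i = 0; i < METRIC_SAMPLES; i++) sum += m->samples[i];
    return sum / METRIC_SAMPLES;
}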
In the USB specification (Table 5-4) it is stated that, for an isochronous endpoint with a maxPacketSize of 128 bytes, as many as 10 transactions can be done per frame. This gives 128 * 10 * 1000 = 1.28 MB/s of theoretical bandwidth.
At the same time it states
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Isn't that contradictory with the aforementioned table?
I've done some tests and found that only 1 transaction is done per frame on my device. I also found on several web sites that just 1 transaction can be done per frame (ms). Of course I assume the spec is the correct reference, so my question is: what could be the cause of receiving only 1 packet per frame? Am I misunderstanding the spec, and is what I think are transactions actually something else?
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Assuming USB Full Speed, you could still have 10 isochronous 128-byte transactions per frame by using 10 different endpoints.
Table 5-4 seems not to take the constraints of chapter 5.6.4, "Isochronous Transfer Bus Access Constraints", into account. The 90% rule reduces the maximum number of 128-byte isochronous transactions per frame to nine.
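As a back-of-the-envelope check of those numbers, assuming full-speed timing (12 Mbit/s, i.e. 1500 bytes per 1 ms frame) and about 9 bytes of protocol overhead per isochronous transaction (the overhead figure used in Table 5-4):

#include <stdio.h>

int main(void)
{
    const int frame_bytes     = 12000 / 8;            /* 1500 bytes per 1 ms frame  */
    const int payload         = 128;                  /* maxPacketSize in question  */
    const int overhead        = 9;                    /* per transaction, Table 5-4 */
    const int per_transaction = payload + overhead;   /* 137 bytes on the wire      */

    int raw_max  = frame_bytes / per_transaction;               /* no 90% rule: 10  */
    int rule_max = (frame_bytes * 90 / 100) / per_transaction;  /* 90% rule:     9  */

    printf("max transactions/frame (raw):      %d -> %d bytes/s\n",
           raw_max, raw_max * payload * 1000);
    printf("max transactions/frame (90%% rule): %d -> %d bytes/s\n",
           rule_max, rule_max * payload * 1000);
    return 0;
}

This reproduces the 10 transactions / 1.28 MB/s from the table and the reduction to nine transactions once the 90% rule is applied.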
I have a qpid queue with these parameters:
bus-sync-queue --durable --file-size=48 --file-count=64
I want to put 1,000,000 messages on this queue. Each message is just a string of 12 characters (002000333222, 002000342678 and so on). What values must I set for --file-size=X --file-count=Y to be able to fit all the messages in the queue?
There is quite a big overhead on a single persistent message; in your case one message will require at least 128 bytes of storage. You should rethink your design: either decrease the expected number of unacknowledged messages or use a different approach.
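Just to put numbers on it, using the (optimistic) 128 bytes per message from above:

#include <stdio.h>

int main(void)
{
    const long long messages      = 1000000LL;
    const long long bytes_per_msg = 128LL;       /* lower bound from the answer */
    long long total = messages * bytes_per_msg;  /* 128,000,000 bytes           */

    printf("minimum journal capacity: %lld bytes (~%.0f MiB)\n",
           total, total / (1024.0 * 1024.0));
    /* Whatever units --file-size and --file-count use in your store version,
     * their product must provide at least this much capacity. */
    return 0;
}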
Assuming the highest baud rate, what is the highest rate at which PDOs are received?
That depends on the length of the PDOs, i.e. how many data bytes you pack into each CAN message. The ratio between transported data and protocol overhead is best when you use the full eight bytes of one CAN message.
If you want high throughput, use all eight bytes of one message.
If you want the highest possible message frequency, use as few data bytes as possible.
A rule of thumb:
Eight bytes of payload result in a CAN message of about 100 bits in length.
With the maximum baud rate of 1 Mbit/s you can achieve about 10000 messages per second.
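To make the rule of thumb concrete, here is a rough calculation for a classic CAN 2.0A data frame with an 11-bit identifier, ignoring worst-case bit stuffing (which can add roughly 20% more bits):

#include <stdio.h>

int main(void)
{
    const int data_bytes = 8;
    /* SOF(1) + ID(11) + RTR(1) + IDE(1) + r0(1) + DLC(4) + data + CRC(15)
     * + CRC delim(1) + ACK(1) + ACK delim(1) + EOF(7) + interframe space(3) */
    int frame_bits = 1 + 11 + 1 + 1 + 1 + 4 + data_bytes * 8 + 15 + 1 + 1 + 1 + 7 + 3;

    const long bitrate = 1000000L;                       /* 1 Mbit/s */
    double msgs_per_s  = (double)bitrate / frame_bits;

    printf("frame length: %d bits -> about %.0f messages/s at 1 Mbit/s\n",
           frame_bits, msgs_per_s);                      /* ~111 bits, ~9000/s */
    return 0;
}

So the "about 100 bits, about 10000 messages per second" rule of thumb is slightly optimistic but in the right ballpark.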