In a cadastral application, several tables are joined into a view, MATRIKELSKEL_SAG, to simplify client queries. Some of the tables use Oracle Spatial data structures (MDSYS.SDO_GEOMETRY).
When querying the view for comparable numbers of rows, we see an order-of-magnitude difference in the number of roundtrips, measured with autotrace in SQL*Plus. In all our measurements, a high number of roundtrips between the Oracle client and the Oracle server is reflected in high response times, as documented below.
The Oracle client is version 19.3.0.0.0 running on Windows Server 2016.
The Oracle server is version 19.15.0.0.0 running on RHEL 7.9.
The SQL*Plus autotrace script used is:
set autotrace traceonly statistics
set serveroutput off
set echo off
set line 200
set arraysize 1000
set verify off
timing start
SELECT * FROM MATRIKELSKEL_SAG WHERE SAGSID=<sagsID>;
timing stop
where sagsID is either 100143041 or 100149899.
Measurements
Here are our measurements, call them measure_A and measure_B.
Measure_A: sagsId = 100143041
25118 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
792108 consistent gets
2149 physical reads
528 redo size
65322624 bytes sent via SQL*Net to client
14001426 bytes received via SQL*Net from client
175039 SQL*Net roundtrips to/from client
23098 sorts (memory)
0 sorts (disk)
25118 rows processed
Elapsed: 00:01:07.54
Measure_B: sagsId = 100149899
30021 rows selected.
Statistics
----------------------------------------------------------
180 recursive calls
0 db block gets
324173 consistent gets
2904 physical reads
396 redo size
6000615 bytes sent via SQL*Net to client
2681 bytes received via SQL*Net from client
59 SQL*Net roundtrips to/from client
27988 sorts (memory)
0 sorts (disk)
30021 rows processed
Elapsed: 00:00:03.16
Since the number of rows differs by only ~20% (25118 compared to 30021), we would expect most metrics to differ by a similar margin.
Observation 1
While 65 MB is sent to the SQL*Plus client in measure_A, only 6 MB is sent to the client in measure_B. This may be an indication of an issue.
Observation 2
While measure_B needs 59 roundtrips, measure_A needs 175039, up by a factor of ~2966. Since arraysize is set to 1000, we would expect roughly 30021/1000 + handshake + overhead roundtrips for measure_B; the observed 59 is consistent with that. For measure_A we would expect 25118/1000 + handshake + overhead ≈ 55, but we see 175039 roundtrips. This is definitely a puzzle.
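The expected-roundtrip arithmetic can be sketched as follows; the fixed overhead term for handshake/parse traffic is a rough assumption, not a measured value:

```python
import math

def expected_roundtrips(rows, arraysize, overhead=30):
    # One fetch roundtrip per arraysize rows, plus a rough allowance
    # for connect/parse/execute traffic (the overhead value is a guess).
    return math.ceil(rows / arraysize) + overhead

print(expected_roundtrips(30021, 1000))  # 61, in the ballpark of the observed 59
print(expected_roundtrips(25118, 1000))  # 56, nowhere near the observed 175039
```

Notably, 175039/25118 ≈ 7, i.e. roughly seven roundtrips per returned row, which is the shape one would expect from per-row, piecewise fetching of large attribute values instead of array fetching.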
Observation 3
Despite roughly comparable physical reads and consistent gets, the response time is 1m7s in measure_A compared to 3s in measure_B.
Our questions
Why do we see a factor of ~2966 more roundtrips in measure_A compared to measure_B, when the bytes returned are only up by a factor of ~10?
Why do we see a factor of ~21 higher response time in measure_A compared to measure_B, when the bytes returned are only up by a factor of ~10?
We can provide definition of view if needed.
This is probably because of the size (that is, the complexity, i.e. the number of vertices) of the geometries. The more vertices, the more data to send to the client.
You can get a feeling of that by running this query:
select
  sagsid,
  count(*),
  sum(points), min(points), avg(points), median(points), max(points),
  sum(bytes), min(bytes), avg(bytes), median(bytes), max(bytes)
from (
  select sagsid, sdo_util.getnumvertices(geom) as points, vsize(geom) as bytes
  from matrikelskel_sag
  where sagsid in (100143041, 100149899)
)
group by sagsid;
This will return the number of points and the size in bytes of the geometries for each SAGSID.
It should help you understand what is happening and explain your observations.
As for optimizing the transfer, there are settings you can tune at the SQL*Net layer. See https://docs.oracle.com/en/database/oracle/oracle-database/19/netag/optimizing-performance.html
Other things to check:
The complexity of the shapes. Depending on the nature of the shapes you are using, you may be able to simplify them, i.e. reduce the number of vertices. If the data was automatically digitized from photos or imagery, it may be overly complex with respect to the needs of your application.
The number of decimals in the coordinates. If the geometries were automatically digitized, or transformed from another coordinate system, it may be possible to reduce the number of decimals and so make the geometries more compact, with less data to transfer to the client.
Related
I am using a TMS570LS3137 (DP84640 PHY) and trying to send 2 MB of data over UDP (unicast) using lwIP.
As of now I can send up to 63 KB of data. How can I send 2 MB at a time? UDP supports only up to ~64 KB per datagram, but in this link
https://stackoverflow.com/questions/32512345/how-to-send-udp-packets-of-size-greater-than-64-kb#:~:text=So%20it's%20not%20possible%20to,it%20up%20into%20multiple%20datagrams.
they mention that "If you need to send larger messages, you need to break it up into multiple datagrams." How do I proceed with this?
Since UDP runs over IP, you are limited to the maximum IP packet size of 64 KiB, even with fragmentation. So the hard limit for any UDP payload is 65,535 - 28 = 65,507 bytes.
You need to either:
- chunk the data into multiple datagrams. Since datagrams may arrive out of sending order or even get lost, this requires some kind of protocol or header. That could be as simple as four bytes at the beginning giving the buffer offset the data goes to, or a datagram sequence number. While you're at it, don't rely on IP fragmentation; depending on the scenario, use either the maximum UDP payload size over plain Ethernet (1500 bytes MTU - 20 bytes IP header - 8 bytes UDP header = 1472 bytes) or a sane maximum that should work all the time (e.g. 1432 bytes); or
- use TCP, which can transport arbitrarily sized data and does all the work for you.
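A minimal sketch of the chunking option, assuming a 4-byte big-endian offset header; the function names and chunk size here are illustrative, not part of any library:

```python
CHUNK = 1432   # conservative per-datagram payload that should survive most paths
HEADER = 4     # 4-byte big-endian buffer offset prefix

def make_datagrams(data: bytes):
    # Split data into datagrams, each prefixed with its buffer offset
    # so the receiver can reassemble regardless of arrival order.
    return [
        off.to_bytes(HEADER, "big") + data[off:off + CHUNK]
        for off in range(0, len(data), CHUNK)
    ]

def reassemble(datagrams, total_len):
    # Place each payload at the offset named in its header.
    buf = bytearray(total_len)
    for dg in datagrams:
        off = int.from_bytes(dg[:HEADER], "big")
        buf[off:off + len(dg) - HEADER] = dg[HEADER:]
    return bytes(buf)
```

Lost datagrams still need retransmission handling on top of this; the offset header only solves reordering.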
I am using Redis to store JSON Web Tokens. I am a little confused about the memory consumed by each record.
Let's say I have an instance on Google Cloud with 4 GB of memory allocated to it; I want to know how many records it can handle.
Assume a record has on average one string value, excluding the identifier, and every string has on average 200 characters.
It all depends on how you store them: using hashes (sized properly) or plain key-value pairs.
Do read this doc for more info: http://redis.io/topics/memory-optimization
For 1 million keys (simple key-value pairs) of 200 characters each, it takes about 300 MB. So with 4 GB you can store roughly 14 million keys, I guess. To verify this, install Redis on your machine, run a simple Java snippet (using Jedis), and check the memory consumption before and after the insertion.
// Insert N test keys, then compare Redis memory usage before and after.
Jedis jedis = new Jedis("localhost");
String value = "x".repeat(200);  // a 200-character payload
for (int i = 0; i < N; i++) {
    jedis.set("Key_" + i, value);
}
Redis wraps each string in an sds struct, which requires 3 extra bytes (or more) per string.
Each sds is stored in a redisObject struct (via a pointer to the sds object), which takes about 16 extra bytes on a 64-bit machine.
You should also consider the entries in the hash table; each one takes 24 bytes.
So you can assume each of your strings occupies about 243 bytes. 1 million strings will use more than 250 MB (Redis itself also needs memory).
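The arithmetic in that answer can be sketched as follows; the overhead constants come from the answer above, and real Redis overhead varies by version and encoding:

```python
def redis_string_footprint(payload_chars, sds_overhead=3,
                           redis_object=16, dict_entry=24):
    # Per-key estimate: payload + sds header + redisObject + hash-table entry.
    return payload_chars + sds_overhead + redis_object + dict_entry

per_key = redis_string_footprint(200)           # 243 bytes per key
mb_for_1m = per_key * 1_000_000 / (1024 ** 2)   # ~232 MB, before allocator overhead
print(per_key, round(mb_for_1m))
```

The gap between ~232 MB and the observed ~300 MB is allocator fragmentation and Redis's own baseline memory.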
I'm currently working on a school project to design a network, and we're asked to assess traffic on the network. In our solution (for taxi drivers), each driver will have a smartphone whose position can be tracked to assign them the best ride possible (through Google Maps, for instance).
What would be the size of data sent and received by a single app during one day? (I need a rough estimate; no real need for a precise answer to the closest bit.)
Thanks
GPS positions stored compactly, but not compressed, need this number of bytes per field:
- time: 8 (4 bytes is possible too)
- latitude: 4 (as integer or float) or 8
- longitude: 4 or 8
- speed: 2-4 (short: 2; int: 4)
- course: 2-4
So stored in binary in main memory, one location including the most important attributes will need 20-24 bytes.
If you store them in main memory as individual location objects, an additional 16 bytes per object are needed in a simple (Java) solution.
The maximum recording frequency is usually once per second (1/s). Per hour this needs 3600 s * 40 bytes = 144 KB, so a smartphone easily stores that even in main memory.
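Under the estimate above, one fix fits in 20 bytes; a sketch, where the exact field layout (millisecond timestamp, float coordinates, short speed/course) is an assumption:

```python
import struct

# 8-byte epoch-millis timestamp, float lat/lon, short speed and course:
# 8 + 4 + 4 + 2 + 2 = 20 bytes per fix.
FIX = struct.Struct(">qffhh")

def pack_fix(t_ms, lat, lon, speed, course):
    return FIX.pack(t_ms, lat, lon, speed, course)

record = pack_fix(1700000000000, 55.676, 12.568, 13, 270)
print(len(record), FIX.size * 3600)  # 20 bytes per fix, 72000 bytes per hour at 1/s
```

Packed this tightly, an hour of 1/s fixes is ~72 KB, the same order of magnitude as the 144 KB figure above (which includes per-object overhead).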
Not sure if you want to transmit the data:
When transmitting this to a server, the data volume will usually grow, depending on the transmission protocol used.
But it mainly depends on how you transmit the data and how often.
If you transmit one position every 5 minutes, you don't have to care, even if you use a simple solution that transmits 100 times more bytes than necessary.
For your school project, try to transmit no more than once every 5, or better 10, minutes.
Encryption adds a huge overhead.
To save bytes:
- Collect as long as feasible, then transmit at once.
- Favor binary protocols over text-based ones (BSON is better than JSON). (This might be out of scope for your school project.)
In the USB specification (Table 5-4) it is stated that, given an isochronous endpoint with a maxPacketSize of 128 bytes, as many as 10 transactions can be done per frame. This gives 128 * 10 * 1000 = 1.28 MB/s of theoretical bandwidth.
At the same time it states
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Isn't this contradictory with the aforementioned table?
I've done some tests and found that only 1 transaction is done per frame on my device. I also found on several websites that just 1 transaction can be done per frame (ms). Of course I assume the spec is the correct reference, so my question is: what could be the cause of receiving only 1 packet per frame? Am I misunderstanding the spec, and are what I think are transactions actually something else?
The host must not issue more than 1 transaction in a single frame for a specific isochronous endpoint.
Assuming USB Full Speed you could still have 10 isochronous 128 byte transactions per frame by using 10 different endpoints.
Table 5-4 seems to omit the calculations from chapter 5.6.4, "Isochronous Transfer Bus Access Constraints". The 90% rule reduces the maximum number of 128-byte isochronous transactions per frame to nine.
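A back-of-the-envelope check of that nine-transaction figure, assuming a 9-byte per-transaction protocol overhead for full-speed isochronous transfers (treat that overhead value as an assumption here):

```python
def iso_transactions_per_frame(payload=128, per_txn_overhead=9,
                               frame_bytes=1500, iso_share=1.0):
    # Full Speed is 12 Mbit/s, i.e. 1500 bytes per 1 ms frame.
    # iso_share models the fraction of the frame available to periodic transfers.
    return int(frame_bytes * iso_share // (payload + per_txn_overhead))

print(iso_transactions_per_frame())               # 10, matching Table 5-4
print(iso_transactions_per_frame(iso_share=0.9))  # 9, after the 90% rule
```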
After enabling gzip compression on my Apache server (mod_deflate), I found consistently that end users were being served on average 200 ms slower than with uncompressed responses.
This was unexpected, so I modified the compression directive to ONLY compress text/html responses, fired up Wireshark, and looked at the network dump before and after compression.
Here are my observations of a GET with minimal other traffic on the network:
Before Compression
Transactions on the wire: 46
Total time for 46 trans: 791ms
i. TCP seq/ack: 14ms
ii. 1st data segment: 693ms
iii. Remaining: 83ms (27/28 data units transferred + tcp/ip handshakes)
After Compression
Transactions on the wire: 10
Total time for 10 trans: 926ms
i. TCP seq/ack: 14ms
ii. 1st data segment: 746ms
iii. Remaining: 165ms (5 out of 6 data units transferred)
With compression enabled, it is clear and understandable that the number of transactions on the wire is significantly lower than uncompressed.
However, each compressed data unit took much longer to transfer from source to destination.
The additional work of compression understandably takes time, but I cannot understand why each data unit was significantly slower to transfer when compressed.
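The puzzle is visible directly in the per-transaction arithmetic from the captures above:

```python
def ms_per_transaction(total_ms, transactions):
    # Average wire time per transaction from the Wireshark totals.
    return total_ms / transactions

print(round(ms_per_transaction(791, 46), 1))  # 17.2 ms uncompressed
print(round(ms_per_transaction(926, 10), 1))  # 92.6 ms compressed
```

Fewer, larger transactions are each ~5x slower, so the total barely improves.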
My understanding of the compression process is:
1. GET Request is received by Apache
2. Apache identifies resource
3. Compress the resource
4. Respond with compressed response
With this scheme, I would assume that the 3rd step (compressing, then responding) would make the very first segment of the response take longer, but that the remaining chunks would take on average the same time as the uncompressed chunks. Yet they do not.
Can anyone tell me why, or suggest a better way to analyze this scenario? Also, does anyone have a before-and-after comparison? I would appreciate any feedback/comments/questions.
I was using an insufficient test set to compare the two scenarios (fewer than 100 resources, I think). With sufficient tests (more than 6000 URLs), it showed that with compression the time to first byte was faster by 200 milliseconds when serving text/html, while the time to last byte was faster by 25 milliseconds on average.
I haven't load tested this which I plan to do and update this answer.