iperf bandwidth: what is the difference between -b 60m and -b 60M?

I am using iperf and have the following problem:
With iperf ... -b 60M, I have 12% packet loss.
With iperf ... -b 60m, I have 0.2% packet loss.
In both cases, the reported bandwidth is 60 Mbit/s:
0.0-10.0 sec 71.0 MBytes 59.6 Mbits/sec 0.150 ms 0/50661 (0%)
What is the difference between -b 60m and -b 60M ?

I found the answer after hours of searching:
m = 1000*1000 bits (SI mega)
M = 1024*1024 bits (binary mega)
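Since 60*1024*1024 is about 4.9% more than 60*1000*1000, -b 60M pushes roughly 62.9 Mbit/s instead of 60 Mbit/s, which would explain the higher loss if the path tops out near 60 Mbit/s. A quick sanity check of the arithmetic:
```
# lowercase m = SI mega (10^6), uppercase M = binary mega (2^20)
echo $((60 * 1000 * 1000))   # -b 60m -> 60000000 bit/s
echo $((60 * 1024 * 1024))   # -b 60M -> 62914560 bit/s (~4.9% more)
```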
I also have trouble interpreting iperf results (I am testing on boards, and sometimes the results do not follow the usual patterns).
Does anyone know of a good website or document?


How to use the -ts or -tr flags in gdalwarp for resampling

I run a complex C++ application in which GDAL (3.5.3) is used to get elevation data from EU-DEM v1.1. The EU-DEM files are quite large, and I want to reduce their file size since I don't need 25 m accuracy. This is one file I use and want to downsample:
```
gdalinfo E20N20.TIF
Driver: GTiff/GeoTIFF
Files: E20N20.TIF
Size is 40000, 40000
Coordinate System is:
PROJCS["ETRS89_ETRS_LAEA",
    GEOGCS["ETRS89",
        DATUM["European_Terrestrial_Reference_System_1989",
            SPHEROID["GRS 1980",6378137,298.2572221010042,
                AUTHORITY["EPSG","7019"]],
            AUTHORITY["EPSG","6258"]],
        PRIMEM["Greenwich",0],
        UNIT["degree",0.0174532925199433],
        AUTHORITY["EPSG","4258"]],
    PROJECTION["Lambert_Azimuthal_Equal_Area"],
    PARAMETER["latitude_of_center",52],
    PARAMETER["longitude_of_center",10],
    PARAMETER["false_easting",4321000],
    PARAMETER["false_northing",3210000],
    UNIT["metre",1,
        AUTHORITY["EPSG","9001"]]]
Origin = (2000000.000000000000000,3000000.000000000000000)
Pixel Size = (25.000000000000000,-25.000000000000000)
Metadata:
  AREA_OR_POINT=Area
  DataType=Elevation
Image Structure Metadata:
  COMPRESSION=LZW
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  ( 2000000.000, 3000000.000) ( 20d45'24.21"W, 45d41'42.74"N)
Lower Left  ( 2000000.000, 2000000.000) ( 16d36'13.25"W, 37d23'21.20"N)
Upper Right ( 3000000.000, 3000000.000) (  8d 7'39.52"W, 48d38'23.47"N)
Lower Right ( 3000000.000, 2000000.000) (  5d28'46.07"W, 39d52'33.70"N)
Center      ( 2500000.000, 2500000.000) ( 12d41'50.60"W, 43d 6' 4.82"N)
Band 1 Block=128x128 Type=Float32, ColorInterp=Gray
  NoData Value=-3.4028234663852886e+38
  Metadata:
    BandName=Band_1
    RepresentationType=ATHEMATIC
```
I tried the following command, but without success. It's probably due to wrong -ts or -tr values:
```
gdalwarp -r average -tr 1024 1024 -wm 4096 -multi -wo NUM_THREADS=ALL_CPUS -co TILED=YES \
-co NUM_THREADS=ALL_CPUS E20N20.TIF dz_E20N20.TIF
Creating output file that is 977P x 977L.
Processing E20N20.TIF [1/1] : 0Using internal nodata values (e.g. -3.40282e+38) for image E20N20.TIF.
Copying nodata values from source E20N20.TIF to destination dz_E20N20.TIF.
ERROR 1: Integer overflow : nSrcXSize=19989, nSrcYSize=19989
ERROR 1: Integer overflow : nSrcXSize=20012, nSrcYSize=19989
```
I'm lost, here.
Where is your GDAL coming from? This looks like a 32-bit overflow, since your data exceeds 4 GB.
Also, unless you are reprojecting, I strongly advise you to use gdal_translate instead, which has much better performance when resampling (note that -wo is a gdalwarp-only option, so it is dropped here):
```
gdal_translate -r average -tr 1024 1024 -co TILED=YES \
    -co NUM_THREADS=ALL_CPUS E20N20.TIF dz_E20N20.TIF
```
You can probably use the official Docker images if you need another GDAL build.
Thank you for your time. I found the problem: it was -wm 4096. I use the Windows Subsystem for Linux to do the resampling. I had used 1024, a power of 2, because I didn't really understand how -ts or -tr worked. I finally used:
```
gdalwarp -r average -ts 8000 8000 -multi -wo NUM_THREADS=ALL_CPUS -co TILED=YES \
    -co NUM_THREADS=ALL_CPUS E20N20.TIF dz_E20N20.TIF
```
This reduces my file size by about 75%, and the pixel size goes from 25x25 m to 125x125 m.
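For anyone else confused by the two flags: -ts sets the output size in pixels, while -tr sets the target pixel size in georeferenced units (metres for this CRS). For this 40000x40000-pixel, 25 m file, the two sketches below should be equivalent (other options omitted for brevity):
```
# -ts: output size in pixels; 40000 px * 25 m / 8000 px = 125 m pixels
gdalwarp -r average -ts 8000 8000 E20N20.TIF dz_E20N20.TIF

# -tr: target pixel size in georeferenced units (metres); same result here
gdalwarp -r average -tr 125 125 E20N20.TIF dz_E20N20.TIF
```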

Can anybody tell me the units of 262144 in this command (sysctl -w vm.max_map_count=262144)?

I want to know the units of vm.max_map_count=262144, e.g. KB, MB, or GB.
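As far as I know, the value has no byte units at all: vm.max_map_count is a plain count, the maximum number of memory map areas (VMAs) a single process may have. A quick way to see both numbers on a live system:
```
sysctl vm.max_map_count    # the current limit (a count, not bytes)
wc -l < /proc/self/maps    # map areas used by the current process, one per line
```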

WSL2 I/O speeds on the Linux filesystem are slow

Trying out WSL2 for the first time. Running Ubuntu 18.04 on a Dell Latitude 9510 with an SSD. I noticed build speeds of a React project were brutally slow. Per all the articles on the web, I'm running the project out of ~ and not the Windows mount. I ran a benchmark using sysbench --test=fileio --file-test-mode=seqwr run in ~ and got:
```
File operations:
    reads/s:  0.00
    writes/s: 3009.34
    fsyncs/s: 3841.15

Throughput:
    read, MiB/s:    0.00
    written, MiB/s: 47.02

General statistics:
    total time:             10.0002s
    total number of events: 68520

Latency (ms):
    min:             0.01
    avg:             0.14
    max:             22.55
    95th percentile: 0.31
    sum:             9927.40

Threads fairness:
    events (avg/stddev):         68520.0000/0.00
    execution time (avg/stddev): 9.9274/0.00
```
If I'm reading this correctly, that wrote 47 MiB/s. I ran the same test on my Mac mini and got 942 MiB/s. Is this normal? It seems like Linux I/O speeds on WSL2 are unusably slow. Any thoughts on ways to speed this up?
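(A rough way to reproduce the comparison is a direct-I/O write test against both filesystems; the paths and sizes below are just examples, and oflag=direct may not be supported on the 9p /mnt/c mount, in which case drop it:)
```
# sequential write, bypassing the page cache (rough numbers only)
dd if=/dev/zero of=$HOME/ddtest bs=1M count=512 oflag=direct && rm $HOME/ddtest
dd if=/dev/zero of=/mnt/c/Users/Public/ddtest bs=1M count=512 oflag=direct && rm /mnt/c/Users/Public/ddtest
```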
---edit---
Not sure if this is a fair comparison, but here is the output of winsat disk -drive c on the same machine from the Windows side. Smoking fast:
> Dshow Video Encode Time 0.00000 s
> Dshow Video Decode Time 0.00000 s
> Media Foundation Decode Time 0.00000 s
> Disk Random 16.0 Read 719.55 MB/s 8.5
> Disk Sequential 64.0 Read 1940.39 MB/s 9.0
> Disk Sequential 64.0 Write 1239.84 MB/s 8.6
> Average Read Time with Sequential Writes 0.077 ms 8.8
> Latency: 95th Percentile 0.219 ms 8.9
> Latency: Maximum 2.561 ms 8.7
> Average Read Time with Random Writes 0.080 ms 8.9
> Total Run Time 00:00:07.55
---edit 2---
Windows version: Windows 10 Pro, Version 20H2 Build 19042
Late answer, but I had the same issue and wanted to post my solution for anyone who has this problem:
Windows Defender seems to destroy the read speeds in WSL. I added the entire rootfs folder as an exclusion. If you're comfortable turning off Windows Defender, I recommend that as well. Any antivirus probably has similar issues, so adding the WSL directories as exclusions is probably your best bet.
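A sketch of adding the exclusion from an elevated PowerShell prompt; the package folder below is an example for Ubuntu 18.04 and will differ per distro (WSL1 keeps rootfs under this folder, while WSL2 stores an ext4.vhdx there instead):
```
# run in PowerShell as Administrator; adjust the path to your distro's package folder
Add-MpPreference -ExclusionPath "$env:LOCALAPPDATA\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc"
```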

Can't configure GPU bios properly

I have six RX 470 GPUs. Each should be mining at an average of 25-27 MH/s, but each does only 20 MH/s, so the total is 120 MH/s instead of 150-170. I think the problem is the GPU BIOS configuration, but I can't figure out anything else. Any suggestions?
25 MH/s is what you would expect from a stock RX 480. To get the same hashrate from an RX 470, you'd be looking at overclocking the memory speed (+600). How to overclock depends on whether you're running Linux or Windows.
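For the Linux case, a heavily hedged sketch using the amdgpu OverDrive sysfs interface (this assumes the kernel was booted with amdgpu.ppfeaturemask=0xffffffff to expose OverDrive; the card index, memory state, clock, and voltage are placeholders, and wrong values can crash the system or damage the card):
```
# "m <state> <MHz> <mV>" edits a memory clock state; "c" commits the change
echo "m 1 2000 975" | sudo tee /sys/class/drm/card0/device/pp_od_clk_voltage
echo "c"            | sudo tee /sys/class/drm/card0/device/pp_od_clk_voltage
```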

Iperf: Transfer of data

I have a question about how iperf works. I am using the command shown below.
What I don't understand is: how can 6945 datagrams be sent? If 9.66 MBytes were transferred, then 9.66M/1458 = 6625 datagrams should have been transferred, according to my understanding.
If 10.125 MBytes (2.7 Mbit/s * 30 sec) had been transferred, then 6944 datagrams would have been sent (excluding UDP and other headers).
Please clarify if someone knows.
(I have also used Wireshark on both client and server, and there the number of packets is greater than the number of packets shown by iperf.)
```
umar@umar-VPCEB11FM:~$ iperf -t 30 -c 192.168.3.181 -u -b 2.7m -l 1458
------------------------------------------------------------
Client connecting to 192.168.3.181, UDP port 5001
Sending 1458 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.3.175 port 47241 connected with 192.168.3.181 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  9.66 MBytes  2.70 Mbits/sec
[  3] Sent 6946 datagrams
[  3] Server Report:
[  3]  0.0-92318.4 sec  9.66 MBytes  878 bits/sec  0.760 ms    0/ 6945 (0%)
```
iperf uses base 2 for M and K in its output, meaning that K = 1024 and M = 1024*1024.
When you do the math that way, you get 9.66 MB / 1458 B per datagram = 6947 datagrams, which is within precision error (the report has a resolution of 0.01 MB, which means a rounding error of up to 0.005 MB, or about 3.6 datagrams).
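A quick check of that arithmetic:
```
echo "9.66 * 1024 * 1024 / 1458"  | bc -l   # ~6947.3 datagrams
echo "0.005 * 1024 * 1024 / 1458" | bc -l   # rounding slack, ~3.6 datagrams
```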