qemu-img info: disk size vs virtual size - misunderstanding

Please note the following log:
[user@machine]$ qemu-img info kvm_1
image: kvm_1
file format: qcow2
virtual size: 146G (157286400000 bytes)
disk size: 101G
cluster_size: 65536
backing file: /xyz.qcow2
backing file format: qcow2
Format specific information:
compat: 0.10
[user@machine]$ qemu-img info kvm_2
image: kvm_2
file format: qcow2
virtual size: 146G (157286400000 bytes)
disk size: 146G
cluster_size: 65536
backing file: /xyz.qcow2
backing file format: qcow2
Format specific information:
compat: 0.10
The second machine's image (kvm_2) was preallocated using the fallocate tool.
Can anyone explain the difference? What is the difference in performance between these two configurations?
Does it matter for the qcow2 format? What about the raw format? What is the root cause of the performance difference (if any exists)?
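For reference, a minimal sketch of how the two layouts can be produced (the names kvm_1, kvm_2 and /xyz.qcow2 mirror the ones above; the -F backing-format option assumes a reasonably recent QEMU):
# Sparse qcow2 on top of the backing file: clusters are allocated on first write,
# so "disk size" stays below "virtual size" (the kvm_1 case)
qemu-img create -f qcow2 -b /xyz.qcow2 -F qcow2 kvm_1 157286400000
# Preallocating the host file reserves all blocks up front,
# so "disk size" catches up to "virtual size" (the kvm_2 case)
fallocate -l 157286400000 kvm_2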

Related

nest project takes too long to build - heap out of memory error

I have a NestJS project which throws the heap out of memory error every time I try
npm run build
or
sudo npm run start:dev
I tried increasing "max_old_space_size" and upgrading the physical RAM to 8GB. No luck so far. If I build it on a different system and run it on mine, it works. It's the build stage that is taking too much time. My project connects to remote MongoDB databases through a tunnel.
Here is the diagnostic output from the error (a sketch of the usual heap workaround follows it):
Any kind of help would be appreciated.
Files: 1196
Lines: 673815
Nodes: 3669054
Identifiers: 750618
Symbols: 827790
Types: 76
Memory used: 1047699K
Assignability cache size: 0
Identity cache size: 0
Subtype cache size: 0
Strict subtype cache size: 0
I/O Read time: 0.37s
Parse time: 3.32s
Program time: 4.62s
Bind time: 1.91s
Total time: 6.53s
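For what it's worth, a common workaround is to raise V8's old-space ceiling for the build; this is a sketch, and the 4096 MB value is an assumption to tune against your machine's RAM:
# Raise the V8 heap limit for every node process spawned by the npm script
export NODE_OPTIONS=--max-old-space-size=4096
npm run build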

GraphDB OutOfMemoryError: Java heap space

I'm using GraphDB Free 8.6.1 in a research project, and I'm running it with the default configuration on a Linux server with 4GB of memory in total.
Currently, we execute quite a lot of CRUD operations on the triplestore.
GraphDB threw an exception in the console:
java.lang.OutOfMemoryError: Java heap space
-XX:OnOutOfMemoryError="kill -9 %p"
Executing /bin/sh -c "kill -9 1411"...
Looking into the process, GraphDB runs with the parameter -XX:MaxDirectMemorySize=128G.
I was not able to change it; even with ./graphdb -Xmx3g, the process still runs with -XX:MaxDirectMemorySize=128G.
I've tried configuring the ./graphdb parameters by setting GDB_HEAP_SIZE=3072m; the process now runs with the additional -Xms3072m -Xmx3072m parameters, but -XX:MaxDirectMemorySize=128G remains.
After the update to GDB_HEAP_SIZE=3072m, the repository went down again with no .hprof file, no exception, and nothing suspicious in the logs. Only the following line was flushed to the console:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f5b4b6d0000, 65536, 1) failed; error='Cannot allocate memory' (errno=12)
Please, can you help me configure the GraphDB triplestore properly to get rid of the heap space exceptions?
Thank you.
By default, the value of the JVM's -XX:MaxDirectMemorySize (off-heap memory) parameter is equal to -Xmx (on-heap memory). For very large repositories the off-heap memory may become insufficient, so the GraphDB developers set this parameter to 128GB, effectively unlimited.
I suspect your actual issue is allocating too much on-heap memory, which leaves no room in RAM for the off-heap allocations. When the database tries to allocate off-heap RAM, you hit this low-level OS error: 'Cannot allocate memory'.
You have two options in solving this problem:
Increase the RAM of the server to 8GB and keep the same configuration; this would allow the 8GB of RAM to be distributed as 2GB (OS) + 3GB (on heap) + 3GB (off heap)
Decrease the -Xmx value to 2GB so the 4GB of RAM will be distributed as 1GB (OS) + 2GB (on heap) + 1GB (off heap)
To get a good approximation of how much RAM GraphDB needs, please check the hardware sizing page:
http://graphdb.ontotext.com/documentation/8.6/free/requirements.html
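As a minimal sketch of the second option on the 4GB server (GDB_HEAP_SIZE comes from the question; GDB_JAVA_OPTS as a pass-through for extra JVM flags is an assumption, so verify against your distribution's bin/graphdb script):
# Cap the on-heap allocation at 2GB
export GDB_HEAP_SIZE=2048m
# Cap off-heap (direct) memory explicitly instead of the 128G default (assumed pass-through variable)
export GDB_JAVA_OPTS="-XX:MaxDirectMemorySize=1g"
./graphdb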

apache rocketMQ broker doesn't start

I tried to start the rocketMQ broker, but I got this error message:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 8589934592 bytes for committing reserved memory.
An error report file with more information is saved as:
/usr/local/soft/rocketMQ/incubator-rocketmq/distribution/target/apache-rocketmq/hs_err_pid6034.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000005c0000000, 8589934592, 0) failed; error='Cannot allocate memory' (errno=12)
and the error log file contains the following about memory:
Memory: 4k page, physical 4089840k(551832k free), swap 2621432k(2621432k free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (25.144-b01) for linux-amd64 JRE (1.8.0_144-b01), built on Jul 21 2017 21:57:33 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
How can I get the rocketMQ broker working?
You can reduce the JVM heap size.
Open the distribution/bin/runbroker.sh file of your project and change the following line
JAVA_OPT="${JAVA_OPT} -server -Xms8g -Xmx8g -Xmn4g"
to
JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g"
Now the broker will only allocate a 4GB heap. I hope it solves your problem. Now you can try to build and run.
Try modifying the startup shell scripts to set a smaller JVM heap size in your dev/test environment.
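Before editing the script, it is worth sanity-checking the numbers: the error log above reports roughly 4GB of physical memory (physical 4089840k), so an 8GB -Xms can never be committed. A quick check, as a sketch:
# Show physical and swap memory in megabytes
free -m
# Inspect the current heap settings in the broker start script
grep JAVA_OPT distribution/bin/runbroker.sh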

SD card write error in rufus

I tried to write the Raspbian operating system to my SD card.
I put my SD card into a card reader and then tried to format it to FAT32.
I could not do that with Microsoft Windows, so I downloaded SD Formatter, and my SD card formatted successfully.
After that, I tried to write the Raspbian dd image to my SD card, but I got an error; the logs show:
Rufus version: 2.2.668
Windows version: Windows 10 64-bit
Syslinux versions: 4.07/2013-07-25, 6.03/2014-10-06
Grub versions: 0.4.6a, 2.02~beta2
Locale ID: 0x0409
Found USB 2.0 device 'Generic- Multi-Card USB Device' (0BDA:0158)
1 device found
Disk type: Removable, Sector Size: 512 bytes
Cylinders: 979, TracksPerCylinder: 255, SectorsPerTrack: 63
Partition type: MBR, NB Partitions: 1
Disk ID: 0x00000000
Drive has a Zeroed Master Boot Record
Partition 1:
Type: FAT32 (0x0b)
Size: 7.5 GB (8048869376 bytes)
Start Sector: 8192, Boot: No, Recognized: Yes
Scanning image...
'G:\2017-01-11-raspbian-jessie.img' doesn't look like an ISO image
Image has a Zeroed Master Boot Record
'G:\2017-01-11-raspbian-jessie.img' is a bootable disk image
Using image: 2017-01-11-raspbian-jessie.img
Format operation started
Requesting disk access...
Opened drive \\.\PHYSICALDRIVE1 for write access
Will use 'F:' as volume mountpoint
I/O boundary checks disabled
Analyzing existing boot records...
Drive has a Zeroed Master Boot Record
Volume has an unknown Partition Boot Record
Deleting partitions...
Clearing MBR/PBR/GPT structures...
Erasing 128 sectors
Writing Image...
write error: [0x00000002] The system cannot find the file specified.
RETRYING...
write error: [0x00000002] The system cannot find the file specified.
RETRYING...
write error: [0x00000037] The specified network resource or device is no longer available.
0 devices found
Found USB 2.0 device 'Generic- Multi-Card USB Device' (0BDA:0158)
1 device found
No volume information for drive 0x81
Disk type: Removable, Sector Size: 512 bytes
Cylinders: 979, TracksPerCylinder: 255, SectorsPerTrack: 63
Partition type: MBR, NB Partitions: 1
Disk ID: 0x00000001
Drive does not have an x86 Master Boot Record
Partition 1:
Type: Small FAT16 (0x04)
Size: 7.5 GB (8053063680 bytes)
Start Sector: 0, Boot: No, Recognized: Yes
Found USB 2.0 device 'Generic- Multi-Card USB Device' (0BDA:0158)
1 device found
No volume information for drive 0x81
Disk type: Removable, Sector Size: 512 bytes
Cylinders: 979, TracksPerCylinder: 255, SectorsPerTrack: 63
Partition type: MBR, NB Partitions: 1
Disk ID: 0x00000001
Drive does not have an x86 Master Boot Record
Partition 1:
Type: Small FAT16 (0x04)
Size: 7.5 GB (8053063680 bytes)
Start Sector: 0, Boot: No, Recognized: Yes
Found USB 2.0 device 'Generic- Multi-Card USB Device' (0BDA:0158)
1 device found
Disk type: Removable, Sector Size: 512 bytes
Cylinders: 979, TracksPerCylinder: 255, SectorsPerTrack: 63
Partition type: MBR, NB Partitions: 2
Disk ID: 0x623FDBF4
Drive has a Zeroed Master Boot Record
Partition 1:
Type: FAT32 LBA (0x0c)
Size: 63 MB (66060288 bytes)
Start Sector: 8192, Boot: No, Recognized: Yes
Partition 2:
Type: GNU/Linux (0x83)
Size: 4 GB (4301258752 bytes)
Start Sector: 137216, Boot: No, Recognized: Yes
Found USB 2.0 device 'Generic- Multi-Card USB Device' (0BDA:0158)
1 device found
Disk type: Removable, Sector Size: 512 bytes
Cylinders: 979, TracksPerCylinder: 255, SectorsPerTrack: 63
Partition type: MBR, NB Partitions: 2
Disk ID: 0x623FDBF4
Drive has a Zeroed Master Boot Record
Partition 1:
Type: FAT32 LBA (0x0c)
Size: 63 MB (66060288 bytes)
Start Sector: 8192, Boot: No, Recognized: Yes
Partition 2:
Type: GNU/Linux (0x83)
Size: 4 GB (4301258752 bytes)
Start Sector: 137216, Boot: No, Recognized: Yes
Raspberry Pi has its own method of writing to the SD card. I would try that method to reduce the chance of write errors: https://www.raspberrypi.org/documentation/installation/installing-images/windows.md
It uses a specific program to write the image, which needs to be run as administrator.
Using the official tool "Raspberry Pi Imager" worked fine for me.
Also, be sure not to use a USB 3.0 card reader (a 2.0 reader is recommended).

GC overhead limit exceeded is showing after adding Google Play Services library

If I add the Google Play Services library to my project, then "GC overhead limit exceeded" is shown!
This is the eclipse.ini content:
-startup
plugins/org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20140603-1326
-product
org.eclipse.epp.package.java.product
--launcher.defaultAction
openFile
--launcher.XXMaxPermSize
1024M
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
1024m
--launcher.defaultAction
openFile
--launcher.appendVmargs
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms512m
-Xmx1024m
I was also facing the same problem. Then I set the values for -Xms and -Xmx to 2048. However, high CPU usage was observed for the initial few seconds, but it worked.
-Xms2048m
-Xmx2048m