How to free space from the local minishift environment?

I've followed the FAQ to free space from the local minishift syndesis environment. However, minishift status still shows disk usage above 80%. Any further hints?

I managed to free up more space by connecting the local docker client to the minishift Docker daemon and checking the disk consumption:
squake:/tmp $ docker system df
TYPE           TOTAL  ACTIVE  SIZE     RECLAIMABLE
Images         24     22      7.92GB   2.295GB (28%)
Containers     159    54      1.106GB  1.105GB (99%)
Local Volumes  0      0       0B       0B
Build Cache    0      0       0B       0B
Since nearly all of the container space was reclaimable, I pruned all stopped containers:
docker container prune
This reduced disk usage from 85% to less than 50%.
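For reference, a minimal sketch of the whole sequence, assuming minishift's docker-env subcommand (which prints the environment variables that point the local docker client at the Docker daemon inside the minishift VM):

# Point the local docker client at the minishift VM's Docker daemon
eval $(minishift docker-env)
# Inspect disk consumption inside the VM, then remove stopped containers
docker system df
docker container prune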

You can also use the syndesis command-line tool (https://doc.syndesis.io/#syndesis-dev) and issue
syndesis dev --cleanup

Related

Interface for Google block storage persistent disks

I went through the Google documentation to find the interface for persistent disks but could not find it. For local disks it is SCSI/NVMe.
What is the interface for persistent disks: FC, iSCSI, or NVMe?
In the case of persistent disks (which are used by default by VM instances), it's a SCSI interface.
For confirmation I ran the hwinfo command:
wb@instance-1:~$ sudo hwinfo --disk
13: SCSI 01.0: 10600 Disk
...
Driver: "virtio_scsi", "sd"
Driver Modules: "virtio_scsi", "sd_mod"
Device File: /dev/sda (/dev/sg0)
Device Files: /dev/sda, /dev/disk/by-id/google-instance-1, /dev/disk/by-path/pci-0000:00:03.0-scsi-0:0:1:0, /dev/disk/by-id/scsi-0Google_PersistentDisk_instance-1
...
You can see the virtio SCSI driver (Driver Modules: "virtio_scsi", "sd_mod"), which clearly indicates it's a SCSI interface.
Another hint:
wb@instance-1:~$ sudo lshw | grep scsi
logical name: scsi0
configuration: driver=virtio_scsi
bus info: scsi#0:0.1.0
bus info: scsi#0:0.1.0,1
bus info: scsi#0:0.1.0,14
bus info: scsi#0:0.1.0,15
You can find more confirmation in the documentation regarding requirements for building your own images.
However, when you create an instance with a Local SSD drive you have the option to select the interface type: SCSI or NVMe.
Or when using gcloud:
gcloud compute instances create example-instance \
    --machine-type n2-standard-8 \
    --local-ssd interface=[INTERFACE_TYPE] \
    --local-ssd interface=[INTERFACE_TYPE] \
    --image-project [IMAGE_PROJECT] \
    --image-family [IMAGE_FAMILY]
More documentation on selecting the local SSD interface is available in the Google Cloud documentation.
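For illustration, a concrete invocation might look like this, with hypothetical values filled in (NVMe interface, a public Debian image):

# Example values only; substitute your own image project and family
gcloud compute instances create example-instance \
    --machine-type n2-standard-8 \
    --local-ssd interface=NVME \
    --image-project debian-cloud \
    --image-family debian-11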
When you create a VM with a local SSD and run lsblk, you get:
wb@instance-3:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda       8:0    0   10G  0 disk
├─sda1    8:1    0  9.9G  0 part /
├─sda14   8:14   0    3M  0 part
└─sda15   8:15   0  124M  0 part /boot/efi
nvme0n1 259:0    0  375G  0 disk
That is a clear indication of which interface is used: the boot persistent disk shows up as sda (SCSI), while the local SSD shows up as nvme0n1 (NVMe).
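Another quick check, assuming the standard udev symlinks are present, is to list /dev/disk/by-id/: persistent disks show up with a scsi-0Google_PersistentDisk_ prefix (as seen in the hwinfo output above), while NVMe local SSDs typically get google-local-nvme-ssd-* style names:

ls -l /dev/disk/by-id/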

Changing disk size of a VM before installation using qemu/KVM

VM Disk Information
I wanted to change the disk size to 20G before VM installation. The image already has a virtual size of 50G.
First, you should consider trying to compact the image by converting it (this rewrites only the allocated data):
sudo qemu-img convert -O qcow2 vmf5.img vmf5-compact.qcow2
If you really need to change the virtual size itself, you can resize the image. Note that shrinking below the current virtual size is destructive unless the guest filesystem has been shrunk first, and newer versions of qemu-img require an explicit --shrink flag for it:
sudo qemu-img resize --shrink /var/lib/libvirt/images/vmf5.img 20G
There is documentation about qemu-img covering both commands.
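Either way, it can help to verify the image's sizes before and after; qemu-img info reports both the virtual size and the actual on-disk size:

sudo qemu-img info /var/lib/libvirt/images/vmf5.img
sudo qemu-img info vmf5-compact.qcow2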

Opengrok OutOfMemoryError when re-indexing

I have checked out 19 projects from SVN, all in the source code directory.
I run indexing from Jenkins with the following command:
C:\Jenkins\workspace\Grok-Multiple-Projects-Checkout-And-Indexing>java -Xmx12288m -Xms2048m -jar C:\grok_0.12.1\opengrok-0.12.1.5\lib\opengrok.jar -W C:\grok_0.12.1\Data\configuration.xml -c C:\grok_0.12.1\ctags58\ctags.exe -P -S -v -s C:\grok_0.12.1\src -d C:\grok_0.12.1\Data -i *.zip -i *.tmp -i *.db -i *.jar -i d:.svn -G -L polished -a on -T 8
The first time I ran indexing with the above command, there were no errors!
However, consecutive runs produce a
java.lang.OutOfMemoryError: Java heap space
It runs fine until a point in the logs where it hangs for approximately 30 minutes, and then memory consumption increases until it eats up all of the allocated 12GB of RAM.
Here is the log:
09:38:40 Nov 01, 2016 9:38:45 AM org.opensolaris.opengrok.index.IndexDatabase$1 run
09:38:40 SEVERE: Problem updating lucene index database:
09:38:40 java.lang.OutOfMemoryError: Java heap space
09:38:40
09:38:41 Nov 01, 2016 9:38:45 AM org.opensolaris.opengrok.util.Statistics report
09:38:41 INFO: Done indexing data of all repositories (took 0:37:20)
09:38:41 Nov 01, 2016 9:38:45 AM org.opensolaris.opengrok.util.Statistics report
09:38:41 INFO: Total time: 0:37:21
09:38:41 Nov 01, 2016 9:38:45 AM org.opensolaris.opengrok.util.Statistics report
09:38:41 INFO: Final Memory: 19M/11,332M
Any ideas as to why it needs so much memory, and whether increasing the heap will solve the OOM error? Could it be a memory leak in OpenGrok?
I know this is an old question about an old OpenGrok version (I can tell because OpenGrok dropped the org.opensolaris class prefix in 2018); however, I think it still deserves an answer.
Assuming the indexer was indeed performing an incremental reindex, there has to be something that is consuming the heap excessively. For instance, it could be caused by merging the pre-existing history with the newly added (incremental) history.
Running the indexer with the -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/jvm/ Java options will create a heap dump that can then be analyzed with tools such as MAT (Eclipse) or YourKit.
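Applied to the Windows command from the question, the invocation would look something like this (C:\grok_0.12.1\dumps is a hypothetical directory; make sure it exists and has enough free space, since a dump can be as large as the heap):

java -Xmx12288m -Xms2048m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\grok_0.12.1\dumps -jar C:\grok_0.12.1\opengrok-0.12.1.5\lib\opengrok.jar ...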
As for memory leaks: it is not that memory leaks are impossible in Java (e.g. via thread-local storage); it is just quite improbable for one to manifest in the indexer.

gluster geo-replication xsync

I need help with a solution to this problem. I set up geo-replication to synchronize files (about 5m).
After starting, synchronization proceeds, but after copying about 80K files it runs out of space on tmpfs (/run).
Is this normal for geo-replication or not? Perhaps I did something wrong?
dpkg -l | grep glust
ii glusterfs-client 3.5.3-1 amd64 clustered file-system (client package)
ii glusterfs-common 3.5.3-1 amd64 GlusterFS common libraries and translator modules
ii glusterfs-server 3.5.3-1 amd64 clustered file-system (server package)
gluster volume geo-replication stgrec01 172.16.10.3::stgrec01_slave status
MASTER NODE   MASTER VOL  MASTER BRICK          SLAVE                        STATUS   CHECKPOINT STATUS  CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------
msk-m9-stg28  stgrec01    /xfs1tb/recordings01  172.16.10.3::stgrec01_slave  faulty   N/A                N/A
msk-m9-stg29  stgrec01    /xfs1tb/recordings01  172.16.10.3::stgrec01_slave  Passive  N/A                N/A
df -H
rootfs 50G 2,2G 46G 5% /
udev 11M 0 11M 0% /dev
tmpfs 420M 420M 0 100% /run
ls xsync | wc -l
84956
As you posted, df -H shows tmpfs (/run) 100% used. Try to clear it before proceeding.
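A minimal sketch for finding what is actually filling tmpfs (assuming it is the geo-replication xsync changelogs accumulating under /run):

sudo du -sh /run/* 2>/dev/null | sort -h | tail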

mount lacie 2T on debian server

I have a server (running Debian), and to increase the disk capacity I wanted to plug in a LaCie disk (2T). Obviously Debian didn't mount it automatically (like Ubuntu does), which is not funny :). When I run df -h I get:
Filesystem Size Used Avail Use% Mounted on
rootfs 455G 7,6G 424G 2% /
udev 10M 0 10M 0% /dev
tmpfs 200M 776K 200M 1% /run
/dev/disk/by-uuid/2ae485ac-17db-4e86-8a47-2aca5aa6de28 455G 7,6G 424G 2% /
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 1,2G 72K 1,2G 1% /run/shm
As you can see, there isn't any 2T (or 1.xT) entry, so the disk isn't mounted.
I looked at almost the same problems on Google to see what others did to fix this, and I figured out that I had to run cat /etc/fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=2ae485ac-17db-4e86-8a47-2aca5aa6de28 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=c1759574-4b7c-428b-8427-22d3a420c9e4 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
And my LaCie does not show up in this file either.
How can I mount my USB disk in this case?
df -h only shows mounted devices.
Which filesystem type are you using? If NTFS, you first need to install ntfs-3g (which Ubuntu provides in the basic install, but Debian does not).
You can try to mount your device with the command mount -t <fs-type> /dev/<sdxx> </mount/point>
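A minimal sketch of the whole procedure, assuming the disk shows up as /dev/sdb1 and is formatted as NTFS (both are hypothetical; check with blkid first):

# Identify the device node and filesystem type of the USB disk
sudo blkid
# Install the NTFS driver if needed (Debian does not ship it by default)
sudo apt-get install ntfs-3g
# Create a mount point and mount the disk (names are examples)
sudo mkdir -p /mnt/lacie
sudo mount -t ntfs-3g /dev/sdb1 /mnt/lacie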