gluster geo-replication xsync

I need help with the following problem. I set up geo-replication to synchronize files (about 5M).
After starting, synchronization begins, but after copying about 80K files the tmpfs (/run) runs out of space.
Is this normal for geo-replication, or did I do something wrong?
dpkg -l | grep glust
ii glusterfs-client 3.5.3-1 amd64 clustered file-system (client package)
ii glusterfs-common 3.5.3-1 amd64 GlusterFS common libraries and translator modules
ii glusterfs-server 3.5.3-1 amd64 clustered file-system (server package)
gluster volume geo-replication stgrec01 172.16.10.3::stgrec01_slave status
MASTER NODE     MASTER VOL    MASTER BRICK            SLAVE                          STATUS     CHECKPOINT STATUS    CRAWL STATUS
----------------------------------------------------------------------------------------------------------------------------------
msk-m9-stg28    stgrec01      /xfs1tb/recordings01    172.16.10.3::stgrec01_slave    faulty     N/A                  N/A
msk-m9-stg29    stgrec01      /xfs1tb/recordings01    172.16.10.3::stgrec01_slave    Passive    N/A                  N/A
df -H
rootfs 50G 2,2G 46G 5% /
udev 11M 0 11M 0% /dev
tmpfs 420M 420M 0 100% /run
ls xsync | wc -l
84956

As your df -H output shows, the tmpfs mounted on /run is 100% full. Try to clear it before proceeding.
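If clearing it by hand, a rough sketch is shown below; the xsync path is only an example (use the directory you listed above with ls xsync), and the working_dir geo-replication config option may not be available in every GlusterFS version:
# See what is actually filling /run
du -sh /run/* 2>/dev/null | sort -h | tail
# Stop the faulty session before touching its working files
gluster volume geo-replication stgrec01 172.16.10.3::stgrec01_slave stop
# Remove the accumulated xsync changelogs (path is a placeholder)
rm -f /run/gluster/<session-working-dir>/xsync/XSYNC-CHANGELOG.*
# If your version supports it, move the working directory off tmpfs, then restart the session
gluster volume geo-replication stgrec01 172.16.10.3::stgrec01_slave config working_dir /var/lib/misc/gluster
gluster volume geo-replication stgrec01 172.16.10.3::stgrec01_slave start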

Related

Which tmpfs partition is used in Singularity and how can I increase it

I'm wondering how I can increase the tmpfs size in a Singularity SIF image and how it relates to the tmpfs on the host system. According to a post by the Pawsey Centre:
Singularity has a flag for rendering containers from SIF image files ephemerally writable. --writable-tmpfs will allocate a small amount of RAM for this purpose (configured by the sys admins, by default just a bunch of MB)
On my host system I have the following tmpfs:
$ df -h | grep tmpfs
tmpfs 13G 2,9M 13G 1% /run
tmpfs 63G 84M 63G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
While inside container I have:
overlay 16M 12K 16M 1% /
...
tmpfs 63G 84M 63G 1% /dev/shm
tmpfs 16M 12K 16M 1% /.singularity.d/libs
tmpfs 13G 2.9M 13G 1% /run/nvidia-persistenced/socket
I can only write small files in my container (a couple of KB; otherwise a "no space" error is thrown). Which tmpfs does Singularity use, and why? How can I increase it?
The size of the tmpfs partition is set by the admin using the sessiondir max size config entry. It defaults to 16MB, but you can check it via sudo singularity config global --get "sessiondir max size" or check the config file directly if you don't have sufficient permissions: grep sessiondir /usr/local/etc/singularity/singularity.conf.
You can change the config value (or ask your admins to do it) to increase it to the desired size. If that's not an option, you'll need to make sure the host filesystem is mounted at locations where the data is being written. This is likely a good idea anyway if you'll be writing a lot of data. Docker also does this behind the scenes when using volumes.
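For example, binding a roomy host directory into the container avoids the small tmpfs overlay entirely; the image name and the paths below are placeholders:
# Bind a large host directory into the container and write results there
# (my_image.sif, /scratch/$USER/output and /output are examples)
singularity exec --bind /scratch/$USER/output:/output my_image.sif \
    sh -c 'echo "large results go here" > /output/result.txt'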

Interface for Google block storage persistent disks

I went through the Google documentation to find the interface for persistent disks but could not find it. For local disks it is SCSI/NVMe.
What is the interface for persistent disks? FC, iSCSI, or NVMe?
In the case of persistent disks (which are used by default by VM instances), it's a SCSI interface.
For confirmation I ran the hwinfo command:
wb@instance-1:~$ sudo hwinfo --disk
13: SCSI 01.0: 10600 Disk
...
Driver: "virtio_scsi", "sd"
Driver Modules: "virtio_scsi", "sd_mod"
Device File: /dev/sda (/dev/sg0)
Device Files: /dev/sda, /dev/disk/by-id/google-instance-1, /dev/disk/by-path/pci-0000:00:03.0-scsi-0:0:1:0, /dev/disk/by-id/scsi-0Google_PersistentDisk_instance-1
...
You can see the virtio SCSI driver modules ("virtio_scsi", "sd_mod"), which clearly indicate it's a SCSI interface.
Another hint:
wb@instance-1:~$ sudo lshw | grep scsi
logical name: scsi0
configuration: driver=virtio_scsi
bus info: scsi#0:0.1.0
bus info: scsi#0:0.1.0,1
bus info: scsi#0:0.1.0,14
bus info: scsi#0:0.1.0,15
You can find more confirmation in the documentation regarding the requirements for building your own images.
However, when you create an instance with a Local SSD drive, you have the option to select the interface type: SCSI or NVMe.
Or when using gcloud:
gcloud compute instances create example-instance \
--machine-type n2-standard-8 \
--local-ssd interface=[INTERFACE_TYPE] \
--local-ssd interface=[INTERFACE_TYPE] \
--image-project [IMAGE_PROJECT] \
--image-family [IMAGE_FAMILY]
More documentation on selecting the local SSD's interface here.
When you create a VM with local SSD and run lsblk you get:
wb@instance-3:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
├─sda1 8:1 0 9.9G 0 part /
├─sda14 8:14 0 3M 0 part
└─sda15 8:15 0 124M 0 part /boot/efi
nvme0n1 259:0 0 375G 0 disk
That's a clear indication of which interface is used: the persistent boot disk appears as sda (SCSI), while the local SSD appears as nvme0n1 (NVMe).
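If you want to check this from inside a running VM for any attached disk, the by-id symlinks that GCE creates are another quick hint (a sketch; the exact names vary with the image and the disk names):
# Persistent disks appear as scsi-0Google_PersistentDisk_<name> / google-<name>,
# local NVMe SSDs appear with "nvme" in the device name
ls -l /dev/disk/by-id/ | grep -i google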

change a lib directory to another location on centos 7

I'm using CentOS 7, and I'm a newbie.
I have installed the arangodb rpm, creating the repo at /yum.repos.d/ as root.
arangodb3 is installed in /var/lib/arangodb3; the partition that location lives on is full, while another partition mounted at /home still has space left.
How can I move it to the directory with free space?
Running df -h I get:
[root@cloudera-manager log]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 50G 20K 100% /
devtmpfs 7,8G 0 7,8G 0% /dev
tmpfs 7,8G 0 7,8G 0% /dev/shm
tmpfs 7,8G 33M 7,8G 1% /run
tmpfs 7,8G 0 7,8G 0% /sys/fs/cgroup
/dev/sda1 497M 218M 280M 44% /boot
/dev/mapper/centos-home 442G 14G 429G 3% /home
tmpfs 1,6G 0 1,6G 0% /run/user/0
tmpfs 1,6G 0 1,6G 0% /run/user/994
cm_processes 7,8G 0 7,8G 0% /run/cloudera-scm-agent/process
[root@cloudera-manager log]#
I want to move it to another location, /home.
The arangodb rpm installs under /etc, /usr/bin, /usr/share, /var/lib, /var/log, and /var/run. Based on your df output, all of these map to your root partition, so it would be difficult to relocate this package elsewhere (see https://unix.stackexchange.com/questions/323532/yum-install-package-name-to-different-directory).
A better idea might be to measure your disk usage and relocate your biggest users of disk space to /home. For example, /var/log which has logfiles usually takes up a lot of space.
Two commands that will help are du and find.
du -s will show the largest directories.
# du -s /*/* | sort -n
The find command will show files larger than 10MB.
# find / -size +10M
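If it really is the ArangoDB data that has to move, a common approach is to relocate the data directory and leave a symlink behind. A rough sketch, assuming the service is called arangodb3 and the data lives in /var/lib/arangodb3:
systemctl stop arangodb3                        # stop the database first
mkdir -p /home/arangodb3
rsync -a /var/lib/arangodb3/ /home/arangodb3/   # copy the data to /home
mv /var/lib/arangodb3 /var/lib/arangodb3.old    # keep the original until verified
ln -s /home/arangodb3 /var/lib/arangodb3        # old path now points to /home
systemctl start arangodb3
# once everything works: rm -rf /var/lib/arangodb3.old
On CentOS 7, SELinux contexts on the new location may also need adjusting (for example with semanage fcontext and restorecon) before the service will start.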

OpenStack's virtual nodes permanently in paused state

I recently deployed Red Hat OpenStack 10 with Jenkins. I've found that my running nodes became paused after a while.
virsh list stdout:
Id   Name            State
-----------------------------------
1    undercloud-0    paused
2    compute-0       paused
3    controller-0    paused
I tried to start or reboot the VMs, but it didn't help; the machines are still in the paused state. Are there any obvious things I might have missed?
I found that free space runs out after OpenStack has been running for some time.
The RHEL machines had a small / partition and quite a big /home partition. I found the VM images stored in /var and simply moved them to /home.
The steps are:
Stop all running VMs
# for i in $(virsh list --name); do virsh destroy $i; done
Create a new directory and move the images there
# mkdir /home/_images
# mv /var/lib/libvirt/images/* /home/_images
Remove the old directory with images and create a symlink to the new directory.
# rmdir /var/lib/libvirt/images
# ln -s /home/_images /var/lib/libvirt/images
Start the VMs again (or reboot the host); an ideal order is 1. undercloud-0, 2. controller-0, 3. compute-x nodes. Since the domains were destroyed (shut off) above, they need to be started rather than rebooted:
# for i in $(virsh list --all --name); do virsh start $i; done
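A quick way to confirm that the pause was really caused by the storage filling up (QEMU typically pauses a guest with an I/O error when its disk image can no longer grow):
# "paused (I/O error)" is the usual reason when the backing store is full
virsh domstate undercloud-0 --reason
# free space on the filesystem holding the images
df -h /var/lib/libvirt/images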

mount lacie 2T on debian server

I have a server running Debian, and to increase disk capacity I wanted to plug in a LaCie disk (2 TB). Debian didn't mount it automatically (like Ubuntu does), which is not funny :). When I run df -h I get:
Filesystem Size Used Avail Use% Mounted on
rootfs 455G 7,6G 424G 2% /
udev 10M 0 10M 0% /dev
tmpfs 200M 776K 200M 1% /run
/dev/disk/by-uuid/2ae485ac-17db-4e86-8a47-2aca5aa6de28 455G 7,6G 424G 2% /
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 1,2G 72K 1,2G 1% /run/shm
As you can see, there isn't any 2T or 1.xT entry, so the disk isn't mounted.
I looked at similar problems on Google to see what others did to fix this, and figured out that I should check /etc/fstab (cat /etc/fstab):
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=2ae485ac-17db-4e86-8a47-2aca5aa6de28 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=c1759574-4b7c-428b-8427-22d3a420c9e4 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
And my LaCie does not show up in this file either.
How can I mount my USB disk in this case?
df -h only shows mounted devices.
Which filesystem type are you using? If it's NTFS, you first need to install ntfs-3g (it is included in a basic Ubuntu install but not in Debian).
You can then try to mount the device with: mount -t <fs-type> /dev/<sdxx> </mount/point>
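A minimal end-to-end sketch (the device name, filesystem type and mount point are examples; check the lsblk/blkid output for your actual values):
# Identify the newly plugged-in disk and its filesystem type
lsblk -f          # or: blkid
# If the partition is NTFS, install NTFS support first
apt-get install ntfs-3g
# Create a mount point and mount the partition (example device: /dev/sdb1)
mkdir -p /mnt/lacie
mount -t ntfs-3g /dev/sdb1 /mnt/lacie
# Optional: to mount it at every boot, add a line like this to /etc/fstab,
# using the UUID reported by blkid
# UUID=<uuid-from-blkid>  /mnt/lacie  ntfs-3g  defaults  0  0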