I have a server running Debian, and to increase its disk capacity I wanted to plug in a LaCie disk (2T). Obviously Debian didn't mount it automatically (like Ubuntu does), which is not funny :). When I run df -h I get:
Filesystem Size Used Avail Use% Mounted on
rootfs 455G 7,6G 424G 2% /
udev 10M 0 10M 0% /dev
tmpfs 200M 776K 200M 1% /run
/dev/disk/by-uuid/2ae485ac-17db-4e86-8a47-2aca5aa6de28 455G 7,6G 424G 2% /
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 1,2G 72K 1,2G 1% /run/shm
As you can see, there is no 2T (or 1,xT) entry, so the disk isn't mounted.
I looked at similar problems on Google to see what others did to fix this, and figured out that I had to run cat /etc/fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=2ae485ac-17db-4e86-8a47-2aca5aa6de28 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=c1759574-4b7c-428b-8427-22d3a420c9e4 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
And my LaCie does not show up in this file either.
How can I mount my USB disk in this case?
df -h only shows mounted devices
Which filesystem type are you using? If NTFS, you first need to install ntfs-3g (which Ubuntu provides in its basic install but Debian does not).
You can try to mount your device with the command mount -t <fs-type> /dev/<sdxx> </mount/point>
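For example, a minimal sketch, assuming the LaCie shows up as /dev/sdb1 (check the actual device name first; it may differ on your system):

# lsblk                                   # list block devices; the 2T disk should appear here
# blkid /dev/sdb1                         # print the filesystem type and UUID
# mkdir -p /mnt/lacie                     # create a mount point
# mount -t ntfs-3g /dev/sdb1 /mnt/lacie   # for NTFS; use ext4, vfat, ... to match the blkid output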
Related
I'm wondering how I can increase the tmpfs size in a Singularity SIF image and how it relates to the tmpfs on the host system. According to a post by the Pawsey Centre:
Singularity has a flag for rendering containers from SIF image files ephemerally writable. --writable-tmpfs will allocate a small amount of RAM for this purpose (configured by the sys admins, by default just a bunch of MB)
On my host system I have the following tmpfs:
$ df -h | grep tmpfs
tmpfs 13G 2,9M 13G 1% /run
tmpfs 63G 84M 63G 1% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
While inside container I have:
overlay 16M 12K 16M 1% /
...
tmpfs 63G 84M 63G 1% /dev/shm
tmpfs 16M 12K 16M 1% /.singularity.d/libs
tmpfs 13G 2.9M 13G 1% /run/nvidia-persistenced/socket
I can only write small files in my container (a couple of KB; otherwise a "no space" error is thrown). Which tmpfs does Singularity use, and why? How can I increase it?
The size of the tmpfs partition is set by the admin using the sessiondir max size config entry. It defaults to 16MB, but you can check it via sudo singularity config global --get "sessiondir max size" or check the config file directly if you don't have sufficient permissions: grep sessiondir /usr/local/etc/singularity/singularity.conf.
You can change the config value (or ask your admins to do it) to increase it to the desired size. If that's not an option, you'll need to bind-mount host filesystem locations wherever the data is being written. This is likely a good idea anyway if you'll be writing a lot of data; Docker also does this behind the scenes when using volumes.
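A minimal sketch of the bind-mount approach, assuming a writable host directory /scratch/$USER and an image named my_image.sif (both names are placeholders):

$ singularity exec --writable-tmpfs --bind /scratch/$USER:/output my_image.sif \
      sh -c 'cp /tmp/results.dat /output/'   # writes to /output land on the host, not the small tmpfs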
I am trying to install TensorFlow after installing Anaconda in the home directory. It is giving a disk storage space error. I increased the disk storage to 30 GB but the problem remains the same. I am not able to change the storage allocation to a different drive. The df command shows the following:
Filesystem Size Used Avail Use% Mounted on
none 30G 23G 5.5G 81% /
tmpfs 853M 0 853M 0% /dev
tmpfs 853M 0 853M 0% /sys/fs/cgroup
/dev/sdb1 4.8G 4.4G 164M 97% /home
/dev/sda1 30G 23G 5.5G 81% /etc/ssh/keys
tmpfs 171M 636K 170M 1% /google/host/var/run
shm 64M 0 64M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /google/host/var/run/lock
tmpfs 341M 1.2M 340M 1% /google/host/var/run/shm
tmpfs 853M 0 853M 0% /run/google/devshell
I am not sure why the space below is occupied:
/dev/sda1 30G 23G 5.5G 81% /etc/ssh/keys
is using 23 GB.
It gives the storage space error during installation. Please suggest how to make the space free and available, as I have not installed any other application.
Ideally this should not happen; see if a restart works. By the way, how much space was used before?
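If the space is being eaten by the Anaconda install itself, one thing worth trying (a suggestion, not confirmed as the cause here) is clearing the package caches that conda and pip keep, which can grow to several GB:

$ du -sh ~/anaconda3      # check how much the Anaconda tree uses (path assumes a default install)
$ conda clean --all       # remove cached package tarballs, index caches and unused packages
$ pip cache purge         # clear pip's download cache (pip 20.1+)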
I'm using CentOS 7, and I'm a newbie.
I have installed the ArangoDB RPM, creating the repo at /yum.repos.d/ as root, and arangodb3 is installed in the /var/lib/arangodb3 location.
That directory is on the full partition, and I have another directory, /home, where there is space left.
How can I switch it to the free directory?
Running df -h I get:
[root@cloudera-manager log]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 50G 20K 100% /
devtmpfs 7,8G 0 7,8G 0% /dev
tmpfs 7,8G 0 7,8G 0% /dev/shm
tmpfs 7,8G 33M 7,8G 1% /run
tmpfs 7,8G 0 7,8G 0% /sys/fs/cgroup
/dev/sda1 497M 218M 280M 44% /boot
/dev/mapper/centos-home 442G 14G 429G 3% /home
tmpfs 1,6G 0 1,6G 0% /run/user/0
tmpfs 1,6G 0 1,6G 0% /run/user/994
cm_processes 7,8G 0 7,8G 0% /run/cloudera-scm-agent/process
[root@cloudera-manager log]#
I want to move it to another location, /home.
The ArangoDB RPM installs under /etc, /usr/bin, /usr/share, /var/lib, /var/log, and /var/run. Based on your df output, all of these map to your root partition, so it would be difficult to relocate the package elsewhere (see https://unix.stackexchange.com/questions/323532/yum-install-package-name-to-different-directory).
A better idea might be to measure your disk usage and relocate the biggest consumers of disk space to /home. For example, /var/log, which holds log files, usually takes up a lot of space.
Two commands that will help are du and find.
The du -s command will show the largest directories:
# du -s /*/* | sort -n
The find command will show files larger than 10 MB:
# find / -size +10M
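A slightly richer variant of both (a sketch; the -x/-xdev flags keep the scan on the root filesystem so mounts like /home are skipped):

# du -xh --max-depth=2 / | sort -h | tail -20         # 20 largest directories, human-readable
# find / -xdev -type f -size +10M -exec ls -lh {} +   # large files together with their sizes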
Recently I deployed Red Hat OpenStack 10 with Jenkins. I've found that my running nodes become paused after a while.
virsh list stdout:
Id    Name            State
----------------------------------
1 undercloud-0 paused
2 compute-0 paused
3 controller-0 paused
I tried to start or reboot the VMs, but it didn't help; the machines are still in the paused state. Is there anything obvious I might have missed?
I've found that the host runs out of free space after OpenStack runs for some time.
The RHEL machine had a smaller / partition and quite a big /home partition. I found the VM images stored in /var and simply moved them into /home.
The steps are:
Stop all running VMs (note that virsh destroy forcefully powers a VM off; it does not delete it):
# for i in $(virsh list --name); do virsh destroy $i; done
Create new directory and move images there
# mkdir /home/_images
# mv /var/lib/libvirt/images/* /home/_images
Remove the old directory with images and create a symlink to the new directory.
# rmdir /var/lib/libvirt/images
# ln -s /home/_images /var/lib/libvirt/images
Start the VMs again (or reboot the machine); an ideal order is 1. undercloud-0, 2. controller-0, 3. compute-x nodes. Note that after virsh destroy the domains are shut off, so plain virsh list --name prints nothing and virsh reboot will not work; use --all and virsh start instead:
# for i in $(virsh list --all --name); do virsh start $i; done
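One caveat, assuming SELinux is enforcing on these RHEL hosts (not stated in the question): libvirt may be denied access to images under the new path until it is relabelled, e.g.:

# semanage fcontext -a -t virt_image_t "/home/_images(/.*)?"
# restorecon -R -v /home/_images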
I need help with a solution to this problem. I set up geo-replication to synchronize files (about 5M of them).
After starting, synchronization occurs, but after copying about 80K files it runs out of space on the tmpfs (/run).
Is this normal for geo-replication or not?
Perhaps I did something wrong?
dpkg -l | grep glust
ii glusterfs-client 3.5.3-1 amd64 clustered file-system (client package)
ii glusterfs-common 3.5.3-1 amd64 GlusterFS common libraries and translator modules
ii glusterfs-server 3.5.3-1 amd64 clustered file-system (server package)
gluster volume geo-replication stgrec01 172.16.10.3::stgrec01_slave status
MASTER NODE MASTER VOL MASTER BRICK SLAVE STATUS CHECKPOINT STATUS CRAWL STATUS
--------------------------------------------------------------------------------------------------------------------------------------
msk-m9-stg28 stgrec01 /xfs1tb/recordings01 172.16.10.3::stgrec01_slave faulty N/A N/A
msk-m9-stg29 stgrec01 /xfs1tb/recordings01 172.16.10.3::stgrec01_slave Passive N/A N/A
df -H
rootfs 50G 2,2G 46G 5% /
udev 11M 0 11M 0% /dev
tmpfs 420M 420M 0 100% /run
ls xsync | wc -l
84956
As you posted, df -H shows the tmpfs mounted at /run 100% used. Try to clear it before proceeding.
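A minimal sketch for finding what is filling /run before deleting anything (inspect first; the geo-replication working files may be what you need to clear):

# du -xh /run | sort -h | tail -10   # biggest consumers inside the tmpfs
# df -i /run                         # a tmpfs can also run out of inodes, not just bytes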