I re-set up my BackupPC and am running into a problem:
I want to back up "/backup" on all hosts. I started with ONE host for test purposes.
Process:
BackupPC calls a shell script on the client.
That script creates some snapshots and mounts them under /backup/... (a rough sketch of these scripts follows this list).
Now BackupPC should run the backup.
Finally, BackupPC calls another shell script, which unmounts and removes the snapshots.
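Roughly, the pre- and post-dump scripts do the following (a simplified sketch; the volume group name vg0 and the exact flags are placeholders, not the real values):
#!/bin/bash
# DumpPreUserCmd script (sketch): create read-only LVM snapshots and mount them under /backup
set -e
for lv in zabbix filebeat teamspeak schnoddi sentry nginx; do
    lvcreate --snapshot --size 5G --name "snapshot-${lv}" "/dev/vg0/${lv}"
    mkdir -p "/backup/${lv}"
    mount -o ro "/dev/vg0/snapshot-${lv}" "/backup/${lv}"
done

#!/bin/bash
# DumpPostUserCmd script (sketch): unmount and remove the snapshots again
for lv in zabbix filebeat teamspeak schnoddi sentry nginx; do
    umount "/backup/${lv}"
    lvremove -f "/dev/vg0/snapshot-${lv}"
done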
The /backup share gets "backed up", but only the folders, not their contents.
I extended the first shell script to make sure the folders actually have content; here is the output:
2017-06-04 20:11:14 Created directory /data/backuppc/pc/v3.lipperts-web.de/refCnt
2017-06-04 20:11:15 Output from DumpPreUserCmd: Reducing COW size 5,00 GiB down to maximum usable size 256,00 MiB.
2017-06-04 20:11:15 Output from DumpPreUserCmd: Logical volume "snaptshot-zabbix" created
2017-06-04 20:11:15 Output from DumpPreUserCmd: Reducing COW size 5,00 GiB down to maximum usable size 256,00 MiB.
2017-06-04 20:11:15 Output from DumpPreUserCmd: Logical volume "snaptshot-filebeat" created
2017-06-04 20:11:15 Output from DumpPreUserCmd: Reducing COW size 5,00 GiB down to maximum usable size 1,01 GiB.
2017-06-04 20:11:15 Output from DumpPreUserCmd: Logical volume "snaptshot-teamspeak" created
2017-06-04 20:11:16 Output from DumpPreUserCmd: Logical volume "snaptshot-schnoddi" created
2017-06-04 20:11:16 Output from DumpPreUserCmd: Logical volume "snaptshot-sentry" created
2017-06-04 20:11:16 Output from DumpPreUserCmd: Reducing COW size 5,00 GiB down to maximum usable size 256,00 MiB.
2017-06-04 20:11:16 Output from DumpPreUserCmd: Logical volume "snaptshot-nginx" created
2017-06-04 20:11:16 Output from DumpPreUserCmd: insgesamt 13
2017-06-04 20:11:16 Output from DumpPreUserCmd: -rw-r--r-- 1 root root 329 Jun 3 17:43 docker-compose.yml
2017-06-04 20:11:16 Output from DumpPreUserCmd: drwx------ 2 root root 12288 Jun 3 17:43 lost+found
2017-06-04 20:11:16 full backup started for directory /backup
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-zabbix" successfully removed
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-filebeat" successfully removed
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-teamspeak" successfully removed
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-schnoddi" successfully removed
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-sentry" successfully removed
2017-06-04 20:11:18 Output from DumpPostUserCmd: Logical volume "snaptshot-nginx" successfully removed
2017-06-04 20:11:18 Got fatal error during xfer (No files dumped for share /backup)
2017-06-04 20:11:23 Backup aborted (No files dumped for share /backup)
You can see that a file "docker-compose.yml" is listed, yet the backup is empty:
https://i.imgur.com/u6hfIh3.png
What could be the problem here?
It worked after changing RsyncArgs from the new defaults back to the arguments I had (by default) in BackupPC 3:
http://imgur.com/a/rYcHL
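I suspect the relevant difference is --one-file-system, which the newer default RsyncArgs may include: with that flag, rsync does not cross into the separately mounted snapshots under /backup, which matches the empty-folder symptom. The v3-style argument list looks roughly like this (double-check it against your own v3 config):
--numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive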
As you can see in the command output below, I have assigned a total of 500 GB of disk space to my VM, but only 14.4 GB of actual space is available, and once that is used up completely I get an error that there is not enough space. How do I extend the space for /dev/mapper/centos-root?
I am using VMware ESXi, and the VM runs CentOS.
[root@localhost Apr]# fdisk -l
Disk /dev/sda: 536.9 GB, 536870912000 bytes, 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00064efd
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 33554431 15727616 8e Linux LVM
Disk /dev/mapper/centos-root: 14.4 GB, 14382268416 bytes, 28090368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 1719 MB, 1719664640 bytes, 3358720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Execute the steps below to increase your Linux disk after adding space in VMware:
Step 1 - Update the partition table:
fdisk /dev/sda
Press p to print the partition table and identify the number of existing partitions.
Press n to create a new primary partition.
Press p for primary.
Press 3 for the partition number, depending on the output of the partition table print.
Press Enter twice to accept the default first and last sectors.
Press t to change the partition's system ID.
Press 3 to select the newly created partition.
Type 8e to set the partition's hex code to Linux LVM.
Press w to write the changes to the partition table.
Step 2 - Restart the virtual machine.
Step 3 - Verify that the changes were saved:
fdisk -l
Step 4 - Convert the new partition into an LVM physical volume:
pvcreate /dev/sda3
Step 5 - Extend the volume group with the new physical volume (centos is the VG name here; if yours is different, use it in place of centos):
vgextend centos /dev/sda3
Step 6 - Extend the logical volume (500G is the amount of space to add; adjust it to the size you actually added):
lvextend -L+500G /dev/mapper/centos-root
Step 7 - Grow the ext filesystem online:
resize2fs /dev/mapper/centos-root
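One caveat: resize2fs only works for ext filesystems. On a stock CentOS 7 install the root logical volume is usually XFS; in that case grow it with xfs_growfs instead (check the filesystem type first):
df -T /
# if the Type column says xfs:
xfs_growfs /
# if it says ext4:
resize2fs /dev/mapper/centos-root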
Extend the disk without a reboot. First make the kernel rescan the relevant device:
echo 1 > /sys/block/sda/device/rescan
echo 1 > /sys/block/sdb/device/rescan
echo 1 > /sys/block/nvme0n1/device/rescan_controller
partprobe
Fix any warning reported by gdisk, then change the partition size with parted:
## parted can be run non-interactively from the command line, but this is dangerous
parted -s /dev/sdb "resizepart 2 -1" quit
parted -s /dev/sdb "resizepart 3 100%" quit
## or run "resizepart 3 100%" interactively inside parted
pvresize /dev/sda3
lvextend -l +100%FREE cs/root
xfs_growfs /dev/cs/root
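Putting these pieces together for the setup in the question (a sketch that assumes the extra space was added to /dev/sda, the LVM partition is /dev/sda3, the volume group is centos, and the root filesystem is XFS; adjust the names to your system):
echo 1 > /sys/block/sda/device/rescan        # let the kernel see the larger disk
parted -s /dev/sda "resizepart 3 100%" quit  # grow the LVM partition to the end of the disk
pvresize /dev/sda3                           # grow the LVM physical volume
lvextend -l +100%FREE /dev/centos/root       # hand all free space to the root LV
xfs_growfs /                                 # grow the XFS filesystem online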
I am training an image segmentation model in an Azure ML pipeline. During the testing step, I save the output of the model to the associated blob storage. Then I want to compute the IOU (Intersection over Union) between the calculated output and the ground truth. Both of these sets of images are on blob storage. However, the IOU calculation is extremely slow, and I think it is disk bound. In my IOU calculation code I am just loading the two images (other code is commented out), and it still takes close to 6 seconds per iteration, while training and testing were fast enough.
Is this behavior normal? How do I debug this step?
A few notes on the drives that an AzureML remote run has available:
Here is what I see when I run df on a remote run (in this one, I am using a blob Datastore via as_mount()):
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 103080160 11530364 86290588 12% /
tmpfs 65536 0 65536 0% /dev
tmpfs 3568556 0 3568556 0% /sys/fs/cgroup
/dev/sdb1 103080160 11530364 86290588 12% /etc/hosts
shm 2097152 0 2097152 0% /dev/shm
//danielscstorageezoh...-620830f140ab 5368709120 3702848 5365006272 1% /mnt/batch/tasks/.../workspacefilestore
blobfuse 103080160 11530364 86290588 12% /mnt/batch/tasks/.../workspaceblobstore
The interesting items are overlay, /dev/sdb1, //danielscstorageezoh...-620830f140ab and blobfuse:
overlay and /dev/sdb1 are both the mount of the local SSD on the machine (I am using a STANDARD_D2_V2 which has a 100GB SSD).
//danielscstorageezoh...-620830f140ab is the mount of the Azure File Share that contains the project files (your script, etc.). It is also the current working directory for your run.
blobfuse is the blob store that I had requested to mount in the Estimator as I executed the run.
I was curious about the performance differences between these 3 types of drives. My mini benchmark was to download and extract this file: http://download.tensorflow.org/example_images/flower_photos.tgz (it is a 220 MB tar file that contains about 3600 jpeg images of flowers).
Here are the results:
Filesystem/Drive            Download and save    Extract
Local SSD                   2s                   2s
Azure File Share            9s                   386s
Premium File Share          10s                  120s
Blobfuse                    10s                  133s
Blobfuse w/ Premium Blob    8s                   121s
In summary, writing small files is much, much slower on the network drives, so it is highly recommended to use /tmp or Python tempfile if you are writing smaller files.
For reference, here is the script I ran to measure: https://gist.github.com/danielsc/9f062da5e66421d48ac5ed84aabf8535
And this is how I ran it: https://gist.github.com/danielsc/6273a43c9b1790d82216bdaea6e10e5c
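If the IOU step reads thousands of small image files straight off the blobfuse mount, per-file latency will dominate. A simple workaround (a sketch; the directory names and the shortened mount path are placeholders, use the real ones from the df output) is to stage both image sets onto the local SSD once and run the comparison against the local copies:
# stage the predictions and ground truth from the blobfuse mount to local disk
BLOB=/mnt/batch/tasks/.../workspaceblobstore   # placeholder, use the real mount path
mkdir -p /tmp/iou
cp -r "$BLOB/predictions" "$BLOB/ground_truth" /tmp/iou/
# then point the IOU script at /tmp/iou instead of the blobfuse mount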
I am very new to the Orange Pi PC. I wrote the image with dd on my macOS machine, and I have also tried installing a Raspbian image downloaded from orangepi.org under Windows. After installation, when I check the free disk space it shows:
root@orangepi:~# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 3.4G 2.7G 474M 86% /
/dev/root 3.4G 2.7G 474M 86% /
devtmpfs 374M 0 374M 0% /dev
tmpfs 101M 188K 101M 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 201M 0 201M 0% /run/shm
/dev/mmcblk0p1 41M 4.9M 37M 12% /boot
I installed it on a 32 GB flash drive, and when I check with the fdisk command it does show 32 GB as the disk size:
root@orangepi:~# sudo fdisk -l
Disk /dev/mmcblk0: 32.0 GB, 32010928128 bytes
4 heads, 16 sectors/track, 976896 cylinders, total 62521344 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x34605ba5
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 40960 124927 41984 83 Linux
/dev/mmcblk0p2 124928 7170047 3522560 83 Linux
root@orangepi:~#
How do I fix this?
This solved my problem (solution is taken from here):
root@orangepi:~# fdisk /dev/mmcblk0
Command (m for help): p
Disk /dev/mmcblk0: 15.8 GB, 15804137472 bytes
4 heads, 16 sectors/track, 482304 cylinders, total 30867456 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x34605ba5
Device Boot Start End Blocks Id System
/dev/mmcblk0p1 40960 124927 41984 83 Linux
/dev/mmcblk0p2 124928 7170047 3522560 83 Linux
Command (m for help): d
Partition number (1-4): 2
Command (m for help): n
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): p
Partition number (1-4, default 2): 2
First sector (2048-30867455, default 2048): 124928
Last sector, +sectors or +size{K,M,G} (124928-30867455, default 30867455):
Using default value 30867455
Command (m for help): w
Then quit (command q) and reboot. Afterwards you will be able to resize the filesystem:
resize2fs /dev/root
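If the cloud-utils package is available, the same can be done non-interactively (a sketch, assuming partition 2 is the root partition as above):
growpart /dev/mmcblk0 2   # grow partition 2 to fill the card
resize2fs /dev/root       # grow the ext filesystem online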
I have a Vagrant box that has about 6 GB of disk space. How do I change this so that the VM takes up as much space as it needs? I see plenty of space on my laptop:
Vagrant:
[vagrant#localhost ~]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root
6744840 6401512 700 100% /
tmpfs 749960 0 749960 0% /dev/shm
/dev/sda1 495844 40707 429537 9% /boot
v-root 487385240 332649828 154735412 69% /vagrant
Laptop:
~ $ df
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1 974770480 662925656 311332824 69% 82929705 38916603 68% /
devfs 390 390 0 100% 675 0 100% /dev
map -hosts 0 0 0 100% 0 0 100% /net
map auto_home 0 0 0 100% 0 0 100% /home
There is no way to make truly unlimited disks with VirtualBox, as far as I am aware... they always need to have a maximum size, even though they can be dynamically allocated.
Resizing them is also quite involved, but there are several tutorials around on the net; for example, something like this should get you on the right track: http://tanmayk.wordpress.com/2011/11/02/resizing-a-virtualbox-partition/
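The gist of those tutorials, as a rough sketch (the disk file name is just an example; look in the VM's folder for the real one): current VirtualBox versions can only resize VDI/VHD images, so a VMDK-based Vagrant box is usually cloned first, resized, reattached, and then the partition inside the guest is grown as well:
# clone the VMDK to a resizable VDI, then grow it to ~40 GB
VBoxManage clonemedium disk box-disk1.vmdk box-disk1.vdi --format VDI
VBoxManage modifymedium disk box-disk1.vdi --resize 40960
# attach the VDI to the VM in place of the VMDK, boot the guest, and grow the
# partition and filesystem inside it (e.g. fdisk plus pvresize/lvextend/resize2fs for LVM setups)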
I usually make sure that the vagrant box I choose to use or create has about 40 gigs as the size for the dynamically allocated disk.
As part of checking some prerequisites for an idea (an iOS app with MBTiles on local storage, enabling offline maps), I'd like to know whether it is useful to gzip MBTiles files when transporting them to an iOS device.
In other words, is there a useful reduction in size when gzipping MBTiles? (Or is the MBTiles format already packed in some way, thus limiting the benefit of gzip or other compressors?)
If so, how much size reduction can I expect? (percentage ballpark)
-rw-r--r-- 1 randycarver wheel 105484288 Apr 9 11:05 gc_40_16.mbtiles
-rw-r--r-- 1 randycarver wheel 101777468 Apr 21 06:03 gc_40_16.mbtiles.gz
I'm seeing around a 4% size reduction, which is expected: an MBTiles file is an SQLite database whose tiles are typically already-compressed PNG or JPEG images, so gzip has little left to gain.
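If you want to check a particular tile set yourself before deciding, the measurement is quick (using the file from the listing above):
gzip -9 -c gc_40_16.mbtiles > gc_40_16.mbtiles.gz
ls -l gc_40_16.mbtiles gc_40_16.mbtiles.gz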