How to restore VirtualBox? Lost the last two months of work - resize

https://forums.virtualbox.org/viewtopic.php?f=7&t=90893
Hello, I'm desperate and need help because I have lost about two months of work in my Windows 10 guest system.
Everything worked smoothly until I needed more free space (although I have a dynamically allocated disk). So I followed some tutorials and made some changes:
1 - I have the original, almost full disk at: /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk
2 - I made a copy on an external USB drive.
3 - Converted it to VDI: VBoxManage clonehd /media/eduardo/Seagate\ Backup\ Plus\ Drive/Windows10-disk1.vmdk /media/eduardo/Seagate\ Backup\ Plus\ Drive/Windows10-disk.vdi --format vdi
4 - Tried to resize the disk (from 80 GB to 100 GB): VBoxManage modifyhd /media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk --resize 100000 and VBoxManage modifymedium disk /media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk --resize 100000 (I think this could be an error, as I should have resized the VDI file instead; see the sketch after this list).
5 - Then I had to change the UUID (because a "UUID already in use" error was raised): VBoxManage internalcommands sethduuid "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk"
6 - Then went back to: VBoxManage clonehd "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk" " " --format vdi
and resized: VBoxManage modifymedium disk "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --resize 120000
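For reference, a minimal sketch of the resize flow these steps were aiming at, using the same paths as above: clone the VMDK to VDI once, then resize only the VDI (modifymedium --resize works on VDI/VHD images, not VMDK), then attach the VDI to the VM.
# clone the original VMDK to a new VDI on the external drive
VBoxManage clonehd "/Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk" "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --format VDI
# resize the VDI clone, not the source VMDK
VBoxManage modifymedium disk "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --resize 100000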
I switched my virtual machine to the new VDI file to test whether everything was fine (changed the disk connection from /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk to the new /media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi). But I found that the system had somehow gone back two months!
I was not worried and decided to go back to my "untouched" VMDK, but the strangest thing is that the original "untouched" file, /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk, also boots with files and a state from about two months ago. So I'm quite nervous.
[Screenshot: Selección_058.png]
Looking at the files, the 6c***** one has to be the "good" state, as it was modified yesterday night. Here is my file manager:
[Screenshot: Selección_059.png]
Here is my VM (I made a snapshot about two months ago; I don't remember exactly when):
https://imagebin.ca/v/4QlKV3Equ1fW
My log:
https://pastebin.com/JSLFRNMs
Hope anybody can help...
I think the key is to somehow return my VMDK file to the 6c**** state; I don't understand how this VMDK got changed, as it was never touched.
Thanks in advance

The problem was solved. It had nothing to do with resizing disks. I selected the {6cc3c***-*****} hard disk (although it was "only" 47 GB), and to my surprise it loaded its 47 GB "snapshot" part together with the whole windows10-disk1.vmdk disk....
Sorry for my bad English, it's difficult to explain: in the storage section of the virtual machine settings, select the 6cc***** file as the main disk and start/boot the VM.
Once it was loaded and working fine, I deleted the snapshot (to merge everything into the present state) and then took another snapshot as a backup.
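For anyone following along on the command line, roughly the same steps can be done with VBoxManage (the VM name "Windows 10" and controller name "SATA" are assumptions here; check yours with VBoxManage showvminfo):
# attach the recovered disk as the main disk
VBoxManage storageattach "Windows 10" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "/Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk"
# list snapshots, merge the old one into the current state, then take a fresh backup
VBoxManage snapshot "Windows 10" list
VBoxManage snapshot "Windows 10" delete "NameOfOldSnapshot"
VBoxManage snapshot "Windows 10" take "backup-after-recovery"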
Thanks

Related

QEMU KVM disk IO/SQL replication issue, on one of two identical clone VM's

I have a system running two QEMU KVM virtual machines, identical clones of one another. Both VMs are replicating from the same master MySQL DB. One VM (vm-01) is carrying an active load and is running fine. However, the other (standby) VM (vm-02) suddenly fell behind with replication at 08:00 this morning, and even though replication is running properly, it keeps falling further behind at a slow rate (1 s behind for every 10 s of real time). vm-02 had been running perfectly for months.
After checking all the usual suspects (CPU load, disk space, SQL query errors etc. etc.) it turns out that everything is just fine... except for the virtual disk IO - specifically the write requests (WRRQ). On the host machine:
virt-top 16:01:35 - x86_64 16/16CPU 1596MHz 128915MB
3 domains, 2 active, 2 running, 0 sleeping, 0 paused, 1 inactive D:0 O:0 X:0
CPU: 1.8% Mem: 32768 MB (32768 MB by guests)
ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME
3 R 3 1 113K 20K 1.3 12.0 62d21:21 vm-01-ubuntu
9 R 0 563 97K 11K 0.5 12.0 83:09:51 vm-02-ubuntu
- (vm-Clone-ubuntu)
Both VMs have bin-logs disabled, so they only write the relay bin-log. The active machine (vm-01-ubuntu) is handling thousands of RADIUS requests just fine, in addition to the exact same master SQL commands, and it is happily running with a few write requests. But the standby machine falls behind, with hundreds of write requests... perhaps related to replication catching up, but why so slowly?
Checking disk IO on the VM's:
vm-01:~# iostat -x
Linux 4.4.0-141-generic (vm-finrad01) 18/09/2019 _i686_ (1 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
12,04 0,02 9,85 13,87 0,13 64,09
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0,00 13,91 0,91 147,67 5,20 16,05 0,29 0,11 0,72 0,57 0,73 0,04 0,65
vm-02:~# iostat -x
Linux 4.4.0-141-generic (vm-finrad02) 18/09/2019 _i686_ (1 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0,26 0,01 0,25 6,46 0,09 92,93
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0,00 1,22 0,00 34,19 0,20 21,43 1,26 0,00 0,14 0,96 0,14 0,03 0,09
This doesn't reveal any glaring issues, especially since the busier VM (vm-01) is, as expected, doing more.
The host machine has 128 GB of RAM, tons of SSD drive space, and is only running at 30% CPU usage. There are no RAID or drive issues.
Any suggestions on where to check next, given that the WRRQ count is the only evidence so far of vm-02 falling behind? Or am I chasing a red herring?
The issue is related to the guest OS, not the VM setup.
On Ubuntu the apt auto-update feature is quite aggressive, and in the case of the two suspect VMs, apt was attempting to update the repos constantly, writing at 16 MB/s. This is probably related to the fact that the guest OS is Ubuntu 14.04, whose repos are no longer maintained.
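For anyone diagnosing a similar case, one way to confirm which process is generating the writes inside the guest (this assumes the iotop and sysstat packages are installed):
# show only processes actually doing I/O, with accumulated totals per process
sudo iotop -aoP
# or sample per-process disk statistics every 5 seconds
pidstat -d 5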
The solution was to disable auto-updates and run updates manually instead.
As root:
service unattended-upgrades stop
echo manual | tee /etc/init/unattended-upgrades.override
Then edit the apt configs to disable the automatic package-list refresh, changing APT::Periodic::Update-Package-Lists from "1" to "0":
cd /etc/apt/apt.conf.d/
cp 10periodic 10periodic.original
awk -F" " '$1=="APT::Periodic::Update-Package-Lists" {printf "%s %s\n",$1,"\"0\";"; next}1' 10periodic.original > 10periodic
(Reading from the backup copy matters here: piping cat 10periodic back into 10periodic would truncate the file before it is read.)
And lastly, disable the repos from the auto-upgrade list:
nano /etc/apt/apt.conf.d/50unattended-upgrades
Find section "Unattended-Upgrade::Allowed-Origins" and comment out the lines:
//"${distro_id}:${distro_codename}-security";
//"${distro_id}ESM:${distro_codename}";
I then rebooted the VM, and all has been well.

Why received ZFS dataset uses less space than original?

I have a dataset on server1 that I want to back up to a second server, server2.
Server1 (original):
zfs list -o name,used,avail,refer,creation,usedds,usedsnap,origin,compression,compressratio,refcompressratio,mounted,atime,lused storage/iscsi/webhost-old produces:
NAME USED AVAIL REFER CREATION USEDDS USEDSNAP ORIGIN COMPRESS RATIO REFRATIO MOUNTED ATIME LUSED
storage/iscsi/webhost-old 67,8G 1,87T 67,8G Út kvě 31 6:54 2016 67,8G 16K - lz4 1.00x 1.00x - - 67,4G
Sending volume to the 2nd server:
zfs send storage/iscsi/webhost-old | pv | ssh -c arcfour,aes128-gcm@openssh.com root@10.0.0.2 zfs receive -Fduv pool/bkp-storage
received 69,6GB stream in 378 seconds (189MB/sec)
Server2 zfs list produces:
NAME USED AVAIL REFER CREATION USEDDS USEDSNAP ORIGIN COMPRESS RATIO REFRATIO MOUNTED ATIME LUSED
pool/bkp-storage/iscsi/webhost-old 36,1G 3,01T 36,1G Pá pro 29 10:25 2017 36,1G 0 - lz4 1.15x 1.15x - - 28,4G
Why is there such a difference in sizes? Thanks.
From what you posted, I noticed 3 things that seemed odd:
the compressratio is 1.15x on system 2, but 1.00x on system 1
on system 2, used is 1.27x higher than logicalused
the logicalused and the number zfs receive report are ~2.3x higher on system 1 than system 2
These terms are all defined in the man page, but it is still confusing to reverse-engineer explanations for them in practice.
(1) could happen if you enabled compression on the source dataset after you wrote all the data to it, since ZFS doesn't rewrite the data to compress it when you enable that setting. The data sent by zfs send is uncompressed unless you use -c, but system 2 will try to compress it as it runs zfs receive if the setting is enabled on the destination dataset. If both system 1 and system 2 had the same compression settings before the data was written, they would have the same compressratio as well.
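As a side note to (1), a hedged sketch of how one might check the settings on both sides and preserve on-disk compression in the stream (zfs send -c requires a reasonably recent ZFS; the snapshot name @bkp is an assumption):
# compare the relevant properties on both systems
zfs get compression,compressratio,used,logicalused storage/iscsi/webhost-old
# send compressed blocks as-is instead of expanding them on the wire
zfs snapshot storage/iscsi/webhost-old@bkp
zfs send -c storage/iscsi/webhost-old@bkp | ssh root@10.0.0.2 zfs receive -Fduv pool/bkp-storage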
(2) can happen due to metadata written along with your data, but in this case it's too high for "normal" metadata, which accounts for 1-2% of most pools. It's probably caused by a pool-wide setting, like configuring RAID-Z, or a weird combination of striping and mirroring (like 4 stripes, but with one of them being a mirror).
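To check for such a pool-wide layout difference, comparing the vdev layout of the two pools is a quick test:
zpool status storage    # on system 1
zpool status pool       # on system 2
zpool list -v           # per-vdev sizes and allocation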
For (3), I re-read the man page to try to figure it out:
logicalused
The amount of space that is "logically" consumed by this dataset and
all its descendents. See the used property. The logical space
ignores the effect of the compression and copies properties, giving a
quantity closer to the amount of data that applications see.
If you were sending a dataset (instead of a single iSCSI volume) and the send size matched system 2's logicalused value (instead of system 1's), I would guess you forgot to send some child datasets (i.e. by using zfs send -R). However, neither of those are true in this case.
I had to do some additional digging -- this blog post from 2005 might contain the explanation. If system 1 didn't have compression enabled when the data was written (like I guessed above for (1)), the function responsible for not writing zeroed-out blocks (zio_compress_data) would not be run, so you probably have a bunch of empty blocks written to disk, and accounted for in the logicalused size. However, since lz4 is configured on system 2, it would run there, and those blocks would not be counted.

Unable to merge large files in R

I have run into a problem.
I have 10 large separate files, of type File with no column headers, totalling nearly 4 GB, which require merging. I have been told they are text files and pipe-delimited, so I added the txt file extension to each file, which I hope is not the problem. RStudio crashes when I use the following code...
multmerge <- function(mypath) {
  filenames <- list.files(path = mypath, full.names = TRUE)
  datalist <- lapply(filenames, function(x) read.csv(file = x, header = FALSE, sep = "|"))
  Reduce(function(x, y) merge(x, y, all = TRUE), datalist)
}
mymergeddata <- multmerge("C://FolderName//FolderName")
or when I try to do something like this...
temp1 <- read.csv(file = "filename.txt", sep = "|")
:
temp10 <- read.csv(file = "filename.txt", sep = "|")
SomeData <- Reduce(function(x, y) merge(x, y), list(temp1, ..., temp10))
I am seeing errors such as:
"Error: C stack usage is too close to the limit" and
"In scan(file = file, what = what, sep = sep, quote = quote, dec = dec, :
Reached total allocation of 8183Mb: see help(memory.size)"
Then I saw someone ask a question on SO while I was writing this one, here, so I was wondering whether SQL commands could be used in R Studio or SSMS to merge these large files. If they can, how would the files be merged? Please advise me if this can be done. I will be looking around on the net as well.
If it can't, then what is the best method to merge these rather large files? Can this be achieved in R Studio, or is there an open source option?
I am working on a PC with 64-bit Windows and 8 GB of RAM. I have included the R and SQL tags to see what options there are.
Thanks in advance if anyone can help me.
Your machine doesn't have enough memory for your selected operations.
You have 10 files of about 4 GB in total.
When you merge the 10 files, you create another object that is also about 4 GB, putting you very close to your machine's limit.
Your operating system, R, and whatever else you're running also consume RAM, so it's no surprise you run out of it.
I'd suggest taking a stepwise approach if you don't have access to a bigger machine (a minimal sketch follows this list):
- Take the first two files and merge them.
- Delete the file objects from R and keep only the merged one.
- Load the third file and merge it with the earlier merger.
Repeat until done.
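A minimal sketch of that loop, assuming the same pipe-delimited files and path as in the question (whether the columns line up for merge() is also an assumption):
filenames <- list.files("C://FolderName//FolderName", full.names = TRUE)
merged <- read.csv(filenames[1], header = FALSE, sep = "|")
for (f in filenames[-1]) {
  nxt <- read.csv(f, header = FALSE, sep = "|")
  merged <- merge(merged, nxt, all = TRUE)  # same merge as before, two objects at a time
  rm(nxt)  # drop the intermediate object...
  gc()     # ...and ask R to return the memory
}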

Mono human readable GC statistics at runtime

Is there a Mono profiler mode similar to Java's -Xloggc?
I would like to see a human-readable GC report while my application is running. Currently Mono can be run with the --profile=log option, but the output is in a binary format and every time I need to run mprof-report to read it. The output file also contains a lot of info that is not interesting to me.
I tried to reduce the file size by specifying heapshot=14400000ms to collect statistics every few hours, but it didn't help much. In a week I had a log of a few gigabytes.
I also tried the "sample" profiler, but the overhead was too high.
You can use Mono's trace filters for this. Just set MONO_LOG_MASK to gc and lower MONO_LOG_LEVEL to debug. Then run your app normally and you will get human-readable GC statistics while it is running:
$ export MONO_LOG_MASK=gc
$ export MONO_LOG_LEVEL=debug
$ mono ... # run your application normally ..
...
# notice the human readable GC output
mono: GC_MAJOR: (LOS overflow) pause 26.00ms, total 26.06ms, bridge 0.00ms major 31472K/0K los 1575K/0K
Mono: GC_MINOR: (Nursery full) pause 2.30ms, total 2.35ms, bridge 0.00ms promoted 31456K major 31456K los 5135K
Mono: GC_MINOR: (Nursery full) pause 2.43ms, total 2.45ms, bridge 0.00ms promoted 31456K major 31456K los 8097K
Mono: GC_MINOR: (Nursery full) pause 1.80ms, total 1.82ms, bridge 0.00ms promoted 31472K major 31472K los 11425K

Copying kernel and U-Boot onto an SD card

I have a Freescale i.MX ARM board for which I am preparing the bootloader, kernel and root filesystem on the SD card.
I am a little confused about the order in which I partition the SD card and copy my files onto it. Let's say I have an empty 4 GB SD card. I used GParted to first partition it into:
First partition, 400 MB, FAT32. This is my boot partition.
Second partition, the rest of the card, ext3. This is my root filesystem partition.
Let's say my SD card is at /dev/sdb.
Now I have seen many documents that differ slightly in the way they copy the boot files.
Which is the right way?
Method 1:
(without mounting the sdb partitions):
sudo dd if=u-boot.bin of=/dev/sdb bs=512 seek=2
sudo dd if=uImage of=/dev/sdb bs=512 seek=2
Mount sdb2 for copying the rootfs:
mount /dev/sdb2 /mnt/rootfs
Copy the rootfs:
tar -xf tarfile -C /mnt/rootfs
Method 2:
Mount the sdb1 boot partition:
mount /dev/sdb1 /mnt/boot
Copy U-Boot and the kernel:
cp u-boot.bin /mnt/boot/
cp uImage /mnt/boot/
Then copy the rootfs as above!
Which is the correct one? I tried both, but the SD card is not even booting.
When I tried method 1, the card booted until it said the rootfs was not found on the partition. I removed the card, reinserted it, and found that the first FAT32 partition had somehow been 'destroyed', as it shows 'unallocated' in GParted.
Please help.
You need to mark the first partition as bootable.
Check your first partition's details in GParted or Disk Utility.
From Disk Utility you can mark a partition bootable by selecting the specific partition and going to 'more actions' --> 'edit partition type'.
Below is the script I use to flash binaries onto the SD card for my Arndale Octa board. You can see the placement of the bootloader binaries:
BL1:
dd iflag=dsync oflag=dsync if=arndale_octa.bl1.bin of=/dev/sde bs=512 seek=1
BL2:
dd iflag=dsync oflag=dsync if=../arndale_octa.bl2.bin of=/dev/sde bs=512 seek=31
U-Boot:
dd iflag=dsync oflag=dsync if=u-boot.bin of=/dev/sde bs=512 seek=63
kernel and trusted software, ....
Please note:
1) The partition table is at SD card offset 0 (seek 0), so afterwards you have to run fdisk /dev/sde
and create partitions that do not overlap the blocks occupied by the kernel or trusted software.
2) Add the "dsync" options to the dd command to guarantee that every write is immediately flushed to the SD card.
In most cases, the i.MX processor requires the bootloader at offset 0x400. So what you are doing for U-Boot is correct; you need to use the dd command for that:
sudo dd if=u-boot.bin of=/dev/sdb bs=512 seek=2
While partitioning the SD card, make sure you keep enough room for the U-Boot image, so start your first bootable partition at, say, a 1 MB offset.
You can simply copy your uImage and environment variables (uEnv.txt or boot.scr) with the cp command.
For the rootfs you can follow the same steps as for the kernel.
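Putting both answers together, a hedged sketch of the whole flow for a board like this (the device /dev/sdb and the 400 MB boot partition come from the question; rootfs.tar and the mount points are placeholder names; double-check the device before running dd):
# partition with the first partition starting at 1 MiB, leaving room for u-boot
sudo parted -s /dev/sdb mklabel msdos
sudo parted -s /dev/sdb mkpart primary fat32 1MiB 401MiB
sudo parted -s /dev/sdb mkpart primary ext3 401MiB 100%
sudo parted -s /dev/sdb set 1 boot on
# raw-write u-boot at offset 0x400 (sector 2); this does not touch the partition table in sector 0
sudo dd if=u-boot.bin of=/dev/sdb bs=512 seek=2 conv=fsync
# create the filesystems, then copy the kernel and rootfs through the mounted partitions
sudo mkfs.vfat -F 32 /dev/sdb1
sudo mkfs.ext3 /dev/sdb2
sudo mount /dev/sdb1 /mnt/boot && sudo cp uImage /mnt/boot/ && sudo umount /mnt/boot
sudo mount /dev/sdb2 /mnt/rootfs && sudo tar -xf rootfs.tar -C /mnt/rootfs && sudo umount /mnt/rootfs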