Ethereum Full Node needs too much storage - geth

I have a small question regarding my geth node.
I have started the node on my machine with the following command:
geth --snapshot=false --mainnet --syncmode "full" --datadir=$HOME/.ethereum --port 30302 --http --http.addr localhost --http.port 8543 --ws --ws.port 8544 --ws.api personal,eth,net,web3 --http.api personal,eth,net,web3
A full geth node is currently supposed to take up around 600 GB of storage. But after checking my used disk space (on Ubuntu: du -h), I spotted this:
[du output showing ~1.4 TB used by chaindata]
Can anyone explain to me why my full node is using 1.4 TB of disk space for chaindata? The node has been running for some time (around two weeks) and is fully synced. I am using Ubuntu 20.04.
Thanks in advance!

You set syncmode to "full" and disabled snapshot. This will get you an archive node, which is much bigger than 600 GB. You can still get a full (but not archive) node by running with the default snapshot and syncmode settings.
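For reference, a minimal sketch of the same command with the defaults restored (recent geth releases default to snap sync with snapshots enabled, so both flags are simply dropped):
geth --mainnet --datadir=$HOME/.ethereum --port 30302 --http --http.addr localhost --http.port 8543 --ws --ws.port 8544 --ws.api personal,eth,net,web3 --http.api personal,eth,net,web3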

Related

testcontainers-python hanging while showing "waiting to be ready...", then fails

I'm running my unit test code for neo4j.
My environment:
Ubuntu 20.04 LTS server
1 GB memory
1 CPU
Here is what is displayed in the console:
====================================== test session starts ======================================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: ~/morsvq, configfile: pytest.ini
plugins: mock-3.8.2
collected 2 items
---------------------------------------- live log setup -----------------------------------------
INFO testcontainers.core.container:container.py:52 Pulling image neo4j:latest
INFO testcontainers.core.container:container.py:63 Container started: ad7963ed01
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
INFO testcontainers.core.waiting_utils:waiting_utils.py:46 Waiting to be ready...
ERROR neo4j:__init__.py:571 Failed to read from defunct connection IPv4Address(('localhost', 49153)) (IPv4Address(('127.0.0.1', 49153)))
The same code runs successfully on a faster virtual machine with 8 GB of memory, so the code itself shouldn't be faulty. My suspicion is that it has something to do with my configuration, so that it now consumes too much memory.
I've checked the official website's documentation, but it doesn't mention any memory problem. I wonder if someone has encountered a similar problem? How do I fix this?
Disclaimer: I am a maintainer of tc-java, so I have only some basic experience with tc-python. However, some facts and constraints are universal across Testcontainers language implementations.
As you already wrote, the code runs fine on a more powerful machine, while it fails on an extremely limited machine. 1 GB of RAM is not much; I would expect it is generally not enough to start a Neo4j Docker container successfully without memory swapping. Swapping would make the startup and interactions very slow, hence the startup timeout triggers.
For further debugging, you can run the Neo4j container directly using the Docker CLI in your environment and see how it behaves.
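For example, a rough sketch of such a manual check (the 1 GB memory limit is an assumption to mirror the constrained machine; NEO4J_AUTH=none just skips the initial password setup):
docker run --rm -m 1g -e NEO4J_AUTH=none -p 7474:7474 -p 7687:7687 neo4j:latest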

QNX Neutrino can't format my partitions with mkqnx6fs, process freezes

I'm trying to format my 250 GB HDD in QNX Neutrino running in an Oracle (VirtualBox) VM on Windows 10. I mounted my HDD and everything looks fine. I run the command:
fdisk /dev/hd1
and create 4 partitions with the following sizes:
1325, 124684, 17265, 47496 MB
After that I run the command:
mount -e /dev/hd1
And when I try to format the created partitions with the commands:
mkqnx6fs -q -b4096 /dev/hd1t177
mkqnx6fs -q -b4096 /dev/hd1t178
mkqnx6fs -q -b4096 /dev/hd1t179
mkqnx6fs -q -b4096 /dev/hd1t180
the process starts and never ends; the terminal just freezes. What can I do to fix it? Or is there another way to format my partitions on this system?
I had the same issue when using the Oracle VM after partitioning my HDD. I then tried VMware and the issue was resolved there.
VMware didn't suit me, though, because it counted cylinders incorrectly.
I found the solution. The problem was due to poor contact between the USB cable and my hard drive. I changed the USB cable and the USB port, and it worked fine.

Why is fdisk -l showing different results for the same vdi virtual drive when different virtual machines are used in VirtualBox

VirtualBox (Version 5.2.24 r128163 (Qt5.6.2)) user with xubuntu guest (Ubuntu 18.04.2 LTS) and Windows 10 host here.
I recently tried to resize my vdi from ~100 GB to 200 GB. In Windows I used the command:
./VBoxManage modifyhd "D:\xub2\xub2.vdi" --resize 200000
That went fine. Then I used a GParted live CD to create a VM, attached the vdi, and resized the partitions:
[screenshot: GParted GUI]
All looks good. If I then use 'fdisk -l' whilst in the GParted VM, the increased partition sizes are visible as expected.
[screenshot: fdisk -l results for the vdi attached to the GParted VM]
If I try to resize the file system for one of the newly resized logical drives with 'resize2fs /dev/sda5', I am told it is already 46265856 blocks long and there is nothing to do.
However....
If I then re-attach this vdi to an Ubuntu VM and boot up with the vdi, the 'fdisk -l' command gives different results, basically telling me that the drive is still 100 GB in size.
[screenshot: fdisk -l results for the same vdi attached to the Ubuntu VM]
The 'df' command confirms that it is not resized.
[screenshot: df output with the same vdi attached to the Ubuntu VM]
If I try the command 'resize2fs /dev/sda5' I get the result:
The filesystem is already 22003712 (4k) blocks long. Nothing to do!
How can I fix this and make the Ubuntu VM see that the disk and partitions have increased in size?
OK, I will answer my own question (thank you for the negative vote, anonymous internet).
This issue occurs when you have existing snapshots of the drive that you are trying to expand associated with a VirtualBox VM.
I found this described in the VirtualBox forums:
https://forums.virtualbox.org/viewtopic.php?f=35&t=50661
One suggested solution is to delete the snapshots; however, I got an error message when I attempted that.
The solution that worked for me was to clone my VM. The cloned VM (which did not have any snapshots associated with it), behaved as expected and showed the correct size for the resized disk.
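For reference, the clone can also be made from the command line; the VM name "xub2" here is just a guess based on the vdi path above:
VBoxManage clonevm "xub2" --name "xub2-clone" --register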
To be clear: the situation I described above is 100% true.
Hope that helps someone.

Creating a snapshot for a Proxmox VM is not possible either in the GUI or the CLI

I'm trying to take a snapshot of one of my VMs via the GUI, but the button to create the snapshot is greyed out, so I wanted to try doing it from the CLI where I could see any helpful output, and I got this:
pct snapshot 106 "testing"
Configuration file 'nodes/pve01/lxc/106.conf' does not exist
The list of my VMs:
qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
106 TestingServer running 1024 32.00 23131
I'm not sure what this is about, so I was hoping somebody here could give me a hand; I would appreciate it.
I had the same issue on some of the volumes I've attached. Basically, there is a very specific requirement on the storage type you need in order to take a snapshot of a VM. The supported storage types and their capabilities are listed here: https://pve.proxmox.com/wiki/Storage#_storage_types
You can check the storage type by going to Datacenter > Storage. Note that once a storage is created, you cannot change its type.
Hope this helps.
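For reference, the storages and their types can also be listed from the CLI (this assumes a standard Proxmox VE install; output columns may vary by version):
pvesm status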
The command 'pct snapshot' snapshots a container (not a QEMU VM). The error indicates that it can't find a container (LXC) with VMID 106:
Configuration file 'nodes/pve01/lxc/106.conf' does not exist
The 'lxc' in the path indicates that it is looking for an LXC container, while your command 'qm list' lists QEMU VMs (not containers). So you are using the wrong command: you need 'qm snapshot' instead of 'pct snapshot'.
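For reference, the working command with the VMID and snapshot name from the question would be:
qm snapshot 106 testing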

VirtualBox revert to snapshot from inside guest

Is there any way to restore a snapshot from inside a VBox guest machine?
I have a Windows machine that hosts numerous VMs. Currently we are working on something using an Ubuntu guest, and it is really painful to have to keep switching machines just to revert a snapshot.
What I had in mind is setting the machine to a "base" state and every time I want to go to that I just type some command like:
revertbase
Then the machine would restart from the previous snapshot, and I would just need to reconnect over ssh to continue.
You cannot snapshot a running machine; you have to freeze it first, so my guess is that the guest itself cannot do that.
On the host machine, you can do this from the command line using VBoxManage.
The executable is located at
Program Files/Oracle/VirtualBox/VBoxManage.exe
and is used as a command-line interface to VirtualBox.
Using the command:
VBoxManage snapshot "MachineName" take SnapShotName
Then after that:
VBoxManage snapshot "MachineName" discardcurrent -state
This returns you to the last state. For more detail, read the text below. To have easy access to VBoxManage, add it to your PATH:
PATH=%PATH%;c:\Program Files\Oracle\VirtualBox
Taken from: http://www.linux.com/news/enterprise/systems-management/8224-secrets-for-controlling-virtualbox-from-the-command-line
Managing snapshots
One of the most useful features of virtualization software is its ability to take snapshots of VMs. It's always a good idea to take a snapshot of a VM before making changes to it. Snapshots help on the hardware level to recover a system that has been rendered unusable by changes to the hardware configuration, and on the software level they protect against data loss due to accidental deletion or viruses.
Taking a snapshot from the VirtualBox CLI is child's play. VBoxManage snapshot "Fedora" take snap1-stable-system takes the snapshot of a stable Fedora VM when everything is working perfectly. Saving a snapshot might take some time, depending on the VM and the resources on the host. To make sure you don't make changes to a system while a snapshot is being taken, VirtualBox grays out the whole VM interface, and you cannot use it until the snapshot has been saved.
With a stable snapshot in hand, go ahead and play with the system. If you get in trouble and your machine won't boot or starts behaving abnormally, you can revert to the snapshot of the stable machine. To do this, first power off the VM with VBoxManage controlvm "Fedora" poweroff, then revert to last snapshot with VBoxManage snapshot "Fedora" discardcurrent -state. If you have multiple snapshots, you can revert to the last but one snapshot with the -all switch instead of -state.
Of course when you revert to an older state, all the changes you made since that snapshot was taken are lost, including all configuration changes and changes to old and new files. You can work around this by specifying that your data should be stored on a "writethrough" disk, which behaves like a normal disk but isn't affected by snapshots. Put another way, when you take a snapshot, VirtualBox ignores the writethrough disk. You can store all your important data and files or your complete /home directory on that disk.
 
To add a writethrough disk, use the -type writethrough option when creating a new disk with createvdi. You can also change a disk you created earlier and make it writethrough. To do so, first detach it from the VM with VBoxManage modifyvm "Fedora" -hdb none, and then unregister it with VBoxManage unregisterimage disk fourgig (using the name of the disk on your system in place of fourgig). Now register it back again, but as a writethrough disk, with VBoxManage registerimage disk "fourgig" -type writethrough. Finally, attach it back to the VM using VBoxManage modifyvm "Fedora" -hdb fourgig.
Now you can safely save data on this disk, and no matter what state the VM is in, the data will always be safe. But remember not to revert to a state that was saved before this disk was created; if you do, VirtualBox will simply delete the disk, because it didn't exist in that state. Also, VirtualBox doesn't currently let you take a snapshot of a VM that has a writethrough disk attached, so you have to detach the writethrough disk before saving the state of the VM and then reattach it. I hope in upcoming VirtualBox versions the presence of a writethrough disk will have no influence on the snapshot process.
An updated answer: you still can't do it from the guest directly.
You could trigger the host to restart the guest by means of a shared drive/folder and a script running on the host that reloads the guest when a shared file is updated (a sketch of such a watcher follows the CMD file below).
Included is my Windows script to restart the guest. The commands as given by Canesin did not work for me; I have the following in a CMD file:
PATH=%PATH%;c:\Program Files\Oracle\VirtualBox
REM power the VM off, restore the latest snapshot, then boot it again
vboxmanage controlvm "DEMO" poweroff
timeout /t 10
vboxmanage snapshot "DEMO" restorecurrent
timeout /t 10
vboxmanage startvm "DEMO"
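A minimal sketch of the host-side watcher described above, as a separate CMD file (the share path C:\vmshare, the flag file name, and restore-demo.cmd as the name of the script above are all assumptions):
:watch
REM poll the shared folder for a flag file written by the guest
if exist "C:\vmshare\revertbase.flag" (
    REM remove the hypothetical flag file, then run the restore script above
    del "C:\vmshare\revertbase.flag"
    call restore-demo.cmd
)
timeout /t 5 >nul
goto watch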