There is a Proxmox VE 3.0 install with 2x1TB HDDs in a mirror:
I have one huge container (~430GB) located on the local storage. I need to either:
extend local with the new 2x1TB disks,
or create a new storage (on the new disks) and migrate the container to it.
How can this be done without reinstalling Proxmox?
Thanks!
P.S. I can create a new storage item as LVM, but LVM is not suitable for containers.
Updating to the latest release should fix this ;)
https://forum.proxmox.com/threads/proxmox-ve-6-2-released.69647/
Proxmox VE 3 runs on Debian 7, whose support ended in 2018. Your system is vulnerable, and even if you keep it you will hit a wall as soon as you need to install any new package: the repositories have already been unavailable for about a year.
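If you do want to move the container onto the new disks on the current install first, a rough sketch would be to add the new mirror as a directory storage in the GUI and then back up / restore the container onto it. Assuming the new disks are mounted at /mnt/newdisk, the new storage is named newstorage, and the container ID is 100 (all placeholders; check the vzdump/vzrestore man pages on your version before running this):
vzdump 100 --dumpdir /mnt/newdisk/dump
vzrestore /mnt/newdisk/dump/vzdump-openvz-100-<timestamp>.tar 100 -storage newstorage -force
The -force flag overwrites the existing container config, so make sure the backup completed successfully before restoring.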
I have a small question regarding my geth node.
I have started the node on my machine with the following command:
geth --snapshot=false --mainnet --syncmode "full" --datadir=$HOME/.ethereum --port 30302 --http --http.addr localhost --http.port 8543 --ws --ws.port 8544 --ws.api personal,eth,net,web3 --http.api personal,eth,net,web3
Currently, a full geth node is supposed to take up around 600GB of storage on my disk. But after checking my used disk space (on Ubuntu: du -h) I spotted this:
Can anyone explain why my full node is using 1.4TB of disk space for chaindata? The node has been running for some time (around two weeks) and is fully synced. I am using Ubuntu 20.04.
Thanks in advance!
You set syncmode to "full" and disabled snapshot. This will get you an archive node which is much bigger than 600 GB. You can still get a full (but not archive) node by running with the default snapshot and syncmode settings.
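For comparison, the same node started with the defaults (snap sync, snapshot enabled) would look roughly like this, keeping all the other flags from your original command:
geth --mainnet --datadir=$HOME/.ethereum --port 30302 --http --http.addr localhost --http.port 8543 --ws --ws.port 8544 --ws.api personal,eth,net,web3 --http.api personal,eth,net,web3
Note that the already-synced data will not shrink on its own; you would most likely have to resync into a fresh --datadir to get back to the smaller footprint.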
tldr: can I just copy the parent directory with all my repositories to the new machine?
I have a GraphDB (free) server with 8 repositories that I need to move to new hardware. Is there documentation on this?
Yes, you can stop GraphDB and copy its data folder to a new instance. More information can be found in the documentation: https://graphdb.ontotext.com/documentation/free/backing-up-and-recovering-repo.html#back-up-graphdb-by-copying-the-binary-image
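A minimal sketch of the copy, assuming the GraphDB data directory is /opt/graphdb/data on both machines (adjust to wherever your data directory actually is):
# on the old server, after stopping GraphDB
rsync -a /opt/graphdb/data/ newhost:/opt/graphdb/data/
# then start GraphDB on the new server and check that all 8 repositories show up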
Dear all,
I am managing a pool of servers running RHEL 7.6. I created a local repository of RHEL packages to be able to update the other servers while limiting internet access to the server hosting the local repository.
I used the reposync command to populate my repository, but I am downloading a huge number of RPM packages!
I would like to reduce the set of packages to download to the ones already deployed on the servers; I can build that list with the rpm command (~750 packages).
I read that there is an includepkgs directive to be used with the reposync command.
How is it working, what is the required format?
I know it is possible to use the yumdownloader command to update the local repository, but how can I populate the repository for the first time?
Any help or advice would be appreciated.
Regards
Fdv
It seems that the best option is to limit the download to the latest version of each package by using the option:
-n, --newest-only Download only newest packages per-repo
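A rough example of putting that together, assuming the repo ID is rhel-7-server-rpms and the target directory is /var/www/html/repo (both placeholders):
reposync -n --repoid=rhel-7-server-rpms --download_path=/var/www/html/repo
createrepo /var/www/html/repo/rhel-7-server-rpms
The createrepo step builds the repository metadata so the other servers can point a .repo file at that directory.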
I'm trying to make a snapshot of one of my VMs via the GUI, but the button to create the snapshot is greyed out, so I wanted to try doing it from the CLI to see any helpful output, and I got this:
pct snapshot 106 "testing"
Configuration file 'nodes/pve01/lxc/106.conf' does not exist
The list of my VMs:
qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
106 TestingServer running 1024 32.00 23131
I'm not sure what this is about, so I was hoping somebody here could give me a hand; I would appreciate it.
I have the same issue on some of the volumes I've attached. Basically, there is a very specific requirement on the storage type you need in order to snapshot a VM. The list below has the requirements, and you can find more information here: https://pve.proxmox.com/wiki/Storage#_storage_types
Hope this helps.
You can check the storage type by going to Datacenter > Storage
Once storage is created you cannot change the type of that storage.
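If you prefer the CLI, the configured storages and their types can also be checked with (this should work on any recent PVE version):
pvesm status
cat /etc/pve/storage.cfg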
'pct snapshot' is the command to snapshot a container (not a QEMU VM). The error indicates that it can't find a container (LXC) with VM ID 106:
Configuration file 'nodes/pve01/lxc/106.conf' does not exist
The LXC in the path here indicates that it is looking for an LXC container. Your command 'qm list' lists QEMU VMs (not containers). So you are using the wrong command.
You need 'qm snapshot' instead of 'pct snapshot'.
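For example, using the VM ID and snapshot name from your attempt:
qm snapshot 106 testing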
I can't seem to SSH into any instances created from a snapshot of an openSUSE instance that was created within Google Cloud (i.e. not from a snapshot created locally and then uploaded). I've tested this with three different openSUSE instances (two that I had been working on and one created only for this test), and none of them produced snapshots that yield instances allowing SSH. To be clear, the instances created from the snapshots start up perfectly fine and show no issues in the console, but neither the console's built-in SSH nor any other SSH client (PuTTY, MobaXterm) gets anything more than a timeout error. I have successfully created instances from both Windows and Debian snapshots that I created myself, so I'm confident it's an issue with this specific OS.
Steps to reproduce:
Create an instance based off of the openSUSE image
Create a snapshot based off of the instance you just created
Create an instance based off of the snapshot you just created
Attempt, and fail, to connect to the instance via ssh
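For reference, the same steps expressed as gcloud commands would be roughly the following (instance, snapshot, and disk names are placeholders, and <opensuse-image> is whichever openSUSE image you start from):
gcloud compute instances create suse-test --image <opensuse-image>
gcloud compute disks snapshot suse-test --snapshot-names suse-snap
gcloud compute disks create suse-copy --source-snapshot suse-snap
gcloud compute instances create suse-copy --disk name=suse-copy,boot=yes
gcloud compute ssh suse-copy
The last command is the one that times out.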
Any help with this would be much appreciated, and thank you very much in advance.
I was able to reproduce your issue. I'll report it to Google. If you run the command
gcloud compute instances get-serial-port-output <your-new-instance>
you will notice an error indicating that the disk couldn't be found.
SUSE fixed the issue yesterday on the SLES distros. The following new images, which are not affected by the bug, are now available:
sles-11-sp3-v20150310
sles-12-v20150310
We are still working on a fix to openSUSE, and we still don't have a fix for existing instances.
A procedure to address running instances has been posted:
https://forums.suse.com/showthread.php?6142-Image-from-snapshot-will-not-boot&p=26957#post26957
The above post contains all the details; the procedure below addresses the question of what to do with running instances.
SUSE Linux Enterprise Server 11 SP3 (sles-11-sp3)
1.) Edit /etc/sysconfig/bootloader
In the "DEFAULT_APPEND" assignment replace "root=/dev/disk/by-id.." with "root=/dev/sda1". Reform the same substitution for the "FAILSAFE_APPEND" assignment.
Add NON_PERSISTENT_DEVICE_NAMES=1 to the end of the line, after "quiet"
2.) Edit /etc/fstab
Replace "/dev/disk/by-id..." with "/dev/sda1"
3.) Edit /boot/menu.lst
Replace "root=/dev/disk/by-id.." with "root=/dev/sda1" and "disk=/dev/disk/by-id/..." with "disk=/dev/sda" in both options.
Add NON_PERSISTENT_DEVICE_NAMES=1 to the end of the line starting with "kernel"
4.) Reboot the instance
5.) Execute mkinitrd
6.) Edit /etc/udev/rules.d/70-persistent-net.rules (if it exists)
Remove the mac address condition, "ATTR{address}==.....", from the rules.
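For anyone scripting this, a rough sed sketch of the substitutions in steps 1-3 above (the patterns are assumptions; verify them against your actual files before running, and the NON_PERSISTENT_DEVICE_NAMES=1 additions still have to be made by hand):
sed -i 's|root=/dev/disk/by-id[^[:space:]"]*|root=/dev/sda1|g' /etc/sysconfig/bootloader
sed -i 's|/dev/disk/by-id[^[:space:]]*|/dev/sda1|' /etc/fstab
sed -i -e 's|root=/dev/disk/by-id[^[:space:]]*|root=/dev/sda1|g' -e 's|disk=/dev/disk/by-id[^[:space:]]*|disk=/dev/sda|g' /boot/menu.lst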
SUSE Linux Enterprise Server 12 (sles-12)
1.) Edit /etc/sysconfig/bootloader
In the "DEFAULT_APPEND" assignment replace "root=/dev/disk/by-id.." with "root=/dev/sda1" and "disk=/dev/disk/by-id/..." with "disk=/dev/sda". Perform the same substitution for the "FAILSAFE_APPEND" assignment.
Add NON_PERSISTENT_DEVICE_NAMES=1 to the end of the line, after "quiet"
2.) Edit /etc/fstab
Replace "/dev/disk/by-id..." with "/dev/sda1"
3.) Edit /etc/default/grub
In the "GRUB_CMDLINE_LINUX_DEFAULT" assignment replace "root=/dev/disk/by-id.." with "root=/dev/sda1" and "disk=/dev/disk/by-id/..." with "disk=/dev/sda".
Add NON_PERSISTENT_DEVICE_NAMES=1 to the end of the line, after "quiet"
4.) Create a new grub configuration (SLES 12)
export GRUB_DISABLE_LINUX_UUID=true
grub2-mkconfig > /boot/grub2/grub.cfg
5.) Execute mkinitrd
6.) Edit /etc/udev/rules.d/70-persistent-net.rules (if it exists)
Remove the mac address condition, "ATTR{address}==.....", from the rules.
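The same sed approach can be used for the SLES 12 files, e.g. for step 3 (again, verify the pattern against your /etc/default/grub before running):
sed -i -e 's|root=/dev/disk/by-id[^[:space:]"]*|root=/dev/sda1|g' -e 's|disk=/dev/disk/by-id[^[:space:]"]*|disk=/dev/sda|g' /etc/default/grub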
A new openSUSE 13.2 image has been published that addresses the issue as well. New instances started from opensuse-13-2-v20150315 will work with no issues with the snapshot feature in GCE. For running instances, use the process outlined for SUSE Linux Enterprise Server 12; that should work, although I did not test the procedure on openSUSE.