Archive OpenNebula images before transfer and extract after - virtual-machine

I have OpenNebula 3.4 and VM images are transferred over SSH. Is it possible to archive the images (with gzip), transfer them to the host, and then extract the image there?

You have to create a new Transfer Manager. Read more here: http://opennebula.org/documentation:rel3.4:sd
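The core idea of such a driver is to compress the image on the source, pipe it over SSH, and decompress it on the destination, so only gzipped data crosses the network. Below is a minimal Python sketch of that pipeline; the paths and host name are hypothetical, and a real OpenNebula 3.4 TM driver would typically implement this inside its clone/mv action scripts.

import subprocess

# Placeholders: adjust to your actual image path, host, and datastore layout.
SRC_IMAGE = "/var/lib/one/images/disk.0"        # image on the front-end
DST_HOST = "node01"                              # hypervisor host
DST_IMAGE = "/var/lib/one/datastores/0/disk.0"   # destination path on the host

# Compress locally, stream over SSH, decompress remotely.
cmd = f"gzip -c {SRC_IMAGE} | ssh {DST_HOST} 'gunzip -c > {DST_IMAGE}'"
subprocess.run(["bash", "-c", cmd], check=True)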

Related

Is there a way to upload files to Amazon S3 from SFTP?

My idea is this: I have an SFTP host with data on it and I want to create a file in S3 from this data, but to save network resources I don't want to download all of the data to a system first only to upload it again. So my question is: is it possible to transfer the data directly to S3 without downloading it first? (preferably with the Amazon S3 Java SDK)
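The usual approach is to open the remote file as a stream and hand that stream to a multipart upload, so the data never lands on local disk. Here is a hedged Python sketch of the idea using paramiko and boto3; the host, credentials, bucket, and key are placeholders, and the Amazon S3 Java SDK can do the equivalent by passing an InputStream and ObjectMetadata to putObject.

import boto3
import paramiko
from boto3.s3.transfer import TransferConfig

# Placeholders: SFTP endpoint, credentials, remote path, bucket, and key.
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)

s3 = boto3.client("s3")
remote_file = sftp.open("/data/bigfile.bin", "rb")
# upload_fileobj reads the source in chunks and performs a multipart upload,
# so the file never has to fit in memory or on disk; single-threaded so the
# reads from the one SFTP handle stay sequential.
s3.upload_fileobj(remote_file, "my-bucket", "bigfile.bin",
                  Config=TransferConfig(use_threads=False))
remote_file.close()

sftp.close()
transport.close()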

Azure Automation PowerShell - How to download a zip file from a URL and post it to an Azure blob

I am trying to download zip data from a URL (https://data.police.uk/data/archive/2019-10.zip) and place it into a Storage Account (blob).
The PS needs to run within an Azure Automation account.
When using the below, I get an exception:
There is not enough space on the disk
$url = "https://data.police.uk/data/archive/2019-10.zip"
$output = "\policedata.zip"
$wc = New-Object System.Net.WebClient
$wc.DownloadFile($url, $output)   # downloads the whole file to the local sandbox disk
Is there a way to move the zip file to an Azure storage blob without downloading it locally first, or to extract the CSVs out of it?
Azure Automation is not the right solution here, given that the file you are downloading is 1.6 GB. The problem is the limits Azure Automation has:
Maximum amount of disk space allowed per sandbox: 1 GB
Maximum amount of memory given to a sandbox: 400 MB
You can neither download the file before uploading nor hold it in memory and then pipe it to the storage account.
You could try to set up two-way streaming, where one stream downloads the file and another uploads it, but that would be overkill for what you are trying to do. I would suggest using a vanilla container or VM to automate this.
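For reference, on such a container or VM the streaming approach could look like the following Python sketch (the connection string, container, and blob names are placeholders): the zip is piped from the HTTP response directly into a block blob, so it is never fully held in memory or written to disk.

import requests
from azure.storage.blob import BlobClient

url = "https://data.police.uk/data/archive/2019-10.zip"
blob = BlobClient.from_connection_string(
    conn_str="<storage-connection-string>",   # placeholder
    container_name="police-data",             # placeholder
    blob_name="2019-10.zip",
)

with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    # The SDK reads the response stream in chunks and uploads them as blocks.
    blob.upload_blob(resp.raw, overwrite=True)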

Is it possible to configure the Storage Manager (SM) and the archive directory location of the SM on different machines?

I installed the NuoDB 3.0 Linux version on one machine. While creating a database, is it possible to give the archive directory location as Amazon S3 bucket storage?
Here the SM is running on one machine and the archive location points to Amazon S3 buckets (storage area), i.e. the archive directory is on a different machine.
If this is possible, please share information on how to do it.
The archive directory must be a valid location in the file system of the machine it runs on. You'll have to mount S3 on the machine on which the SM is running.
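As an illustration only, one common way to do that is s3fs-fuse. A Python sketch of the mount step follows; the bucket, mount point, and credentials file are placeholders, and whether a FUSE-backed archive directory performs well enough for NuoDB is a separate question.

import subprocess

# Mount the bucket so it appears as a local directory on the SM host.
subprocess.run(
    ["s3fs", "my-archive-bucket", "/mnt/nuodb-archive",
     "-o", "passwd_file=/etc/passwd-s3fs"],
    check=True,
)
# The archive directory given to the SM would then be /mnt/nuodb-archive.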

How to decompress split zip files on AWS S3?

I've got a file (4 GB) which is too big to upload to AWS S3 over an unstable internet connection, so I split the file into several parts using WinZip.
So, file.csv became a series of files:
- file.z01
- file.z02
- ...
- file.z12
After uploading them to AWS S3 I need to unzip the archive. How do I do that?
You won't be able to do it without the help of an EC2 instance.
If you have already uploaded these small zip files, launch a new EC2 instance, download the files from S3 using curl or wget, combine them, and upload the result to S3 again.
Since you are using WinZip, consider launching a Windows-based instance, as it will be tough for you to find a Linux-based equivalent for WinZip.
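A hedged Python/boto3 sketch of the S3 side of that workflow (bucket and key names are placeholders; combining and extracting the split archive is done on the instance with WinZip or an equivalent tool):

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"                                    # placeholder
parts = [f"file.z{i:02d}" for i in range(1, 13)]        # plus a final .zip segment if present

# Download every part of the split archive onto the instance.
for key in parts:
    s3.download_file(bucket, f"uploads/{key}", key)     # placeholder key prefix

# ... combine and extract the parts locally (e.g. with WinZip) ...

# Upload the extracted CSV back to S3.
s3.upload_file("file.csv", bucket, "extracted/file.csv")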

Mapping VM Drive to Corresponding VMDK file

Problem:
Let's say I have a VM with three storage LUNs attached to it through SCSI.
VMFS stores each drive as an individual VMDK file:
C:(one_c.vmdk)
D:(two_d.vmdk)
F:(three_f.vmdk)
On the ESX host these three VMDK files are stored in /container_name/vm_name/.
Is there any way of mapping, given the disk serial number (or disk ID) and the VMDK file location, to figure out which VMDK file each drive maps to?
Note: I have gone through this link:
VMWare VMDK mapping to Windows Drive Letters. But I am not that keen on doing it using scripts.
You can use ReconfigVM_Task for the virtual machine that has those disks attached.
You need to pass a virtualMachineConfigSpec which, as the name implies, specifies the virtual machine's configuration. Under it you will find virtualDeviceConfigSpec.
What you want to do is:
Fetch the existing virtualMachineConfigSpec of your VM
Locate the relevant virtualDeviceConfigSpec for your disk
Edit the backing info of that disk
Change the file name to the correct path (starting from the datastore)
Unset the fileOperation (to avoid creating a new disk there)
Execute ReconfigVM_Task on the VM with the updated spec
Please consult the doc for more specific instructions
https://www.vmware.com/support/developer/converter-sdk/conv50_apireference/vim.VirtualMachine.html#reconfigure
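A hedged pyVmomi sketch of those steps (the vCenter host, credentials, VM name, disk label, and datastore path below are placeholders, not values taken from the question):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
vm = si.content.searchIndex.FindByDnsName(None, "my-vm", True)

spec = vim.vm.ConfigSpec()
for dev in vm.config.hardware.device:
    # Locate the relevant disk; matching by label here is an assumption,
    # you could equally match on the current backing file name.
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label == "Hard disk 2":
        change = vim.vm.device.VirtualDeviceSpec()
        change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
        # Edit the backing file name; fileOperation is left unset so no new
        # disk is created.
        dev.backing.fileName = "[datastore1] vm_name/two_d.vmdk"
        change.device = dev
        spec.deviceChange = [change]

task = vm.ReconfigVM_Task(spec=spec)  # monitor the task until it completes
Disconnect(si)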