Custom Open Virtualization Format (OVF) - virtual-machine

OVF (Open Virtualization Format) is a standard for exporting virtual appliances; it is reliable and helps in many scenarios. I learned about it from the Wikipedia article on Open Virtualization Format. Hypervisors such as VMware's bare-metal hypervisor (ESXi), VirtualBox, and Hyper-V provide tools for converting VMs to the OVF/OVA formats; the documentation for VMware, Hyper-V, and VirtualBox linked below was helpful.
But how can I build a custom OVF if I only have the VHD, VHDX, VDI, or VMDK files of some virtual machine?
Is there any difference between a plain VMDK and the VMDK from an exported OVF?
Is there a programmatic approach with which I can do this easily?
Thanks

A VMware OVF package contains a sparse disk. I did it the simple way with the help of VirtualBox: VirtualBox provides a command-line option for disk conversion, so you can convert your disk to the target format and then create the package. The package consists of an .OVF file and an .MF file along with the disks, all in one folder.
The .MF file contains SHA1 checksums of all the files in the package.
The .OVF file contains the deployment configuration, i.e. controllers, disks, RAM, CPU, etc.
There is no need to study everything: just export some VM in OVF format, use that .OVF as a reference, make the changes you want, and update the checksums in the .MF file.
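The checksum update at the end can be scripted. Below is a minimal sketch in Python; the folder layout and file names are placeholders, and it assumes the VMware-style manifest format, where each line looks like SHA1(file)= hexdigest:

```python
import hashlib
import os

def write_manifest(package_dir, mf_name):
    """Regenerate the .mf manifest with one SHA1 line per package file."""
    lines = []
    for name in sorted(os.listdir(package_dir)):
        if name.endswith(".mf"):
            continue  # the manifest does not checksum itself
        path = os.path.join(package_dir, name)
        sha1 = hashlib.sha1()
        with open(path, "rb") as f:
            # Read in 1 MiB chunks so large disks don't fill memory
            for chunk in iter(lambda: f.read(1 << 20), b""):
                sha1.update(chunk)
        # VMware-style manifest line: SHA1(file)= hexdigest
        lines.append("SHA1(%s)= %s" % (name, sha1.hexdigest()))
    with open(os.path.join(package_dir, mf_name), "w") as f:
        f.write("\n".join(lines) + "\n")
```

Run this after editing the .OVF so the manifest matches the files you actually ship.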

How can I do the custom OVF?
A VMware OVF file is just an XML file. It contains information about resources such as the disk file (.vmdk), the CD/DVD image (.iso), memory, vCPUs, network adapters, virtual disk size, and host configuration parameters.
For reference, you can export an OVF file from any VM that is already created/running on a host.
Is there any difference between a VMDK and the VMDK from an exported OVF?
We export the VMDK from the host, not from the OVF (the .ovf is just a descriptor file). I think an exported VMDK and the original VMDK are the same, because an exported VMDK can also be used to bring up a VM on a host.
Is there a programmatic approach with which I can do this easily?
You can update the OVF file using any programming language, but I prefer Python and an XML library.
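As an illustration of the programmatic route, here is a sketch using Python's stdlib xml.etree.ElementTree to change the memory size in a descriptor. The sample XML is heavily trimmed; real .ovf files carry more namespaces and sections, so treat the structure here as illustrative:

```python
import xml.etree.ElementTree as ET

# The rasd namespace used by real OVF descriptors for hardware items
RASD = ("http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
        "CIM_ResourceAllocationSettingData")

# A trimmed-down, illustrative descriptor (real files have many more sections)
SAMPLE_OVF = """<Envelope xmlns:rasd="{rasd}">
  <VirtualHardwareSection>
    <Item>
      <rasd:ResourceType>4</rasd:ResourceType>
      <rasd:VirtualQuantity>1024</rasd:VirtualQuantity>
    </Item>
  </VirtualHardwareSection>
</Envelope>""".format(rasd=RASD)

def set_memory_mb(ovf_xml, new_mb):
    """Return the descriptor with the memory item's quantity replaced.
    ResourceType 4 means 'Memory' in the OVF/CIM resource tables."""
    root = ET.fromstring(ovf_xml)
    for item in root.iter("Item"):
        rtype = item.find("{%s}ResourceType" % RASD)
        if rtype is not None and rtype.text == "4":
            item.find("{%s}VirtualQuantity" % RASD).text = str(new_mb)
    return ET.tostring(root, encoding="unicode")
```

After changing the descriptor, remember to regenerate the checksums in the .MF file, or the import will fail validation.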
I prefer to use an .OVA instead of an .OVF file.
Basically, an .OVA is a tar archive of the .VMDK, the .OVF, the .MF (a checksum manifest of all files in the .OVA tar, optional), an .iso (optional), etc.
If you use an .OVF file to bring up instances, you need to keep all the referenced files (.VMDK, .iso, etc.) in the same directory. There is a chance of files going missing or ending up in a different directory.
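Packing the .OVA can also be scripted. A sketch with Python's stdlib tarfile, with file names assumed; note that the OVF specification requires the .ovf descriptor to be the first member of the archive:

```python
import os
import tarfile

def pack_ova(ovf_path, other_paths, ova_path):
    """Pack an .ova: the .ovf descriptor must be the first tar member,
    followed by the manifest and the disks."""
    with tarfile.open(ova_path, "w") as tar:  # plain tar, no compression
        for path in [ovf_path] + list(other_paths):
            tar.add(path, arcname=os.path.basename(path))
```

Some importers reject an OVA whose descriptor is not first, which is why the ordering is made explicit here.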

Related

Copying large BigQuery Tables to Google Cloud Storage and subsequent local download

My goal is to save a BigQuery table locally so I can perform some analyses. To save it locally, I tried to export it to Google Cloud Storage as a CSV file. Alas, the dataset is too big to move as one file, so it is split into many different files, looking like this:
exampledata.csv000000000000
exampledata.csv000000000001
...
Is there a way to put them back together again in Google Cloud Storage? Maybe even change the format to CSV?
My approach was to download them and try to merge them manually. Clicking on a file does not work, as it gets saved as a BIN file, and it is also very time consuming. Furthermore, I do not know how to assemble them back together.
I also tried to get them via the gsutil command, and I was able to save them on my machine, but as zipped files. Unzipping with WinRAR gives me exampledata.out files, which I do not know what to do with. Additionally, I am clueless how to put them back together into one file.
How can I get the table to my computer, as one file, and as a CSV?
The computer I am working with runs Ubuntu, but I need to have the data on a Google virtual machine running Windows Server 2012.
Try using the following to merge all the files into one from the Windows command prompt:
copy *.cs* merged.csv
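One caveat with the byte-for-byte copy trick: if each shard repeats the CSV header row (BigQuery exports may include one per file, depending on the print_header setting), the duplicates end up in the merged file. A hedged Python sketch that drops repeated headers while concatenating (the file pattern is an assumption):

```python
import glob

def merge_shards(pattern, out_path):
    """Concatenate sharded CSV exports into one file, keeping the header
    line only once; shards are processed in name order."""
    header = None
    with open(out_path, "w") as out:
        for path in sorted(glob.glob(pattern)):
            with open(path) as shard:
                first = shard.readline()
                if header is None:
                    header = first
                    out.write(first)
                elif first != header:
                    out.write(first)  # not a repeated header; keep it
                for line in shard:
                    out.write(line)
```

If the shards carry no headers at all, this degenerates to a plain concatenation, so it is safe either way.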
I suggest you export with GZIP compression; then you can download the result from Google Cloud easily (it will arrive as a BIN file). Export the split files from BigQuery as follows:
Export Table -> CSV format, compression GZIP, URI: file_name*
Then you can combine them again with the steps below.
On Windows:
Add .zip to the end of each of these file names.
Use 7-Zip to unzip the first .zip file (the one named "...000000000000"); it will automatically detect all the remaining .zip files. This is just like the normal way of unzipping a split .zip archive.
On Ubuntu:
I failed to unzip the files using the methods I could find on the internet. I will update the answer if I figure it out.
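For the Ubuntu case: concatenating GZIP files yields a valid multi-member GZIP stream, so the shards can simply be joined and decompressed in one go (e.g. cat file* > all.gz && gunzip all.gz). The same idea in Python, with the shard pattern assumed:

```python
import glob
import gzip
import shutil

def join_gzip_shards(pattern, out_path):
    """Concatenate the gzip shards into one multi-member stream, then
    decompress it; gzip readers process all members in sequence."""
    joined = out_path + ".gz"
    with open(joined, "wb") as out:
        for path in sorted(glob.glob(pattern)):
            with open(path, "rb") as shard:
                shutil.copyfileobj(shard, out)
    with gzip.open(joined, "rb") as src, open(out_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
```

This works because the GZIP format explicitly allows multiple members back-to-back in one file.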

Which files in buildroot are required to run OpenWRT with QEMU?

On Fedora 20, I have the OpenWRT buildroot, and I have also installed qemu-system-ppc.x86_64 successfully. Which files are required to run OpenWRT with this emulator? Where can I find those files in the buildroot? In this answer they used only the kernel image, but what about the filesystem image?
Thanks for your time.
You need to create a filesystem image. In menuconfig -> Target Images, select tar.gz or ext4, depending on how you want to use QEMU. The generated files will then be present at bin/{ARCH}/*.tar.gz or bin/{ARCH}/*.ext4.

How to access file with unacceptable file name

I don't know if this site is a good place to ask this question... A long time ago, my operating system was Linux. On Linux I made a file named \/:*?"<>|. Then I installed Windows instead of Linux, and now I cannot access or delete this file. I tried to delete it using Unlocker, Process Explorer, the Command Prompt, and many other programs, but I couldn't. I also tried all the Command Prompt commands that can be used for deleting undeletable files, but the file is still there. If I try to rename it, the explorer.exe process crashes. Then I installed Linux again and the file became accessible.
Now I have Windows and another file named \/:*?"<>|. Is it possible to access this file without installing Linux? Is there a way to access the place on the filesystem where this file name is stored and manually change it to an acceptable file name? If yes, can you tell me which program is best for it?
Try using DeleteDoctor. I've used it in situations similar to yours with great success. You can download a copy here:
http://www.download25.com/delete-doctor-download.html

RoboCopy, Virtual Hard Disk, or other?

We are distributing 230K smallish JPG files (873 MB) on DVD. The install program will place these files in an Apache virtual folder.
Setup(.exe) is taking too long for our customers. Our initial approach was to create a ZIP, copy it from the DVD, and unzip it onto the client's hard disk.
I just tried RoboCopy on a Windows 7 (64-bit) 4-core computer, with 16 threads. Pretty poor: over five hours.
Options : *.* /V /S /COPY:DAT /NP /MT:16 /R:5 /W:30
Copied
Dirs : 6
Files : 230236
Bytes : 873.80 m
Times : 5:28:56
The DVD needs to be discarded after use, so the files need to end up on the target machine. We also tried an ISO image. Not bad: it takes about 10 minutes to copy, and there is software for mounting the ISO as a drive letter, which can then be exposed as virtual folders to Apache, but the performance with Apache was not good (we used http://www.magiciso.com/ to mount it). Besides, an ISO is limited in size and read-only.
Now we are considering a Virtual Hard Disk: http://technet.microsoft.com/en-us/magazine/ee872416.aspx
But I have not given up on RoboCopy. Should I be using different switches? Or is a VHD the best way to go?
Target machines are 4+ core Windows Server 2008 servers with 24 GB RAM and 10 TB of disk.
I got the answer from a different thread. Basically, we are creating a Microsoft VHD (Virtual Hard Disk), filling it with the files using RoboCopy, and shipping the VHD.
See: Unzip too slow for transfer of many files

Make Apache virtual directory from the contents of a zip file

I have a couple of compressed ZIP files with static HTML content (e.g. a directory tree of documentation with several static HTML pages that link to each other, images, CSS, etc.). A javadoc ZIP file serves as an equivalent example for my purpose.
My question is whether there is an Apache module that would allow Apache to "mount" a ZIP file as a virtual directory whose contents are those of the ZIP file. The operating system on which I'm hosting Apache is Mac OS X Snow Leopard.
There is a zip filesystem for FUSE, which is supported on OS X via the MacFUSE project. This will let you mount a zip file via the mount command, thus allowing Apache -- or any other application -- to access its contents as a normal directory.
I don't have my Mac handy at the moment so I can't actually test it out.
I'm not aware of any existing Apache modules to do this, but you could implement it without touching Apache internals by adding a CGI script which handles access to ZIP archives:
Action zip-archive /cgi-bin/ziphandler.cgi
AddHandler zip-archive .zip
This will make ziphandler.cgi get called for all accesses to .zip files, or (more importantly!) to files in "directories" under .zip files. From there, it should be pretty straightforward.
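The "straightforward" part can be sketched with Python's stdlib zipfile module. This is only the core lookup, not a full CGI: a real ziphandler.cgi would read PATH_INFO from the environment and print a Content-Type header before the body. Names and layout are illustrative:

```python
import os
import zipfile

def resolve(path_info):
    """Split '/docs.zip/a/index.html' into the archive path and the
    member path inside it."""
    path = path_info.lstrip("/")
    if ".zip/" in path:
        archive, member = path.split(".zip/", 1)
        return archive + ".zip", member
    return path, ""  # the archive itself was requested

def serve(doc_root, path_info):
    """Return the bytes of the requested member, or None if missing."""
    archive, member = resolve(path_info)
    full = os.path.join(doc_root, archive)
    with zipfile.ZipFile(full) as zf:
        try:
            return zf.read(member)
        except KeyError:
            return None
```

A production version would also guard against path traversal in PATH_INFO and pick a Content-Type based on the member's extension.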
Using mod_proxy_http you can forward requests to Jetty, which will serve any ZIP file.
Download Jetty Runner from here: http://mvnrepository.com/artifact/org.eclipse.jetty/jetty-runner
You can run it using e.g. java -jar jetty-runner-9.3.0.M2.jar --port 8082 myZIPFile.zip. Then set up Apache to forward requests to localhost:8082. You can do that for just one subdirectory if you like.
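The Apache side of that forwarding might look like the fragment below, assuming mod_proxy and mod_proxy_http are enabled and Jetty Runner is listening on port 8082 as above (the /docs/ prefix is illustrative):

```apache
# Forward only /docs/ to the Jetty instance serving the ZIP contents
ProxyPass        "/docs/" "http://localhost:8082/"
ProxyPassReverse "/docs/" "http://localhost:8082/"
```

ProxyPassReverse rewrites Location headers in redirects coming back from Jetty so they point at the Apache front end.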