I salvaged the HDD from my old desktop and would like to virtualize it to run under VMware Workstation.
The problem is that the HDD (with several partitions) is 1 TB in size. When I cloned it to an image using dd, the resulting image was also 1 TB, and I will have trouble maintaining a VM of that size.
I know that when creating a new Workstation VM, there is an option to not allocate all the space immediately.
How can I virtualize the HDD and "deflate" the unused parts of the HDD?
I managed to get a "deflated" VMDK file by doing the following.
Mount each partition of the HDD
e.g. mount -t ext4 /dev/sda1 /mnt/tmp
For each partition, fill up the empty space with zeros.
e.g. dd if=/dev/zero of=/mnt/tmp/ZERO.TMP
(dd eventually stops with a "No space left on device" error; that is expected.)
Delete the zeros file.
Unmount the partitions, then clone an image of the entire HDD.
e.g. dd if=/dev/sda of=/tmp/image.img
Make a sparse copy of the image file.
e.g. cp --sparse=always /tmp/image.img /tmp/image_sparse.img
Use qemu-img to make a VMDK file from the sparse image file.
e.g. qemu-img convert -O vmdk image_sparse.img image_sparse.vmdk
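To check that the shrink actually worked, qemu-img can report the allocated size of the result; the "disk size" it prints should now be far smaller than the 1 TB "virtual size".
e.g. qemu-img info image_sparse.vmdk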
I wanted to write some x86 assembly code and compile it to a flat binary file. The program only prints a string on the screen:
mov ax, 0xb800              ; segment of text-mode video memory
mov ds, ax
mov word [0x00], 0x0700 + 'a'   ; low byte = character, high byte = attribute (grey on black)
mov word [0x02], 0x0700 + 's'
mov word [0x04], 0x0700 + 'm'
jmp $                       ; hang here

times 510-($-$$) db 0       ; pad the sector to 510 bytes
dw 0xaa55                   ; boot signature so the BIOS treats the sector as bootable
Now I have the binary file, but I don't know how to write it into a VHD file. (I want to put the code in the first 512 bytes so it runs right after the BIOS starts.)
Can I just open the VHD file and the binary file and copy it byte by byte?
I hope I can get some ideas. If you have working code, even better.
On Linux, you can create the VHD file with VirtualBox first, then run the following command to copy the contents of the MBR sector into the VHD file.
dd if=c05_mbr.bin of=LEARN-ASM.vhd bs=512 count=1 conv=notrunc
With conv=notrunc, dd does not truncate the output file, so the VHD keeps its original size even though the input file is smaller.
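As a minimal end-to-end sketch (c05_mbr.bin and LEARN-ASM.vhd are the names from the command above; the .asm file name is an assumption):
nasm -f bin c05_mbr.asm -o c05_mbr.bin
dd if=c05_mbr.bin of=LEARN-ASM.vhd bs=512 count=1 conv=notrunc
xxd -l 512 LEARN-ASM.vhd | tail -n 1
The last xxd line should end with the 55 aa boot signature. Note that this assumes a fixed-size VHD, which is a raw image plus a 512-byte footer at the end; with a dynamically allocated VHD the first 512 bytes hold VHD metadata rather than the disk's first sector.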
I know how to read the CSV into numpy and do it from a Python script, and that is good enough for my use case.
But since Mayavi has a GUI with data-loading functionality, I was expecting it to just work for such a universal data format.
So I tried to go through the menu (File > Load data > Open file),
but when I select a simple CSV file:
i=0; while [ "$i" -lt 10 ]; do echo "$i,$((2*i)),$((4*i))"; i=$((i+1)); done > main.csv
which contains:
0,0,0
1,2,4
2,4,8
3,6,12
4,8,16
5,10,20
6,12,24
7,14,28
8,16,32
9,18,36
an error popup appears in the GUI:
No suitable reader found for file /home/ciro/main.csv
Google led me to this interesting file in the source tree: https://github.com/enthought/mayavi/blob/e2569be1096be3deecb15f8fa8581a3ae3fb77d3/mayavi/tools/data_wizards/csv_loader.py but that just looks like an example of how to do it from a script.
Tested in Mayavi 4.6.2.
From the documentation
One needs to have some data or the other loaded before a Module or Filter may be used. Mayavi supports several data file formats most notably VTK data file formats. Alternatively, mlab can be used to load data from numpy arrays. For advanced information on data structures, refer to the Data representation in Mayavi section.
I've tested importing via the GUI on an Asus laptop with an Intel Core i7-4510U CPU @ 2.00 GHz and 8 GB of RAM, running Windows 10, both inside and outside a Python virtualenv, and always got the same error.
It all points to CSV files not being directly supported, so I had to find another workaround.
My preferred workaround was to create a virtual environment and install mayavi, jupyterlab, PyQt5 and pandas into it.
Then, from PowerShell, start a Jupyter notebook (jupyter notebook) > Upload > select the .csv. This imported a 1.25 GB .csv (153543233 rows x 3 columns) in around 20 s, after which the data was available for use.
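A rough sketch of that setup from PowerShell (the environment name mayavi-env is just an example):
python -m venv mayavi-env
.\mayavi-env\Scripts\Activate.ps1
pip install mayavi jupyterlab PyQt5 pandas
jupyter notebook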
I am running a regular rsync between a local folder and a remote one via ssh. I got confused when I saw that the remote (and target) folder had a different, smaller size. I first suspected excluded files, but that wasn't the case. Instead, I discovered the following.
The sizes in a local folder (a subfolder of the one I am syncing) look like this:
112K .
48K ./workspace.xml
12K ./vcs.xml
12K ./preferred-vcs.xml
12K ./pm-client.iml
12K ./modules.xml
12K ./misc.xml
the remote ones, however, like this:
64K .
40K ./workspace.xml
4,0K ./vcs.xml
4,0K ./preferred-vcs.xml
4,0K ./pm-client.iml
4,0K ./modules.xml
4,0K ./misc.xml
When I check the file contents, however, they look just the same. I see this a lot in the target folder, which ultimately leads to big differences in folder sizes.
The rsync I am running looks like this:
rsync -aPEh -e ssh --delete --delete-excluded --stats --exclude-from=<some-ignorelist> /source/folder/ /target/backup/folder
What can be the reason for this?
The sizes that du and ls report are different: du reports the amount of space actually allocated on the filesystem, while ls reports the logical file size.
There are several questions on various StackExchange sites about this.
Why does du report different sizes on your two machines? Because they are either using different filesystems or they are configured differently. It all boils down to the block sizes used on the filesystem, which is what du reports.
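To see the difference on one of the files from the listings above (flags are GNU coreutils):
ls -l workspace.xml
du -h workspace.xml
du -h --apparent-size workspace.xml
stat -f .
ls -l and du --apparent-size show the logical size, plain du shows the space allocated on disk, and stat -f prints the filesystem's block size.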
I have an XYZ raster file, 1.1GB in EPSG:23700 (EOV), 50 meters resolution.
The aim is to create a GeoTIFF file to be published via GeoServer (EPSG:4326), but I have some performance issues.
If I open the XYZ file from QGIS (2.14.0, Essen), choose Raster » Conversion » Translate and start it with the default options, it completes in several minutes, which is acceptable.
But if I copy the generated gdal_translate command and run it from the CLI, it takes more than an hour.
I've tried -co "GDAL_CACHEMAX=500" and -co "NUM_THREADS=3", but they had no effect. In the process monitor, the QGIS version uses 1 core fully (25% CPU) and the default maximum memory of 10 MB, but the CLI version uses <10% CPU and <10 MB of memory. The --debug ON option shows "XYZ: New stepX=50.000000000000000" and hangs there.
I've tried to run it from the QGIS Directory \bin folder and the separately downloaded GDAL instance (C:\OSGeo4W64\bin), same results.
Windows Server 2012, 16 GB RAM, 2.6 GHz 4-core Xeon CPU.
Any thoughts on this?
It looks like there was some problem with the environment variables: if I use a modified version of the QGIS startup batch file, everything works as expected.
@echo off
call "%~dp0\o4w_env.bat"
@echo off
path %OSGEO4W_ROOT%\apps\qgis\bin;%PATH%
set QGIS_PREFIX_PATH=%OSGEO4W_ROOT:\=/%/apps/qgis
set GDAL_FILENAME_IS_UTF8=YES
set GDAL_CACHEMAX=500
rem Set VSI cache to be used as buffer, see #6448
set VSI_CACHE=TRUE
set VSI_CACHE_SIZE=1000000
set QT_PLUGIN_PATH=%OSGEO4W_ROOT%\apps\qgis\qtplugins;%OSGEO4W_ROOT%\apps\qt4\plugins
REM This line changed to run my batch file instead of starting QGIS.
call "d:\gdaltest.bat"
I have around 10k images, and I need to get the hex colour from each one. I can obviously do this manually with PS or other tools, but I'm looking for a solution that would ideally:
Run against a folder full of JPG images.
Extract the hex colour from the dead centre of the image.
Output the result to a text file, ideally a CSV, containing the file name and the resulting hex code on each row.
Can anyone suggest something that will save my sanity please? Cheers!
I would suggest ImageMagick, which is installed on most Linux distros and is available for macOS (via Homebrew) and Windows.
So, just at the command-line, in a directory full of JPG images, you could run this:
convert *.jpg -gravity center -crop 1x1+0+0 -format "%f,%[fx:int(mean.r*255)],%[fx:int(mean.g*255)],%[fx:int(mean.b*255)]\n" info:
Sample Output
a.png,127,0,128
b.jpg,127,0,129
b.png,255,0,0
Notes:
If you have more files in a directory than your shell can glob, you may be better off letting ImageMagick do the globbing internally rather than using the shell, with:
convert '*.jpg' ...
If your files are large, you may be better off processing them one at a time in a loop rather than loading them all into memory:
for f in *.jpg; do convert "$f" ....... ; done
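Putting that together with the CSV requirement in the question, a minimal sketch (the output name colours.csv is just an example):
for f in *.jpg; do
  convert "$f" -gravity center -crop 1x1+0+0 -format "%f,%[fx:int(mean.r*255)],%[fx:int(mean.g*255)],%[fx:int(mean.b*255)]\n" info:
done > colours.csv
Each row then holds the file name and the decimal R, G and B values of the centre pixel, which can be converted to a hex code afterwards if needed.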