How can we write a binary file to a VHD file on Mac? - virtual-machine

I want to write x86 assembly code, then compile it to a binary file. The program only prints a string on the screen.
mov ax, 0xb800                ; segment of colour text-mode video memory
mov ds, ax
mov word [0x00], 0x0700 + 'a' ; low byte = character, high byte = attribute (grey on black)
mov word [0x02], 0x0700 + 's'
mov word [0x04], 0x0700 + 'm'
jmp $                         ; hang forever
times 510 - ($ - $$) db 0     ; pad the sector to 510 bytes (NASM)
dw 0xaa55                     ; boot signature the BIOS checks for
Now I have the binary file, but I don't know how to write it into the VHD file. (I want to put the code in the first 512 bytes so it will run after the BIOS starts.)
Can I just open the VHD file and the binary file and then copy byte by byte?
I hope I can get some ideas. If you have code, that would be even better.

On Linux, or on macOS which also ships dd, you can create the VHD file with VirtualBox first, then execute the following command to copy the content of the MBR sector into the VHD file. This assumes a fixed-size VHD, whose raw disk data starts at byte 0 of the file.
dd if=c05_mbr.bin of=LEARN-ASM.vhd bs=512 count=1 conv=notrunc
With the notrunc option, dd does not truncate the output file, so its size does not change even though it is larger than the input file.
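To the "copy byte by byte" part of the question: yes, that works too, and it avoids dd entirely on a Mac. A minimal Python sketch under the same fixed-size-VHD assumption, reusing the file names from the dd example above:

# copy a boot-sector image into the first 512 bytes of a fixed-size VHD
with open("c05_mbr.bin", "rb") as src:
    sector = src.read(512)                    # the boot code, at most one sector
with open("LEARN-ASM.vhd", "r+b") as vhd:     # r+b updates in place without truncating
    vhd.seek(0)                               # start of the virtual disk
    vhd.write(sector)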

Related

OpenVMS: Extracting RMS indexed file to Windows as a sequential flat file

I haven't used OpenVMS for 20+ years. It was my first OS. I've been asked whether it is possible to copy the data from RMS files on an OpenVMS server to Windows as a text file, so that it's readable.
No one has experience or knowledge of the record structures etc.
The files are xyz.DAT and are relative files. I'm hoping the .DAT files are fixed length.
My first attempt was to try Datatrieve (DTR), but I get an error that the image isn't loaded.
I thought it might be as easy as using CONVERT /FDL=nnnn.FDL and changing Relative to Sequential, but the file still seems to be unreadable.
Is there an easy way to stream an RMS indexed file to a flat ASCII file?
I used to use COBOL and C to access the data in the past, but had lots of libraries to help...
I've noticed some solutions may use ODBC to connect, but I'm not sure what I can or cannot install on the server.
I can FTP to the server using FileZilla...
Another plan is writing a C application to read a file and output it as strings... or DCL too... it doesn't have to be quick...
Any ideas?
As mentioned before:
The simple solution MIGHT be to just use: $ TYPE/OUT=test.TXT test.DAT.
This will handle Relative and Indexed files alike.
It is much the same as $ CONVERT /FDL=NL: test.DAT test.TXT
Both will just read records from the source and transfer the bytes, byte for byte, to the records in a sequential file.
FTP in ASCII mode will transfer that nicely to Windows.
You can also use an 'inline' FDL file to generate a 'unix' LF file like:
$ conv /fdl="record; format stream_lf" test.DAT test.TXT
Or CR-LF file using:
$ conv /fdl="record; format stream" test.DAT test.TXT
Both can be transferred in binary or ASCII mode with FTP.
MOSTLY - because this really only works well for a TEXT-ONLY source .DAT file.
There should be no CR, LF, FF or NUL characters in the source or things will break.
As 'habo' points out, use DUMP /RECORD=COUNT=3 to see how 'readable' the source data is.
If you spot 'binary' data using DUMP then you will need to find a record definition somewhere which maps bytes to integers or floating points or dates as needed.
These definitions can be COBOL LIB files or BASIC MAPs, and are often stored IN the CDD (Common Data Dictionary) or indeed in DATATRIEVE .DIC DICTIONARIES.
To use such a definition you likely need a program that just reads, follows the 'map', and writes/prints as text. Normally that's not too hard - notably not when you can find an example program on the server to tweak.
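For illustration, a Python sketch of such a program, run after transferring the .DAT file in binary mode; the record layout used here (a 32-bit integer, a 64-bit float and 20 ASCII characters) is purely hypothetical and would have to be replaced by the layout from your COBOL LIB / BASIC MAP:

import struct

LAYOUT = "<id20s"                 # hypothetical map: int32, float64, 20-byte text field
RECLEN = struct.calcsize(LAYOUT)  # 32 bytes for this example

with open("test.DAT", "rb") as src, open("test.TXT", "w") as out:
    while True:
        rec = src.read(RECLEN)
        if len(rec) < RECLEN:     # end of file (or a short trailing record)
            break
        num, val, text = struct.unpack(LAYOUT, rec)
        out.write("%d,%g,%s\n" % (num, val, text.decode("ascii", "replace").rstrip()))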
If it is just one or two 'suspect' byte ranges, then you can create a DCL loop to read and write and use F$EXTRACT to select the chunks you like.
If you want further help, kindly describe in words what kind of data is expected and perhaps provide the output from DUMP for 3 or 5 rows.
Good luck!
Hein.

trace32 - Memory dump of multiple address ranges to a single binary file

I'm using the Lauterbach debugger to dump from different memory sections to binary files. So far I've managed to generate a binary file for each address range using
data.save.binary output1.txt var.Range(sDummyArray[startRange1..endRange1])
data.save.binary output2.txt var.Range(sDummyArray[startRange2..endRange2])
...
Is there a way for me to "stitch" multiple binary(memory dump) files together to give one binary file OR append each memory dump to a file using a trace32 command that I have missed?
To save multiple address ranges from target memory to the same binary file, use the command Data.SAVE.Binary with its option /Append, which appends the new data to the end of the given file.
E.g.:
Data.SAVE.Binary output1.txt Var.RANGE(sDummyArray[startRange1..endRange1])
Data.SAVE.Binary output1.txt Var.RANGE(sDummyArray[startRange2..endRange2]) /Append
For TRACE32 builds older than build 63378 you can use the debugger's virtual memory (if it is not used for other things) like this:
PRIVATE &size1 &size2
&size1=Var.VALUE((sDummyArray+endRange1)-(sDummyArray+startRange1))
&size2=Var.VALUE((sDummyArray+endRange2)-(sDummyArray+startRange2))
Data.COPY Var.RANGE(sDummyArray[startRange1..endRange1]) VM:0
Data.COPY Var.RANGE(sDummyArray[startRange2..endRange2]) VM:&size1
Data.SAVE.Binary output1.txt VM:0++(&size1+&size2-1)
So the idea is here to collect all the data via Data.COPY in the virtual memory and save it from there to a binary file.
Data.SAVE.Binary doesn't have a /Append option in TRACE32 versions released before Sep 2015.
I was able to append my output files using
OS.Command copy /b output1.txt + output2.txt output.txt
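If you want something that behaves the same on Windows and Linux hosts, the stitching can also be done with a small Python script; the file names below are the ones from the commands above:

# stitch several binary memory dumps into one file
parts = ["output1.txt", "output2.txt"]         # dumps written by Data.SAVE.Binary
with open("output.txt", "wb") as out:
    for name in parts:
        with open(name, "rb") as part:
            out.write(part.read())             # append each dump verbatim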

Batch extract Hex colour from images to file

I have around 10k images that I need to get the Hex colour from for each one. I can obviously do this manually with PS or other tools but I'm looking for a solution that would ideally:
Run against a folder full of JPG images.
Extract the Hex from dead center of the image.
Output the result to a text file, ideally a CSV, containing the file name and the resulting Hex code on each row.
Can anyone suggest something that will save my sanity please? Cheers!
I would suggest ImageMagick which is installed on most Linux distros and is available for OSX (via homebrew) and Windows.
So, just at the command-line, in a directory full of JPG images, you could run this:
convert *.jpg -gravity center -crop 1x1+0+0 -format "%f,%[fx:int(mean.r*255)],%[fx:int(mean.g*255)],%[fx:int(mean.b*255)]\n" info:
Sample Output
a.png,127,0,128
b.jpg,127,0,129
b.png,255,0,0
Notes:
If you have more files in a directory than your shell can glob, you may be better off letting ImageMagick do the globbing internally, rather than using the shell, with:
convert '*.jpg' ...
If your files are large, you may be better off doing them one at a time in a loop rather than loading them all into memory:
for f in *.jpg; do convert "$f" ....... ; done
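If you would rather script it in Python instead, here is a sketch using the Pillow library that samples the centre pixel of every JPG in the current folder and writes filename,hex rows to a CSV; the output file name colours.csv is just a placeholder:

import csv
import glob
from PIL import Image                          # pip install Pillow

with open("colours.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for name in sorted(glob.glob("*.jpg")):
        img = Image.open(name).convert("RGB")  # force plain RGB pixels
        w, h = img.size
        r, g, b = img.getpixel((w // 2, h // 2))
        writer.writerow([name, "#%02X%02X%02X" % (r, g, b)])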

read video file from mongodb with pymongo

I have a large video file stored in MongoDB GridFS.
I would like to read it and write it on my disk.
I can find the file in the database with:
file = grid_fs.find_one({"filename":'file_in_database.cin'})
I get back a GridOut object: gridfs.grid_file.GridOut at 0xa7b7be0
I try to write the file on my disk with:
with open('file_from_database.cin', 'w') as f:
    f.write(file.read())
I get the file written, but the size of the one downloaded from the database is slightly different from the original size of the file:
05/15/2015 09:09 AM 65,585,808 file_from_database.cin
08/01/2007 01:08 PM 65,585,800 Original_file.cin
I checked the file in the database and the md5 field is the same as the original so the problem must be during the download or writing.
I'm using Win7 64-bit and the Anaconda 64-bit distribution for Python 2.7.
Any help would be appreciated.
Update
I tried the same code with a JPEG image and I get the same problem: the image is stored correctly in the database, but when I get it and write it to disk the size is slightly different and I cannot open it.
03/20/2015 02:36 PM 5,422,339 original_image.JPG
05/15/2015 02:44 PM 5,438,750 image_from_database.JPG
Am I doing some simple mistake reading the gridout and writing to the disk?
Interestingly, if I open the image with:
PIL.Image.open(file)
I can view the image fine. Any idea?
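One thing worth checking, since the downloaded files grow by a few bytes on Windows: opening the output file in text mode ('w') makes Windows translate every \n byte in the binary data into \r\n. A sketch of the same code with the file opened in binary mode, which avoids that translation (the database name here is a placeholder, the file names are the ones from the question):

from pymongo import MongoClient
from gridfs import GridFS

db = MongoClient().my_database                    # placeholder database name
grid_fs = GridFS(db)

gridout = grid_fs.find_one({"filename": "file_in_database.cin"})
with open("file_from_database.cin", "wb") as f:   # "wb" disables newline translation
    f.write(gridout.read())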

Is it possible to direct your output to a file with the same name as the input file, that is, to overwrite it?

I would like to overwrite the input file with the output file of the same name, owing to the limited disk space I have on my system. Is it possible? I know this is not recommended, but I already have the input files backed up. I will use a shell loop to run the cut command.
#!/bin/bash
for i in {1..1000}
do
cut --delimiter=' ' --fields=1,3-7 input$i.txt > input$i.txt
done
You could always use a temporary file to which you redirect, and then when you're sure everything went fine, you rename it to the original file.
Some GNU utilities (such as sed) have a -i option that allows you to change a file in place... most file filtering and editing (like cut) can be done using sed.
The shell will parse the command and handle the redirections first. When it sees "> afile" it will truncate "afile" and open it for writing. Your data is now destroyed. Then the shell hands the filename to cut which now has nothing to read.
This is how I learned:
some | pipeline < my_file > my_file.tmp
ln my_file my_file.bak # this is a hard link
mv my_file.tmp my_file
That keeps the original data in place for as long as possible.
If you're having disk space issues, you will have to read the input file into memory entirely.
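As a concrete version of that approach, here is a Python sketch that mimics the cut command from the question (space-delimited, keep fields 1 and 3-7) but buffers each whole file in memory before writing back over it:

# keep fields 1 and 3-7 of each space-delimited line, rewriting each file in place
for i in range(1, 1001):                       # input1.txt .. input1000.txt
    name = "input%d.txt" % i
    with open(name) as src:
        lines = src.readlines()                # whole file buffered in memory
    with open(name, "w") as dst:
        for line in lines:
            fields = line.rstrip("\n").split(" ")
            dst.write(" ".join(fields[0:1] + fields[2:7]) + "\n")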
In case of very limited disk space (disk quota) you could try to place a compressed source file in RAM (/dev/shm) and use that as the source (uncompressing it to stdout and piping that to your script).