I want to save files (PDF, JPEG, TXT, GIF, etc.) to disk from a database, where they are stored as binary data. How can I do that? Is it possible using a MemoryStream?
bcp "your_query for selecting the row" queryout "c:\TestOut.doc" -T -n
This link should help you:
http://www.sqlservercentral.com/Forums/Topic487470-338-1.aspx
When you read it from the database, it comes in as an object that holds an array of bytes.
Cast it to an array of bytes and create a stream with it.
In C#
byte[] content = (byte[])data; // "data" is the database field, e.g. reader["MyPicture"]
return new MemoryStream(content);
After that, a FileStream will do the saving for you.
Don't forget to seek back to the beginning before you save.
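For instance, a minimal sketch of the whole round trip (the reader, column name, and output path are illustrative, not from the question):

byte[] content = (byte[])reader["MyPicture"];   // cast the column value to a byte array
using (var ms = new MemoryStream(content))
using (var fs = new FileStream(@"C:\temp\picture.jpg", FileMode.Create, FileAccess.Write))
{
    ms.Seek(0, SeekOrigin.Begin);  // rewind before copying
    ms.CopyTo(fs);                 // stream the bytes out to disk
}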
I haven't used OpenVMS for 20+ years; it was my first OS. I've been asked whether it is possible to copy the data from RMS files on an OpenVMS server to Windows as a text file, so that it's readable.
No one here has experience or knowledge of the record structures, etc.
The files are xyz.DAT and are relative files. I'm hoping the .DAT files are fixed length.
My first attempt would be to try DATATRIEVE (DTR), but I get an error that the image isn't loaded.
I thought it might be as easy as using CONVERT/FDL=nnnn.FDL and changing the organization from Relative to Sequential, but the file still seems to be unreadable.
Is there an easy way to stream an RMS indexed file to a flat ASCII file?
I used to use COBOL and C to access the data in the past, but had lots of libraries to help....
I've noticed some solutions may use ODBC to connect, but I'm not sure what I can or cannot install on the server.
I can FTP to the server using FileZilla....
Another plan is writing a C application to read a file and output it as strings... or DCL... it doesn't have to be quick...
Any ideas?
As mentioned before,
The simple solution MIGHT be to just use: $ TYPE/OUT=test.TXT test.DAT
This will handle Relative and Indexed files alike.
It is much the same as $ CONVERT/FDL=NL: test.DAT test.TXT
Both will just read records from the source and transfer the bytes, byte for byte, to the records in a sequential file.
FTP in ASCII mode will transfer that nicely to windows.
You can also use an 'inline' FDL file to generate a 'unix' LF file like:
$ conv /fdl="record; format stream_lf" test.DAT test.TXT
Or CR-LF file using:
$ conv /fdl="record; format stream" test.DAT test.TXT
Both can be transferred in Binary or ASCII mode with FTP.
MOSTLY, that is, because this really only works well for a text-only source .DAT file.
There should be no CR, LF, FF or NUL characters in the source or things will break.
As 'habo' points out, use DUMP /RECORD=COUNT=3 to see how 'readable' the source data is.
If you spot 'binary' data using DUMP then you will need to find a record definition somewhere which maps bytes to Integers or Floating points or Dates as needed.
These definitions can be COBOL LIB files, or BASIC MAPs, and are often stored IN the CDD (Common Data Dictionary) or indeed in DATATRIEVE .DIC dictionaries.
To use such a definition you likely need a program to just read, follow the 'map', and write/print as text. Normally that's not too hard, notably not when you can find an example program on the server to tweak.
If it is just one or two 'suspect' byte ranges, then you can create a DCL loop to read and write, and use F$EXTRACT to select the chunks you like, as sketched below.
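For example, a minimal DCL sketch (file names and the extracted byte range are illustrative), keeping only the first 80 bytes of each record:

$ OPEN/READ in TEST.DAT
$ OPEN/WRITE out TEST.TXT
$ loop:
$   READ/END_OF_FILE=done in rec
$   text = F$EXTRACT(0, 80, rec)    ! keep only the wanted chunk
$   WRITE out text
$   GOTO loop
$ done:
$ CLOSE in
$ CLOSE out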
If you want further help, kindly describe in words what kind of data is expected and perhaps provide the output from DUMP for 3 or 5 rows.
Good luck!
Hein.
I have a large video file stored in MongoDB gridFS.
I would like to read it and write it on my disk.
I can find the file in the database with:
file = grid_fs.find_one({"filename":'file_in_database.cin'})
I get back a GridOut object: gridfs.grid_file.GridOut at 0xa7b7be0
I try to write the file on my disk with:
with open('file_from_database.cin', 'w') as f:
    f.write(file.read())
I get the file written, but the size of the one downloaded from the database is slightly different from the original size of the file:
05/15/2015 09:09 AM 65,585,808 file_from_database.cin
08/01/2007 01:08 PM 65,585,800 Original_file.cin
I checked the file in the database and the md5 field is the same as the original, so the problem must occur during the download or the writing.
I'm using Win7 64-bit and the Anaconda 64-bit distribution for Python 2.7.
Any help would be appreciated.
Update
I tried the same code with a JPEG image and I get the same problem: the image is stored fine in the database, but when I fetch it and write it to disk the size is slightly different and I cannot read it.
03/20/2015 02:36 PM 5,422,339 original_image.JPG
05/15/2015 02:44 PM 5,438,750 image_from_database.JPG
Am I doing some simple mistake reading the GridOut and writing to the disk?
Interestingly, if I open the image with:
PIL.Image.open(file)
I can get the image fine. Any idea?
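For what it's worth, these symptoms (identical md5 in the database, a slightly larger file on disk, PIL reading the stream fine) match Windows text-mode newline translation: opening the output with 'w' converts each '\n' byte to '\r\n'. A minimal sketch of the binary-mode variant, reusing the file name from the question:

with open('file_from_database.cin', 'wb') as f:  # 'b' disables newline translation on Windows
    f.write(file.read())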
Say I have a 5 GB file. I want to split it in the following way.
The first 100 MB go to one file.
The rest goes to some reserve file.
I do not want to use a ReadAllLines kind of function because it's too slow for large files.
I do not want to read the whole file to the memory. I want the program to handle only a medium chunk of data at a time.
You may use the BinaryReader class and its ReadBytes method to read the file in chunks, as in the fragment below:
Dim chunk() As Byte
chunk = br.ReadBytes(1024)
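A fuller sketch of the split itself (file names and chunk size are illustrative, not from the question):

' Copy the first 100 MB of source.bin to head.bin and the remainder to rest.bin,
' handling one 1 MB chunk at a time so the whole file never sits in memory.
Imports System.IO

Module Splitter
    Sub Main()
        Const chunkSize As Integer = 1024 * 1024       ' 1 MB per read
        Const headLimit As Long = 100L * 1024 * 1024   ' first 100 MB
        Using br As New BinaryReader(File.OpenRead("source.bin")),
              head As New BinaryWriter(File.Create("head.bin")),
              rest As New BinaryWriter(File.Create("rest.bin"))
            Dim written As Long = 0
            Do
                Dim chunk() As Byte = br.ReadBytes(chunkSize)
                If chunk.Length = 0 Then Exit Do       ' end of file
                If written < headLimit Then
                    ' part (or all) of this chunk still belongs to the first file
                    Dim take As Integer = CInt(Math.Min(CLng(chunk.Length), headLimit - written))
                    head.Write(chunk, 0, take)
                    If take < chunk.Length Then rest.Write(chunk, take, chunk.Length - take)
                Else
                    rest.Write(chunk, 0, chunk.Length)
                End If
                written += chunk.Length
            Loop
        End Using
    End Sub
End Module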
Using a hex editor on a mounted NTFS volume, I've found an offset within the volume containing data I'm interested in. How can I figure out the full path/name of the file containing this volume offset?
Perhaps there are still some people searching for a solution. There are tools for this problem: the SleuthKit tools.
Given a byte offset from the beginning of the partition, you have to divide it by the block size of your NTFS partition (usually 4096) to get the block number.
ifind -d block_offset /dev/... => inode_number
ffind /dev/... inode_number => location of the file
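A hypothetical run (the device name and numbers are illustrative): with a byte offset of 123,456,789 and a 4096-byte block size, the block number is 123456789 / 4096 = 30140, so:

ifind -d 30140 /dev/sda1     (prints the inode number, say 1234)
ffind /dev/sda1 1234         (prints the path of the file owning that block)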
You need to read the MFT and parse the Data attributes for each file to find the one that includes the particular offset.
Note that you might need to look at every file's streams, not only the default one, so you have to parse all the Data attributes.
Unfortunately, I couldn't find a quick link to the binary structure of the NTFS Data attribute; you're on your own for this one.
I have a Tcl/Tk application that has an SQLite back-end. I pretty much understand the syntax for inserting, manipulating, and reading string data; however, I do not understand how to store pictures or files in SQLite with Tcl.
I do know I have to create a column that holds BLOB data in Sqlite. I just don't know what to do on the Tcl side of things. If anyone knows how to do this or has a good reference to suggest for me, I would really appreciate it.
Thank you,
Damion
In my code, I basically open the file as binary, load its contents into a Tcl variable, and stuff that into the SQLite db. So, something like this...
# load the file's contents
set fileID [open $file RDONLY]
fconfigure $fileID -translation binary
set content [read $fileID]
close $fileID
# store the data in a blob field of the db
$db eval {INSERT OR REPLACE INTO files (content) VALUES ($content)}
Obviously, you'll want to season to taste, and your table will probably contain additional columns...
The incrblob command looks like what you want: http://sqlite.org/tclsqlite.html#incrblob
The "incrblob" method
This method opens a TCL channel that
can be used to read or write into a
preexisting BLOB in the database. The
syntax is like this:
dbcmd incrblob ?-readonly?? ?DB? TABLE COLUMN ROWID
The command returns a new TCL channel
for reading or writing to the BLOB.
The channel is opened using the
underlying sqlite3_blob_open()
C-langauge interface. Close the
channel using the close command of
TCL.
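A minimal usage sketch (the table/column names and file name are illustrative; the row must already hold a blob of the right size, e.g. one created with zeroblob()):

# reserve space for the blob, then note the new row's id
set size [file size photo.jpg]
$db eval {INSERT INTO files (content) VALUES (zeroblob($size))}
set rowid [$db last_insert_rowid]

# open a channel onto the blob and copy the file into it
set blob [$db incrblob files content $rowid]
fconfigure $blob -translation binary
set in [open photo.jpg RDONLY]
fconfigure $in -translation binary
fcopy $in $blob
close $in
close $blob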