How to determine the content type of binary data in the image field of SQL Server 2008? - sql

I need to determine the file type (i.e., MIME type) of data stored in SQL Server 2008.
Is there any way, ideally using a SQL query, to identify the content type or MIME type of the binary data stored in an image column?

I think that, if you need that information, it would probably be better to store it in a separate column. Once it's in the DB, your only options really are guessing it from the file name (if you happen to store that) or detecting the signature from the first few bytes of data.
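If you do go the signature route, here is a minimal T-SQL sketch, assuming a hypothetical table dbo.Documents with an image column named Data. SUBSTRING works on image columns and returns varbinary, so the leading bytes can be compared against a few well-known magic numbers:
SELECT Id,
       CASE
           WHEN SUBSTRING(Data, 1, 3) = 0xFFD8FF   THEN 'image/jpeg'       -- JPEG
           WHEN SUBSTRING(Data, 1, 4) = 0x89504E47 THEN 'image/png'        -- PNG
           WHEN SUBSTRING(Data, 1, 4) = 0x47494638 THEN 'image/gif'        -- GIF87a/GIF89a
           WHEN SUBSTRING(Data, 1, 4) = 0x25504446 THEN 'application/pdf'  -- %PDF
           WHEN SUBSTRING(Data, 1, 2) = 0x424D     THEN 'image/bmp'        -- BM
           ELSE 'application/octet-stream'  -- unknown; a real detector needs many more signatures
       END AS GuessedMimeType
FROM dbo.Documents;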

There is no direct way in SQL Server to do that - there's no metadata on binary columns stored inside SQL Server, unless you've done it yourself.
For SQL Server, a blob is a blob is a blob - it's just a bunch of bytes, and SQL Server knows nothing about it, really. You need to have that information available from other sources, e.g. by storing a file name, file extension, mime type or something else in a separate column.
Marc

Related

How can I read a very long BLOB column in Oracle?

I want to connect a Node Express API to an Oracle 11g database that has a table with a BLOB column. I want to read it using a SQL query, but the problem is that the BLOB column can hold very long text, more than 100k characters. How can I do this?
I tried using: select utl_raw.cast_to_varchar2(dbms_lob.substr(COLUMN_NAME)) from TABLE_NAME.
But it returns 'raw variable length too long'.
I could run multiple queries in a loop and then join the results if necessary, but I haven't found how to bring back just a part of the BLOB.
Use the node-oracledb module to access Oracle Database (which you may already be doing, though you don't mention it).
By default, node-oracledb will return LOBs as Lob instances that you can stream from. Alternatively you can fetch the data directly as a String or Buffer, which is useful for 'small' LOBs. For 100K, I would just get the data as a Buffer, which you can do by setting:
oracledb.fetchAsBuffer = [ oracledb.BLOB ];
Review the Working with CLOB, NCLOB and BLOB Data documentation, and examples like blobhttp.js and the other lob*.js files in the examples directory.
You may also want to look at https://jsao.io/2018/03/creating-a-rest-api-with-node-js-and-oracle-database/ which shows Express and node-oracledb.
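And if you do want to fetch just part of the BLOB in plain SQL: dbms_lob.substr takes an amount and an offset, and in a SQL context the RAW it returns is capped at 2000 bytes, which is why the call in the question errors out. A sketch using the question's own placeholder names:
-- Each call reads one 2000-byte slice; increment the offset by the amount
-- to walk through the LOB chunk by chunk.
select utl_raw.cast_to_varchar2(dbms_lob.substr(COLUMN_NAME, 2000, 1))    as chunk_1,
       utl_raw.cast_to_varchar2(dbms_lob.substr(COLUMN_NAME, 2000, 2001)) as chunk_2
from TABLE_NAME;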

Padding SSIS input source columns to avoid truncation errors?

First post. In SSIS I am using an ODBC source, and the database (or ODBC driver) doesn't appear to report column metadata correctly for any of the tables in the database for varchar-type columns. Therefore, each time I import a table, I get truncation errors on all the varchar fields. Is there any way to set the size of these fields besides doing it ONE AT A TIME in the advanced editor? When importing a flat file source, it lets you select a padding % for string fields. Does something like this exist for OLE DB or ODBC sources? If not, is there any way I can override the column length to, say, force them all to be VARCHAR(1000)?
I have never experienced SQL Server providing the wrong metadata for an ODBC connection, and it is unlikely you have a ghost in the machine (deus ex machina). The metadata of the column can be set in the ODBC source via the advanced editor. I am willing to bet that is where the difference is. To confirm this:
Right-click the ODBC source and select the Advanced Editor
Click on the Input/Output Properties tab
Expand ODBC Source Output
Expand both External Columns and Output Columns
Inspect each column pair and verify that the metadata matches
Correct any mismatches in the metadata
Let me know if that works. If it does not work, please provide the data and the SQL query you are using.
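One more workaround worth trying, assuming you can switch the source from a table selection to a SQL command: CAST each column to the width you want in the source query itself, so the metadata the source reports matches your intent (column and table names here are hypothetical):
-- Hypothetical names; the explicit casts drive the reported column lengths.
SELECT CAST(first_name AS VARCHAR(1000)) AS first_name,
       CAST(last_name  AS VARCHAR(1000)) AS last_name
FROM dbo.SourceTable;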
The VARCHAR field width must be set to the maximum incoming field width. I know the default field width is 50. Regardless, each field must be set. I previously worked on a project with a large number of columns in the input files. My solution was to store the metadata for the columns in a database table; I then built a C# application to read in the metadata, modify the *.dtsx file, and set the metadata on all columns. This is the best solution I am aware of to automate the task.
Unfortunately, I don't have much experience with pulling data through ODBC. Are you pulling from an Access database? Or, what are you pulling from?

How to store a large file in SQL Server 2012

I want to store large files in SQL Server 2012. It has been suggested that I use a BLOB. All I want to do is create a table that maps an employee ID to the path of their image in the database. Whenever a user wants to access the image, they will first get the path from the database and then fetch the image from the referenced database as a BLOB.
Can you help me with how to access one database from another?
Generally speaking, for large files (over 1 MB, though that's not a hard rule) you should use FILESTREAM (Overview), which stores the files on the file system rather than in the database itself.
See this article for a guide to set up using FILESTREAM in your database.
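A minimal sketch of what such a table can look like, assuming FILESTREAM is already enabled on the instance and the database has a FILESTREAM filegroup (the linked guide covers that setup); table and column names are hypothetical:
-- FILESTREAM tables require a UNIQUE ROWGUIDCOL column alongside the
-- varbinary(max) FILESTREAM data column.
CREATE TABLE dbo.EmployeeImage
(
    Id         UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    EmployeeId INT NOT NULL,
    Data       VARBINARY(MAX) FILESTREAM NULL
);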
As for your question about how to access one database from another: referencing objects in SQL Server is done with dot notation, like this:
databasename.schemaname.tablename
So you can use it to reference objects (tables) in different databases on the same instance. For more info, see Using Identifiers As Object Names rather than reiterating what's already there.
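For example (hypothetical database and table names), a query running in one database can join to a table in another:
-- Both databases live on the same SQL Server instance.
SELECT e.EmployeeId, i.ImagePath
FROM HRDatabase.dbo.Employee AS e
JOIN ImageDatabase.dbo.EmployeeImage AS i
    ON i.EmployeeId = e.EmployeeId;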

Insert file into access table

I have a table named Reports which has three fields: ID (AutoNumber), filename (string), and theFile (attachment).
What I want to do is run a SQL query and insert a PDF file into the attachment field (theFile).
Let's say the PDF file is located on the C: drive (C:\report1.pdf). I have tried the SQL statement below, but it is not working. I know it's not good practice to store files in a database, but I just want to try it out:
CurrentDb.Execute "INSERT INTO Reports (filename, theFile) VALUES ('report1', 'C:\report1.pdf')"
It's standard practice to store files in a database. Access certainly supports it, but not through SQL. You'll have to use DAO, as detailed at http://msdn.microsoft.com/en-us/library/office/bb258184%28v=office.12%29.aspx
"File" is not appropriate SQL data type supported in Access, available data types.
That is correct, Derek: if you try to run a SQL statement like that, you will get an error message of one type or another every time. I spent a fair amount of time researching this subject for my own DB, and from what I understand there are a number of options/alternatives; however, having an attachment column type and using SQL to insert a file is not an option with Access's current capabilities.
It is not bad practice to store files in a database; it is actually standard practice. However, it IS best practice not to store files in an Access DB. There are a few reasons for this which you can research on your own, but perhaps most notably, Access has a file size limit of 2 GB, so if you store files in it you can run out of space quickly, and then things get even more complicated.
Here are your options:
1. Change your column data type to OLE Object and use some kind of stream reader to convert the files to binary data, then use a SQL statement to load them into your DB
2. Use the built-in Access user interface for working directly with tables/attachments
3. Establish a DAO DB connection and use Access's recordset LoadFromFile function
4. Just store links to the files in the Access DB
The 4th option is the preferred method. It's very simple, and you won't have to worry about complex code or the 2 GB storage limit.
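For example, with a hypothetical text column thePath in place of the attachment field, the insert from the question becomes plain SQL:
-- Store the link (path) to the file rather than the file itself.
INSERT INTO Reports (filename, thePath) VALUES ('report1', 'C:\report1.pdf');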

SQL Server 2008 FILESTREAM Feature with VLDB

I have a bunch of XML files totaling about 700 GB in size.
I'm going to load the data within those files into a SQL Server 2008 database table (tabular data).
In addition to the fields that will hold the data in tabular format, the table will contain a field of the SQL Server xml type that holds the XML data as a whole.
I want to use the FILESTREAM feature of SQL Server 2008 instead of loading the whole XML into the field.
I want to know the performance benefits for queries made against such a very large table, and the pros and cons of this feature.
Thank you in advance.
I do not expect this will ever be marked as the answer, because the true answer will only be discovered after a thorough study of the available solutions.
BUT
The answer I have is really a question for you: how are you going to use this data? If you are going to shred the XML to retrieve the reporting values and keep the complete XML for reference, then I would go with FILESTREAM. If you are going to run reports directly from the XML, then you will have to load the data into the database and create the needed indexes.
Loading all data into SQL Server as a combination of shredded XML and an xml data type
PRO
- All data is available all the time from one source
- A single backup contains all data
- Additional data from the XML can be shredded to enhance reports on the server side (see the shredding sketch at the end of this answer)
CON
- Backup size
- Backup time
- Slow if data is in native XML
Loading values from the XML into SQL Server and using FILESTREAM
PRO
- The data source (FILESTREAM) is tied to the data values
- Source data can be presented to the client
CON
- FILESTREAM content is not available directly from within a query
- FILESTREAM and SQL backups must be synchronized for disaster recovery
Be aware of your storage needs for backups and the maintenance window needed.
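Finally, as an illustration of the shredding mentioned above, here is a minimal sketch. The names are hypothetical: a dbo.Documents table with an xml column XmlData holding /order documents. It pulls reporting values out into relational columns that you can index:
-- nodes() produces one row per /order element; value() extracts scalars.
SELECT d.Id,
       x.o.value('(customer/@id)[1]', 'int')        AS CustomerId,
       x.o.value('(total)[1]',        'decimal(10,2)') AS OrderTotal
FROM dbo.Documents AS d
CROSS APPLY d.XmlData.nodes('/order') AS x(o);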