Display BLOB content using the Eclipse database explorer - sql

I'm connected to an Oracle DB through the Eclipse Database Development view (standard in Eclipse Indigo). For a particular record (whose ID I already know), I want to view the content of one column as text, although the column contains BLOB data.
When I simply do a
select MYBLOBCOLUMN from MYTABLE where ID='myid'
the SQL Results view only shows an execution log, but no data. So, how can I see that BLOB content?

The BLOB datatype was invented in order to be able to transfer "custom" objects from one database to another. The database itself has no idea how to interpret or display the data stored in a BLOB field.
It can be an image, an application, video, audio or anything else. If you have stored plain text in a BLOB field, your database program has no idea that it is regular text.
If you store text in a database, you are better off using an (n)varchar or memo data type.
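That said, if the BLOB really does contain plain text, a common workaround in Oracle is to convert a prefix of it to VARCHAR2 inside the query so the result grid can render it. A minimal sketch, assuming the stored text is compatible with the database character set and short enough to preview (MYTABLE, MYBLOBCOLUMN and 'myid' come from the question):
-- DBMS_LOB.SUBSTR returns the first bytes of the BLOB as RAW;
-- UTL_RAW.CAST_TO_VARCHAR2 reinterprets those bytes as character data.
select UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(MYBLOBCOLUMN, 2000, 1)) as blob_as_text
from MYTABLE where ID='myid'
If the column holds anything other than text in a compatible encoding, the cast will produce garbage, which is exactly the point above: the database cannot know what the bytes mean.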

Related

Padding ssis input source columns to avoid truncation errors?

First post. In SSIS I am using an ODBC Source, and the database (or ODBC driver) doesn't appear to report column metadata correctly for any of the tables in the database for varchar type columns. Therefore, each time I import a table, I get truncation errors on all the varchar fields. Is there any way to set the size of these fields besides doing it ONE AT A TIME in the advanced editor? When importing a flat file source it lets you select a padding % for string fields. Does something like this exist for OLE or ODBC sources? If not, is there any way I can override the column length to, say, force them all to be VARCHAR(1000)?
I have never experienced SQL Server providing the wrong metadata for an ODBC connection, and it is unlikely you have a ghost in the machine (deus ex machina). The metadata of the column can be set in the ODBC source via the Advanced Editor. I am willing to bet that is where the difference is. To confirm this:
Right click the ODBC connection and select the Advanced Editor
Click on the Input/Output Properties tab
Expand OLE DB Source Output
Expand both External Columns and Output Columns
Inspect each column pair and verify that the meta data matches
Correct any discrepancies in the metadata
Let me know if that works. If it does not, please provide the data and the SQL query you are using.
The VARCHAR field width must be set to the maximum incoming field width. I know the default field width is 50. Regardless, each field must be set. I previously worked on a project with a large number of columns in the input files. My solution was to store the metadata for the columns in a database table, and then I built a C# application to read that metadata, modify the *.dtsx file, and set the metadata on all columns. This is the best solution that I am aware of to automate the task.
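For reference, the table driving that approach can be very simple. A minimal sketch of what such a metadata table might look like; the table and column names here are purely illustrative, not the ones from that project:
-- One row per input column; the application reads this and patches the .dtsx.
CREATE TABLE ColumnMetadata (
    TableName   VARCHAR(128) NOT NULL,
    ColumnName  VARCHAR(128) NOT NULL,
    DataType    VARCHAR(50)  NOT NULL,  -- e.g. DT_STR or DT_WSTR
    ColumnWidth INT          NOT NULL,  -- maximum incoming field width
    PRIMARY KEY (TableName, ColumnName)
);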
Unfortunately, I don't have much experience with pulling data through ODBC. Are you pulling from an Access database? Or, what are you pulling from?

Change field datatype in Cosmos DB

I have a field in Cosmos DB which is mapped as a number, but it should be a string. I'd like to alter the schema in place without reloading the data. Is this possible with a query, in the same way it can be achieved in SQL?
ALTER TABLE EVENTS
MODIFY COLUMN eventAmount varchar;
Have consulted the docs but they only reference simple SQL commands.
DocumentDB is schemaless. There is no structure defined outside the documents themselves, so each document has its own schema. If you want to enforce that some documents follow a certain structure, that must be enforced by you in your application logic.
So, this means you cannot "alter the schema" of a collection to change data types.
What you can and should do is fix the documents you consider to have the wrong schema by updating them: query the documents where eventAmount is stored as a number and save each document with the value stored as the corresponding string instead.
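A sketch of the kind of query that finds the affected documents, assuming the SQL (Core) API with the usual collection alias c; IS_NUMBER is one of Cosmos DB's built-in type-checking functions:
-- Find documents where eventAmount was stored as a number instead of a string.
SELECT c.id, c.eventAmount
FROM c
WHERE IS_NUMBER(c.eventAmount)
The rewrite itself (reading each matching document, converting the value to a string, and replacing the document) has to happen in your application or a small migration script, because the query language cannot update documents.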

Insert file into Access table

I have a table named Reports which has 3 fields: ID (AutoNumber), filename (text field), theFile (attachment field).
What I want to do is run a SQL query and insert a PDF file into the attachment field (theFile).
Let's say the PDF file is located on the C: drive (C:\report1.pdf). I have tried the SQL query below, but it is not working. I know it's not good practice to store files in a database, but I just want to try it out:
CurrentDb.Execute "INSERT INTO Reports (filename,theFile) VALUES ('report1'," & C:\report1.pdf & ")"
It's standard practice to store files in a database. Access certainly supports it, but not through SQL. You'll have to use DAO, as detailed at http://msdn.microsoft.com/en-us/library/office/bb258184%28v=office.12%29.aspx
"File" is not appropriate SQL data type supported in Access, available data types.
That is correct, Derek. If you try to run a SQL statement like that, you will get an error message of one type or another every time. I spent a fair amount of time researching this subject for my own DB, and from what I understand there are a number of options/alternatives; however, having an attachment column type and using SQL to insert a file is not an option with Access's current capabilities.
It is not bad practice to store files in a database, it is actually standard practice; however, it IS best practice to not store files in an ACCESS db. There are a few reasons for this which you can research on your own, but perhaps most notably, Access has a 2GB database size limit, so if you store files in it you can run out of space quickly, and then things get even more complicated.
Here are your options:
Change your column data type to OLE object and use some kind of stream reader to convert the files to binary data, then use a SQL statement to load them into your DB
Use the built in Access user interface for working directly with tables/attachments
Establish a DAO db connection and use Access' recordset.LoadFromFile function
Just store links to the files in the Access DB
The 4th option is the preferred method. It's very simple and you won't have to worry about complex code or the 2GB storage limit.
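For that 4th option, the insert the question attempted works as soon as the column is a plain text field holding a path instead of an attachment. A minimal sketch, assuming theFile is replaced by a hypothetical text column named filePath:
-- Store only a link to the file; the PDF itself stays on disk.
INSERT INTO Reports (filename, filePath)
VALUES ('report1', 'C:\report1.pdf');
From VBA this can still be run through CurrentDb.Execute, exactly as in the question, and opening the file later is just a matter of following the stored path.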

A few questions from a Java programmer regarding porting a preexisting database stored in a .txt file to MySQL?

I've been writing a library management Java app lately and, up until now, the main library database has been stored in a .txt file which is loaded into an ArrayList in Java for creating and editing the records, with the alterations saved back to the .txt file again. A very primitive method indeed. Hence, having since heard of SQL, I'm considering porting my preexisting .txt database to MySQL. I have absolutely no idea how SQL, and specifically MySQL, works, except for the fact that it can interact with Java code. Can you suggest any books/websites to buy/visit? Will the book Head First SQL help, especially when using Java code to interact with the SQL database? It should be mentioned that I'm already comfortable with using 3rd-party APIs.
View from 30,000 feet:
First, you'll need to figure out how to represent the text file data using appropriate SQL tables and fields. Here is a good overview of the different SQL data types. If each line of your data represents a single library record, then you'll only need to create one table. This is definitely the simplest way to do it, as conversion will be able to work line by line. If the records contain a lot of data duplication, the most appropriate approach is to create multiple tables so that your database doesn't duplicate data. You would then link these tables together using IDs.
When you've decided how to split up the data, you create a MySQL database, and within that database, you create the tables (a database is just something that holds multiple tables). Connecting to your MySQL server with the console and creating a database and tables is described in this MySQL tutorial.
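To make the "multiple tables linked together by IDs" idea concrete, here is a minimal sketch of what such a schema might look like; the database, table and column names are hypothetical, not taken from your data:
-- Hypothetical library schema: each book points at the member who borrowed it.
CREATE DATABASE library;
USE library;
CREATE TABLE members (
    member_id INT AUTO_INCREMENT PRIMARY KEY,
    name      VARCHAR(100) NOT NULL
);
CREATE TABLE books (
    book_id     INT AUTO_INCREMENT PRIMARY KEY,
    title       VARCHAR(200) NOT NULL,
    author      VARCHAR(100),
    borrowed_by INT NULL,   -- NULL while the book is on the shelf
    FOREIGN KEY (borrowed_by) REFERENCES members (member_id)
);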
Once you've got the database created, you'll need to write the code to access it. The link from OMG Ponies shows how to use JDBC in the simplest way to connect to your database. You then use that connection to create a Statement object and execute a query to insert, update, select or delete data. If you're selecting data, you get a ResultSet back and can view the data. Here's a tutorial for using JDBC to select and use data from a ResultSet.
Your first code should probably be a Java utility that reads the text file and inserts all the data into the database. Once you have the data in place, you'll be able to update the main program to read from the database instead of the file.
Know that the connection between a program and a SQL database is made through a 'connection' library. You write an instruction as an SQL statement, say
Select * from Customer order by name;
and then set up to retrieve data one record at a time. Or in the other direction, you write
Insert into Customer (name, addr, ...) values (x, y, ...);
and either replace x, y, ... with actual values or bind them to the connection according to the interface.
With this understanding you should be able to read pretty much any book or JDBC API description and get started.

How to determine content type of binary data in the image field of SQL Server 2008?

I need to determine the file type (i.e., the MIME type) of data stored in SQL Server 2008.
Is there any way, ideally using a SQL query, to identify the content type or MIME type of the binary data stored in an image column?
I think that, if you need that information, it would probably be better to store it in a separate column. Once it's in the DB, your only real options are guessing it from the file name (if you happen to store that) or detecting the signature from the first few bytes of data.
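If you do have to guess, most common file formats start with fixed "magic bytes", so a query can at least classify the usual suspects. A minimal sketch, assuming a hypothetical table MyTable with a key column Id and an image column Data; only the listed formats are recognized:
-- Classify rows by the leading bytes of the binary data (magic numbers).
SELECT Id,
       CASE
           WHEN SUBSTRING(Data, 1, 3) = 0xFFD8FF   THEN 'image/jpeg'
           WHEN SUBSTRING(Data, 1, 4) = 0x89504E47 THEN 'image/png'
           WHEN SUBSTRING(Data, 1, 3) = 0x474946   THEN 'image/gif'
           WHEN SUBSTRING(Data, 1, 4) = 0x25504446 THEN 'application/pdf'
           ELSE 'unknown'
       END AS guessed_mime_type
FROM MyTable;
This is only a guess from the content; as both answers say, the reliable approach is to record the MIME type in its own column at the time the file is stored.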
There is no direct way in SQL Server to do that - there's no metadata on binary columns stored inside SQL Server, unless you've done it yourself.
For SQL Server, a blob is a blob is a blob - it's just a bunch of bytes, and SQL Server knows nothing about it, really. You need to have that information available from other sources, e.g. by storing a file name, file extension, mime type or something else in a separate column.
Marc