Inserting a single row in SQL, but loading the contents from a file

I can INSERT a row into SQL like this:
INSERT INTO MyTable VALUES (0, 'This is some text...');
However, what if I wanted 'This is some text...' to be the contents of C:\SomeFile.txt? Is there a method in Oracle that makes this possible?
I dug through the docs a bit and found the LOAD FILE method; however, it appears to be for bulk loading data. For example, it wants a FIELDS TERMINATED BY parameter and whatnot. I want to simply INSERT a single row and set a single column to be the contents of a file on the local disk.

You should never be reading files from the database server's file system to insert into the database.
Why do you want to do this? You really should read the file in your application code, then insert the string or binary data through standard SQL.
If you really must do this, you will need to use the Oracle UTL_FILE package and write some PL/SQL to read the file into a variable, then insert it.
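A minimal sketch of that approach, assuming a directory object has already been created and granted (e.g. CREATE DIRECTORY file_dir AS 'C:\') and that the second column of MyTable can hold the text:

DECLARE
  f    UTL_FILE.FILE_TYPE;
  line VARCHAR2(32767);
  txt  CLOB;
BEGIN
  f := UTL_FILE.FOPEN('FILE_DIR', 'SomeFile.txt', 'R');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(f, line);      -- raises NO_DATA_FOUND at end of file
      txt := txt || line || CHR(10);   -- re-append the newline GET_LINE strips
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;
    END;
  END LOOP;
  UTL_FILE.FCLOSE(f);
  INSERT INTO MyTable VALUES (0, txt);
  COMMIT;
END;
/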

My first time answering a question, so forgive me.
If you're using PHP (which you may well not be, but this is what I know), then you could do something like this:
File: "importantfile.php"
$variable1 = "0";
$variable2 = "This is some text...";
File that inserts text: "index.php"
require "importantfile.php";
$query = mysql_query("INSERT INTO MyTable VALUES ('$variable1', '$variable2')");
I hope this helps you in some way :)

To do this I think you would have to use PL/SQL, which is technically "advanced SQL".
Read up a bit on PL/SQL. You can copy your SQL directly into PL/SQL and use the same database; a small example follows.
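For instance, the INSERT from the question runs unchanged inside an anonymous PL/SQL block, which is the natural place to later add file-reading logic:

BEGIN
  INSERT INTO MyTable VALUES (0, 'This is some text...');
END;
/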

Related

Save PDF-File into BLOB-Column Oracle DB with INSERT statement

I have to save a PDF file into a BLOB column of an Oracle DB.
I can't use Java and have to use an INSERT statement.
The only solutions I've found while searching were very complex.
Is there an easy solution like INSERT INTO (BLOB_COLUMN) VALUES (BLOBPDF('myPDF.pdf')) or something like that?
I would suggest that you use a stored procedure in Oracle: you pass it the path to your PDF file, and calling the procedure does the insert.
Look at the last two examples here.
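A hedged sketch of such a procedure, assuming a directory object PDF_DIR pointing at the folder with the PDF and a hypothetical table DOCS(ID NUMBER, PDF_DATA BLOB):

CREATE OR REPLACE PROCEDURE insert_pdf(p_id IN NUMBER, p_filename IN VARCHAR2) AS
  v_bfile BFILE := BFILENAME('PDF_DIR', p_filename);
  v_blob  BLOB;
BEGIN
  -- insert an empty BLOB first and grab its locator
  INSERT INTO docs (id, pdf_data)
  VALUES (p_id, EMPTY_BLOB())
  RETURNING pdf_data INTO v_blob;

  -- stream the file from disk into the BLOB
  DBMS_LOB.OPEN(v_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(v_blob, v_bfile, DBMS_LOB.GETLENGTH(v_bfile));
  DBMS_LOB.CLOSE(v_bfile);
  COMMIT;
END;
/

Calling it is then a one-liner: EXEC insert_pdf(1, 'myPDF.pdf');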
If the load is a one-shot, you can use SQL Developer.
Otherwise you can use SQL*Loader (http://docs.oracle.com/cd/B19306_01/server.102/b14215/ldr_params.htm), which is designed for this type of operation.
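For the SQL*Loader route, a sketch of a control file that loads a PDF into a BLOB column via a LOBFILE (table and column names are hypothetical):

LOAD DATA
INFILE *
APPEND
INTO TABLE docs
FIELDS TERMINATED BY ','
(
  id        INTEGER EXTERNAL,
  fname     FILLER CHAR(80),
  pdf_data  LOBFILE(fname) TERMINATED BY EOF
)
BEGINDATA
1,myPDF.pdf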

Perl: execute SQL file (DBI Oracle)

I have the following problem: I have a SQL file to execute with the Perl DBI CPAN module.
I saw two solutions on this site:
Read the SQL file line by line
Read the SQL file in one instruction
So, which one is better, and what is the real difference between the two solutions?
EDIT
It's for a library. I need to retrieve the output and the return code.
The kind of file passed might be as follows:
set serveroutput on;
set pagesize 20000;
spool "&1."
DECLARE
-- Retrieve the arguments
-- &2: FLX_REF, &3: SVR_ID, &4: ACQ_STT, &5: ACQ_LOG, &6: FLX_COD_DOC, &7: ACQ_NEL, &8: ACQ_TYP
VAR_FLX_REF VARCHAR2(100):=&2;
VAR_SVR_ID NUMBER(10):=&3;
VAR_ACQ_STT NUMBER(4):=&4;
VAR_ACQ_LOG VARCHAR2(255):=&5;
VAR_FLX_COD_DOC VARCHAR2(30):=&6;
VAR_ACQ_NEL NUMBER(10):=&7;
VAR_ACQ_TYP NUMBER:=&8;
BEGIN
INSERT INTO ACQUISITION_CFT
(ACQ_ID, FLX_REF, SVR_ID, ACQ_DATE, ACQ_STT, ACQ_LOG, FLX_COD_DOC, ACQ_NEL, ACQ_TYP)
VALUES
(TRACKING.SEQ_ACQUISITION_CFT.NEXTVAL, VAR_FLX_REF,
VAR_SVR_ID, sysdate, VAR_ACQ_STT, VAR_ACQ_LOG,
VAR_FLX_COD_DOC, VAR_ACQ_NEL, VAR_ACQ_TYP);
END;
/
exit;
I have another question, again with the DBI Oracle module.
May I use the same code for a SQL file and for a SQL*Loader control file?
(Example of a control file:)
LOAD DATA
APPEND INTO TABLE DOSSIER
FIELDS TERMINATED BY ';'
(
DSR_IDT,
DSR_CNL,
DSR_PRQ,
DSR_CEN,
DSR_FEN,
DSR_AN1,
DSR_AN2,
DSR_AN3,
DSR_AN4,
DSR_AN5,
DSR_AN6,
DSR_PI1,
DSR_PI2,
DSR_PI3,
DSR_PI4,
DSR_NP1,
DSR_NP2,
DSR_NP3,
DSR_NP4,
DSR_NFL,
DSR_NPG,
DSR_LTP,
DSR_FLF,
DSR_CLR,
DSR_MIM,
DSR_TIM,
DSR_NDC,
DSR_EMS NULLIF DSR_EMS=BLANKS "sysdate",
JOB_IDT,
DSR_STT,
DSR_DAQ "CASE WHEN :DSR_DAQ IS NOT NULL THEN SYSDATE ELSE NULL END"
)
Reading a table one row at a time is more complex, but it can use less memory - provided you structure your code to make use of the data per item and not need it all later.
Often you want to process each item separately (e.g. to do work on the data), in which case you might as well use the read line-by-line approach to define your loop.
I tend to use the single-instruction approach by default, but as soon as I am concerned about the number of records (especially in long-running batch processes), or need to loop through the data as the first task, I read records one by one.
In fact, the two answers you reference propose the same solution: read and execute line by line (but the first is clearer on the point). The second question has an alternative answer for the case where the file contains a single statement.
If you don't execute the SQL line-by-line, it's very difficult to trap any errors.
"Line by line" only makes sense if each SQL statement is on a single line. You probably mean statement by statement.
Beyond that, it depends on what your SQL file looks like and what you want to do.
How complex is your SQL file? Could it contain things like this?
select foo from table where column1 = 'bar;'; --Get foo; it will be used later.
The simple way to read an SQL file statement by statement is to split by semicolons (or whatever the statement delimiter is). But this method will fail if you might have semicolons in other places, like comments or strings. If you split this statement by semicolons, you would try to execute the following four "commands":
select foo from table where column1 = 'bar;
';
--Get foo;
it will be used later.
Obviously, none of these are valid. Handling statements like this correctly is no simple matter. You have to completely parse SQL to figure out what the statements are. Unfortunately, there is no ready-made module that can do this for you (SQL::Script is a good start on an SQL file processing module, but according to the documentation it just splits on semicolons at this point).
If your SQL file is simple, not containing any statement delimiters within statements or comments; or if it is predictable in some other way (such as having one statement per line), then it is easy to split the file into statements and execute them one by one. But if you have to handle arbitrary SQL syntax, including cases such as above, this will be a complex task.
What kind of task?
Do you need to retrieve the output?
Is it important to detect errors in any individual statement, or is it just a batch job that you can run and not worry about it?
If this is something that you can just run and forget about, you could just have Perl execute a system command, telling Oracle to process the file. This will be simpler than handling all of the statements yourself. But if you need to process the results or handle errors within Perl, doing it yourself statement by statement will be a necessity.
Update: based on your response, you want to write a library that can handle arbitrary SQL statements. In that case, you definitely need to parse the SQL and execute the statements one at a time. This is do-able, but not simple. The possibility of BEGIN...END blocks means that you have to be able to correctly handle semicolons within a statement.
The SQL::Statement class of modules may be helpful.

Get all values from one column of a CSV string using Stored Procedure

Within a stored procedure, I need to take a whole CSV file as a string, then pick out all the values in one "column" to do a further query on the database.
I cannot use a saved document, so I think that rules out OPENROWSET, and the whole thing has to be done within a stored procedure.
I have spent hours googling and trying, but can't find a good answer. One possibility was http://www.tainyan.com/articles/entry-32/converting-csv-to-sql-data-table-with-stored-procedure.html but it doesn't work and I can't find the error.
How should this be done, please?
I don't really like this, but it will work, provided your CSV column remains at the same column index. I'd be wary of the performance, but it might work.
See the fiddle here: http://sqlfiddle.com/#!3/336b7/1
Basically, convert your CSV string to XML, cast it to the xml type, then run queries against the XML, as sketched below.
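A minimal T-SQL sketch of the idea, assuming the CSV arrives as a string with comma-delimited fields and newline-delimited rows, and that the values contain no XML special characters; here we pull out the second "column":

DECLARE @csv nvarchar(max) = N'1,alpha,10' + NCHAR(10) + N'2,beta,20';

-- turn delimiters into XML tags: each row becomes an <r>, each field a <c>
DECLARE @xml xml = CAST(
    N'<r><c>'
    + REPLACE(REPLACE(@csv, N',', N'</c><c>'), NCHAR(10), N'</c></r><r><c>')
    + N'</c></r>' AS xml);

-- query the second field of every row
SELECT x.n.value('(c[2])[1]', 'nvarchar(100)') AS col2
FROM @xml.nodes('/r') AS x(n);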

Sophisticated JPQL String Query

I am trying to execute a pretty-sophisticated query on a string field in the database. I am not very experienced at JPQL, so I thought I would try to get some help.
I have a field in the database called FILE_PATH. Within the FILE_PATH field, there will be values such as:
'C:\temp\files\filename.txt'
'file:\\\C:\testing\testfolder\innerfolder\filename2.txt'
I need to be able to do a search from a user-given query on the file name only. So, instead of simply doing SELECT t FROM Table AS t WHERE t.filePath LIKE '%:query%', things will have to get a bit more complicated to accommodate just the filename portion of the path. The file path and file name are dynamic data, so I can't just hard-code a prefix string in there. This has me pretty confused, but I know there are some string expressions in JPQL that might be able to handle this requirement.
Basically, I just need to return all rows that match the given query on whatever comes after the last '\' in the FILE_PATH field. Is this possible?
Thanks for the help.
EDIT: Database that is being used is SQL Server.
Probably the best solution is to add a separate column that contains just the file name. If you can't, then this might work (depending on the database you use):
drop table test;
create table test(name varchar(255));
insert into test values('C:\temp\name2\filename.txt');
insert into test values('file:\\\C:\\innerfolder\filename2.txt');
select * from test
where substring(name, locate('\', name, -1)) like '%name2%'
This is pure SQL, but as far as I understand all the functions are supported within JPQL: http://www.datanucleus.org/products/accessplatform/jpa/jpql_functions.html
One problem is the locate(,,-1): it means "start from the end of the string". It works for the H2 database, but not MySQL or Apache Derby. It might work for Oracle and SQL Server (I didn't test it). For some databases you may need to replace '\' with '\\' (MySQL, PostgreSQL; not sure whether Hibernate does that for you).
Final WHERE Clause:
LOWER(SUBSTRING(fs.filePath, LENGTH(fs.filePath) - (LOCATE('\\', REVERSE(fs.filePath)) - 2), (LOCATE('\\', REVERSE(fs.filePath)) - 1))) LIKE '%:query%'
(REVERSE puts the last '\' first, so LOCATE finds its distance from the end of the string; the SUBSTRING then starts one character after that slash and runs for exactly the length of the file name.)
NOTE: For performance, you might want to save the location of the slash.
Thanks to Thomas Mueller for the assistance.

Using an sql-function when reading and writing with NHibernate

I have the following problem.
I have a special column in my tables (a BLOB). In order to read and write that column, I need to call a SQL function on its value: to convert it to a string when I read, and to convert from a string to this blob when I write.
The read part is easy: I can use a formula to run the SQL function against the column. But formulas are read-only. Using IUserType also did not seem to help: I can get the blob and write my own code to convert it to my own type, but I don't want to do that, since I already have a database function that does this work for me.
Any ideas?
You can specify the SQL used for insert and update; see the reference documentation, "Custom SQL for create, update and delete". Here is an example from Ayende that uses stored procedures (which is not quite the same, but it shows how the mechanism works).
Or you could write a database trigger which does this transformation.