Is it possible, with a PL/SQL block, to transfer the contents of an Oracle table into a text file on the hard drive? I need a PL/SQL block that can write the contents of a table, which will be used to store log data, into a text file.
Regards
You can use the UTL_FILE package for this.
You can try a block like the one below --
declare
  l_file      utl_file.file_type;
  l_delimiter varchar2(1) := '|';
begin
  -- the first argument to fopen is the name of an Oracle DIRECTORY object
  -- that the user has write access to, not an OS path
  l_file := utl_file.fopen('<directory_name>','<file_name>','W');
  for l_row in (select * from <your_table_name>) loop
    -- put_line appends the line terminator itself, so no chr(10) is needed
    utl_file.put_line(l_file, l_row.col1||l_delimiter||l_row.col2||l_delimiter||l_row.col3||l_delimiter||l_row.col4 <continue with column list .........>);
  end loop;
  utl_file.fclose(l_file);
end;
pratik garg's answer is a good one.
But you might also want to consider the use of an EXTERNAL TABLE.
Basically, it's a table which is mapped to a file, so the table's contents can be unloaded to (or read back from) a file.
you can see an example here
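As a rough illustration: a plain external table is read-only, so unloading to a file goes through the ORACLE_DATAPUMP driver with a CREATE TABLE ... AS SELECT. A minimal sketch, assuming a DIRECTORY object named EXP_DIR and a hypothetical log table LOG_DATA (note the resulting file is a binary dump, not plain text):

```sql
-- Unload LOG_DATA into a file via an external table (ORACLE_DATAPUMP driver).
-- EXP_DIR must be an Oracle DIRECTORY the user can write to.
CREATE TABLE log_data_ext
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY exp_dir
  LOCATION ('log_data.dmp')
)
AS SELECT * FROM log_data;
```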
I am trying to insert a picture into the following table:
create table Picture
(
pic BLOB,
title varchar2(30),
descript varchar2(200),
tags varchar2(100),
date_created varchar2(100),
actualdate date
);
I have a picture and 5 varchar2 parameters. Here is the procedure where I want to do the insert:
create or replace procedure addKep (pic BLOB, title varchar2,descript varchar2, tags varchar2 , date_created varchar2, hiba out varchar2)
is
my_date date;
v_blob BLOB;
begin
--get actual date
SELECT TO_date
(SYSDATE, 'YYYY/MM/DD HH:MI:SS')into my_date
FROM DUAL;
INSERT INTO Picture (pic)
VALUES (empty_blob());
insert into Picture Values(pic,title,descript,tags,date_created,my_date);
--hiba:='Sikeres!';
commit;
end;
Then I try to test my procedure:
declare
something varchar2(20);
BEGIN
addKep('c:\xampp\htdocs\php_web\_projekt\pic\akosfeladatok.jpg','Title','Description','tags','2020-06-15',something);
END;
But I get the following error:
PLS-00306: wrong number or types of arguments in call to 'ADDKEP'
However, I have the same argument list.
Thank you for your help
You don't pass a path to a file as a BLOB, you pass the actual bytes of the file - see Using PL/SQL how do you get a file's contents in to a blob?
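As a sketch of that linked approach: load the file's bytes into a temporary BLOB first, then pass that to the procedure. The DIRECTORY object name PIC_DIR below is an assumption - it would have to be created to point at the folder holding the image:

```sql
declare
  l_bfile bfile := bfilename('PIC_DIR', 'akosfeladatok.jpg');
  l_blob  blob;
  l_msg   varchar2(200);
begin
  dbms_lob.createtemporary(l_blob, true);
  dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
  -- copy the file's bytes into the temporary BLOB
  dbms_lob.loadfromfile(l_blob, l_bfile, dbms_lob.getlength(l_bfile));
  dbms_lob.fileclose(l_bfile);
  -- l_blob now holds the actual bytes, which matches the BLOB parameter
  addKep(l_blob, 'Title', 'Description', 'tags', '2020-06-15', l_msg);
end;
```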
Though, I'd dare say your problematic code has the right idea: I'd recommend NOT storing your pictures inside your database. Store them in the file system as files and keep the paths in the DB.

That's handy for all sorts of things: you can individually back the files up, manipulate or substitute them, and you don't need to write code in order to serve them over the web; web servers already know how to serve files out of the file system. But as soon as you put your picture (or any byte data) into a DB, it becomes much harder to work with, and you have to write all the code that pulls it out and puts it back - and that's really all you can do.

Storing files in your mission-critical DB also means the DB has to dedicate resources to fetching those files out any time they're needed. Files like pictures should really be put on a CDN and made available close to the users who will use them, with the job of storing, caching and serving them handed off to existing technologies dedicated to the task.
ps; There are a lot of reasonable arguments for and against, in https://dba.stackexchange.com/questions/2445/should-binary-files-be-stored-in-the-database - my thoughts, personally, align with Tek's
I am able to get the result as CSV in Oracle by using this simple query with a hint.
SELECT /*csv*/ * FROM dual;
This returns
"DUMMY"
"X"
Now I would like to use exactly the same hint in PL/SQL in order not to reinvent the wheel.
SET SERVEROUTPUT ON;
declare
cur sys_refcursor;
csv_line varchar2(4000);
begin
open cur for select /*csv*/ * from dual;
loop
fetch cur into csv_line;
exit when cur%NOTFOUND;
dbms_output.put_line(csv_line);
end loop;
close cur;
end;
Unfortunately this prints only
X
which seems to ignore the hint.
Any way to do it that simple or do I have to write a special piece of code for exporting the data as CSV?
The /*csv*/ hint is specific to SQL Developer and its sibling SQLcl, and is somewhat superseded by the set sqlformat csv option.
It is not a hint recognised by the optimiser; those are denoted by a plus sign, e.g. /*+ full(...) */ or less commonly --+ full(...).
If you're creating the output in PL/SQL you will need to construct the string yourself, adding the double quotes and delimiters. You can either have the cursor query do that so you can select into a single string even when you have multiple columns; or have a 'normal' query that selects into a record and have PL/SQL add the extra characters around each field as it's output.
It would be more normal to use utl_file than dbms_output as the client may not have that enabled anyway, but of course that writes to a directory on the server. If you're writing to a file on the client then PL/SQL may not be appropriate or necessary.
If you need to do some manipulation of the data in PL/SQL then one other option is to use a collection type and have an output/bind ref cursor, and then have SQL Developer print that as CSV. But you don't normally want to be too tied to a single client.
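As an example of the "add the extra characters yourself" approach, here is a sketch that reproduces the quoted output of the /*csv*/ formatter for the DUAL query. The header line and the quote-doubling rule are my assumptions about the desired CSV convention, not anything SQL Developer exposes to PL/SQL:

```sql
declare
  cursor c is select dummy from dual;
begin
  dbms_output.put_line('"DUMMY"');  -- header row, hard-coded here
  for r in c loop
    -- double any embedded quotes, then wrap the value in quotes
    dbms_output.put_line('"' || replace(r.dummy, '"', '""') || '"');
  end loop;
end;
```

With multiple columns you would concatenate each quoted field with a comma, either in the cursor query itself or in the loop body.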
The second part of the question: How to do the same (get ALL results, without any loops) with SQL*Plus.
I'm writing some PL/SQL scripts to test the data integrity using Jenkins.
I'm having a script like this:
declare
temp_data SOME_PACKAGE.someRefCurFunction; -- type: ref cursor
DATA1 NUMBER;
DATA2 NUMBER;
DATA3 SOMETHING.SOMETHING_ELSE%TYPE;
begin
temp_data := SOME_PACKAGE.someFunction('some',parameters,here);
LOOP
FETCH temp_data INTO DATA1,DATA2,DATA3;
EXIT WHEN temp_data%NOTFOUND;
dbms_output.put_line(DATA1||','||DATA2||','||DATA3);
END LOOP;
end;
Results look like this:
Something1,,Something2
Something3,Something4,Something5
Something6,Something7,Something8
Sometimes the results are null, as in the 1st line. It doesn't matter; they should be.
The purpose of this script is simple - to fetch EVERYTHING from the cursor, comma separate the data, and print lines with results.
The example here is simple as hell, but it's just an example. The "real life" packages sometimes contain hundreds of variables, processing enormous database tables.
I need it to be as simple as possible.
Is there any method to fetch EVERYTHING from the cursor, comma separate single results if possible, and send it to output? The final output in the Jenkins test should be a text file, to be able to diff it with other results.
Thanks in advance :)
If you're truly open to a SQL*Plus script, rather than a PL/SQL block
SQL> set colsep ','
SQL> variable rc refcursor;
SQL> exec :rc := SOME_PACKAGE.someFunction('some',parameters,here);
SQL> print rc;
should execute the procedure and fetch all the data from your cursor. You could spool the resulting CSV output to a file using the spool command. Of course, you then may encounter issues where SQL*Plus isn't displaying the data in a clean format because of the linesize or other similar issues-- that may force you to add some additional SQL*Plus formatting commands (i.e. set linesize, column <<column name>> format <<format>>, etc.)
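To turn that output into the text file Jenkins can diff, the usual pattern is to wrap the print in spool. The file name and the formatting settings below are illustrative and may need tuning for your data:

```sql
SET colsep ','
SET pagesize 0 linesize 32767 trimspool on feedback off
VARIABLE rc REFCURSOR
EXEC :rc := SOME_PACKAGE.someFunction('some',parameters,here);
SPOOL results.csv
PRINT rc
SPOOL OFF
```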
It's not obvious to me that a SQL*Plus script buys you much over writing some dynamic SQL that generates the PL/SQL script that you posted initially or (if you're on 12c) writing some code that uses dbms_sql to fetch data from the cursor that is returned.
The answer seems obvious. You currently have a function which returns a cursor, itself returning a data set of hundreds of fields (though you show only three). You want instead a single string with the comma-separated values. So change the function, or write another one based on the same query.
package body SOME_PACKAGE
...
-- The current function
function someFunction ...
return ref_cursor ...
-- create cursor based on:
select f1, f2, f3 --> three (or n) fields
from ...
where ...;
return generated_cursor;
end someFunction;
-- The new function
function someOtherFunction ...
return ref_cursor ...
-- create cursor based on:
select f1 || ',' || f2 || ',' || f3 as StringField --> one string field
from ...
where ...;
return generated_cursor;
end someOtherFunction;
end SOME_PACKAGE;
This isn't quite all that you asked for. It does save declaring variables (one instead of hundreds) to read the data in one row at a time, but you still read it in one row at a time instead of, as I read your question, reading in every row in one operation. But if such a super-fetch were possible, it would require massive amounts of memory. Sometimes we do things that just require massive amounts of memory and we just work with that the best we can. But your "requirement" seems to be only a matter of convenience for the developers. That, imnsho, lies way down in the list of priorities for consuming resources.
Developing a cursor that returns the data in the final form you want would seem to me to be the best of all the alternatives.
How does it make sense that I can access the nested table's data after using the PIPE ROW command? Where is the data actually saved?

I know that in pipelined functions the data is sent directly to the invoking client, and this reduces the dynamic memory of the process in the SGA. But when I tried to access that information after pipelining, I succeeded. This surprises me, because it means the data is actually saved somewhere - but not in the local SGA. So where?
Here is an example I've made:
CREATE OR REPLACE TYPE collection_t IS OBJECT (
ID NUMBER,
NAME VARCHAR2(50));
/
CREATE OR REPLACE TYPE collection_nt_t IS TABLE OF collection_t;
/
CREATE OR REPLACE FUNCTION my_test_collection_p(rownumvar NUMBER)
RETURN collection_nt_t
PIPELINED
IS
collection_nt collection_nt_t:= collection_nt_t();
BEGIN
FOR i IN 1..rownumvar LOOP
collection_nt.EXTEND(1);
collection_nt(i) := collection_t (i, 'test');
PIPE ROW (collection_nt(i));
END LOOP;
DBMS_OUTPUT.PUT_LINE(collection_nt(3).id);--"queries" the collection successfully! Where is the collection being saved?
RETURN;
END;
/
SELECT * FROM TABLE(my_test_collection_p(100));
I have tried the idea of batches vs row-by-row ("slow by slow") by using BULK COLLECT and FORALL. But doesn't pipelining mean LOTS of context switches by itself? Returning one row at a time sounds like a bad idea. In addition, how do I choose the bulk size of a pipelined function, if any?
Why, when querying "SELECT * FROM TABLE(..)", does sqlplus display the data in batches of 15, while PL/SQL Developer displays batches of 100? What happens behind the scenes - batches of 1 row, no?

And why does all the world use sqlplus, when there are nice IDEs like the convenient PL/SQL Developer? Even small clues from you would help me. Thanks a lot!
Details for 1: When I do the same with a regular table function, the memory in use is something like 170MB per 1M rows. But when using the above pipelined table function, the memory does not increase. I am trying to understand what exactly takes the memory - the fact that the collection is saved in a variable?
CREATE OR REPLACE FUNCTION my_test_collection(rownumvar NUMBER)
RETURN collection_nt_t IS
collection_nt collection_nt_t:= collection_nt_t();
BEGIN
FOR i IN 1..rownumvar LOOP
collection_nt.EXTEND(1);
collection_nt(i) := collection_t (i, 'test');
END LOOP;
RETURN collection_nt;
END;
SELECT * FROM TABLE(my_test_collection(1000000));--170MB of SGA delta!
SELECT * FROM TABLE(my_test_collection_p(1000000));--none. but both of them store the whole collection!
The main problem is trying to get SQL*Loader to execute inside the package.
procedure addGroup
is
num number;
name1 Varchar2(20);
load Varchar2(200);
begin
load := 'host sqlldr kevonia_workspace/pass123 control = C:\Users\Kevonia\Desktop\DBA\usernames.ctl log = C:\Users\Kevonia\Desktop\DBA\usernames.log';
Execute immediate(load);
for num in 1..10
loop
select username into name1 from loaded where userid=num;
DBA_PACKAGE.NewUser(name1);
dbms_output.put_line(name1|| ': was added' );
end loop;
end addGroup;
ORA-00900: invalid SQL statement
You have several options:

1. Create an external table. You can then manipulate the table as you would a normal table. This is the best option.
2. If you don't have privileges to create a table and you insist on using host commands, look at this (but really, don't do it).
3. Same as above, but using a DBMS_SCHEDULER executable job.
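A sketch of the DBMS_SCHEDULER route, reusing the credentials and file paths from the question. The sqlldr executable path and the job name are assumptions, and the DB account needs the CREATE EXTERNAL JOB privilege:

```sql
begin
  dbms_scheduler.create_job(
    job_name            => 'run_sqlldr_job',
    job_type            => 'EXECUTABLE',
    job_action          => 'C:\oracle\bin\sqlldr.exe',  -- adjust to your Oracle home
    number_of_arguments => 3,
    enabled             => false);
  -- each OS-level argument is bound by position
  dbms_scheduler.set_job_argument_value('run_sqlldr_job', 1, 'kevonia_workspace/pass123');
  dbms_scheduler.set_job_argument_value('run_sqlldr_job', 2, 'control=C:\Users\Kevonia\Desktop\DBA\usernames.ctl');
  dbms_scheduler.set_job_argument_value('run_sqlldr_job', 3, 'log=C:\Users\Kevonia\Desktop\DBA\usernames.log');
  dbms_scheduler.enable('run_sqlldr_job');  -- runs the job once, asynchronously
end;
```

Since the job runs asynchronously, the loop that reads the LOADED table would have to wait for the job to finish (e.g. by polling USER_SCHEDULER_JOB_RUN_DETAILS) before selecting the loaded rows.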