I've got a problem for which I couldn't find an answer so far.
Is there a way to read a BLOB from an Oracle table using SQL or PL/SQL and measure the time it takes to read it? I mean reading the whole thing; I don't need it displayed anywhere. All I found was how to read 4000 bytes of the file, but that's not enough.
For imports there is simply the
SET TIMING ON and OFF option in SQL*Plus, but a SELECT on the table returns only a small portion of the file, and no matter how big the file is it takes pretty much the same time.
Any help, anybody?
Not quite sure what you're trying to achieve, but you can get some timings in a PL/SQL block using dbms_utility.get_time, as LalitKumarB suggested. The initial select is (almost) instant though; it's reading through or processing the data that's really measurable. This reads a blob with three different 'chunk' sizes to show the difference it makes:
set serveroutput on
declare
  l_start  number;
  l_blob   blob;
  l_byte   raw(1);
  l_16byte raw(16);
  l_kbyte  raw(1024);
begin
  l_start := dbms_utility.get_time;
  select b into l_blob from t42 where rownum = 1; -- your own query here obviously
  dbms_output.put_line('select: '
    || (dbms_utility.get_time - l_start) || ' hsecs');

  l_start := dbms_utility.get_time;
  for i in 1..dbms_lob.getlength(l_blob) loop
    l_byte := dbms_lob.substr(l_blob, 1, i);
  end loop;
  dbms_output.put_line('single byte: '
    || (dbms_utility.get_time - l_start) || ' hsecs');

  l_start := dbms_utility.get_time;
  for i in 1..ceil(dbms_lob.getlength(l_blob) / 16) loop
    l_16byte := dbms_lob.substr(l_blob, 16, (i - 1) * 16 + 1);
  end loop;
  dbms_output.put_line('16 bytes: '
    || (dbms_utility.get_time - l_start) || ' hsecs');

  l_start := dbms_utility.get_time;
  for i in 1..ceil(dbms_lob.getlength(l_blob) / 1024) loop
    l_kbyte := dbms_lob.substr(l_blob, 1024, (i - 1) * 1024 + 1);
  end loop;
  dbms_output.put_line('1024 bytes: '
    || (dbms_utility.get_time - l_start) || ' hsecs');
end;
/
For a sample blob that gives something like:
anonymous block completed
select: 0 hsecs
single byte: 950 hsecs
16 bytes: 61 hsecs
1024 bytes: 1 hsecs
So clearly reading the blob in larger chunks is more efficient, and your "measure time of reading it" is a bit flexible...
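If you want to read the whole blob in even larger chunks, DBMS_LOB.READ with a 32k RAW buffer is another option; a minimal sketch, reusing the t42 table and b column from above:
declare
  l_start  number;
  l_blob   blob;
  l_buffer raw(32767);
  l_amount integer;
  l_offset integer := 1;
begin
  select b into l_blob from t42 where rownum = 1;
  l_start := dbms_utility.get_time;
  begin
    loop
      l_amount := 32767;                    -- ask for a full buffer each time
      dbms_lob.read(l_blob, l_amount, l_offset, l_buffer);
      l_offset := l_offset + l_amount;      -- l_amount holds the bytes actually read
    end loop;
  exception
    when no_data_found then null;           -- end of LOB reached
  end;
  dbms_output.put_line('32k read: '
    || (dbms_utility.get_time - l_start) || ' hsecs');
end;
/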
I guess you already have the solution to access the BLOB data. For the timing, use DBMS_UTILITY.GET_TIME before and after the step in your PL/SQL code. You could declare two variables, start_time and end_time, to capture the respective times, and just subtract them to get the elapsed time for the step.
See this as an example, http://www.oracle-base.com/articles/11g/plsql-new-features-and-enhancements-11gr1.php
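That pattern is just two captures of dbms_utility.get_time around the step; a minimal sketch (the commented line is a placeholder for your own code):
declare
  start_time number;
  end_time   number;
begin
  start_time := dbms_utility.get_time;
  -- ... the step you want to measure goes here ...
  end_time := dbms_utility.get_time;
  dbms_output.put_line('Elapsed: ' || (end_time - start_time) || ' hsecs');
end;
/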
Related
We have a procedure that writes an XML file using SQL, and we've been asked to improve its performance. Currently it prints the XML line by line, like the code below:
begin
  dbms_output.put_line('<tns:jpk>');
  -- header section
  for i in (select xmlconcat(
                     xmlelement("tns:header", header_num)
                    ,xmlelement("tns:customer", customer_name)
                    ,xmlelement("tns:po_number", po_number)) li_xml
              from header_tbl) loop
    dbms_output.put_line( i.li_xml.getclobval() );
    dbms_output.put_line('<tns:lines>');
    -- lines section
    for x in (select xmlconcat(
                       xmlelement("tns:line_num", line_num)
                      ,xmlelement("tns:order", order_dtl)
                      ,xmlelement("tns:qty", qty)) lx_xml
                from Lines_tbl) loop
      dbms_output.put_line( x.lx_xml.getclobval() );
      dbms_output.put_line('<tns:Summary>');
      -- summary section
      for y in (select xmlconcat(
                         xmlelement("tns:sum_num", sum_num)
                        ,xmlelement("tns:total_amt", total_amt)
                        ,xmlelement("tns:total_qty", total_qty)) lc_xml
                  from Summary_Tbl) loop
        dbms_output.put_line( y.lc_xml.getclobval() );
      end loop;
      dbms_output.put_line('</tns:Summary>');
    end loop;
    dbms_output.put_line('</tns:lines>');
  end loop;
  dbms_output.put_line('</tns:jpk>');
end;
The output of the above sample code looks like this:
<tns:jpk>
<tns:header>1</tns:header>
<tns:customer>1000</tns:customer>
<tns:po_number>909090</tns:po_number>
<tns:lines>
<tns:line_num>1</tns:line_num>
<tns:order>Other FA Open Asset Cost Pozostale Srodki Trwale -Wartosc poczatkowa</tns:order>
<tns:qty>1</tns:qty>
<tns:Summary>
<tns:sum_num>1</tns:sum_num>
<tns:total_amt>1000</tns:total_amt>
<tns:total_qty>1</tns:total_qty>
</tns:Summary>
</tns:lines>
</tns:jpk>
I've managed to make it a bit faster by using a different approach, like the code below:
declare
  xml_c xmltype;
  procedure print_clob( p_clob in clob ) is
    v_offset     number := 1;
    v_chunk_size number := 10000;
    --v_chunk_size number := 32767;
  begin
    loop
      exit when v_offset > dbms_lob.getlength(p_clob);
      dbms_output.put_line( dbms_lob.substr( p_clob, v_chunk_size, v_offset ) );
      v_offset := v_offset + v_chunk_size;
    end loop;
  exception
    when others then
      dbms_output.put_line('print_clob. others ' || sqlerrm);
  end print_clob;
begin
  select xmlelement("tns:jpk",
           -- header section
           (select xmlagg(xmlconcat(
                     xmlelement("tns:header", header_num)
                    ,xmlelement("tns:customer", customer_name)
                    ,xmlelement("tns:po_number", po_number)
                    ,xmlelement("tns:lines",
                       -- lines section
                       (select xmlagg(xmlconcat(
                                 xmlelement("tns:line_num", line_num)
                                ,xmlelement("tns:order", order_dtl)
                                ,xmlelement("tns:qty", qty)
                                ,(select xmlelement("tns:Summary",
                                           xmlagg(xmlconcat(
                                             xmlelement("tns:sum_num", sum_num)
                                            ,xmlelement("tns:total_amt", total_amt)
                                            ,xmlelement("tns:total_qty", total_qty))))
                                   from Summary_Tbl)))
                          from Lines_tbl))))
              from header_tbl))
  into xml_c
  from dual;
  print_clob( xml_c.getclobval() );
end;
The above code builds one really long XML document, cuts it into pieces, and prints them.
It works fine and is faster, based on the execution times.
However, whenever there's a really long piece of string, the output sometimes gets skewed, like below:
<tns:jpk>
<tns:header>1</tns:header>
<tns:customer>1000</tns:customer>
<tns:po_number>909090</tns:po_number>
<tns:lines>
<tns:line_num>1</tns:line_num>
<tns:order>Other FA Open Asset Cost Pozostale
Srodki Trwale -Wartosc poczatkowa</tns:order>
<tns:qty>1</tns:qty>
<tns:Summary>
<tns:sum_num>1</tns:sum_num>
<tns:total_amt>1000</tns:total_amt>
<tns:total_qty>1</tns:total_qty>
</tns:Summary>
</tns:lines>
</tns:jpk>
Is there a way for me to find out whether a tag and its contents will exceed the limit before I print it?
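One possible direction (a sketch, not a tested fix): instead of cutting the CLOB at a fixed offset, cut each chunk at the last '>' inside it, so a tag and its contents are only split when a single piece of text exceeds the chunk size on its own. This would be a drop-in variant of the print_clob procedure above:
procedure print_clob( p_clob in clob ) is
  v_offset pls_integer := 1;
  v_length pls_integer := dbms_lob.getlength(p_clob);
  v_buf    varchar2(32767);
  v_cut    pls_integer;
begin
  while v_offset <= v_length loop
    v_buf := dbms_lob.substr(p_clob, 10000, v_offset);
    v_cut := instr(v_buf, '>', -1);   -- position of the last complete tag in this chunk
    if v_cut = 0 or v_offset + 10000 > v_length then
      v_cut := length(v_buf);         -- no tag boundary found, or final chunk
    end if;
    dbms_output.put_line(substr(v_buf, 1, v_cut));
    v_offset := v_offset + v_cut;
  end loop;
end print_clob;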
I have a query which returns 50 million rows. I want to generate an XML file for each row (max file size is 100k). Of course I know the tags, but I don't know how to write this in the most efficient way. Any help?
Thanks
I wouldn't recommend trying to write 50M files to disk, but here's some code you can play with to demonstrate why it's not a good idea.
1: a function that writes files to disk using a directory object
create or replace function WRITETOFILE (dir in VARCHAR2, fn in VARCHAR2, dd in CLOB) return clob as
  ff       UTL_FILE.FILE_TYPE;
  l_amt    number := 30000;
  l_offset number := 1;
  l_length number := nvl(dbms_lob.getlength(dd), 0);
  buf      varchar2(30000);
begin
  ff := UTL_FILE.FOPEN(dir, fn, 'W', 32760);
  while ( l_offset <= l_length ) loop
    buf := dbms_lob.substr(dd, l_amt, l_offset);
    utl_file.put(ff, buf);
    utl_file.fflush(ff);
    utl_file.new_line(ff);
    l_offset := l_offset + length(buf);
  end loop;
  UTL_FILE.FCLOSE(ff);
  return dd;
END WRITETOFILE;
/
2: a statement that creates a table from a query calling the function above; I suggest you keep the number of rows small to see how it plays
create table tmptbl as
select writetofile('DMP_DIR','xyz-'||level||'.xml', xmlelement("x", level).getClobVal()) tmpcol, systimestamp added_at
from dual CONNECT BY LEVEL <= 10;
3: drop the table so you can repeat the CREATE TABLE statement with more rows
drop table tmptbl purge;
I did 10k files in 10 seconds, which extrapolates to 1,000 seconds for 1M files and 50,000 seconds for 50M files (i.e. just under 14 hours).
I am trying to create a button on a page in my application that will download the full table I am referencing as a CSV file. I cannot use Interactive Reports > Actions > Download CSV because the interactive reports have hidden columns, and I need all columns to be populated in the CSV file.
Is there a way to create a SQL script and reference it from the button?
I have already tried the steps in this link: Oracle APEX - Export a query into CSV using a button, but it does not help, as my queries contain columns that are hidden in the Interactive Report.
Welcome to StackOverflow!
One flexible option would be to use an application process, defined in Shared Components (process point: Ajax Callback).
Something like this:
declare
  lClob     clob;
  lBlob     blob;
  lFilename varchar2(250) := 'filename.csv';
begin
  lClob := UNISTR('\FEFF'); -- BOM; was necessary for us to be able to use the files in MS Excel
  lClob := lClob || 'Tablespace Name;Table Name;Number of Rows' || utl_tcp.CRLF;
  for c in (select tablespace_name, table_name, num_rows from user_tables where rownum <= 5)
  loop
    lClob := lClob || c.tablespace_name || ';' || c.table_name || ';' || c.num_rows || utl_tcp.CRLF;
  end loop;
  lBlob := fClobToBlob(lClob);
  sys.htp.init;
  sys.owa_util.mime_header('text/csv', false);
  sys.htp.p('Content-Length: ' || dbms_lob.getlength(lBlob));
  sys.htp.p('Content-Disposition: attachment; filename="' || lFilename || '"');
  sys.htp.p('Cache-Control: no-cache, no-store, must-revalidate');
  sys.htp.p('Pragma: no-cache');
  sys.htp.p('Expires: 0');
  sys.owa_util.http_header_close;
  sys.wpg_docload.download_file(lBlob);
end;
This is the function fClobToBlob:
create function fClobToBlob(aClob CLOB) RETURN BLOB IS
  tgt_blob     BLOB;
  amount       INTEGER := DBMS_LOB.lobmaxsize;
  dest_offset  INTEGER := 1;
  src_offset   INTEGER := 1;
  blob_csid    INTEGER := nls_charset_id('UTF8');
  lang_context INTEGER := DBMS_LOB.default_lang_ctx;
  warning      INTEGER := 0;
begin
  if aClob is null then
    return null;
  end if;
  DBMS_LOB.CreateTemporary(tgt_blob, true);
  DBMS_LOB.ConvertToBlob(tgt_blob, aClob, amount, dest_offset, src_offset, blob_csid, lang_context, warning);
  return tgt_blob;
end fClobToBlob;
On the page, you need to set your button action to "Redirect to Page in this Application", the target Page to "0". Under "Advanced", set Request to "APPLICATION_PROCESS=downloadCSV", where downloadCSV is the name of your application process.
If you need to parameterize your process, you can do this by accessing page items or application items in your application process.
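For example (P1_DEPTNO and the emp table are hypothetical placeholders), the loop above could filter on a page item via the v() function:
-- hypothetical: restrict the exported rows by a page item
for c in (select ename, sal
            from emp
           where deptno = to_number(v('P1_DEPTNO')))
loop
  lClob := lClob || c.ename || ';' || c.sal || utl_tcp.CRLF;
end loop;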
Generating the CSV data can be cumbersome, but there are several packages out there that make it easier. The Alexandria PL/SQL utilities are one of them:
https://github.com/mortenbra/alexandria-plsql-utils
An example on how to use the CSV Package is here:
https://github.com/mortenbra/alexandria-plsql-utils/blob/master/demos/csv_util_pkg_demo.sql
I have created a package with two procedures and two cursors in it. The procedures execute successfully, but the same record is printed multiple times, and a buffer overflow occurs.
I also tried removing the loop from the cursor; that is fine for one record, but for multiple records it doesn't work as expected.
EXPECTED
I need to stop the same record being processed multiple times in the procedure where I am getting the repeated executions.
With a single procedure and a single cursor it works properly, but with multiple cursors and multiple procedures I get the repeated records (and the buffer overflow) where I need distinct records.
Is there an alternative way to fix the problem?
CREATE OR REPLACE PACKAGE test.report AS
PROCEDURE distribution (
code_in IN user.test.code%TYPE,
fromdate date,
todate date
);
PROCEDURE tdvalue (
id IN user.test.custid%TYPE
);
END report;
/
Package Body
CREATE OR REPLACE PACKAGE BODY test.report as
----------VARIABLE DECLARATION----------------
code_in user.test.code%TYPE;
custidin user.test.custid%TYPE;
fromdate DATE;
todate DATE;
diff number(17,2);
---------------CURSOR DECLARATION--------------
CURSOR td_data(code_in user.test.code%TYPE,
fromdate date,
todate date
) IS
( SELECT
test.code,
COUNT(test.code) AS count,
SUM(test2.Deposit_amount) AS total,
test.currency
FROM
user.test2
JOIN user.test ON test2.acid = test.acid
WHERE
user.test2.open_effective_date BETWEEN TO_DATE(fromdate, 'dd-mm-yyyy') AND TO_DATE(todate, 'dd-mm-yyyy')
and
user.test.code = code_in
GROUP BY
test.code,test.currency
);
td__data td_data%rowtype;
CURSOR C_DATA(custidin user.test.custid%TYPE) IS SELECT
test.custid,
test2.id,
TO_DATE(test2.initial_date, 'dd-mm-yyyy') - TO_DATE(test2.end_date, 'dd-mm-yyyy') AS noofdays,
round(((test2.deposit_amount *((TO_DATE(test2.initial_date, 'dd-mm-yyyy') - TO_DATE(test2.end_date, 'dd-mm-yyyy'
)) / 365) * test4.interest_rate) / 100), 2) + test2.deposit_amount AS calculated_amount,
SUM(test.flow_amt) + test2.deposit_amount AS system_amount
FROM
user.test
JOIN user.test2 ON test3.entity_id = test2.id
WHERE
test.custid = custidin
GROUP BY
test.custid,
test2.id;
c__data c_data%ROWTYPE;
PROCEDURE distribution
(
code_in IN user.test.code%TYPE,
fromdate in date,
todate in date
)
AS
BEGIN
OPEN td_data(code_in,fromdate,todate);
loop
FETCH td_data INTO td__data;
dbms_output.put_line(td__data.code
|| ' '
|| td__data.count
|| ' '
||td__data.currency
||' '
||td__data.total
);
end loop;
CLOSE td_data;
END distribution;
PROCEDURE tdvalue (
custidin IN user.test.custid%TYPE
)
AS
BEGIN
open c_data(custidin);
fetch c_data into c__data;
loop
diff:= c__data.calculated_amount- c__data.system_amount;
dbms_output.put_line(c__data.custid
|| ' '
|| c__data.noofdays
|| ' '
|| c__data.end_date
|| ' '
|| c__data.initial_date
|| ' '
|| c__data.calculated_amount
||' '
||diff
);
end loop;
close c_data;
END tdvalue;
END report;
/
To run
ALTER SESSION set nls_date_format='dd-mm-yyyy';
SET SERVEROUTPUT ON;
EXEC REPORT.DISTRIBUTION('872328','01-02-2016','08-02-2019');
/
EXEC REPORT.tdvalue('S9292879383SS53');
Buffer overflow - ORU-10027 - happens when the total number of bytes displayed through DBMS_OUTPUT exceeds the size of the serveroutput buffer. The default is only 20000 bytes (who knows why?). Your session is using that default because of how you enable serveroutput. Clearly one record produces less than 20000 bytes, so you only hit the limit when you run it for multiple records.
To fix this, try:
SET SERVEROUTPUT ON SIZE UNLIMITED
It's not actually unlimited: the upper bound is the PGA limit (session memory), but you really shouldn't hit that limit with DBMS_OUTPUT. Apart from anything else, who would read all that?
So the other problem with your code - as #piezol points out - is that your loops have no exit points. You should test whether the FETCH actually fetched anything and exit if it didn't:
loop
  FETCH td_data INTO td__data;
  exit when td_data%notfound;
  dbms_output.put_line(td__data.code
    || ' ' || td__data.count
    || ' ' || td__data.currency
    || ' ' || td__data.total);
end loop;
Remembering to do this is just one reason why implicit cursors and cursor for loops are preferred over explicit cursors.
The second cursor loop is even worse: not only does it have no exit point, the fetch is outside the loop. That's why you get repeated output for the same record.
So let's rewrite this ...
open c_data(custidin);
fetch c_data into c__data; -- should be inside
loop
diff:= c__data.calculated_amount- c__data.system_amount;
… as a cursor for loop:
PROCEDURE tdvalue (
  custidin IN user.test.custid%TYPE
)
AS
BEGIN
  for c__data in c_data(custidin)
  loop
    diff := c__data.calculated_amount - c__data.system_amount;
    dbms_output.put_line(c__data.custid
      || ' ' || c__data.noofdays
      || ' ' || c__data.end_date
      || ' ' || c__data.initial_date
      || ' ' || c__data.calculated_amount
      || ' ' || diff);
  end loop;
END tdvalue;
No need for OPEN, CLOSE or FETCH, and no need to check when the cursor is exhausted.
In PL/SQL, the preferred mechanism for setting the DBMS_OUTPUT buffer size would be within your procedure. This has the benefit of working in any client tool, such as Java or Toad (though it is still up to the client tool to retrieve the output from DBMS_OUTPUT).
DBMS_Output.ENABLE
Pass in a parameter of NULL for unlimited buffer size.
It would go like this:
BEGIN
  DBMS_OUTPUT.ENABLE(NULL);
  FOR I IN 1..1000 LOOP
    DBMS_OUTPUT.PUT_LINE('The quick red fox jumps over the lazy brown dog.');
  END LOOP;
END;
/
Bonus fact:
You can use the other functions and procedures in DBMS_OUTPUT to roll your own if you aren't using SQL*Plus or a DBMS_OUTPUT-savvy tool like Toad.
You can use the GET_LINE or GET_LINES procedures from your client code to get whatever may have been written to DBMS_OUTPUT.
GET_LINE
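A rough sketch of draining the buffer yourself with GET_LINES (what you do with each line is up to you):
declare
  l_lines dbms_output.chararr;
  l_count integer := 1000;  -- max number of lines to fetch per call
begin
  dbms_output.get_lines(l_lines, l_count);  -- on return, l_count holds the lines actually fetched
  for i in 1 .. l_count loop
    null;  -- process l_lines(i), e.g. log it or write it to a file
  end loop;
end;
/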
I have the below code in my PL/SQL procedure. It calls API_XXX.put (which calls utl_file.put) in a while loop, and l_xmldoc is the CLOB returned by the getReportXML function.
The code I wrote to write the XML into a file is:
l_offset := 1;
WHILE (l_offset <= l_length)
LOOP
l_char := dbms_lob.substr(l_xmldoc,1,l_offset);
IF (l_char = to_char(10)) ---I also tried if (l_char=chr(10)) but it did not work
THEN
API_XXXX.new_line(API_XXX.output, 1);
ELSE
API_XXXX.put(fnd_API_XXX.output, l_char);
END IF;
l_offset := l_offset + 1;
END LOOP;
Please note that API_XXX is an existing package which I am not able to modify, and this API calls fflush at the end of put.
The relevant part of API_XXX.put is below ("WHICH" is the first parameter):
elsif WHICH = API_XXX.OUTPUT then
temp_file := OUT_FNAME;
utl_file.put(F_OUT, BUFF);
utl_file.fflush(F_OUT);
API_XXX.new_line is like(LINES is the number of lines to write):
elsif WHICH = API_XXX.OUTPUT then
temp_file := OUT_FNAME;
utl_file.new_line(F_OUT, LINES);
utl_file.fflush(F_OUT);
I notice that the put/new_line procedures on my customer's side will sometimes raise UTL_FILE.WRITE_ERROR in the while loop, for unknown reasons (maybe because l_length is too large, up to 167465).
I read Oracle PL/SQL UTL_FILE.PUT buffering, and I found the same cause: my l_xmldoc is really large, and when I looped over it I found it contains no newline terminators, so the buffer grows up to 32767 bytes even though I fflush every time.
So, how should I convert the l_xmldoc into a varchar with newline terminators?
PS: I confirmed that my customer is using Oracle 11g.
Post the Oracle Version you are using! Or we can just guess around...
Your fflush will not work as you expect. From the documentation:
FFLUSH physically writes pending data to the file identified by the file handle. Normally, data being written to a file is buffered. The FFLUSH procedure forces the buffered data to be written to the file. The data must be terminated with a newline character.
tbone is absolutely right: the line TO_CHAR(10) is wrong! Just try SELECT TO_CHAR(10) FROM DUAL; you will get 10, which you then compare to a single character. A single character will never be '10', since '10' has two characters!
Your problem is most likely a buffer overflow with too-large XML files, but keep in mind that other problems on the target system can also lead to write errors, and those should be handled too.
Solutions
Quick & dirty: since you don't seem to care about performance anyway, you can just close the file every X bytes and reopen it with 'a' for append. Just add this to the loop:
IF MOD( l_offset, 32000 ) = 0
THEN
  UTL_FILE.FCLOSE( f_out );
  f_out := UTL_FILE.FOPEN( out_fpath, out_fname, 'a', 32767 );
END IF;
Use the right tool for the right job: UTL_FILE is not suited to handling complex data. The only use case for UTL_FILE is small, newline-separated lines of text. For everything else you should write RAW bytes, which also gives you proper control over the encoding (currently it's just a lucky guess); see the sketch after the next point.
Write a Java stored procedure with NIO FileChannels: fast, safe, nice... But be careful, your program might run 10 times as fast!
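A sketch of the raw-byte approach from the second point, assuming the CLOB has already been converted to a BLOB (for example with DBMS_LOB.CONVERTTOBLOB) and that a directory object XML_DIR exists; both names are assumptions:
declare
  l_xmlblob blob;            -- hypothetical: the XML, already converted to a BLOB
  l_file    utl_file.file_type;
  l_buffer  raw(32767);
  l_amount  integer;
  l_offset  integer := 1;
  l_length  integer;
begin
  -- ... populate l_xmlblob here, e.g. via dbms_lob.converttoblob ...
  l_length := dbms_lob.getlength(l_xmlblob);
  l_file := utl_file.fopen('XML_DIR', 'report.xml', 'wb', 32767);  -- 'wb' = write byte mode
  while l_offset <= l_length loop
    l_amount := least(32767, l_length - l_offset + 1);
    dbms_lob.read(l_xmlblob, l_amount, l_offset, l_buffer);
    utl_file.put_raw(l_file, l_buffer, true);  -- true = flush after each write
    l_offset := l_offset + l_amount;
  end loop;
  utl_file.fclose(l_file);
end;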
Just a guess, but instead of "to_char(10)" you might try chr(10) to determine/write a newline. Not sure if this will solve your problem, but sometimes very long lines (without newlines) can cause issues.
For example:
declare
  l_clob clob;
  l_char char;
begin
  l_clob := 'Line 1' || chr(10) || 'Line 2' || chr(10);
  for i in 1 .. DBMS_LOB.GETLENGTH(l_clob)
  loop
    l_char := dbms_lob.substr(l_clob, 1, i);
    if (l_char = chr(10)) then
    --if (l_char = to_char(10)) then
      dbms_output.put_line('Found a newline at position ' || i);
    end if;
  end loop;
end;
Notice the difference between chr(10) and to_char(10). Easy enough to test if this solves your problem anyway.