Fetching BLOB data and inserting into a Table - sql

I have BLOB content stored in a table (x_files) with MIME_TYPE = 'text/plain'. I want to parse this BLOB data and insert it into a table (TEMP_UPLOAD_DATA) that has a VARCHAR2 column.
Can you please help me with sample code, if you have any? I have written the code below; when I use DBMS_OUTPUT everything works fine, but when I try to insert the data into the table it does not.
Do I need to convert the BLOB to a CLOB first and then parse and insert the data into the table? I'd appreciate your input.
Tables in my database:
Table_name: X_files
ID  BLOB_CONTENT  MIME_TYPE   FILE_NAME         LAST_UPDATED  CHARACTER_SET
7   BLOB          text/plain  testing_blob.txt  01/28/2013    -
Table name: TEMP_UPLOAD_DATA
Column Name    Data Type       Nullable  Default  Primary Key
UPLOAD_COLUMN  VARCHAR2(1000)  Yes       -        -
The code I have written is shown below; it works fine when I use DBMS_OUTPUT, but not when I try to insert the data into the table.
DECLARE
  l_num          NUMBER(8);
  i              NUMBER(4);
  lob_loc        BLOB;
  update_details VARCHAR2(10000);
BEGIN
  SELECT BLOB_CONTENT INTO lob_loc FROM x_files WHERE id = 7;
  update_details := UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(lob_loc, 10000, 1));
  l_num := (LENGTH(update_details)-LENGTH(REPLACE(update_details,'##')))/LENGTH('##');
  FOR i IN 1..l_num
  LOOP
    DBMS_OUTPUT.PUT_LINE('STRING IS---' || SUBSTR(update_details,instr(update_details,'##',1,i),(instr(update_details,'##',1,i)-instr(update_details,'##',1,i)+1)));
    --INSERT INTO TEMP_UPLOAD_DATA VALUES(SUBSTR(update_details,instr(update_details,'##',1,i),(instr(update_details,'##',1,i)-instr(update_details,'##',1,i)+1)));
  END LOOP;
END;
Thanks

This example shows how simple it can be to insert a substring of BLOB data into a VARCHAR2 column. The original question mixes in a lot of incomplete setup scripting and blends in more commands than the question really needs, so the example below is kept minimal.
create table blob_table ( id number primary key, blob_content blob );
create table char_table ( id number, char_content varchar2(10) );
insert into blob_table
values (1, utl_raw.cast_to_raw('ABCDEFGHIJKLMNOPQRSTUVWXYZ'));
insert into char_table (id, char_content)
select bt.id, utl_raw.cast_to_varchar2(dbms_lob.substr(bt.blob_content, 10, 1))
from blob_table bt
where bt.id = 1;
select * from char_table;
ID CHAR_CONTENT
---------- ------------
1 ABCDEFGHIJ
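To address the original '##'-delimited data directly, here is a minimal sketch; it assumes the BLOB holds single-byte plain text whose values are separated by '##', and it reuses the x_files and TEMP_UPLOAD_DATA names from the question.
declare
  lob_loc        blob;
  update_details varchar2(10000);
  l_start        pls_integer := 1;
  l_pos          pls_integer;
begin
  select blob_content into lob_loc from x_files where id = 7;
  -- first 10000 bytes read as text; for multi-byte character sets or
  -- larger files, dbms_lob.converttoclob would be the safer route
  update_details := utl_raw.cast_to_varchar2(dbms_lob.substr(lob_loc, 10000, 1));
  loop
    exit when update_details is null or l_start > length(update_details);
    l_pos := instr(update_details, '##', l_start);
    if l_pos = 0 then
      -- last value has no trailing delimiter
      insert into temp_upload_data (upload_column)
      values (substr(update_details, l_start));
      exit;
    end if;
    insert into temp_upload_data (upload_column)
    values (substr(update_details, l_start, l_pos - l_start));
    l_start := l_pos + 2;  -- step past the '##' delimiter
  end loop;
  commit;
end;
/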

Related

How to get the values from a csv file which is loaded in a table in sql/plsql

I have a table that stores all the related data for a file (BLOB). From APEX, I load only CSV files into the table and it accepts them. With a CSV file stored in there, I want to select the contents of the file from the column it is stored in.
In other words, I have a CSV file (employees.csv) stored in a table (Table A) column (File_upload). I want to access the contents of the CSV file without exporting it, simply from a SQL query.
It is an Oracle database. The table includes ID (number), file_name (varchar2), file_uploaded (blob).
I have a sample I tried, but it's not working:
select csv.*
from tableA d, table(csv_util_pkg.clob_to_csv(d.file_uploaded)) csv
where d.id= 1;
It is not necessary to fix this code; alternatives are very welcome.
Thank you in advance!
I think your problem is the data type. You store the file as a BLOB, but your table function needs a CLOB.
I don't know your package csv_util_pkg, but I think the following is a solution for you.
Setup:
I used the csv_util_pkg package from https://github.com/mortenbra/alexandria-plsql-utils and installed:
types.sql
csv_util_pkg.pks
csv_util_pkg.pkb
create table
create table tableA (id number(10),
file_name varchar2(255),
file_uploaded blob);
upload data
using utl_raw.cast_to_raw() to create the BLOB data
insert into tableA (id,file_name, file_uploaded)
values (
1,
'employees.csv',
utl_raw.cast_to_raw(
'"EMPLOYEE_ID","FIRST_NAME","LAST_NAME","EMAIL","PHONE_NUMBER","HIRE_DATE","JOB_ID","SALARY","COMMISSION_PCT","MANAGER_ID","DEPARTMENT_ID"
100,"Steven","King","SKING","515.123.4567",17.06.2003,"AD_PRES",24000,,,90
101,"Neena","Kochhar","NKOCHHAR","515.123.4568",21.09.2005,"AD_VP",17000,,100,90
102,"Lex","De Haan","LDEHAAN","515.123.4569",13.01.2001,"AD_VP",17000,,100,90')
);
create helper function
get it from https://stackoverflow.com/a/12854297/12277315
create function clobfromblob(p_blob blob) return clob is
l_clob clob;
l_dest_offsset integer := 1;
l_src_offsset integer := 1;
l_lang_context integer := dbms_lob.default_lang_ctx;
l_warning integer;
begin
if p_blob is null then
return null;
end if;
dbms_lob.createTemporary(lob_loc => l_clob
,cache => false);
dbms_lob.converttoclob(dest_lob => l_clob
,src_blob => p_blob
,amount => dbms_lob.lobmaxsize
,dest_offset => l_dest_offsset
,src_offset => l_src_offsset
,blob_csid => dbms_lob.default_csid
,lang_context => l_lang_context
,warning => l_warning);
return l_clob;
end;
/
use helper function
select d.file_name, csv.*
from tableA d, table(csv_util_pkg.clob_to_csv(clobfromblob(d.file_uploaded))) csv
where d.id= 1;
Result: one row per CSV line, with the values split into separate columns.
Note that my example is limited to 20 columns; see type t_csv_line in types.sql.
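If the package matches the Alexandria definition linked above (an assumption), the parsed values come back in the generic columns c001 .. c020 of t_csv_line, so individual fields of the sample data can be selected like this:
select d.file_name,
       csv.c001 as employee_id,
       csv.c002 as first_name,
       csv.c003 as last_name
from tableA d,
     table(csv_util_pkg.clob_to_csv(clobfromblob(d.file_uploaded))) csv
where d.id = 1;
Note that the header line of employees.csv comes back as an ordinary row as well, so you may want to filter it out.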

How to access full OLD data in SQL Trigger

I have a trigger whose purpose is to fire whenever there is a DELETE on a particular table and to insert the deleted data into another table in JSON format.
The trigger works fine if I specify each column explicitly. Is there any way to access the entire table row?
This is my code.
CREATE OR REPLACE TRIGGER TRIGGER1
AFTER DELETE
ON QUESTION
FOR EACH ROW
DECLARE
json_doc CLOB;
BEGIN
select json_arrayagg (
json_object ('code' VALUE :old.id,
'name' VALUE :old.text,
'description' VALUE :old.text) returning clob
) into json_doc
from dual;
PROCEDURE1(json_doc);
END;
This works fine. However, what I want is something like this: instead of explicitly specifying each column, I want to convert the entire :OLD record.
CREATE OR REPLACE TRIGGER TRIGGER1
AFTER DELETE
ON QUESTION
FOR EACH ROW
DECLARE
json_doc CLOB;
BEGIN
select json_arrayagg (
json_object (:old) returning clob
) into json_doc
from dual;
PROCEDURE1(json_doc);
END;
Any suggestions, please?
The short and correct answer is you can't. We have a few tables in our application where we do this and the developer is responsible for updating the trigger when they add a column: this is enforced with code reviews and is probably the cleanest solution for this scenario.
The long answer is you can get close, but I wouldn't do this in production for several reasons:
Triggers are terrible for performance
Triggers are terrible for code clarity
This requires reading the row again using a flashback query, so:
  You aren't getting the values of the row from inside your current transaction: if you update the row in your transaction and then delete it, the JSON will show what the values were BEFORE your update
  There is a performance penalty for reading from UNDO
  There is potential that UNDO won't be available and your trigger will fail
  Your user needs permission to execute flashback queries
  Your database needs to meet all the prerequisites to support flashback queries
Deleting a lot of rows will cause the ROWID collection to get large and consume PGA
There are probably more reasons, but in the interest of "can it be done", here you go...
DROP TABLE t1;
DROP TABLE t2;
DROP TRIGGER t1_ad;
CREATE TABLE t1 (
id NUMBER,
name VARCHAR2(100),
description VARCHAR2(100)
);
CREATE TABLE t2 (
dt TIMESTAMP(9),
json_data CLOB
);
INSERT INTO t1 VALUES (1, 'A','aaaa');
INSERT INTO t1 VALUES (2, 'B','bbbb');
INSERT INTO t1 VALUES (3, 'C','cccc');
INSERT INTO t1 VALUES (4, 'D','dddd');
CREATE OR REPLACE TRIGGER t1_ad
FOR DELETE ON t1
COMPOUND TRIGGER
  TYPE t_rowid_tab IS TABLE OF ROWID;
  v_rowid_tab t_rowid_tab := t_rowid_tab();

  AFTER EACH ROW IS
  BEGIN
    v_rowid_tab.extend;
    v_rowid_tab(v_rowid_tab.last) := :old.rowid;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
    v_scn       v$database.current_scn%TYPE := dbms_flashback.get_system_change_number;
    v_json_data CLOB;
    v_sql       CLOB;
  BEGIN
    FOR i IN 1 .. v_rowid_tab.count
    LOOP
      SELECT 'SELECT json_arrayagg(json_object(' ||
             listagg('''' || lower(t.column_name) || ''' VALUE ' ||
                     lower(t.column_name),
                     ', ') within GROUP(ORDER BY t.column_id) || ') RETURNING CLOB) FROM t1 AS OF SCN :scn WHERE rowid = :r'
        INTO v_sql
        FROM user_tab_columns t
       WHERE t.table_name = 'T1';

      EXECUTE IMMEDIATE v_sql
        INTO v_json_data
        USING v_scn, v_rowid_tab(i);

      INSERT INTO t2
      VALUES
        (current_timestamp,
         v_json_data);
    END LOOP;
  END AFTER STATEMENT;
END t1_ad;
/
UPDATE t1
SET NAME = 'zzzz' -- not captured
WHERE id = 2;
DELETE FROM t1 WHERE id < 3;
SELECT *
FROM t2;
-- 13-NOV-20 01.08.15.955426000 PM [{"id":1,"name":"A","description":"aaaa"}]
-- 13-NOV-20 01.08.15.969755000 PM [{"id":2,"name":"B","description":"bbbb"}]
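For contrast, the hand-maintained trigger from the short answer looks like this against the same t1/t2 tables: a sketch with every column listed explicitly, which the developer has to keep in sync whenever t1 changes.
CREATE OR REPLACE TRIGGER t1_ad_simple
AFTER DELETE ON t1
FOR EACH ROW
BEGIN
  -- one JSON object per deleted row, taken straight from :old; no flashback query needed
  INSERT INTO t2 (dt, json_data)
  VALUES (current_timestamp,
          json_object('id'          VALUE :old.id,
                      'name'        VALUE :old.name,
                      'description' VALUE :old.description
                      RETURNING CLOB));
END;
/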

Updating json array of objects in Oracle

Hi, I have a JSON column in an Oracle database with data like [{"id":100, "make":"BMW"},{"id":110,"make":"mercedes"}]. How can I update the make of the object with id 110 to Toyota using SQL/PL/SQL? Thank you.
You can use the json_table() function (available from version 12.1.0.2) to parse a JSON column. By the way, a string column such as CLOB or VARCHAR2 can be validated by adding an IS JSON check constraint on it. So use:
update tab
set jsdata=(select case when js.id = 110 then replace(jsdata,js.make,'toyota') end
from tab
cross join
json_table(jsdata, '$[*]'
columns(make varchar2(50) path '$.make',
id int path '$.id')) js
where js.id = 110 )
Unfortunately, it is not easy to change data within an array.
create table t(
id number primary key,
json_ds varchar2(4000) check(json_ds is json)
);
insert into t values (1, '[{"id":100, "make":"BMW"},{"id":110,"make":"mercedes"}]');
commit;
update /*+ WITH_PLSQL */ t a
set json_ds = (
with function update_json(p_in in varchar2) return varchar2 is
l_ja json_array_t;
l_po json_object_t;
l_id number;
begin
l_ja := json_array_t.parse(p_in);
for i in 0..l_ja.get_size - 1 loop
l_po := json_object_t(l_ja.get(i));
l_id := l_po.get_number('id');
if l_id = 110 then
l_po.put('make', 'Toyota');
end if;
end loop;
return l_ja.to_string;
end update_json;
select update_json(a.json_ds) from dual
)
where id = 1;
/
select * from t;
ID JSON_DS
1 [{"id":100,"make":"BMW"},{"id":110,"make":"Toyota"}]
Best regards, Stew Ashton
You can use JSON_TRANSFORM() in 19.8 and up.
with example as
(select '[{"id":100, "make":"BMW"},{"id":110,"make":"mercedes"}]' as json from dual)
select json_transform(example.json, set '$[*]?(@.id==110).make' = 'Toyota') as newjson from example;
Output:
[{"id":100,"make":"BMW"},{"id":110,"make":"Toyota"}]
You can also use JSON_MERGEPATCH (19c and up), but it can't update within arrays; you have to replace the whole array.
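For completeness, a minimal sketch of that whole-array replacement with JSON_MERGEPATCH, reusing the t table and json_ds column from the answer above; a patch document that is an array simply replaces the target value:
update t
set json_ds = json_mergepatch(
                json_ds,
                '[{"id":100,"make":"BMW"},{"id":110,"make":"Toyota"}]'
              )
where id = 1;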

How to insert 3.5k ints to array in Oracle DB

I have custom type:
create or replace type integer_varray as varray (4000) of int;
Then table which uses this array:
create table plan_capacities
(
id int generated by default as identity not null constraint plan_capacities_pkey primary key,
line_id int references lines (id) on delete cascade,
model_id int references models (id) on delete cascade,
plan_id int references plans (id) on delete cascade,
capacity integer_varray
);
And then there is some data I would like to insert. The problem is that in Oracle I can't use more than 1000 items in the array constructor (I have 3,500 items), so the simple statement
INSERT INTO plan_capacities ("model_id", "line_id", "plan_id", "capacity") VALUES (1,1,1,integer_varray(1,2,3.....35000))
is not possible. (The data are capacities and they have to be in a specific order.)
The data that should be inserted into the array are in a string I have to put into the script -> {1,10,11,10,20,0,0,0,1,10 ....}
How can I insert that load of data?
I tried to insert the values into a temp table and then fill the array from it - this works, but the SQL script then has 3,500 rows (to create just one record in plan_capacities), which is awful and big.
You can use your array as a table to insert its values into a table with a single SQL statement; for example:
declare
vArray integer_varray;
begin
-- some code to populate vArray
insert into someTable(col)
select column_value from table(vArray);
end;
If you can populate your array with a query, you don't need the array at all; simply use your query as the data source for the insert statement, for example:
insert into someTable(col)
select something
from someOtherTable
If you need a way to create a set of numbers, say 1, 2, ... 3500, this is a commonly used way:
select level
from dual
connect by level <= 3500
As for building a set of numbers from a string, this is a fairly common way:
SQL> create or replace type integer_varray as varray (4000) of int
2 /
Type created.
SQL> create table someTable(n number);
Table created.
SQL> declare
2 vString varchar2(32000) := '1,10,11,10,20,0,0,0,1,10';
3 vArray integer_varray;
4 begin
5 insert into someTable(n)
6 select to_number(regexp_substr(vString, '[^,]+', 1, level))
7 from dual
8 connect by instr(vString, ',', 1, level - 1) > 0;
9 end;
10 /
PL/SQL procedure successfully completed.
SQL> select * from someTable;
N
----------
1
10
11
10
20
0
0
0
1
10
10 rows selected.
SQL>
So my final solution is the following:
create or replace type integer_varray as varray (4000) of int;
/
create or replace type varchar_varray as varray (10) of varchar(32767);
/
declare
data_to_be_stored varchar_varray := varchar_varray(
'0,0,0,0,.....',
'0,0,0,0,0,0....',
'0,0,0,0,0,0....'
);
array_to_store integer_varray := integer_varray();
begin
for i in 1 .. data_to_be_stored.COUNT loop
for j in (select to_number(trim(regexp_substr(data_to_be_stored(i), '[^,]+', 1, LEVEL))) value
from dual
connect by LEVEL <= regexp_count(data_to_be_stored(i), ',') + 1
) loop
array_to_store.extend;
array_to_store(array_to_store.count) := j.value;
end loop;
end loop;
insert into table_with_that_array (array) values (array_to_store);
end;
/
I had to use varchar_varray because my data strings are bigger than the maximum capacity of varchar2, so I split the data into multiple strings in the array.
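The row-by-row cursor loop in that final solution can also be replaced with one BULK COLLECT per chunk, since a varray is a valid BULK COLLECT target. A sketch under the assumption that each chunk fits in a varchar2 and the total element count stays within the varray limit of 4000; the 1,1,1 key values and the plan_capacities columns are taken from the question:
declare
  data_to_be_stored varchar_varray := varchar_varray(
    '0,0,0,0,.....',       -- placeholder chunks as above; real comma-separated data goes here
    '0,0,0,0,0,0....'
  );
  array_to_store integer_varray := integer_varray();
  chunk_values   integer_varray;
begin
  for i in 1 .. data_to_be_stored.count loop
    -- split one chunk straight into a collection
    select to_number(trim(regexp_substr(data_to_be_stored(i), '[^,]+', 1, level)))
      bulk collect into chunk_values
      from dual
    connect by level <= regexp_count(data_to_be_stored(i), ',') + 1;
    -- append the chunk to the accumulated array
    for j in 1 .. chunk_values.count loop
      array_to_store.extend;
      array_to_store(array_to_store.count) := chunk_values(j);
    end loop;
  end loop;
  insert into plan_capacities (line_id, model_id, plan_id, capacity)
  values (1, 1, 1, array_to_store);
end;
/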

Creating an update from insert in oracle sql

I already have an insert working fully. However, I am unable to get the update working. I am using Application Express and Oracle SQL. Below is what I have come up with; however, it only seems to add new rows, creating a copy, rather than updating the current row of data.
DECLARE
l_upload_size INTEGER;
l_upload_blob BLOB;
l_image_id NUMBER;
l_image ORDSYS.ORDImage;
l_name VARCHAR2(100);
l_address VARCHAR2(100);
l_postcode VARCHAR2(100);
l_description VARCHAR2(100);
BEGIN
--
-- Get the BLOB of the new image from the APEX_APPLICATION_TEMP_FILES (synonym for WWV_FLOW_TEMP_FILES)
-- APEX 5.0 change from APEX_APPLICATION_FILES which has been deprecated
-- APEX_APPLICATION_TEMP_FILES has fewer columns and is missing doc_size
--
SELECT
blob_content
INTO
l_upload_blob
FROM
apex_application_temp_files
WHERE
name = :P3_filename;
--
-- Insert row into the table, initialising the image and
-- returning the newly allocated image_id for later use
--
INSERT
INTO
bars
(
filename,
image,
name,
address,
postcode,
description
)
VALUES
(
:P3_filename,
ORDSYS.ORDImage(),
:P3_NAME,
:P3_ADDRESS,
:P3_POSTCODE,
:P3_DESCRIPTION
)
RETURNING
image_id, image
INTO
l_image_id, l_image;
-- find the size of BLOB (get doc_size)
l_upload_size := dbms_lob.getlength(l_upload_blob);
-- copy the blob into the ORDImage BLOB container
DBMS_LOB.COPY( l_image.SOURCE.localData, l_upload_blob, l_upload_size );
-- set the image properties
l_image.setProperties();
create_blob_thumbnail(l_image_id);
UPDATE
bars
SET
image = l_image -- original ORDImage image
WHERE
image_id = l_image_id;
END;
It seems that you are looking for the MERGE command. Try something like this:
MERGE INTO bars DEST_TABLE
USING (select :P3_filename as filename from dual) SOURCE_TABLE
ON (DEST_TABLE.name = SOURCE_TABLE.filename)
WHEN MATCHED THEN
UPDATE SET image = ORDSYS.ORDImage()
WHEN NOT MATCHED THEN
INSERT (
filename,
image,
name,
address,
postcode,
description)
VALUES (:P3_filename,
ORDSYS.ORDImage(),
:P3_NAME,
:P3_ADDRESS,
:P3_POSTCODE,
:P3_DESCRIPTION);
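Since MERGE has no RETURNING clause, the image_id still has to be fetched after the MERGE before the ORDImage BLOB can be filled. Here is a sketch of how the original block might be reworked around the MERGE; it assumes filename uniquely identifies a row in bars (the answer above matched on name, so adjust the ON clause to whatever really identifies the row):
DECLARE
  l_upload_size INTEGER;
  l_upload_blob BLOB;
  l_image_id    NUMBER;
  l_image       ORDSYS.ORDImage;
BEGIN
  -- get the uploaded file from the APEX temp table, as in the question
  SELECT blob_content
    INTO l_upload_blob
    FROM apex_application_temp_files
   WHERE name = :P3_filename;

  -- insert a new row or re-initialise the image on the existing one
  MERGE INTO bars dest
  USING (SELECT :P3_filename AS filename FROM dual) src
  ON (dest.filename = src.filename)
  WHEN MATCHED THEN
    UPDATE SET name        = :P3_NAME,
               address     = :P3_ADDRESS,
               postcode    = :P3_POSTCODE,
               description = :P3_DESCRIPTION,
               image       = ORDSYS.ORDImage()
  WHEN NOT MATCHED THEN
    INSERT (filename, image, name, address, postcode, description)
    VALUES (src.filename, ORDSYS.ORDImage(), :P3_NAME, :P3_ADDRESS, :P3_POSTCODE, :P3_DESCRIPTION);

  -- MERGE cannot RETURN the generated key, so read it back
  -- (FOR UPDATE locks the row so the LOB locator is writable)
  SELECT image_id, image
    INTO l_image_id, l_image
    FROM bars
   WHERE filename = :P3_filename
   FOR UPDATE;

  -- copy the uploaded BLOB into the ORDImage container and save it
  l_upload_size := DBMS_LOB.GETLENGTH(l_upload_blob);
  DBMS_LOB.COPY(l_image.SOURCE.localData, l_upload_blob, l_upload_size);
  l_image.setProperties();
  UPDATE bars SET image = l_image WHERE image_id = l_image_id;

  create_blob_thumbnail(l_image_id);
END;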