Extract first row from all subpartitions in all tables in an Oracle database

I want to get the first value for all subpartitions of all tables in my Oracle database.
From my result set, I want to update another table with the values returned, and insert a comment ('No value', for example) for subpartitions which are empty.
I have written a piece of code, to begin with, for one table.
Can you please tell me if the algorithm I am using is right, and why there is no output?
declare
begin
  for i in (select table_name, subpartition_name
              from dba_tab_subpartitions
             where table_name = 'my_table'
             order by 1)
  loop
    execute immediate 'insert into bkp_part (partition_name) values (' || i.subpartition_name || ')';
    commit;
    execute immediate 'select * from ' || i.table_name
      || ' subpartition(' || i.subpartition_name || ') where rownum < 2';
  end loop;
end;
/

There is no output because you aren't creating any. Your dynamic query isn't going to be executed, because you don't select the columns into anything; if you really want the entire row you'd need a %rowtype variable to select into, but I'm not sure whether you want the whole row or just one value from it. And having selected into something, you then need to use that variable - putting it into a table, from what you've said, or displaying it for debug purposes via dbms_output.
Your insert statement doesn't need to be dynamic; you can just do:
insert into bkp_part (partition_name) values (i.subpartition_name);
To get the first row from each subpartition, you need to establish what you mean by 'first'. In this example I'm using a hash subpartition, so for the sake of argument I'll decide that the 'first' row is the lowest value in that hash bucket, and to keep it simple I'm only interested in that single column.
I've set up a dummy table with:
create table my_table (id number, part_date date)
partition by range (part_date)
subpartition by hash (id)
subpartition template (
  subpartition subpart_1,
  subpartition subpart_2,
  subpartition subpart_3
)
(
  partition part_1 values less than (date '2015-01-01')
);
insert into my_table values (1, date '2014-12-29');
insert into my_table values (2, date '2014-12-30');
insert into my_table values (4, date '2014-12-31');
Looking at how the data is distributed, I have nothing in the first subpartition, IDs 1 and 4 in the second, and ID 2 in the third.
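You can check that distribution yourself by querying each subpartition directly; this is a small aside of mine, using the same subpartition() clause as the dynamic query further down (the subpartition names come from combining the partition name with the template):
select 'PART_1_SUBPART_1' as subpart, id from my_table subpartition (part_1_subpart_1)
union all
select 'PART_1_SUBPART_2', id from my_table subpartition (part_1_subpart_2)
union all
select 'PART_1_SUBPART_3', id from my_table subpartition (part_1_subpart_3);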
That means I can do:
set serveroutput on
declare
  l_id my_table.id%type;
begin
  for i in (
    select table_name, subpartition_name
      from user_tab_subpartitions
     where table_name = 'MY_TABLE'
     order by table_name, subpartition_position
  ) loop
    -- skipping because I don't have that table
    -- insert into bkp_part (partition_name) values (i.subpartition_name);
    execute immediate 'select min(id) from ' || i.table_name
      || ' subpartition(' || i.subpartition_name || ')'
      into l_id;
    -- just for debugging/demo
    dbms_output.put_line('Subpartition ' || i.subpartition_name
      || ' minimum ID is ' || l_id);
    -- do something else with the value; update or insert...
    -- update bkp_part set min_id = l_id
    --   where partition_name = i.subpartition_name;
  end loop;
end;
/
anonymous block completed
Subpartition PART_1_SUBPART_1 minimum ID is
Subpartition PART_1_SUBPART_2 minimum ID is 1
Subpartition PART_1_SUBPART_3 minimum ID is 2
You can adapt that to get the columns you're interested in and do something more useful with them, like inserting into or updating your bkp_part table.
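To also satisfy the question's requirement of recording 'No value' for empty subpartitions, a minimal sketch could look like this; I'm assuming a bkp_part layout of (partition_name, first_value), which the question doesn't show:
declare
  l_id my_table.id%type;
begin
  for i in (
    select table_name, subpartition_name
      from user_tab_subpartitions
     where table_name = 'MY_TABLE'
     order by table_name, subpartition_position
  ) loop
    execute immediate 'select min(id) from ' || i.table_name
      || ' subpartition(' || i.subpartition_name || ')'
      into l_id;
    -- min() is null for an empty subpartition, so store the marker instead
    insert into bkp_part (partition_name, first_value)
    values (i.subpartition_name, nvl(to_char(l_id), 'No value'));
  end loop;
end;
/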

Related

Oracle how to select and update one field value and insert records quickly

We have the below scenario:
create table
create table tt (id number, name varchar2(30), note varchar2(4));
insert into tt values (1,'AC','m');
insert into tt values (2,'Test','f');
We want to select a record from table tt and insert another record with one field value changed:
INSERT INTO tt SELECT id, 'x1', note FROM tt where id=1;
As our table has many columns, we would like to find a way of doing the above without listing all the column names in step 2's SQL.
Is there any way to do that?
There are several ways to get what you want; I will explain the first one that occurs to me.
It helps to be on version 11.2 or later, where the LISTAGG function is available; you also have the [DBA | ALL | USER]_TAB_COLUMNS dictionary views.
If LISTAGG is not available on your version, you can concatenate the column names from ALL_TAB_COLUMNS into a VARCHAR2 yourself, for example (you can obviously order the cursor however you like):
DECLARE
  CURSOR my_cursor IS
    SELECT column_name
      FROM all_tab_columns
     WHERE table_name = 'MY_TABLE'
     ORDER BY column_id;
  v_fields VARCHAR2(4000);
BEGIN
  FOR cur IN my_cursor LOOP
    v_fields := v_fields || ',' || cur.column_name;
  END LOOP;
  -- the loop puts a comma before the first column name; trim it off
  v_fields := LTRIM(v_fields, ',');
  EXECUTE IMMEDIATE 'SELECT ' || v_fields || ' FROM MY_TABLE';
END;
If you do have access to LISTAGG, it is much easier:
SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
  INTO v_fields
  FROM all_tab_columns
 WHERE table_name = <MY_TABLE>;
You can order by anything; the WITHIN GROUP clause can even be (ORDER BY NULL), but it cannot be empty. Here v_fields should be declared as something like VARCHAR2(2000).
Now, finally, you need an array or collection holding the list of fields you want to modify. For example, if you have:
v_fields := 'col1, col2, col3, ... coln';
and an array such as:
v_varray := myvarray('col1', 'col3', 'colx');
then you can swap each of those column names for the modified expression you want in its place:
BEGIN
  FOR i IN 1 .. v_varray.COUNT LOOP
    -- <MY_MODIFICATION> is a placeholder for your replacement expression
    v_fields := REPLACE(v_fields, v_varray(i), <MY_MODIFICATION>);
  END LOOP;
END;
With this method you must identify your critical fields carefully (REPLACE matches substrings, so a column name contained in another column's name will cause trouble), and you are working with VARCHAR2s throughout.
I hope this helps.
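Putting the pieces above together for the tt table in the question, a minimal sketch (my own assembly, with the literal 'x1' substituted for the NAME column) could be:
DECLARE
  v_cols  VARCHAR2(4000);
  v_exprs VARCHAR2(4000);
BEGIN
  -- build the full column list from the dictionary
  SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    INTO v_cols
    FROM user_tab_columns
   WHERE table_name = 'TT';
  -- swap the one column we want to override for a literal;
  -- beware that REPLACE matches substrings of other column names
  v_exprs := REPLACE(v_cols, 'NAME', '''x1''');
  EXECUTE IMMEDIATE
    'INSERT INTO tt (' || v_cols || ') SELECT ' || v_exprs
    || ' FROM tt WHERE id = :1'
    USING 1;
END;
/
This produces the same row as the INSERT ... SELECT in the question, without listing the columns by hand.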

How to access full OLD data in SQL Trigger

I have a trigger whose purpose is to fire whenever there is a DELETE on a particular table and insert the deleted data into another table in json format.
The trigger works fine if I am specifying each column explicitly. Is there any way to access the entire table row?
This is my code.
CREATE OR REPLACE TRIGGER TRIGGER1
AFTER DELETE
ON QUESTION
FOR EACH ROW
DECLARE
  json_doc CLOB;
BEGIN
  SELECT json_arrayagg(
           json_object('code' VALUE :old.id,
                       'name' VALUE :old.text,
                       'description' VALUE :old.text)
           RETURNING CLOB
         )
    INTO json_doc
    FROM dual;
  PROCEDURE1(json_doc);
END;
/
This works fine. However, what I want is something like the following: instead of explicitly specifying each column, I want to convert the entire :OLD record.
CREATE OR REPLACE TRIGGER TRIGGER1
AFTER DELETE
ON QUESTION
FOR EACH ROW
DECLARE
  json_doc CLOB;
BEGIN
  SELECT json_arrayagg(
           json_object(:old) RETURNING CLOB
         )
    INTO json_doc
    FROM dual;
  PROCEDURE1(json_doc);
END;
/
Any suggestions, please?
The short and correct answer is you can't. We have a few tables in our application where we do this and the developer is responsible for updating the trigger when they add a column: this is enforced with code reviews and is probably the cleanest solution for this scenario.
The long answer is you can get close, but I wouldn't do this in production for several reasons:
Triggers are terrible for performance
Triggers are terrible for code clarity
This requires reading the row again using a flashback query, so:
You aren't getting the values of this row from inside your current transaction: if you update the row in your transaction and then delete it the JSON will show what the values were BEFORE your update
There is a performance penalty for reading from UNDO
There is potential that UNDO won't be available and your trigger will fail
Your user needs permission to execute flashback queries
Your database needs to meet all the prerequisites to support flashback queries
Deleting a lot of rows will cause the ROWID collection to get large and consume PGA
There are probably more reasons, but in the interest of "can it be done" here you go...
DROP TABLE t1;
DROP TABLE t2;
DROP TRIGGER t1_ad;
CREATE TABLE t1 (
id NUMBER,
name VARCHAR2(100),
description VARCHAR2(100)
);
CREATE TABLE t2 (
dt TIMESTAMP(9),
json_data CLOB
);
INSERT INTO t1 VALUES (1, 'A','aaaa');
INSERT INTO t1 VALUES (2, 'B','bbbb');
INSERT INTO t1 VALUES (3, 'C','cccc');
INSERT INTO t1 VALUES (4, 'D','dddd');
CREATE OR REPLACE TRIGGER t1_ad
FOR DELETE ON t1
COMPOUND TRIGGER
TYPE t_rowid_tab IS TABLE OF ROWID;
v_rowid_tab t_rowid_tab := t_rowid_tab();
AFTER EACH ROW IS
BEGIN
v_rowid_tab.extend;
v_rowid_tab(v_rowid_tab.last) := :old.rowid;
END AFTER EACH ROW;
AFTER STATEMENT IS
v_scn v$database.current_scn%type := dbms_flashback.get_system_change_number;
v_json_data CLOB;
v_sql CLOB;
BEGIN
FOR i IN 1 .. v_rowid_tab.count
LOOP
SELECT 'SELECT json_arrayagg(json_object(' ||
listagg('''' || lower(t.column_name) || ''' VALUE ' ||
lower(t.column_name),
', ') within GROUP(ORDER BY t.column_id) || ') RETURNING CLOB) FROM t1 AS OF SCN :scn WHERE rowid = :r'
INTO v_sql
FROM user_tab_columns t
WHERE t.table_name = 'T1';
EXECUTE IMMEDIATE v_sql
INTO v_json_data
USING v_scn, v_rowid_tab(i);
INSERT INTO t2
VALUES
(current_timestamp,
v_json_data);
END LOOP;
END AFTER STATEMENT;
END t1_ad;
/
UPDATE t1
SET NAME = 'zzzz' -- not captured
WHERE id = 2;
DELETE FROM t1 WHERE id < 3;
SELECT *
FROM t2;
-- 13-NOV-20 01.08.15.955426000 PM [{"id":1,"name":"A","description":"aaaa"}]
-- 13-NOV-20 01.08.15.969755000 PM [{"id":2,"name":"B","description":"bbbb"}]
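For comparison, the maintained "explicit columns" approach described at the top of this answer, applied to the same t1 and t2 tables, is just the following sketch (the column list has to be kept in sync with t1 by hand):
CREATE OR REPLACE TRIGGER t1_ad_explicit
AFTER DELETE ON t1
FOR EACH ROW
DECLARE
  v_json CLOB;
BEGIN
  -- update this column list whenever a column is added to t1
  SELECT json_object('id' VALUE :old.id,
                     'name' VALUE :old.name,
                     'description' VALUE :old.description
                     RETURNING CLOB)
    INTO v_json
    FROM dual;
  INSERT INTO t2 VALUES (current_timestamp, v_json);
END;
/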

Oracle SQL/PLSQL: change the type of specific columns in one go

Assume the following table named t1:
create table t1 (
  clid number,
  A number,
  B number,
  C number
);
insert into t1 values(1, 1, 1, 1);
insert into t1 values(2, 0, 1, 0);
insert into t1 values(3, 1, 0, 1);
clid  A  B  C
   1  1  1  1
   2  0  1  0
   3  1  0  1
The type of columns A, B, and C is NUMBER. What I need to do is change the type of those columns to VARCHAR, but in a quite tricky way.
In my real table I need to change the datatype of hundreds of columns, so it is not convenient to write a statement like the following hundreds of times:
ALTER TABLE table_name
MODIFY column_name datatype;
What I need instead is to convert all columns except the CLID column to VARCHAR, like we can do in Python or R.
Is there any way to do so in Oracle SQL or PL/SQL?
I appreciate your help.
Here is an example of a procedure that can help.
It accepts two parameters: the name of your table and the list of columns you do not want to change.
At the beginning there is a cursor that gets all the column names for your table except the ones you do not want to change.
Then it loops through those columns and changes them:
CREATE OR REPLACE PROCEDURE test_proc(p_tab_name  IN VARCHAR2,
                                      p_col_names IN VARCHAR2)
IS
  v_string VARCHAR2(4000);
  CURSOR c_tab_cols IS
    SELECT column_name
      FROM all_tab_cols
     WHERE table_name = UPPER(p_tab_name)
       AND column_name NOT IN (SELECT regexp_substr(p_col_names, '[^,]+', 1, LEVEL)
                                 FROM dual
                               CONNECT BY regexp_substr(p_col_names, '[^,]+', 1, LEVEL) IS NOT NULL);
BEGIN
  FOR i_record IN c_tab_cols LOOP
    v_string := 'alter table ' || p_tab_name || ' modify '
                || i_record.column_name || ' varchar2(30)';
    EXECUTE IMMEDIATE v_string;
  END LOOP;
END;
/
You can also extend this procedure with a parameter for the datatype you want to change to, and I am sure with some more options besides.
Unfortunately, this isn't as simple as you'd want it to be. It is not a problem to write a query which will write the query for you (by querying USER_TAB_COLUMNS), but a column must be empty in order to change its datatype:
SQL> create table t1 (a number);
Table created.
SQL> insert into t1 values (1);
1 row created.
SQL> alter table t1 modify a varchar2(1);
alter table t1 modify a varchar2(1)
*
ERROR at line 1:
ORA-01439: column to be modified must be empty to change datatype
SQL>
If there are hundreds of columns involved, maybe you can't even
create additional columns in the same table (of VARCHAR2 datatype)
move values in there
drop "original" columns
rename "new" columns to "old names"
because there's a limit of 1000 columns per table.
Therefore,
creating a new table (with appropriate columns' datatypes),
moving data over there,
dropping the "original" table
renaming the "new" table to "old name"
is probably what you'll end up doing. Note that it won't necessarily be easy either, especially if there are foreign keys involved.
A "query that writes query for you" might look like this (Scott's sample tables):
SQL> SELECT 'insert into dept (deptno, dname, loc) '
2 || 'select '
3 || LISTAGG ('to_char(' || column_name || ')', ', ')
4 WITHIN GROUP (ORDER BY column_name)
5 || ' from emp'
6 FROM user_tab_columns
7 WHERE table_name = 'EMP'
8 AND COLUMN_ID <= 3
9 /
insert into dept (deptno, dname, loc) select to_char(EMPNO), to_char(ENAME), to_char(JOB) from emp
SQL>
It'll save you from typing names of hundreds of columns.
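In the same spirit, here's a query-writing sketch of my own (not from the original answer) that generates the "new table" for the t1 example above, converting everything except CLID:
SELECT 'create table t1_new as select clid, '
       || LISTAGG('to_char(' || column_name || ') ' || column_name, ', ')
          WITHIN GROUP (ORDER BY column_id)
       || ' from t1'
  FROM user_tab_columns
 WHERE table_name = 'T1'
   AND column_name <> 'CLID';
Run the statement it generates, verify the data, then drop t1 and rename t1_new to t1.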
I think it's not possible to change the datatype of a column while values are in it.
Empty the column by copying its values to a dummy column, then change the datatype.
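A minimal sketch of that idea for a single column (column A of the t1 table from the question; a_tmp is a scratch column name I'm assuming is free):
ALTER TABLE t1 ADD (a_tmp VARCHAR2(30));
-- copy the values out and empty the original column
UPDATE t1 SET a_tmp = TO_CHAR(a), a = NULL;
-- allowed now that the column is empty (no more ORA-01439)
ALTER TABLE t1 MODIFY (a VARCHAR2(30));
-- copy the values back and drop the scratch column
UPDATE t1 SET a = a_tmp;
ALTER TABLE t1 DROP COLUMN a_tmp;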

Finding every record of specific string in database

I have a database with the column "endpointid" in a lot of tables. I am looking for a search function that would find every table containing a specific endpointid, in order to write a query to delete that endpoint. I have tried a delete function to remove it from all tables, but that is not working properly, since a specific endpointid might not be in all tables. I know the following query gives all tables with the column name:
select table_name from all_tab_columns where lower(column_name) like lower('%endpointid%');
How can I extend that query to search for a specific record of endpointid?
Here is an example to delete rows with a specific endpointid value:
CREATE TABLE mytest (
endpointid NUMBER
);
INSERT INTO mytest VALUES ( 1 );
INSERT INTO mytest VALUES ( 2 );
DECLARE
ep NUMBER := 2;
BEGIN
FOR t_rec IN (
SELECT
table_name
FROM
all_tab_columns
WHERE
lower(column_name) LIKE lower('%endpointid%')
) LOOP
EXECUTE IMMEDIATE 'delete from '
|| t_rec.table_name
|| ' where endpointid = :1'
USING ep;
END LOOP;
END;
/
Note that if these tables have foreign key relationships, this may fail, as it does not take into account the ordering of the table references. If that is needed, then you would need to structure your metadata query to find those relationships.
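If you want to see where the value actually lives before deleting anything, a variation of the same loop (my sketch, reporting match counts via dbms_output) would be:
SET SERVEROUTPUT ON
DECLARE
  ep    NUMBER := 2;
  v_cnt NUMBER;
BEGIN
  FOR t_rec IN (
    SELECT table_name, column_name
      FROM all_tab_columns
     WHERE lower(column_name) LIKE lower('%endpointid%')
  ) LOOP
    EXECUTE IMMEDIATE 'select count(*) from ' || t_rec.table_name
      || ' where ' || t_rec.column_name || ' = :1'
      INTO v_cnt
      USING ep;
    IF v_cnt > 0 THEN
      dbms_output.put_line(t_rec.table_name || ': ' || v_cnt || ' row(s)');
    END IF;
  END LOOP;
END;
/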

nzsql - Converting a subquery into columns for another select

Goal: Use a given subquery's results (a single column with many rows of names) to act as the outer select's selection field.
Currently, my subquery is the following:
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'test_table' AND column_name not in ('colRemove');
What I am doing in this subquery is grabbing all the column names from a table (i.e. test_table) and outputting all except for the column name specified (i.e. colRemove). As stated in the "goal", I want to use this subquery as such:
SELECT (*enter subquery from above here*)
FROM actual_table
WHERE (*enter specific conditions*)
I am working on a Netezza SQL server that is version 7.0.4.4. Ideally, I would like to make the entire query executable in one line, but for now, a working solution would be much appreciated. Thanks!
Note: I do not believe that the SQL extensions has been installed (i.e. arrays), but I will need to double check this.
A year too late, here's the best I can come up with; as you already noticed, it requires a stored procedure to do the dynamic SQL. The stored proc creates a view with all the columns from the source table minus the one you want to exclude.
-- Create test data.
CREATE TABLE test (firstcol INTEGER, secondcol INTEGER, thirdcol INTEGER);
INSERT INTO test (firstcol, secondcol, thirdcol) VALUES (1, 2, 3);
INSERT INTO test (firstcol, secondcol, thirdcol) VALUES (4, 5, 6);
-- Install stored procedure.
CREATE OR REPLACE PROCEDURE CreateLimitedView (varchar(ANY), varchar(ANY)) RETURNS BOOLEAN
LANGUAGE NZPLSQL AS
BEGIN_PROC
DECLARE
tableName ALIAS FOR $1;
columnToExclude ALIAS FOR $2;
colRec RECORD;
cols VARCHAR(2000); -- Adjust as needed.
isfirstcol BOOLEAN;
BEGIN
isfirstcol := true;
FOR colRec IN EXECUTE
'SELECT ATTNAME AS NAME FROM _V_RELATION_COLUMN
WHERE
NAME=UPPER('||quote_literal(tableName)||')
AND ATTNAME <> UPPER('||quote_literal(columnToExclude)||')
ORDER BY ATTNUM'
LOOP
IF isfirstcol THEN
cols := colRec.NAME;
ELSE
cols := cols || ', ' || colRec.NAME;
END IF;
isfirstcol := false;
END LOOP;
-- Should really check if 'LimitedView' already exists as a view, table or synonym.
EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW LimitedView AS SELECT ' || cols || ' FROM ' || quote_ident(tableName);
RETURN true;
END;
END_PROC
;
-- Run the stored proc to create the view.
CALL CreateLimitedView('test', 'secondcol');
-- Select results from the view.
SELECT * FROM limitedView WHERE firstcol = 4;
FIRSTCOL | THIRDCOL
----------+----------
4 | 6
You could have the stored proc return a resultset directly but then you wouldn't be able to filter results with a WHERE clause.