I have many tables, some of which end with digits (a date).
For technical reasons I had to create a view on every table (just a workaround).
The current state is something like:
begin
for i in (select table_name
from user_tables
where table_name like 'XXX_%'
)
loop
execute immediate 'CREATE VIEW '||replace(i.TABLE_NAME,'XXX_','YYY_')||' AS SELECT * FROM '||i.TABLE_NAME;
end loop;
end;
/
All it does is create views for every table that begins with 'XXX_'. It's almost what I need.
But sometimes table names look like "XXX_tablename_20210302", which is a manual backup.
It's not always 8 digits (a date); sometimes the number is shorter, sometimes longer. I would like to skip all tables whose names end in a number (checking from right to left up to the first "_": is it a number?).
Does anyone know how to solve it?
I'm kind of stuck here.
You can use regular expressions with regexp_like:
create table xxx_t ( c1 int );
create table xxx_t_20210401 ( c1 int );
select table_name from user_tables
where regexp_like ( table_name, '^XXX_.*[^0-9]+$' );
TABLE_NAME
XXX_T
This finds all the tables that:
Start with XXX_ (matched by ^XXX_)
Are followed by any characters (.*)
And do not end with a digit ([^0-9]+$)
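To plug that filter back into the original loop, a sketch (keeping the XXX_/YYY_ renaming from the question) could look like this:
begin
  for i in (select table_name
            from   user_tables
            where  regexp_like ( table_name, '^XXX_.*[^0-9]+$' )
           )
  loop
    -- manual backups ending in digits are skipped; YYY_ views are created for the rest
    execute immediate 'CREATE VIEW '
                      || replace(i.table_name, 'XXX_', 'YYY_')
                      || ' AS SELECT * FROM ' || i.table_name;
  end loop;
end;
/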
Related
I am having an issue with my PostgreSQL database. I added 5 tables with a lot of data and a lot of columns. Now I noticed I added the columns with a mix of upper- and lowercase letters, which makes it difficult to query them using sqlalchemy or pandas.read_sql_query, because I need double quotes to access them.
Is there a way to change all the column names to lowercase letters with a single command?
I'm new to SQL; any help is appreciated.
Use an anonymous code block with a FOR LOOP over the table columns:
DO $$
DECLARE row record;
BEGIN
FOR row IN SELECT table_schema,table_name,column_name
FROM information_schema.columns
WHERE table_schema = 'public' AND
table_name = 'table1'
LOOP
EXECUTE format('ALTER TABLE %I.%I RENAME COLUMN %I TO %I',
row.table_schema,row.table_name,row.column_name,lower(row.column_name));
END LOOP;
END $$;
Demo: db<>fiddle
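For illustration only, with a hypothetical mixed-case column CustomerName in table1, the rename changes how the column has to be referenced:
-- before the rename: mixed-case identifiers must be double-quoted
SELECT "CustomerName" FROM table1;
-- after running the block above: the lowercase name works unquoted
SELECT customername FROM table1;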
If you wish to ensure that the query returns lowercase values (without changing the original entries), you can simply write:
select lower(variable) from table;
On the other hand, if you wish to actually change the case in the table itself, you must use an UPDATE command.
UPDATE table SET variable = LOWER(variable);
Something like this should do the trick:
SELECT LOWER(column) FROM my_table;
Is there a way to select a field from a table and, if that field doesn't exist, select a different field from the same table? Example:
SELECT MY_FIELD from MY_TABLE
error: "MY_FIELD": invalid identifier
Is there any way to check whether the field exists and, if it does, use that field in the query, and if it doesn't exist, then use, for example:
SELECT my_field2 from client
My problem is that I am writing a report that will be used on two databases, but the field names can occasionally be slightly different depending on the database.
What you really need to do is talk to your management / development leads about why the different databases are not harmonized. But, since this is a programming site, here is a programming answer using dynamic SQL.
As has been pointed out, you could create views in the different databases to provide yourself with a harmonized layer to query from. If you are unable to create views, you can do something like this:
create table test ( present_column NUMBER );
insert into test select rownum * 10 from dual connect by rownum <= 5;
declare
l_rc SYS_REFCURSOR;
begin
BEGIN
OPEN l_rc FOR 'SELECT missing_column FROM test';
EXCEPTION
WHEN others THEN
OPEN l_rc FOR 'SELECT present_column FROM test';
END;
-- This next only works in 12c and later
-- In earlier versions, you've got to process l_rc on your own.
DBMS_SQL.RETURN_RESULT(l_rc);
end;
This is inferior to the other solutions (either harmonizing the databases or creating views). For one thing, you get no compile time checking of your queries this way.
That won't compile, so I'd say no. You might try dynamic SQL that reads the contents of USER_TAB_COLUMNS and creates the SELECT statement on the fly.
Depending on the reporting tool you use, that might (or might not) be possible. For example, Apex offers (as a report's source) a function that returns a query, so you might use it there.
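A rough sketch of such a function (assuming the table is CLIENT and the candidate columns are MY_FIELD and MY_FIELD2, as mixed in the question) might look like this; in Apex it could back a "PL/SQL function body returning SQL query" report source:
create or replace function client_report_query return varchar2 is
  l_cnt integer;
begin
  -- check the data dictionary for the column that exists in this database
  select count(*)
  into   l_cnt
  from   user_tab_columns
  where  table_name  = 'CLIENT'
  and    column_name = 'MY_FIELD';

  if l_cnt > 0 then
    return 'select my_field from client';
  else
    return 'select my_field2 from client';
  end if;
end;
/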
I'd suggest a simpler option - create views on both databases which have unified column names, so that your report always selects from the view and works all the time. For example:
-- database 1:
create view v_client as
select client_id id,
client_name name
from your_table;
-- database 2:
create view v_client as
select clid id,
clnam name
from your_table;
-- reporting tool:
select id, name
from v_client;
This can be done in a single SQL statement using DBMS_XMLGEN.GETXML, but it gets messy. It would probably be cleaner to use dynamic SQL or a view, but there are times when it's difficult to create supporting objects.
Sample table:
--Create either table.
create table my_table(my_field1 number);
insert into my_table values(1);
insert into my_table values(2);
create table my_table(my_field2 number);
insert into my_table values(1);
insert into my_table values(2);
Query:
--Get the results by converting XML into rows.
select my_field
from
(
--Convert to an XMLType.
select xmltype(clob_results) xml_results
from
(
--Conditionally select either MY_FIELD1 or MY_FIELD2, depending on which exists.
select dbms_xmlgen.GetXML('select my_field1 my_field from my_table') clob_results
from user_tab_columns
where table_name = 'MY_TABLE'
and column_name = 'MY_FIELD1'
--Stop transformations from running the XMLType conversion on nulls.
and rownum >= 1
union all
select dbms_xmlgen.GetXML('select my_field2 my_field from my_table') clob_results
from user_tab_columns
where table_name = 'MY_TABLE'
and column_name = 'MY_FIELD2'
--Stop transformations from running the XMLType conversion on nulls.
and rownum >= 1
)
--Only convert non-null values.
where clob_results is not null
)
cross join
xmltable
(
'/ROWSET/ROW'
passing xml_results
columns
my_field number path 'MY_FIELD'
);
Results:
MY_FIELD
--------
1
2
Here's a SQL Fiddle if you want to see it running.
All I want is to select all rows from a table and, once they are selected and displayed, have the data in the table completely deleted. The main concern is that this must be done using SQL only, not PL/SQL. Is there a way we can do this inside a package and call that package in a select statement? Please enlighten me here.
The dummy table is as follows:
ID   NAME   SALARY   DEPT
=========================
1    Sam    50000    HR
2    Max    45000    SALES
3    Lex    51000    HR
4    Nate   66000    DEV
Any help would be greatly appreciated.
select * from Table_Name;
delete from Table_Name;
To select the data with a SQL query, try using a pipelined function.
The function can define a cursor for the data you want (or all the data in the table) and loop through the cursor, piping each row as it goes.
When the cursor loop ends, i.e. all the data has been consumed by your query, the function can TRUNCATE the table.
To select from the function, use the following syntax:
SELECT *
FROM TABLE(my_function)
See the following Oracle documentation for information on pipelined functions: https://docs.oracle.com/cd/B28359_01/appdev.111/b28425/pipe_paral_tbl.htm
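A minimal sketch of that idea, assuming the dummy table from the question is called my_table and using illustrative type names; the TRUNCATE is DDL, so the function needs an autonomous transaction to be callable from a query:
create type t_emp_row as object (
  id     number,
  name   varchar2(50),
  salary number,
  dept   varchar2(20)
);
/
create type t_emp_tab is table of t_emp_row;
/
create or replace function select_and_truncate return t_emp_tab pipelined as
  pragma autonomous_transaction;
begin
  for r in (select id, name, salary, dept from my_table) loop
    pipe row (t_emp_row(r.id, r.name, r.salary, r.dept));
  end loop;
  -- all rows have been piped; now empty the table
  execute immediate 'truncate table my_table';
  return;
end;
/
-- usage:
select * from table(select_and_truncate);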
This cannot be done inside a package, because "this must be done using sql only and not plsql". A package is PL/SQL.
However it is very simple. You want two things: select the table data and delete it. Two things, two commands.
select * from mytable;
truncate mytable;
(You could replace truncate mytable; with delete from mytable;, but this is slower and needs to be followed by commit; to confirm the deletion and end the transaction.)
Without PL/SQL it's not possible.
Using PL/SQL you can create a function which will pipe the rows and then delete them.
Here is an example:
drop table tempdate;
create table tempdate as
select '1' id from dual
UNION
select '2' id from dual;
CREATE TYPE t_tf_row AS OBJECT (
  id NUMBER
);
/
CREATE TYPE t_tf_tab IS TABLE OF t_tf_row;
/
CREATE OR REPLACE FUNCTION get_tab_tf RETURN t_tf_tab PIPELINED AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR rec IN (select * from tempdate) LOOP
    PIPE ROW(t_tf_row(rec.id));
  END LOOP;
  delete from tempdate;
  commit;
  RETURN; -- a pipelined function should end with a bare RETURN
END;
/
select * from table(get_tab_tf); -- it will pipe the rows and then delete them
select * from tempdate; -- you can check the result of the deletion here
You can use the queries below:
select * from Table_demo;
delete from Table_demo;
The feature you seek is the SERIALIZABLE isolation level. This feature enables repeatable reads, which in particular guarantee that both SELECT and DELETE will read and process the same data.
Example
Alter session set isolation_level = serializable;
select * from tempdate;
-- now insert a new record from another session
delete from tempdate;
commit;
-- re-query the table: the old records are deleted, the new record is preserved.
Thanks for taking the time to read and maybe answer my question!
Note that I am a beginner and should not be considered a pro, but I did search for the answer without finding it, maybe due to my uncommon problem and/or my lack of knowledge about it.
I have the following problem at work; I know it is not really supposed to happen, but here it is, on my desk:
I have a table (conv_temp1) with the following columns:
ID No Sigle Info COLUMN_1 COLUMN_2 COLUMN_3 COLUMN_4 COLUMN_5 .. COLUMN_50
I have this cursor:
CURSOR c_minis IS
SELECT *
FROM conv_temp1;
I am trying to do something as the following:
FOR v_rsrec IN c_minis LOOP
l_column_i := 1;
dbms_output.put_line('----- Beginning - v_rsrec.id ----');
FOR boucle IN REVERSE 1.. 50 LOOP
--this is my problem, I am trying to access a cursor column "dynamically"
EXECUTE IMMEDIATE 'v_declared_varchar2 := COLUMN_'|| l_column_i ||';';
IF v_declared_varchar2 IS NOT NULL THEN
dbms_output.put_line('I am doing something with the information!');
--(Don't worry, YES, I am re-structuring it in a new table...)
END IF;
l_column_i := l_column_i + 1;
END LOOP;
dbms_output.put_line('-----c end - v_rsrec.id ----');
END LOOP;
Is there a way to access a different column (only the number in the name changes) depending on where I am in my iterations?
E.g., if I have already done 10 iterations, I will read the information from COLUMN_11 in my cursor.
A better solution would be to normalize the table. Break it into two tables as:
CREATE TABLE CONV_TEMP_HEADER
(ID_NO NUMBER
CONSTRAINT PK_CONV_TEMP_HEADER
PRIMARY KEY
USING INDEX,
SIGLE_INFO VARCHAR2(100)); -- or whatever
CREATE TABLE CONV_TEMP_DETAIL
(ID_DETAIL NUMBER,
ID_NO NUMBER
CONSTRAINT CONV_TEMP_DETAIL_FK1
REFERENCES CONV_TEMP_HEADER(ID_NO)
ON DELETE CASCADE,
IDX NUMBER,
COLUMN_VALUE VARCHAR2(100),
CONSTRAINT CONV_TEMP_DETAIL_UQ1
UNIQUE(ID_NO, IDX));
This way, instead of having to generate column names dynamically and figure out how to use DBMS_SQL, you can get your data using a simple join:
SELECT h.*, d.*
FROM CONV_TEMP_HEADER h
LEFT OUTER JOIN CONV_TEMP_DETAIL d
ON d.ID_NO = h.ID_NO;
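To move the existing data across, the fifty COLUMN_n columns can be unpivoted into the detail table. A sketch (the CONV_TEMP1 column names are assumed from the question, and only the first three of the fifty columns are spelled out):
INSERT INTO CONV_TEMP_HEADER (ID_NO, SIGLE_INFO)
SELECT ID_NO, SIGLE_INFO
FROM   CONV_TEMP1;

INSERT INTO CONV_TEMP_DETAIL (ID_DETAIL, ID_NO, IDX, COLUMN_VALUE)
-- ROWNUM is a stand-in surrogate key; a sequence could be used instead.
-- By default UNPIVOT skips NULL values, which matches the IS NOT NULL check in the question.
SELECT ROWNUM, ID_NO, IDX, COLUMN_VALUE
FROM   CONV_TEMP1
UNPIVOT (COLUMN_VALUE FOR IDX IN (COLUMN_1 AS 1, COLUMN_2 AS 2, COLUMN_3 AS 3 /* ... COLUMN_50 AS 50 */));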
Share and enjoy.
For other people with basic knowledge and the same needs, here is my solution:
CREATE OR REPLACE VIEW temp_view_name AS
SELECT ROWNUM AS ind, t.* FROM (
SELECT DISTINCT m.*
FROM conv_temp1 m ) t ;
This should be accessible from an EXECUTE IMMEDIATE query, while a cursor isn't.
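With that view in place, a column can be read dynamically from PL/SQL, for example (a sketch; the 50-column layout and the varchar2 variable are taken from the question):
declare
  v_declared_varchar2 varchar2(4000);
begin
  for v_row in (select ind from temp_view_name) loop
    for l_column_i in 1 .. 50 loop
      execute immediate 'select column_' || l_column_i ||
                        ' from temp_view_name where ind = :r'
        into  v_declared_varchar2
        using v_row.ind;
      if v_declared_varchar2 is not null then
        dbms_output.put_line('Row ' || v_row.ind || ', COLUMN_' || l_column_i ||
                             ' = ' || v_declared_varchar2);
      end if;
    end loop;
  end loop;
end;
/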
Hi everyone, what I'm wondering is whether I can create a table that lists the record counts of other tables. It would get those table names from a table. So let's assume I have the table TABLE_LIST that looks like this:
name
---------
sports_products <-- contains 10 records
house_products <-- contains 8 records
beauty_products <-- contains 15 records
I would like to write a statement that pulls the names from that table, queries each of those tables to count the records, and ultimately produces this table:
name              numRecords
----------------------------
sports_products   10
house_products    8
beauty_products   15
So I think I would need to do something like this pseudocode:
select *
from foreach tableName in select name from table_list
select count(*) as numRecords
from tableName
loop
You can have a function that does this for you via dynamic SQL.
However, make sure to declare it as authid current_user. You do not want anyone to gain some sort of privilege elevation by exploiting your function.
create or replace function SampleFunction
(
owner in VarChar
,tableName in VarChar
) return integer authid current_user is
result Integer;
begin
execute immediate 'select count(*) from "' || owner || '"."' || tableName || '"'
INTO result;
return result;
end;
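A possible way to use it, assuming the names stored in table_list match the data dictionary (i.e. are uppercase, since the function puts them in double quotes):
select name,
       SampleFunction(user, name) as numRecords
from   table_list;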
One option is to simply keep your DB statistics updated, using the dbms_stats package or EM, and then:
select num_rows
from all_tables
where table_name in (select name from table_list);
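For completeness, a sketch of refreshing those statistics first with dbms_stats (assuming the listed tables belong to the current schema):
begin
  dbms_stats.gather_schema_stats(ownname => user);
end;
/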
I think Robert Giesecke's solution will work fine.
A more exotic way of solving this is by using dbms_xmlgen.getxml.
See for example: Identify a table with maximum rows in Oracle
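Following that idea, a single-statement sketch with dbms_xmlgen (assuming the names in table_list match user_tables.table_name when uppercased):
select t.table_name,
       to_number(extractvalue(
         xmltype(dbms_xmlgen.getxml('select count(*) c from ' || t.table_name)),
         '/ROWSET/ROW/C')) as numRecords
from   user_tables t
where  t.table_name in (select upper(name) from table_list);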