I need to get the table names queried in a PL/SQL package file.
I know that there is an option for this in Notepad++ using regex, but I don't know what regex to apply to get the table names (I understand it must be some regex that finds the keyword "FROM" and captures the next string after the space, I think).
For the following example code:
CREATE OR REPLACE PACKAGE BODY pac_example AS
FUNCTION f1 RETURN NUMBER IS
BEGIN
SELECT * FROM table1;
RETURN 1;
END f1;
FUNCTION f2 RETURN NUMBER IS
BEGIN
SELECT * FROM table2;
RETURN 1;
END f2;
END pac_example;
And I expect to replace all and end up with a file containing only the table names:
table1
table2
If you're interested only in table names that are directly referenced from the PACKAGE BODY, a simple and straightforward method is to query all_dependencies or user_dependencies.
SELECT owner,
referenced_name as table_name
FROM all_dependencies
WHERE type IN (
'PACKAGE BODY'
) AND name IN (
'PAC_EXAMPLE'
) AND referenced_type = 'TABLE';
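If you also want the views and synonyms the package depends on, the same query can be widened; a sketch, using standard referenced_type values:
SELECT owner,
       referenced_type,
       referenced_name
FROM all_dependencies
WHERE type = 'PACKAGE BODY'
  AND name = 'PAC_EXAMPLE'
  AND referenced_type IN ('TABLE', 'VIEW', 'SYNONYM');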
To my knowledge no one has done this with 100% accuracy. The closest you get is ALL/DBA_DEPENDENCIES, but it does not tell you whether the table is accessed in a SELECT, INSERT, UPDATE or DELETE.
It will however resolve synonyms.
The downside of this is that it will not include tables referenced in dynamic SQL.
If you have a database that uses a particular naming convention for tables (e.g. Tnnn_XXXXX) you could do:
SELECT DISTINCT c.text, c.name, c.type, t.table_name
FROM user_source c, user_tables t
WHERE UPPER(c.text) LIKE '%' || t.table_name || '%' -- REGEXP_LIKE may be more precise
ORDER BY 2, 1, 4;
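If you prefer the REGEXP_LIKE route hinted at in the comment, here is a sketch with word boundaries so that a name such as T1_FOO doesn't also match T1_FOOBAR (the \W class and the 'i' match option are standard Oracle regex features):
SELECT DISTINCT c.name, c.type, t.table_name
FROM user_source c
     JOIN user_tables t
     ON REGEXP_LIKE(c.text, '(^|\W)' || t.table_name || '(\W|$)', 'i')
ORDER BY c.name, t.table_name;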
I worked on a project decades ago where they wanted a CRUD matrix of programs (PL/SQL, SQL, Oracle Forms/Reports, Pro*C, Pro*COBOL) and what tables each accessed.
The only solution available at the time was for me to write a parser (in C) that parsed the codebase looking for SQL and processing it. Mine even reported columns as well as tables. The C program parsed the source, looking for keywords and characters to control a state engine. It took a couple of weeks to refine and get working across all the different codebase types.
By the end, the only thing it could not do was dynamic queries where the table name was built up from variable values. But the workaround here was to capture the tkprof files and process these.
Tragically, I do not have the source code for this anymore.
However, if I were to do it again, I would use Lex/Yacc/Bison to parse SQL and build a system around those tools.
A quick search found this:
https://github.com/jgarzik/sqlfun
https://www.oreilly.com/library/view/flex-bison/9780596805418/ch04.html
Not a small undertaking.
Ctrl+H
Find what: (?:\A(?:(?!FROM).)*|\G)FROM\s+(\w+(?:\s*,\s*\w+)*)(?:(?!FROM).)*
Replace with: $1\n  # group 1 (the table name) followed by a newline
check Wrap around
check Regular expression
CHECK . matches newline
Replace all
Explanation:
(?:              # start non-capture group
  \A             # beginning of file
  (?:(?!FROM).)* # tempered greedy token, make sure there is no FROM before
  |              # OR
  \G             # restart from last match position
)                # end group
FROM\s+          # literal FROM followed by 1 or more whitespace characters
(                # start group 1
  \w+            # 1 or more word characters (table name)
  (?:\s*,\s*\w+)* # non-capture group: comma-separated additional table names, optional
)                # end group 1
(?:(?!FROM).)*   # tempered greedy token, make sure there is no FROM ahead
Replacement:
$1\n             # content of group 1 (the table name), followed by a newline
You can use the following regex to search for table names.
Regex: FROM\s([^;]+)
Replacement: \n%\1%\n
Then follow this answer for replacing the other data in the file.
The earlier mentioned views
all_dependencies or user_dependencies
can list the dependencies, as mentioned, but they won't cover dynamic queries. And if the search is done in Notepad++ using keywords like 'FROM', only tables referenced after a FROM keyword will be found.
The code snippet below can be considered for a fuller analysis: it goes through the source line by line and word by word and checks each word against the data dictionary (applied to the sample you posted).
declare
    l_line       varchar2(4000);
    l_space_pos  number;
    l_word       varchar2(4000);
    l_table_flag varchar2(1) := 'N';
    cursor l_pkg_body_cur
    is
    select text from all_source where upper(name) = upper('pac_example') and type = 'PACKAGE BODY';
    -- gets the source compiled into the package body; put the package to be searched here
begin
    for rec in l_pkg_body_cur
    loop
        -- line by line processing: strip semicolons and line terminators up front
        l_line := trim(replace(replace(rec.text, ';'), chr(10)));
        while l_line is not null
        loop
            -- word by word processing
            l_space_pos := instr(l_line, ' ');
            if l_space_pos > 0 then
                l_word := substr(l_line, 1, l_space_pos - 1);
                l_line := trim(substr(l_line, l_space_pos + 1));
            else
                l_word := l_line; -- last word on the line
                l_line := null;
            end if;
            begin
                -- validate whether the word is a table name
                select 'Y' into l_table_flag
                from all_tables
                where upper(table_name) = upper(trim(l_word))
                and rownum = 1;
            exception
                when no_data_found then
                    l_table_flag := 'N';
            end;
            if l_table_flag = 'Y'
            then
                dbms_output.put_line(trim(l_word)); -- table name
            end if;
        end loop;
    end loop;
end;
--output:
Statement processed.
table1
table2
In a similar way, this query can be altered to find views or synonyms by changing the base dictionary view to all_views or all_synonyms as required.
This is the most straightforward approach, though it may take more processing time depending on the package size.
The same can be done using UNIX scripting if needed.
If you need to check the file directly, UNIX scripting can be used (UTL_FILE operations can also be used) to read the file line by line and open a SQL session to do the above validation and display the results.
But hopefully this provides the most accurate results.
I am having an issue with my PostgreSQL database. I added 5 tables with a lot of data and a lot of columns. Now I noticed I added the columns with a mix of upper- and lowercase letters, which makes it difficult to query them using SQLAlchemy or pandas.read_sql_query, because I need double quotes to access them.
Is there a way to change all the column names to lowercase letters with a single command?
I'm new to SQL; any help is appreciated.
Use an anonymous code block with a FOR LOOP over the table columns:
DO $$
DECLARE row record;
BEGIN
FOR row IN SELECT table_schema,table_name,column_name
FROM information_schema.columns
WHERE table_schema = 'public' AND
table_name = 'table1'
LOOP
EXECUTE format('ALTER TABLE %I.%I RENAME COLUMN %I TO %I',
row.table_schema,row.table_name,row.column_name,lower(row.column_name));
END LOOP;
END $$;
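To confirm the renames took effect, you can re-query the catalog with the same filters used in the loop above:
SELECT column_name
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name = 'table1';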
If you simply wish to ensure that the query returns lowercase (without changing the original entries), you can input:
select lower(variable) from table;
On the other hand, if you wish to actually change the case in the table itself, you must use an UPDATE command.
UPDATE table SET variable = LOWER(variable);
Something like this should do the trick:
SELECT LOWER(column_name) FROM my_table;
I am new to PL/SQL and came across an issue. I made a dummy table of some football team names. I am using a cursor that selects teams where the capacity is greater than 50000. My issue is that inside the FOR loop I do not know how to check whether the record contains a certain string ('Man'), so that I can print a prefix in front of the team's name. E.g. the SELECT block returns "Man United" and "Chelsea". I have searched through Google and seen examples using CONTAINS and LIKE and INSTR etc., but none work. Would someone please help?
DECLARE
CURSOR cur_team IS
SELECT DISTINCT TEAM_NAME
FROM TEAMS
WHERE CAP > 50000;
BEGIN
FOR n_inx IN cur_team LOOP
IF.....(PART I DONT KNOW)
DBMS_OUTPUT.PUT_LINE('Favourite: ' || n_inx);
ELSE
DBMS_OUTPUT.PUT_LINE('Rival: ' || n_inx);
END IF;
END LOOP;
END;
That is what I have so far and I am struggling with the IF statement.
DECLARE
CURSOR cur_team IS
SELECT DISTINCT TEAM_NAME
FROM TEAMS
WHERE CAP > 50000;
BEGIN
FOR n_inx IN cur_team LOOP
IF n_inx.team_name LIKE '%Man%'
THEN
DBMS_OUTPUT.PUT_LINE('Favourite: ' || n_inx.team_name);
ELSE
DBMS_OUTPUT.PUT_LINE('Rival: ' || n_inx.team_name);
END IF;
END LOOP;
END;
/
Just to clarify some terms, in this construction:
for n_inx in cur_team loop
n_inx is not an index but a PL/SQL record, implicitly defined based on the columns of cur_team. (I think the documentation uses rather confusing wording here.) Records have fields. In this case the record has one field, team_name, and you can refer to it using dot notation: n_inx.team_name. Perhaps a better name for the record would be r_team, or just r. It's not an index, and there is nothing numeric about it (if that's what the n_ prefix means).
Also, the procedure dbms_output.put_line accepts a single varchar2 parameter, not a record, so dbms_output.put_line(some_record) will be rejected.
Within PL/SQL programming, collections (arrays) have indexes. The array index is the key that identifies one element in an array, for example 1 and 2 below, not the text values:
myCollection(1) := 'Manchester United';
myCollection(2) := 'Chelsea';
(You could also define an associative array of numbers (or anything else) indexed by varchar2, in which case you might have
myCollection('Manchester United') := 1;
myCollection('Chelsea') := 2;
and then to find the array elements whose index contained 'Man', you would have to loop through it from beginning to end. However, this isn't your situation.)
So to reword your question, you are looking for a way (not necessarily a function) to check for a text pattern within a field of a record. You can break that down into two issues:
How to extract a single field's value from a record:
Use recordname.fieldname, in your example n_inx.team_name (but I'd recommend choosing a different name for the record).
How to match a wildcard or regex pattern in a string value:
Use simple wildcards % and _ with like conditions, or more sophisticated regexes with the regexp_like function.
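For instance, the IF test in the answer above could equally be written with regexp_like; a sketch using the plain pattern 'Man' (case-sensitive by default):
IF regexp_like(n_inx.team_name, 'Man') THEN
    dbms_output.put_line('Favourite: ' || n_inx.team_name);
ELSE
    dbms_output.put_line('Rival: ' || n_inx.team_name);
END IF;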
I have one table with id, name and complex queries. Below is just a sample of that table..
ID  name       Query
1   advisor_1  "Select * from advisor"
2   student_1  "Select * from student where id = 12"
3   faculty_4  "Select * from student where id = 12"
I want to iterate over this table and save each query's result into a CSV file.
Is there any way I can do it through an anonymous block automatically?
I don't want to do this manually, as the table has lots of rows.
Can anyone please help?
Not being superuser means the export can't be done in a server-side DO block.
It could be done client-side in any programming language that can talk to the database, or assuming a psql-only environment, it's possible to generate a list of \copy statements with an SQL query.
As an example of the latter, assuming the unique output filenames are built from the ID column, something like this should work:
SELECT format('\copy (%s) TO ''file-%s.csv'' CSV', query, id)
FROM table_with_queries;
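Run against the sample table in the question, that query should generate lines like these (filenames built from the ID column as described):
\copy (Select * from advisor) TO 'file-1.csv' CSV
\copy (Select * from student where id = 12) TO 'file-2.csv' CSV
\copy (Select * from student where id = 12) TO 'file-3.csv' CSV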
The result of this query should be put into a file in a format such that it can be directly included into psql, like this:
\pset format unaligned
\pset tuples_only on
-- \g with an argument treats it as an output file.
SELECT format('\copy (%s) TO ''file-%s.csv'' CSV', query, id)
FROM table_with_queries \g /tmp/commands.sql
\i /tmp/commands.sql
As a sidenote, that process cannot be managed with the \gexec meta-command introduced in PG 9.6, because \copy itself is a meta-command. \gexec iterates only on SQL queries, not on meta-commands. Otherwise the whole thing could be done by a single \gexec invocation.
You may use an anonymous block like this (if your problem is the code):
DO $$
DECLARE
    rec RECORD;
BEGIN
    FOR rec IN SELECT id, query FROM table_name
    LOOP
        -- run each stored query and copy its result server-side
        EXECUTE 'COPY (' || rec.query || ') TO ' || QUOTE_LITERAL('d:/csv' || rec.id || '.csv') || ' CSV';
    END LOOP;
END $$;
Regarding the permission problem: you should use a location on the server that you have write access to (or request one from the vendor).
I've created a table "Meta_Data_Table_Names" where I inserted forty-eight table names in the MetaTableName column. There is another column that provides the row count for the corresponding table name.
I wanted to fetch the table names from "Meta_Data_Table_Names" and execute a SELECT query sequentially through a loop for validation purposes.
Whenever I execute it from TOAD, it throws an error:
Table or view does not exist.
Do we need to make a placeholder for 'Meta_name' which can be scanned? Or is there particular syntax to read the value during the query?
DECLARE
CURSOR c1 IS SELECT MetaTableName FROM Meta_Data_Table_Names;
CURSOR c2 IS SELECT ROW_COUNT FROM Meta_Data_Table_Names;
Meta_name Meta_Data_Table_Names.MetaTableName%TYPE;
Count_num Meta_Data_Table_Names.ROW_COUNT%TYPE;
BEGIN
OPEN c1;
OPEN c2;
FOR i IN 1..48 LOOP
FETCH c1 INTO Meta_name;
FETCH c2 INTO Count_num;
IF (Count_num > 2000)
THEN
SELECT * FROM Meta_Name X
MINUS
SELECT * from ASFNCWK07.Meta_Name#NCDV.US.ORACLE.COM Y
UNION ALL
SELECT * FROM ASFNCWK07.Meta_Name#NCDV.US.ORACLE.COM Y
MINUS
SELECT * FROM Meta_Name X;
ELSE
DBMS_OUTPUT.PUT_LINE ('No Validation is required');
END IF;
END LOOP;
END;
Oracle does not allow you to run queries with dynamic table names, i.e. if the table name is not known at compile time. To do that, you need dynamic SQL, which is a bit too broad to go into here.
There are a number of problems with your code.
Firstly, we cannot use variable names in normal SQL: for this we need dynamic SQL. For instance:
execute immediate 'select 1 from ' || Meta_Name into n;
There are a lot of subtleties when working with dynamic SQL: the PL/SQL documentation devotes a whole chapter to it. Find out more.
Secondly, when executing a SELECT in PL/SQL, we need to provide a target variable. This must match the projection of the executed query. When selecting a whole row the %ROWTYPE keyword is useful. Again the documentation covers this: find out more. Which leads to ...
Thirdly, because you're working with dynamic SQL and you don't know in advance which tables will be in scope, you can't easily declare target variables. This means you'll need to use ref cursors and/or Type 4 dynamic SQL techniques. Yikes! Read Adrian Billington's excellent blog article here.
Lastly (I think), the UNION ALL in your query doesn't allow you to identify which rows are missing from where. Perhaps that doesn't matter.
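Putting those points together, here is a rough sketch of the dynamic-SQL approach. The symmetric MINUS/UNION ALL comparison is reduced to a one-way row count, so it only reports rows missing from the remote side; table, column and database-link names are taken from the question:
DECLARE
    l_diff NUMBER;
BEGIN
    FOR r IN (SELECT MetaTableName, ROW_COUNT FROM Meta_Data_Table_Names) LOOP
        IF r.ROW_COUNT > 2000 THEN
            -- count rows present locally but missing remotely
            EXECUTE IMMEDIATE
                   'SELECT COUNT(*) FROM ('
                || ' SELECT * FROM ' || r.MetaTableName
                || ' MINUS'
                || ' SELECT * FROM ASFNCWK07.' || r.MetaTableName || '@NCDV.US.ORACLE.COM'
                || ')'
                INTO l_diff;
            DBMS_OUTPUT.PUT_LINE(r.MetaTableName || ': ' || l_diff || ' rows missing remotely');
        ELSE
            DBMS_OUTPUT.PUT_LINE('No validation is required for ' || r.MetaTableName);
        END IF;
    END LOOP;
END;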
I have 2 procedures inside a package. I am calling one procedure to get a comma-separated list of user ids.
I am storing the result in a VARCHAR2 variable. Now when I use this comma-separated list inside an IN clause, it throws an "ORA-01722: invalid number" exception.
This is what my variable looks like:
l_userIds VARCHAR2(4000) := null;
This is where I am assigning the value:
l_userIds := getUserIds(deptId); -- this returns a comma separated list
And my second query is:
select * from users_Table where user_id in (l_userIds);
If I run this query I get the invalid number error.
Can someone help here?
Do you really need to return a comma-separated list? It would generally be much better to declare a collection type
CREATE TYPE num_table
AS TABLE OF NUMBER;
Declare a function that returns an instance of this collection
CREATE OR REPLACE FUNCTION get_nums
    RETURN num_table
IS
    l_nums num_table := num_table();
BEGIN
    for i in 1 .. 10
    loop
        l_nums.extend;
        l_nums(i) := i*2;
    end loop;
    return l_nums;  -- don't forget to return the collection
END;
and then use that collection in your query
SELECT *
FROM users_table
WHERE user_id IN (SELECT * FROM TABLE( l_nums ));
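For completeness, a minimal sketch of how the function and query fit together inside an anonymous block (here just counting the matches; users_table and user_id come from the question):
DECLARE
    l_nums num_table := get_nums();
    l_cnt  NUMBER;
BEGIN
    SELECT COUNT(*)
      INTO l_cnt
      FROM users_table
     WHERE user_id IN (SELECT column_value FROM TABLE(l_nums));
    DBMS_OUTPUT.PUT_LINE(l_cnt || ' matching users');
END;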
It is possible to use dynamic SQL as well (which #Sebas demonstrates). The downside to that, however, is that every call to the procedure will generate a new SQL statement that needs to be parsed again before it is executed. It also puts pressure on the library cache which can cause Oracle to purge lots of other reusable SQL statements which can create lots of other performance problems.
You can search the list using like instead of in:
select *
from users_Table
where ','||l_userIds||',' like '%,'||cast(user_id as varchar2(255))||',%';
This has the virtue of simplicity (no additional functions or dynamic SQL). For example, with l_userIds = '12,34,56', the condition checks whether ',12,34,56,' matches '%,34,%' for user_id 34. However, it does preclude the use of indexes on user_id. For a smallish table this shouldn't be a problem.
The problem is that Oracle does not interpret the VARCHAR2 string you're passing as a sequence of numbers; it is just a string.
A solution is to make the whole query a string (VARCHAR2) and then execute it so the engine knows it has to translate the content:
DECLARE
TYPE T_UT IS TABLE OF users_Table%ROWTYPE;
aVar T_UT;
BEGIN
EXECUTE IMMEDIATE 'select * from users_Table where user_id in (' || l_userIds || ')' BULK COLLECT INTO aVar;
...
END;
A more complex but also elegant solution would be to split the string into a table type and use it cast directly in the query. See what Tom thinks about it.
DO NOT USE THIS SOLUTION!
Firstly, I wanted to delete it, but I think it might be informative for someone to see such a bad solution. Using dynamic SQL like this causes the creation of multiple execution plans: one execution plan per distinct set of data in the IN clause, because no binding is used and, for the DB, every query is a different one (the SGA gets filled with lots of very similar execution plans, and every time the query is run with a different parameter, more memory is needlessly used in the SGA).
I wanted to write another answer using dynamic SQL more properly (with binding variables), but Justin Cave's answer is the best anyway.
You might also want to try a REF CURSOR (I haven't tried that exact code myself; it might need some little tweaks):
DECLARE
deptId NUMBER := 2;
l_userIds VARCHAR2(2000) := getUserIds(deptId);
TYPE t_my_ref_cursor IS REF CURSOR;
c_cursor t_my_ref_cursor;
l_row users_Table%ROWTYPE;
l_query VARCHAR2(5000);
BEGIN
l_query := 'SELECT * FROM users_Table WHERE user_id IN ('|| l_userIds ||')';
OPEN c_cursor FOR l_query;
FETCH c_cursor INTO l_row;
WHILE c_cursor%FOUND
LOOP
-- do something with your row
FETCH c_cursor INTO l_row;
END LOOP;
END;
/