Suppose I have a table of custom column names that all have the pattern COL##, where ## is any integer. So a typical query would be:
select COL12 from MyCustomTable;
So in another table, I have all those integers and I'd like to create a query using the table of integers to construct a dynamic query into MyCustomTable.
Something like:
select 'COL' || (select colId from IdTable where Id = 12) from MyCustomTable;
But instead of just returning the string 'COL12' for every row, I want it to return the actual values of the column COL12.
Don't worry about my overall problem :) I'm just curious to know if I can do this from a sqldeveloper window directly without writing any code/procedures/functions, etc.
An obvious and absolutely insecure way of doing this would be to use the EXECUTE IMMEDIATE statement.
Another is to use the SPOOL command to output the results into a temporary file, then SPOOL OFF and execute this file with the # directive.
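For illustration, here is a minimal sketch of the EXECUTE IMMEDIATE route, run as an anonymous block from a worksheet. It assumes the COLnn columns hold NUMBER values (swap the collection type otherwise); IdTable, colId and MyCustomTable are taken from the question.
declare
  l_col  varchar2(30);
  l_vals sys.odcinumberlist;   -- built-in collection of NUMBERs
begin
  -- look up the column name, e.g. 'COL12'
  select 'COL' || colId into l_col from IdTable where Id = 12;
  -- build and run the dynamic statement
  execute immediate 'select ' || l_col || ' from MyCustomTable'
    bulk collect into l_vals;
  -- print the values (requires SET SERVEROUTPUT ON)
  for i in 1 .. l_vals.count loop
    dbms_output.put_line(l_vals(i));
  end loop;
end;
/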
Related
I have a long query and I'm looking for a way to simplify it for whoever executes it.
For example, I have this query:
select function_1(r_set) from (select collect_set(records) as r_set from (select function_2(<column_name>) as records from <table_name>) as record_t) as record_set_t;
function_1 and function_2 are custom UDFs.
Since everything besides the table name and column name is constant, is it possible to define some kind of alias or procedure for the query, with the column name and table name as parameters?
Or even wrap it somehow in a shorter execution command?
I look for something like:
# set alias for long query somehow
set MyQueryAlias = select function_1(r_set) from (select collect_set(records) as r_set from (select function_2(<column_name>) as records from <table_name>) as record_t) as record_set_t;
# execute the query with table name and column name as a parameter
exec MyQueryAlias <table_name> <column_name>
My purpose is to make it easy for other users to use the saved query on different tables and columns.
I have a stored procedure that gets passed a string of values separated with spaces, which then does a search in the table and returns data where a column has any of those values. All went well until a user needed to pass 'INDEX END UNKNOWN PROCESS' which didn't return anything, even though there is data with those values:
CREATE OR REPLACE PROCEDURE Searches
(
QUEUE IN TYPES.CHAR50,
P_CURSOR IN OUT SYS_REFCURSOR
)
AS
BEGIN
OPEN P_CURSOR FOR
SELECT *
FROM tablez t
WHERE /* If the subquery returns UNKNOWN, END, PROCESS, INDEX which are Oracle reserved words the main query won't return any results */
/* In order to pass this inconsistency, I concatenated XYZ to both sides when using IN Clause */
CONCAT(LTRIM(RTRIM(t.QUEUECD)),'XYZ') IN ( SELECT CONCAT(LTRIM(RTRIM(tr.prom)),'XYZ')
FROM ( SELECT regexp_substr(QUEUE,'[^ ]+', 1, LEVEL) prom
FROM dual
CONNECT BY regexp_substr(QUEUE, '[^ ]+', 1, LEVEL) IS NOT NULL
) tr
)
;
END Searches;
So I changed the code to use regexp_substr, and the comparison only returned values when I concatenated 'XYZ' to both sides. But this is a temporary fix, because QUEUECD is an indexed column in the database and using CONCAT in the WHERE clause led to performance issues on big data.
Do you have any suggestions how to improve the performance or pass the list of values in a different way?
Thank you!
Oracle SQL Stored Procedure with Oracle reserved words passed to variable
It looks like a different problem. There is no way to pass "reserved words" as the value of a variable: when you have a varchar variable, the value is just text, nothing more.
I made a sample table and tested the query without concatenating 'XYZ', and I don't see such problems. Maybe there are some whitespace or non-printable characters at the beginning or end of the records?
Regarding:
Do you have any suggestions how to improve the performance or pass the list of values in a different way?
Yes. Pass a collection (nested table) as the parameter. For example:
create or replace type T_TAB_STRING as table of varchar2(4000);
Next, change the type of QUEUE from TYPES.CHAR50 to T_TAB_STRING.
Then you can use a table() expression to unnest the collection inside the query, like this:
SELECT *
FROM tablez t
WHERE t.QUEUECD IN ( SELECT /*+ DYNAMIC_SAMPLING(tr, 2) */
*
FROM TABLE(QUEUE) tr
)
;
The dynamic sampling hint forces the database to check how many elements are in the collection. Without it the database assumes the collection is the size of one block (usually 8 KB), so the CBO could choose a full scan instead of an index scan.
If you cannot use that hint, or it doesn't work for some reason, there is another way to help the CBO with collections in queries: implementing the Extensible Optimiser interface for that collection. Adrian Billington has written in this article how to do it.
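Putting that together, a hedged sketch of how the revised procedure and a call might look (the collection literal in the caller is an assumption about how the space-separated list would be replaced):
CREATE OR REPLACE PROCEDURE Searches
(
    QUEUE    IN T_TAB_STRING,
    P_CURSOR IN OUT SYS_REFCURSOR
)
AS
BEGIN
    OPEN P_CURSOR FOR
        SELECT *
        FROM tablez t
        WHERE t.QUEUECD IN ( SELECT /*+ DYNAMIC_SAMPLING(tr, 2) */ *
                             FROM TABLE(QUEUE) tr );
END Searches;
/
-- hypothetical caller:
DECLARE
    l_cur SYS_REFCURSOR;
BEGIN
    Searches(T_TAB_STRING('INDEX', 'END', 'UNKNOWN', 'PROCESS'), l_cur);
END;
/
Reserved words stop being an issue entirely, because each value is just an element of the collection.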
I have a form where people can type in a start and end date, as well as a column name prefix.
In the backend, I want to do something along the lines of
SELECT *, CAST('{{startDate}}' AS TIMESTAMP) AS ({{prefix}} + '_startDate')
Is this possible? Basically, I want to dynamically create the name of the new column. The table is immediately returned to the user, so I don't want to mutate the underlying table itself. Thanks!
You can execute a dynamic query that you have prepared by using the EXECUTE keyword; otherwise it is not possible to have a dynamic SQL structure.
Since you are preparing your SQL outside the database, you can use something like:
SELECT *, CAST('{{startDate}}' AS TIMESTAMP) AS {{prefix}}_startDate
Assuming that {{prefix}} is replaced with some string by your template before the query is sent to the database.
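If the substitution has to happen inside the database instead, here is a hedged sketch assuming a PostgreSQL backend, where EXECUTE with format() quotes the dynamic identifier safely; my_table, the function name, and the refcursor approach are all assumptions, not anything from the question:
CREATE OR REPLACE FUNCTION with_renamed_start_date(p_start date, p_prefix text)
RETURNS refcursor
LANGUAGE plpgsql
AS $$
DECLARE
  c refcursor := 'result_cur';
BEGIN
  -- %L quotes the value as a literal, %I quotes the dynamic column name as an identifier
  OPEN c FOR EXECUTE format(
    'SELECT *, CAST(%L AS timestamp) AS %I FROM my_table',
    p_start, p_prefix || '_startDate');
  RETURN c;
END;
$$;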
I want to append new rows to a table d:\dl based on the equality constraint lower(rdl.subdir) = lower(tr.n1), where rdl and tr would be prospective aliases for the f:\rdl and f:\tr tables respectively.
I get a "Function name is missing )." message when running the following command in VFP9:
INSERT INTO d:\dl SELECT * FROM f:\rdl WHERE (select LOWER(subdir)FROM f:\rdl in (select LOWER(n1) FROM f:\tr))
I am using the IN syntax instead of the alias-based equality lower(rdl.subdir) = lower(tr.n1) because I do not know where to define aliases within this command.
In general, the best way to get something like this working is to first make the query work and give you the results you want, and then use it in INSERT.
In general, in SQL commands you assign aliases by putting them after the table name, with or without the keyword AS. In this case, you don't need aliases because the ones you want are the same as the table names and that's the default.
If what you're showing is your exact code and you're running it in VFP, the first problem is that you're missing the continuation character between lines.
You're definitely doing too much work, too. Try this:
INSERT INTO d:\dl ;
SELECT * ;
FROM f:\rdl ;
JOIN f:\tr ;
ON LOWER(rdl.subdir) = LOWER(tr.n1)
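If you ever do want explicit aliases (for example, when the default names collide), they go right after each table name, with or without AS. This is just a hedged variant of the query above; r and t are names I made up:
INSERT INTO d:\dl ;
SELECT * ;
FROM f:\rdl AS r ;
JOIN f:\tr AS t ;
ON LOWER(r.subdir) = LOWER(t.n1)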
I'd like to get all the records from a huge table where any of the number columns contains a value greater than 0. What's the best way to do it?
E.g.:
/* table structure*/
create table sometable (id number,
somestring varchar2(12),
some_amount_1 number(17,3),
some_amount_2 number(17,3),
some_amount_3 number(17,3),
...
some_amount_xxx number(17,3));
/* "xxx" > 100, and yeah I did not designed that table structure... */
And I want any row where any of the some_amount_n > 0 (even better would be a solution that also shows which field(s) are greater than zero).
I know I can write this with a huge some_amount_1 > 0 OR some_amount_2 > 0 OR ... block (and get the field names with some CASE WHEN), but there should be a more elegant solution, shouldn't there?
Possible solutions:
Normalize the table. You said you are not allowed to. Try to convince those who forbid such a change by explaining the benefits (performance, ease of writing queries, etc.).
Write the huge ugly OR query. You could also print it along with the version of the query for the normalized tables. Add performance tests (you are allowed to create another test table or database, I hope.)
Write a program (either in PL/SQL or in another procedural language) that produces the horrible OR query; a sketch follows this list. (Again, print it along with the elegant version.)
Add a new column, say called Any_x_bigger_than_zero, which is automatically filled with either 0 or 1 via a trigger (that uses the huge ugly OR). Then you just need to check WHERE Any_x_bigger_than_zero = 1 to see if any of the columns is > 0.
Similar to previous but even better, create a materialized view with such a column.
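For the program in option 3, one hedged way in PL/SQL is to read the amount columns from the data dictionary and build the predicate instead of typing it out; SOMETABLE and the SOME_AMOUNT_ prefix come from the example above, everything else is an assumption:
declare
  l_pred varchar2(32767);
  l_cur  sys_refcursor;
begin
  -- glue together "SOME_AMOUNT_1 > 0 or SOME_AMOUNT_2 > 0 or ..." from the dictionary
  select listagg(column_name || ' > 0', ' or ') within group (order by column_id)
    into l_pred
    from user_tab_columns
   where table_name = 'SOMETABLE'
     and column_name like 'SOME\_AMOUNT\_%' escape '\';

  -- run the generated query; fetch from l_cur however you like
  open l_cur for 'select * from sometable where ' || l_pred;
end;
/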
First, create a table to reshape the data into something more easily read from... something simple like id, column_name, column_value. You'll have to bear with me, it's been a while since I've worked in Oracle, so this is heavy pseudocode at best:
Quick dynamic SQL blurb... you can set a variable to a SQL statement and then execute that variable. There are some security risks, and it's possible this feature is disabled in your environment, so confirm you can run this first. Declare a variable, set it to 'select 1', and then use EXECUTE IMMEDIATE to execute the SQL stored in your variable.
set var = 'select id, ''some_amount_' || 1 || ''', some_amount_' || 1 || ' from table where some_amount_' || 1 || ' <> 0'
Assuming I've got my Oracle syntax right... (pipe is concatenation, right? I believe three single quotes as ''' should result in one ' inside the variable too; you may have to trial-and-error this line until the var is set to):
select id, 'some_amount_1',some_amount_1
from table
where some_amount_1 <> 0
This should select the ID and the value in some_amount_1 for each id in your database. You can turn this into an insert statement pretty easily.
I'm assuming some_amount_xxx has an upper limit... the next trick is to loop this giant statement. Once again, horrible pseudocode:
declare sql_string
for i = 1 to xxx (whatever your xxx is)
    set sql_string to the "set var" statement we made above, replacing the 1 with i
    execute immediate sql_string
end loop
Hopefully it makes sense... it's one of the very few scenarios where you would ever want to loop dynamic SQL. Now you have a relatively straightforward table to read from, and this should be a relatively easy query from here.
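For what it's worth, a hedged PL/SQL version of that loop; readable_table, sometable and the upper bound 120 are placeholders you would adjust to your actual names and column count:
declare
  sql_string varchar2(4000);
begin
  for i in 1 .. 120 loop   -- replace 120 with your actual xxx
    sql_string := 'insert into readable_table (id, column_name, column_value) '
               || 'select id, ''some_amount_' || i || ''', some_amount_' || i
               || ' from sometable where some_amount_' || i || ' <> 0';
    execute immediate sql_string;
  end loop;
  commit;
end;
/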