Define a VIEW in Oracle without using CREATE - sql
I do not have sufficient privileges to use a CREATE statement; I can only SELECT. I have a script with three distinct parts and calculations that need to run, and each part references the same complicated WITH clause, which redundantly clutters the code and is a pain to maintain in three separate locations.
I have tried creating temp tables and views, but again, my privileges don't allow it. Is there a way, using either SQL or PL/SQL syntax, to define my WITH statement ONCE without using CREATE, and then reference it like I would any other table? Example:
--Define the temp table
WITH tempview AS (SELECT .... FROM ...);
--First query
SELECT ... FROM tempview;
/
--Second query
SELECT ... FROM tempview;
/
--Third query
SELECT ... FROM tempview;
/
Getting the correct permissions and creating permanent objects is the best approach. It sounds like this view would only be used in a single script, which doesn't necessarily make it any less valid to create, but you might find it harder to justify depending on your DBA and policies. It's certainly worth trying that approach first, as @DCookie suggested.
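If you do get the privilege (or a DBA runs it for you), the permanent object would just be something along these lines; this is only a sketch, using the same trivial dummy query as the examples below:
CREATE OR REPLACE VIEW tempview AS
SELECT * FROM dual
UNION ALL
SELECT * FROM dual;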
If that fails then there may be hacky workarounds, depending on the client you will run this script in.
For instance, in SQL*Plus it's possible to abuse substitution variables to get something close to what you describe. This uses the define command to create a substitution variable that contains the 'view' query, and then uses that variable inside a WITH clause. (You can't replace the entire WITH clause like this, but it's maybe clearer this way anyway.) I've used a trivial dummy query:
define tempview_query = 'SELECT * -
FROM dual -
UNION ALL -
SELECT * -
FROM dual'
WITH tempview AS (&tempview_query)
SELECT * FROM tempview;
WITH tempview AS (&tempview_query)
SELECT * FROM tempview;
When the script is run the output produced is:
D
-
X
X
2 rows selected.
D
-
X
X
2 rows selected.
I've also executed set verify off to hide the substitutions, but turning it on might be instructive to see what's happening.
Notice the dashes at the end of each line of the query; that's the continuation character, and as the define docs mention:
If the value of a defined variable extends over multiple lines (using the SQL*Plus command continuation character), SQL*Plus replaces each continuation character and carriage return with a space.
so the 'new' query shown by set verify on will have your entire view query on a single line (if you display it). It's feasible that with a long enough query you'd hit some line length limit but hopefully you won't reach that point (except you did; see below).
You can do the same thing in SQL Developer, but there the continuation needs to use two dashes, so:
define tempview_query = 'SELECT * --
FROM dual --
UNION ALL --
SELECT * --
FROM dual'
except it isn't quite the same as the continuation in SQL*Plus. Here each line of the define still has to end with a dash, but it isn't replaced in the way the SQL*Plus docs describe, so with a single dash the define works but the query ends up invalid. (At least in 4.2.0; possibly a bug...) With two dashes the multi-line define still works and the dashes remain part of the query, but they're treated as comment markers, so they make the substituted query look odd (again, if you display it) without stopping it working. You won't notice with set verify off, unless someone looks in v$sql.
If your query exceeds 240 characters - which is rather likely unless it's trivial enough to repeat anyway - you'll hit something like:
string beginning "'SELECT * ..." is too long. maximum size is 240 characters.
Both SQL*Plus and SQL Developer allow you to set a substitution variable from a query, using the column ... new_value command:
column tempalias new_value tempview_query
set termout off
select q'[SELECT *
FROM dual
UNION ALL
SELECT *
FROM dual]' AS tempalias
FROM dual;
set termout on
The query selects the text of your view query as a string; I've used the alternative quoting mechanism, with [] as the delimiters, so you don't have to escape any single quotes in the view query. (You need to pick a delimiter that can't appear in the query too, of course). Also note that you don't need the line continuation character any more.
The text literal that query generates is aliased as tempalias. The column command sets the tempview_query substitution variable to whatever that aliased column expression contains. Using the substitution variable is then the same as in the previous examples.
WITH tempview AS (&tempview_query)
SELECT * FROM tempview;
The set termout lines just hide that generating query; you can temporarily omit the off line to see what the query produces, and that it does exactly match the view query you expected.
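Putting those pieces together, a complete script for this approach might look something like the following sketch (still using the dummy view query; your real query, and your three real statements, go in the obvious places):
-- hide the old/new substitution display and the variable-building query
set verify off
set termout off

column tempalias new_value tempview_query

select q'[SELECT *
FROM dual
UNION ALL
SELECT *
FROM dual]' AS tempalias
FROM dual;

set termout on

--First query
WITH tempview AS (&tempview_query)
SELECT * FROM tempview;

--Second query
WITH tempview AS (&tempview_query)
SELECT * FROM tempview;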
Other clients might have similar mechanisms, but those are the only two I'm really familiar with. I should probably also reiterate that this is a bit of a hack, and not something I'd necessarily recommend...
Another trick with SQL*Plus is to import code from a second SQL script. You can do this with the @ command (making sure to put the @ at the very start of a line), e.g.:
tempview.sql
WITH tempview AS (SELECT .... FROM ...)
(notice there is no ending semicolon ; here, and make sure you either don't have a blank line at the end of the file or set sqlblanklines on)
main.sql
--First query
@tempview.sql
SELECT ... FROM tempview
/
--Second query
@tempview.sql
SELECT ... FROM tempview
/
--Third query
@tempview.sql
SELECT ... FROM tempview
/
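For a minimal runnable sketch of the same idea with the dummy query (assuming both files sit in the directory you start SQL*Plus from):
tempview.sql
WITH tempview AS (SELECT * FROM dual UNION ALL SELECT * FROM dual)
main.sql
--First query
@tempview.sql
SELECT * FROM tempview
/
If main.sql is itself launched from another directory, @@tempview.sql will look the included file up relative to main.sql rather than the current working directory.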
Related
SQL define with prefix and suffix
I'm struggling: in our DWH I'm trying to automate a process for generating views instead of doing manual actions all the time. Here is a simple version of what I'm trying to do in an Oracle database:
define SLobject = 'ObjectTest'
select * from '&&SLobject_Src'; -- This one doesn't work because '_Src' is included in the variable name, but it needs to be excluded.
select * from 'MV_&&SLobject'; -- This works since it's a prefix before the '&&' characters.
You have to add a dot ('.') after the name to terminate the substitution variable, and remove the quotes (if you're using it as an object name):
select * from &&SLobject._Src;
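For example (a small sketch; ObjectTest_Src and MV_ObjectTest are made-up object names):
define SLobject = 'ObjectTest'
select * from &&SLobject._Src;  -- the dot ends the variable name, so this expands to ObjectTest_Src
select * from MV_&&SLobject.;   -- expands to MV_ObjectTest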
define a variable as a result of a query
I have a variable date_from that I can use in my queries:
define date_from = '01.11.2019';
Is it possible to define a new variable as the result of a query? The following statement doesn't work:
define month_from = (select month_yyyymm from t_calendar where date_orig = '&date_from');
I'm using Oracle SQL Developer and I don't want to go into PL/SQL.
It is not possible to do what you want directly. Note also that substitution variables (those created / assigned with the define command and referenced with a leading &) are a SQL*Plus concept; they are pre-processed by the client software (in your case SQL Developer, which understands and honors most, though not all, of SQL*Plus). You can do almost what you want, though, with the SQL*Plus new_value option of the column command. It goes something like this (not tested since you didn't provide sample data):
define date_from = '01.11.2019'
column month_yyyymm new_value month_from
select month_yyyymm from t_calendar where date_orig = '&date_from';
That's it - at this point the variable month_from stores the value of month_yyyymm returned by your query. Note that the first two commands are SQL*Plus (scripting) commands; only the last statement, the select, is ever seen by the database itself.
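From then on month_from behaves like any other substitution variable; for example (t_sales is just a made-up table name for illustration):
select * from t_sales where month_yyyymm = '&month_from';  -- t_sales is hypothetical; use your own table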
how do I find how a view was created?
How do I find out what text was used to create a view in Oracle SQL, and especially how do I find out what columns may be hidden?
select view_name, text from all_views where view_name = 'XYZ';
will give me the first few words only. SQL Developer has a tab for this; I'm just wondering if there is a way to do it from the command line in SQL*Plus.
By default SQL*Plus only shows the first 80 characters of long and clob columns. You can do
set long 32767
(or some other large number; 30000 seems to be common, I think from older releases where the limit was 32K and that was easier to type, but it is much higher now) and reissue your query. You can also use the dbms_metadata package to get the view DDL, if it is your view (in your schema) or you have the select catalog role:
select dbms_metadata.get_ddl('VIEW', 'XYZ') from dual;
You'll need to do set long for that to show you a useful amount of output too, and specify the schema with the third argument if the view isn't in your schema.
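Putting that together, a minimal SQL*Plus sketch (assuming a view called XYZ in your own schema):
-- allow long output and suppress headings/page breaks
set long 32767
set pagesize 0
-- full view text from the data dictionary
select text from all_views where view_name = 'XYZ';
-- or the complete DDL, including the column list
select dbms_metadata.get_ddl('VIEW', 'XYZ') from dual;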
What does "SELECT INTO" do?
I'm reading SQL code which has a line that looks like this:
SELECT INTO _user tag FROM login.user WHERE id = util.uuid_to_int(_user_id)::oid;
What exactly does this do? The usual way to use SELECT INTO requires specifying the columns to select after the SELECT token, e.g.
SELECT * INTO _my_new_table FROM ... WHERE ...;
The database is PostgreSQL.
This line must appear inside a PL/pgSQL function. In that context the value from column tag is assigned to the variable _user. According to the documentation:
Tip: Note that this interpretation of SELECT with INTO is quite different from PostgreSQL's regular SELECT INTO command, wherein the INTO target is a newly created table.
and:
The INTO clause can appear almost anywhere in the SQL command. Customarily it is written either just before or just after the list of select_expressions in a SELECT command, or at the end of the command for other command types. It is recommended that you follow this convention in case the PL/pgSQL parser becomes stricter in future versions.
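For illustration, here is a minimal sketch of the sort of PL/pgSQL function such a line could appear in; the function name, parameter type, and return type are made up, while util.uuid_to_int and login.user come from the question:
CREATE OR REPLACE FUNCTION get_user_tag(_user_id uuid) RETURNS text AS $$
DECLARE
    _user text;  -- receives the value of the tag column
BEGIN
    -- PL/pgSQL SELECT ... INTO: the selected tag value is assigned to the _user variable
    SELECT INTO _user tag FROM login.user WHERE id = util.uuid_to_int(_user_id)::oid;
    RETURN _user;
END;
$$ LANGUAGE plpgsql;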
perl execute sql file (DBI oracle)
I have the following problem: I have a SQL file to execute with the Perl DBI CPAN module. I saw two solutions on this website to solve my problem:
Read SQL file line by line
Read SQL file in one instruction
So, which one is better, and what is the real difference between each solution?
EDIT
It's for a library. I need to retrieve the output and the return code. The kind of file passed might look like this:
set serveroutput on;
set pagesize 20000;
spool "&1."
DECLARE
  -- Retrieve the arguments
  -- &2: FLX_REF, &3: SVR_ID, &4: ACQ_STT, &5: ACQ_LOG, &6: FLX_COD_DOC, &7: ACQ_NEL, &8: ACQ_TYP
  VAR_FLX_REF VARCHAR2(100) := &2;
  VAR_SVR_ID NUMBER(10) := &3;
  VAR_ACQ_STT NUMBER(4) := &4;
  VAR_ACQ_LOG VARCHAR2(255) := &5;
  VAR_FLX_COD_DOC VARCHAR2(30) := &6;
  VAR_ACQ_NEL NUMBER(10) := &7;
  VAR_ACQ_TYP NUMBER := &8;
BEGIN
  INSERT INTO ACQUISITION_CFT (ACQ_ID, FLX_REF, SVR_ID, ACQ_DATE, ACQ_STT, ACQ_LOG, FLX_COD_DOC, ACQ_NEL, ACQ_TYP)
  VALUES (TRACKING.SEQ_ACQUISITION_CFT.NEXTVAL, ''VAR_FLX_REF'', ''VAR_SVR_ID'', sysdate, VAR_ACQ_STT, ''VAR_ACQ_LOG'', ''VAR_FLX_COD_DOC'', VAR_ACQ_NEL, VAR_ACQ_TYP);
END;
/
exit;
I have another question to ask, again with the DBI Oracle module: may I use the same code for a SQL file and for a control file? (Example of a SQL*Loader control file:)
LOAD DATA
APPEND INTO TABLE DOSSIER
FIELDS TERMINATED BY ';'
(
DSR_IDT, DSR_CNL, DSR_PRQ, DSR_CEN, DSR_FEN,
DSR_AN1, DSR_AN2, DSR_AN3, DSR_AN4, DSR_AN5, DSR_AN6,
DSR_PI1, DSR_PI2, DSR_PI3, DSR_PI4,
DSR_NP1, DSR_NP2, DSR_NP3, DSR_NP4,
DSR_NFL, DSR_NPG, DSR_LTP, DSR_FLF, DSR_CLR, DSR_MIM, DSR_TIM, DSR_NDC,
DSR_EMS NULLIF DSR_EMS=BLANKS "sysdate",
JOB_IDT, DSR_STT,
DSR_DAQ "CASE WHEN :DSR_DAQ IS NOT NULL THEN SYSDATE ELSE NULL END"
)
Reading a table one row at a time is more complex, but it can use less memory - provided you structure your code to make use of the data per item and not need it all later. Often you want to process each item separately (e.g. to do work on the data), in which case you might as well use the read line-by-line approach to define your loop. I tend to use single-instruction approach by default, but as soon as I am concerned about number of records (especially in long-running batch processes), or need to loop through the data as the first task, then I read records one-by-one.
In fact, the two answers you reference propose the same solution: to read and execute line by line (though the first is clearer on the point). The second question also has an alternative answer for the case where the file contains a single statement. If you don't execute the SQL line by line, it's very difficult to trap any errors.
"Line by line" only makes sense if each SQL statement is on a single line. You probably mean statement by statement. Beyond that, it depends on what your SQL file looks like and what you want to do. How complex is your SQL file? Could it contain things like this? select foo from table where column1 = 'bar;'; --Get foo; it will be used later. The simple way to read an SQL file statement by statement is to split by semicolons (or whatever the statement delimiter is). But this method will fail if you might have semicolons in other places, like comments or strings. If you split this statement by semicolons, you would try to execute the following four "commands": select foo from table where column1 = 'bar; '; --Get foo; it will be used later. Obviously, none of these are valid. Handling statements like this correctly is no simple matter. You have to completely parse SQL to figure out what the statements are. Unfortunately, there is no ready-made module that can do this for you (SQL::Script is a good start on an SQL file processing module, but according to the documentation it just splits on semicolons at this point). If your SQL file is simple, not containing any statement delimiters within statements or comments; or if it is predictable in some other way (such as having one statement per line), then it is easy to split the file into statements and execute them one by one. But if you have to handle arbitrary SQL syntax, including cases such as above, this will be a complex task. What kind of task? Do you need to retrieve the output? Is it important to detect errors in any individual statement, or is it just a batch job that you can run and not worry about it? If this is something that you can just run and forget about, you could just have Perl execute a system command, telling Oracle to process the file. This will be simpler than handling all of the statements yourself. But if you need to process the results or handle errors within Perl, doing it yourself statement by statement will be a necessity. Update: based on your response, you want to write a library that can handle arbitrary SQL statements. In that case, you definitely need to parse the SQL and execute the statements one at a time. This is do-able, but not simple. The possibility of BEGIN...END blocks means that you have to be able to correctly handle semicolons within a statement. The SQL::Statement class of modules may be helpful.