I need to make a procedure for editing a table 'exemplare'
edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value ???) which updates the value in the given column of the row with this ID.
The problem is that the columns of this table have different data types. Can I do something like p_value exemplare.p_column%TYPE? Or do I have to set it to VARCHAR2 and then (somehow) convert it to the correct data type?
Can I do something like p_value exemplare.p_column%TYPE?
No. The signature of a procedure must be static. What you could do is overload the procedure in a package:
procedure edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value VARCHAR2);
procedure edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value DATE);
procedure edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value NUMBER);
However, you'd still need dynamic SQL to interpret the metadata of p_column, so your code will remain clunky.
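For illustration, a rough sketch of what one overload's body might look like in the package body, assuming the key column is id_exemplare (the column name cannot be bound, so it is validated with DBMS_ASSERT; the value and the id are bound):
PROCEDURE edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value DATE) IS
BEGIN
  EXECUTE IMMEDIATE
    'UPDATE exemplare SET ' || dbms_assert.simple_sql_name(p_column)
    || ' = :val WHERE id_exemplare = :id'
  USING p_value, p_id_exemplare;
END edit_exemplare;
The NUMBER and VARCHAR2 overloads would look the same apart from the type of p_value.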
This approach reminds me of the getter and setter paradigm which is still prevalent in object-oriented programming. This is not an approach which fits SQL. To edit three table columns you will make three procedural calls, which will generate and execute three dynamic UPDATE statements. This does not scale well, and it is the sort of thing which causes OO developers to assert that databases are slow, when in fact the problem is at the calling end.
There are various ways to solve this, and which is the correct one will depend on the precise details of what you're trying to do. The key point is: a single transaction should execute no more than one update statement per record. A properly set-based operation which updates multiple records in one statement is even better.
You can use {table}%ROWTYPE as an input.
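A minimal sketch of that idea, assuming id_exemplare is the key column of exemplare (SET ROW assigns every column of the record in one statement):
CREATE OR REPLACE PROCEDURE edit_exemplare(p_row IN exemplare%ROWTYPE) IS
BEGIN
  UPDATE exemplare e
     SET ROW = p_row
   WHERE e.id_exemplare = p_row.id_exemplare;
END edit_exemplare;
/
The caller fills a single record variable and makes one call per row, which also keeps it to one UPDATE per record.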
Whenever I try to call a stored procedure in PostgreSQL that goes beyond inserting data, it takes forever to run, and it isn't that the query is complicated or the dataset is huge. The dataset is small. I cannot return a table from a stored procedure and I cannot even return 1 row or 1 data point from a stored procedure. It says it is executing the query for a very long time until I finally stop the query from running. It does not give me a reason. I can't let it run for hours. Any ideas on what might be happening? I have included stored procedures that I have tried to call.
Non-working example #1:
CREATE PROCEDURE max_duration(OUT maxD INTERVAL)
LANGUAGE plpgsql AS $$
DECLARE maxD INTERVAL;
BEGIN
SELECT max(public.bikeshare3.duration)
INTO maxD
FROM public.bikeshare3;
END;
$$ ;
CALL max_duration(NULL);
Non-working example #2:
CREATE PROCEDURE getDataByRideId2(rideId varchar(16))
LANGUAGE SQL
AS $$
SELECT rideable_type FROM bikeshare3
WHERE ride_id = rideId
$$;
CALL getDataByRideId2('x78900');
Working example
The only one that worked when called is an insert procedure:
CREATE OR REPLACE PROCEDURE genre_insert_data(GenreId integer, Name_b character varying)
LANGUAGE SQL
AS $$
INSERT INTO public.bikeshare3 VALUES (GenreId, Name_b)
$$;
CALL genre_insert_data(1, 'testName');
FUNCTION or PROCEDURE?
The term "stored procedure" has been a widespread misnomer for the longest time. That got more confusing since Postgres 11 added CREATE PROCEDURE.
You can create a FUNCTION or a PROCEDURE in Postgres. Typically, you want a FUNCTION. A PROCEDURE mostly only makes sense when you need to COMMIT in the body. See:
How to return a value from a stored procedure (not function)?
Nothing in your question indicates the need for a PROCEDURE. You probably want a FUNCTION.
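For example, the first query could be wrapped in a plain SQL function instead of a procedure (just a sketch, reusing the table from the question), which can then be used directly in a SELECT:
CREATE OR REPLACE FUNCTION max_duration()
  RETURNS interval
  LANGUAGE sql AS
$func$
SELECT max(b.duration)
FROM   public.bikeshare3 b;
$func$;

SELECT max_duration();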
Question asked
Adrian already pointed out most of what's wrong in his comment.
Your first example could work like this:
CREATE OR REPLACE PROCEDURE max_duration(INOUT _max_d interval = NULL)
LANGUAGE plpgsql AS
$proc$
BEGIN
SELECT max(b.duration) INTO _max_d
FROM public.bikeshare3 b;
END
$proc$;
CALL max_duration();
Most importantly, your OUT parameter is visible inside the procedure body. Declaring another instance as a variable hides the parameter. You could access the parameter by qualifying it with the procedure name, max_duration.maxD in your original. But that's a measure of last resort; better not to introduce duplicate variable names in the first place.
I also did away with error-prone mixed-case identifiers in my answer. See:
Are PostgreSQL column names case-sensitive?
I made the parameter INOUT _max_d interval = NULL. By adding a default value, we don't have to pass a value in the call (it isn't used anyway). But it must be INOUT instead of OUT for this.
Also, OUT parameters only work for a PROCEDURE since Postgres 14. The release notes:
Stored procedures can now return data via OUT parameters.
While using an OUT parameter, this advice from the manual applies:
Arguments must be supplied for all procedure parameters that lack defaults, including OUT parameters. However, arguments matching OUT parameters are not evaluated, so it's customary to just write NULL for them. (Writing something else for an OUT parameter might cause compatibility problems with future PostgreSQL versions.)
Your second example could work like this:
CREATE OR REPLACE PROCEDURE get_data_by_ride_id2(IN _ride_id text
, INOUT _rideable_type text = NULL) -- return type?
LANGUAGE sql AS
$proc$
SELECT b.rideable_type
FROM public.bikeshare3 b
WHERE b.ride_id = _ride_id;
$proc$;
CALL get_data_by_ride_id2('x78900');
If the query returns multiple rows, only the first one is returned and the rest are discarded. Don't go there. This only makes sense as long as ride_id is UNIQUE. Even then, a FUNCTION still seems more suitable ...
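For comparison, a function sketch of the same lookup (names taken from the question; the uniqueness caveat still applies):
CREATE OR REPLACE FUNCTION get_rideable_type(_ride_id text)
  RETURNS text
  LANGUAGE sql AS
$func$
SELECT b.rideable_type
FROM   public.bikeshare3 b
WHERE  b.ride_id = _ride_id;
$func$;

SELECT get_rideable_type('x78900');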
I want to know if I can create a PL/SQL procedure whose number of parameters and their types can change.
For example, a procedure p1.
I can use it like this
p1 (param1, param2,......., param n);
I want to pass a table name and data to the procedure, but the attributes change for every table.
create or replace PROCEDURE INSERTDATA(NOMT in varchar2) is
num int;
BEGIN
EXECUTE IMMEDIATE 'SELECT count(*) FROM user_tables WHERE table_name = :1'
into num using NOMT ;
IF( num < 1 )
THEN
dbms_output.put_line('table does not exist!');
ELSE
dbms_output.put_line('');
-- here i want to insert parameters in the table,
-- but the table attributes are not the same !!
END IF;
NULL;
END INSERTDATA;
As far as I can tell, no, you cannot. The number and datatypes of all parameters must be fixed.
You could pass a collection as a parameter (and have a different number of values within it), but that's still a single parameter.
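A minimal sketch of that idea, with illustrative names (a schema-level nested table type, so it could also be used from SQL):
CREATE OR REPLACE TYPE t_value_list AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE PROCEDURE p1(p_values IN t_value_list) IS
BEGIN
  FOR i IN 1 .. p_values.COUNT LOOP
    dbms_output.put_line('value ' || i || ': ' || p_values(i));
  END LOOP;
END;
/
BEGIN
  p1(t_value_list('a', 'b', 'c'));   -- any number of values, still one parameter
END;
/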
Where would you want to use such a procedure?
If you need to store, update and query a variable amount of information, might I recommend switching to JSON queries and objects in Oracle. Oracle has deep support for both fixed and dynamic querying of JSON data, both in SQL and PL/SQL.
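A hedged sketch of that approach: the caller passes one JSON document and the procedure extracts whatever typed values it needs (the target table and its columns here are made up):
CREATE OR REPLACE PROCEDURE insert_from_json(p_payload IN CLOB) IS
BEGIN
  INSERT INTO demo_tab (name, amount)   -- hypothetical target table
  VALUES (JSON_VALUE(p_payload, '$.name'),
          JSON_VALUE(p_payload, '$.amount' RETURNING NUMBER));
END;
/
BEGIN
  insert_from_json('{"name":"test","amount":42}');
END;
/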
I want to pass a table name and data to the procedure, but the attributes change for every table.
The problem with such a universal procedure is that something needs to know the structure of the target table. Your approach demands that the caller discover the projection of the table and arrange the parameters in the correct fashion.
In no particular order:
This is bad practice because it requires the calling program to do the hard work regarding the data dictionary.
Furthermore, it breaks the Law of Demeter because the calling program needs to understand things like primary keys (sequences, identity columns, etc.), foreign key lookups, and so on.
This approach mandates that all columns must be populated; it makes no allowance for virtual columns, optional columns, etc.
To work the procedure would have to use dynamic SQL, which is always hard work because it turns compilation errors into runtime errors, and should be avoided if at all possible.
It is trivially simple to generate a dedicated insert procedure for each table in a schema, using dynamic SQL against the data dictionary. This is the concept of the Table API. It's not without its own issues but it is much safer than what your question proposes.
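As a rough illustration of that concept (not production code), something along these lines generates a dedicated insert procedure for one table from the data dictionary; identity columns, defaults and error handling are deliberately ignored, and the table name is just an example:
DECLARE
  v_table  CONSTANT VARCHAR2(30) := 'EXEMPLARE';   -- illustrative table name
  v_cols   VARCHAR2(4000);
  v_params VARCHAR2(4000);
  v_vals   VARCHAR2(4000);
BEGIN
  FOR c IN (SELECT column_name
              FROM user_tab_columns
             WHERE table_name = v_table
             ORDER BY column_id)
  LOOP
    v_cols   := v_cols   || c.column_name || ',';
    v_params := v_params || 'p_' || lower(c.column_name) || ' IN '
                         || v_table || '.' || c.column_name || '%TYPE,';
    v_vals   := v_vals   || 'p_' || lower(c.column_name) || ',';
  END LOOP;
  EXECUTE IMMEDIATE
    'CREATE OR REPLACE PROCEDURE ins_' || lower(v_table)
    || ' (' || rtrim(v_params, ',') || ') IS BEGIN'
    || ' INSERT INTO ' || v_table || ' (' || rtrim(v_cols, ',') || ')'
    || ' VALUES (' || rtrim(v_vals, ',') || '); END;';
END;
/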
This question already has answers here:
Oracle: Select From Record Datatype
A shared package exists which defines a student record type and a function which returns a student:
CREATE OR REPLACE PACKAGE shared.student_utils IS
--aggregates related data from several tables
TYPE student_rec IS RECORD (
id student.id%TYPE,
username student.username%TYPE,
name student.name%TYPE,
phone phone.phone%TYPE
/*etc.*/
);
FUNCTION get_student(student_id IN student.id%TYPE) RETURN student_rec;
END;
Now, I'm writing a package that provides an API for an Apex application to consume. In particular, I need to provide the student record and other relevant data in a format that can be selected via SQL (and displayed in a report page in Apex).
So far I've been trying to find the most direct way to select the data in SQL. Obviously, a record type cannot be used in SQL, so my quick-and-dirty idea was to define a table type in my package spec and a PIPELINED function in my package spec/body:
CREATE OR REPLACE PACKAGE my_schema.api IS
TYPE student_tab IS TABLE OF shared.student_utils.student_rec;
FUNCTION get_all_student_data(student_id IN student.id%TYPE) RETURN student_tab PIPELINED;
END;
/
CREATE OR REPLACE PACKAGE BODY my_schema.api IS
FUNCTION get_all_student_data(student_id IN student.id%TYPE) RETURN student_tab PIPELINED IS
BEGIN
PIPE ROW(shared.student_utils.get_student(student_id));
RETURN;
END;
END;
...which lets me select it like so:
SELECT * FROM TABLE(my_schema.api.get_all_student_data(1234));
This works, but building a pipelined table just for one row is overkill, and Oracle's explain plan seems to agree.
Supposedly in Oracle 12c, there should be more options available:
More PL/SQL-Only Data Types Can Cross PL/SQL-to-SQL Interface
...but I can't seem to figure it out in my scenario. Changing the function to:
FUNCTION get_all_student_data(student_id IN student.id%TYPE) RETURN student_tab IS
r_student_tab student_tab;
BEGIN
r_student_tab(1) := shared.student_utils.get_student(student_id);
RETURN r_student_tab;
END;
...will compile, but I cannot SELECT from it as before.
OK, enough rambling, here's my actual question: what is the most direct method to call a PL/SQL function that returns a record type and select/manipulate the result in SQL?
The killer line in the documentation is this one:
A PL/SQL function cannot return a value of a PL/SQL-only type to SQL.
That appears to rule out querying directly from a function which returns a PL/SQL Record or associative array like this:
select * from table(student_utils.get_students(7890));
What does work is this, which is technically SQL (because the docs define anonymous blocks as SQL not PL/SQL):
declare
ltab student_utils.students_tab;
lrec student_utils.student_rec;
rc sys_refcursor;
begin
ltab := student_utils.get_students(1234);
open rc for select * from table(ltab);
fetch rc into lrec;
dbms_output.put_line(lrec.name);
close rc;
end;
/
This is rather lame. I suppose there are a few times when we want to open a ref cursor from an array rather than just opening it for the SQL we would use to populate the array, but it's not the most pressing of use cases.
The problem is Oracle's internal architecture: the kernel has C modules for SQL and C modules for PL/SQL (this is why you will hear people talking about "context switches"). Exposing more PL/SQL capabilities to the SQL engine requires modifying the interfaces. We can only imagine how difficult it is to allow the SQL compiler to work against the definitions of PL/SQL data structures, which are extremely unstable (potentially they can change every time we run create or replace package ...).
Of course this does work for PIPELINED functions but that's because under the bonnet Oracle creates SQL types for the PL/SQL types referenced in the function. It cannot create objects on the fly for an arbitrary function we might wish to slip into a table() call. Some day it might be possible but just consider this one sticking point: what happens when a user who has execute on our package but lacks CREATE TYPE privilege tries to use the function?
I want to include the following function inside of another user-defined function in Oracle.
DBMS_STATS.GATHER_TABLE_STATS(SCHEMA_IN,TABLE_IN)
SCHEMA_IN and TABLE_IN are arguments to the user-defined function. However, I get the following error.
ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML
How can I resolve this? Below is my SQL script.
CREATE OR REPLACE Function GET_COLUMNS (SCHEMA_IN IN VARCHAR2, NAME_IN IN VARCHAR2)
RETURN VARCHAR2
is
L_TEXT VARCHAR2(32767) := NULL;
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(SCHEMA_IN,NAME_IN);
FOR CUR_REC IN (SELECT COLUMN_NAME FROM USER_TAB_COLUMNS WHERE TABLE_NAME = name_in AND NUM_NULLS = 0) LOOP
L_TEXT := L_TEXT || ',' || CUR_REC.COLUMN_NAME;
END LOOP;
return(ltrim(l_text,','));
END;
gather_table_stats is a procedure, not a function. And it is a procedure that includes transaction control logic (presumably, a commit at least). You cannot, therefore, call it in a function that is called from SQL. You could call your function from PL/SQL rather than SQL:
DECLARE
l_text varchar2(4000);
BEGIN
l_text := get_columns( <<schema>>, <<table>> );
END;
I would, however, be very, very dubious about the approach you're taking.
First, dbms_stats gathers statistics that are used by the optimizer. Using those statistics in other contexts is generally dangerous. Most dbms_stats calls involve some level of indeterminism: you're generally gathering data from a sample of rows and extrapolating. That is perfectly appropriate for giving the optimizer information so that it can judge things like roughly how many rows a table scan will return. It may not be appropriate if you're trying to differentiate between a column that is never NULL and one that is very rarely NULL. Some samples may catch a NULL value, others may not. It may seem to work correctly for months or years and then start to fail either consistently or intermittently.
Second, when you gather fresh statistics, you're potentially forcing Oracle to do hard parses on all the existing SQL statements that reference the table. That can be a major performance hit if you do this in the middle of the day. If you happen to force a query plan to change in a bad way, you'll likely cause the DBA a great deal of grief. If the DBA is gathering statistics in a particular way (locking statistics on some tables, forcing histograms on others, forcing a lack of histograms on others, etc.) to deal with performance issues, it's highly likely that you'll either be working at cross purposes or actively undermining their work.
Third, if a column never has NULL values, it really ought to be marked as NOT NULL. Then you can simply look at the data dictionary to see which columns are nullable and which are not without bothering to gather statistics.
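With NOT NULL constraints in place, the check becomes a simple, deterministic dictionary query, for example:
SELECT column_name
FROM   user_tab_columns
WHERE  table_name = 'MY_TABLE'   -- illustrative table name
AND    nullable   = 'N';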
You need to set your function to be an autonomous transaction to execute gather_table_stats:
CREATE OR REPLACE Function GET_COLUMNS (SCHEMA_IN IN VARCHAR2, NAME_IN IN VARCHAR2)
RETURN VARCHAR2
as
pragma autonomous_transaction;
L_TEXT VARCHAR2(32767) := NULL;
BEGIN
  -- ... rest of the body as in the original function: gather the stats, then build the column list
I have just started using Oracle procedures. I am using the following procedure (made by our DBA department) in my code but having difficulty understanding it; I have googled a lot and read tutorials but am still confused.
If anyone could explain this to me, I would really be grateful.
function SF_MY_IDENTITY(name IN VARCHAR2, fName in VARCHAR2 class in VARCHAR2,std_Id in VARCHAR2)return UD_CURSOR
is
cursorReturn UD_CURSOR;
grNo VARCHAR(100);
phone VARCHAR(100);
begin
In the above part I couldn't figure out what this 'is' is doing. What is it being used for?
Open cursorReturn for
SELECT
grNo,
phone
FROM
MY_SCHOOL MS
WHERE
MS.std_id=std_Id
AND MS.name=name
AND MS.fNameE=fName;
What is this part doing, and what does OPEN do? And how would the output variables 'grNo, phone' be used with an unrelated table (MY_SCHOOL)?
1) The "is" token is part of the function definition in pl/sql
2) Opens a sql cursor.
I highly recommend that you read a book about pl/sql. For instance the oracle documentation.
It just takes a few inputs and, based on the input values, it opens a cursor and returns it.
But there is a comma missing in the parameter list and the code is incomplete. Based on what you have posted, this is what the function is doing.
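For what it's worth, a sketch of how a caller would typically consume such a ref-cursor function, assuming UD_CURSOR is a weak REF CURSOR type (the argument values are made up):
DECLARE
  rc      SYS_REFCURSOR;   -- interchangeable with a weak REF CURSOR type like UD_CURSOR
  v_grno  VARCHAR2(100);
  v_phone VARCHAR2(100);
BEGIN
  rc := SF_MY_IDENTITY('somename', 'somefather', 'someclass', 'STD123');
  LOOP
    FETCH rc INTO v_grno, v_phone;
    EXIT WHEN rc%NOTFOUND;
    dbms_output.put_line(v_grno || ' / ' || v_phone);
  END LOOP;
  CLOSE rc;
END;
/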