How to create a temporary function in PostgreSQL?

I have to execute a loop in the database. This is only a one-time requirement.
At the moment I create the function, execute it, and then drop it.
Is there any good approach for creating temporary / disposable functions?

I needed a function for repeated use within a single script I was writing. It turns out you can create a temporary function in the pg_temp schema. This is a schema that is created on demand for your connection and is where temporary tables are stored; when your connection is closed or expires, this schema is dropped. It also turns out that if you create a function in this schema, the schema will be created automatically. Therefore,
create function pg_temp.testfunc() returns text as
$$ select 'hello'::text $$ language sql;
will be a function that will stick around as long as your connection sticks around. No need to call a drop command.
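Calling it works like any other function, as long as you schema-qualify the name (more on that below):
SELECT pg_temp.testfunc();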

A couple of additional notes on the smart trick in #crowmagnumb's answer:
The function must be schema-qualified at all times, even though pg_temp is in the search_path by default. According to Tom Lane, that's to prevent Trojan horses:
CREATE FUNCTION pg_temp.f_inc(int)
RETURNS int AS 'SELECT $1 + 1' LANGUAGE sql IMMUTABLE;
SELECT pg_temp.f_inc(42);
f_inc
-----
43
A function created in the temporary schema is only visible inside the same session (just like temp tables). It's invisible to all other sessions, even for the same role. (You could, however, access the function as a different role within the same session after SET ROLE.)
You could even create a functional index based on this "temp" function:
CREATE INDEX foo_idx ON tbl (pg_temp.f_inc(id));
Thereby creating a plain index on a non-temp table using a temporary function. Such an index would be visible to all sessions but still only valid for the creating session. It is dropped automatically when the session is closed - as a dependent object. Still a bit of a dirty trick; feels like this should not be allowed at all ... Also note that the query planner will not use a functional index unless the expression is repeated in the query, as shown below.
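For instance, a sketch of a query that could use foo_idx (assuming the index from above):
SELECT * FROM tbl WHERE pg_temp.f_inc(id) = 43;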
If you just need to execute a function repeatedly and all you need is SQL, consider a prepared statement instead. It acts much like a temporary SQL function that dies at the end of the session. Not the same thing, though, and can only be used by itself with EXECUTE, not nested inside another query. Example:
PREPARE upd_tbl AS
UPDATE tbl t SET set_name = $2 WHERE tbl_id = $1;
Call:
EXECUTE upd_tbl(123, 'foo_name');
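To dispose of the prepared statement before the session ends:
DEALLOCATE upd_tbl;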
Details:
Split given string and prepare case statement

If you are using version 9.0, you can do this with the new DO statement:
http://www.postgresql.org/docs/current/static/sql-do.html
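For example, a minimal anonymous block; the loop body is just a placeholder for your own logic:
DO $$
BEGIN
    FOR i IN 1..5 LOOP
        RAISE NOTICE 'iteration %', i;
    END LOOP;
END
$$;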
With previous versions, you'll need to create the function, call it, and drop it again.

For ad hoc procedures, cursors aren't too bad. They are too inefficient for production use, however.
They will let you easily loop over SQL results in the database.
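A minimal sketch of such a loop in a DO block (table and column names are placeholders):
DO $$
DECLARE
    cur CURSOR FOR SELECT id FROM tbl;
    r   record;
BEGIN
    OPEN cur;
    LOOP
        FETCH cur INTO r;
        EXIT WHEN NOT FOUND;
        RAISE NOTICE 'id: %', r.id;
    END LOOP;
    CLOSE cur;
END
$$;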

Related

pl/sql procedure with variable numbers of parameters

I want to know if I can create a PL/SQL procedure whose number of parameters, and their types, can change.
For example procedure p1.
I can use it like this:
p1 (param1, param2,......., param n);
I want to pass the table name and data into the procedure, but the attributes change for every table:
create or replace PROCEDURE INSERTDATA(NOMT in varchar2) is
    num int;
BEGIN
    -- check the data dictionary for the table
    EXECUTE IMMEDIATE 'SELECT count(*) FROM user_tables WHERE table_name = :1'
        INTO num USING NOMT;
    IF num < 1 THEN
        dbms_output.put_line('table does not exist!');
    ELSE
        dbms_output.put_line('');
        -- here I want to insert parameters into the table,
        -- but the table attributes are not the same!
    END IF;
END INSERTDATA;
As far as I can tell, no, you cannot. The number and datatypes of all parameters must be fixed.
You could pass a collection as a parameter (and have a different number of values within it), but that's still a single parameter.
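For example, a single collection parameter can carry any number of values; the type and procedure names here are illustrative:
CREATE OR REPLACE TYPE num_list AS TABLE OF NUMBER;
/
CREATE OR REPLACE PROCEDURE p1 (p_vals IN num_list) IS
BEGIN
    FOR i IN 1 .. p_vals.COUNT LOOP
        dbms_output.put_line(p_vals(i));
    END LOOP;
END;
/
-- any number of values, but still one parameter:
BEGIN p1(num_list(1, 2, 3)); END;
/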
Where would you want to use such a procedure?
If you need to store, update and query a variable amount of information, I'd recommend switching to JSON queries and objects in Oracle. Oracle has deep support for both fixed and dynamic querying of JSON data, in both SQL and PL/SQL.
I want to pass the table name and data into the procedure, but the attributes change for every table
The problem with such a universal procedure is that something needs to know the structure of the target table. Your approach demands that the caller discover the projection of the table and arrange the parameters in the correct order.
In no particular order:
This is bad practice because it requires the calling program to do the hard work regarding the data dictionary.
Furthermore it breaks the Law Of Demeter because the calling program needs to understand things like primary keys (sequences, identity columns, etc), foreign key lookups, etc
This approach mandates that all columns must be populated; it makes no allowance for virtual columns, optional columns, etc
To work, the procedure would have to use dynamic SQL, which is always hard work because it turns compilation errors into runtime errors, and should be avoided if at all possible.
It is trivially simple to generate a dedicated insert procedure for each table in a schema, using dynamic SQL against the data dictionary. This is the concept of the Table API. It's not without its own issues but it is much safer than what your question proposes.
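A hedged sketch of that generation step (the ins_ naming is an assumption, and it only handles simple column types; real code would need to cope with types carrying precision, quoted identifiers, and so on):
DECLARE
    l_sql varchar2(32767);
BEGIN
    FOR t IN (SELECT table_name FROM user_tables) LOOP
        -- build one insert procedure per table from the data dictionary
        SELECT 'CREATE OR REPLACE PROCEDURE ins_' || lower(t.table_name) || ' ('
               || listagg('p_' || lower(column_name) || ' IN ' || data_type, ', ')
                      WITHIN GROUP (ORDER BY column_id)
               || ') IS BEGIN INSERT INTO ' || t.table_name || ' ('
               || listagg(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
               || ') VALUES ('
               || listagg('p_' || lower(column_name), ', ') WITHIN GROUP (ORDER BY column_id)
               || '); END;'
          INTO l_sql
          FROM user_tab_columns
         WHERE table_name = t.table_name;
        EXECUTE IMMEDIATE l_sql;
    END LOOP;
END;
/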

How to demonstrate SQL injection in where clause?

I want to demonstrate the insecurity of some web services that we have. These send unsanitized user input into Oracle database SELECT statements.
SQL injection into SELECT statements is possible (through the WHERE clause), but I am having a hard time demonstrating it because the same parameter gets placed in other queries as well during the same webservice call.
E.g:
' or client_id = 999'--
will exploit the first query, but as the same WS request runs other SQL SELECTs, it will return an Oracle error on the next query because client_id is referred to by an alias in the second table.
I am looking for something more convincing than just getting an ORA error returned, such as managing to drop a table in the process. However, I do not think this is possible from a SELECT statement.
Any ideas how I can cause some data to change, or maybe get sensitive data to be included as part of an ORA error?
It's not very easy to change data, but it's still possible. A function created with PRAGMA AUTONOMOUS_TRANSACTION can contain DML and may be called in a WHERE clause. For instance,
CREATE OR REPLACE FUNCTION test_funct RETURN int IS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    DELETE FROM test_del;
    COMMIT;
    RETURN 0;
END;
-- and then
SELECT null FROM dual WHERE test_funct() = 1;
Another option is to inject a huge subquery into the WHERE clause, which may cause a serious performance problem on the server.
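For example, a payload along these lines (illustrative only) forces a cartesian join over large dictionary views, which can tie up the server for a long time:
' OR EXISTS (SELECT 1 FROM all_objects a, all_objects b, all_objects c) --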
You do not need a custom function, you can use a sub-query:
" or client_id = (SELECT 999 FROM secret_table WHERE username = 'Admin' AND password_hash = '0123456789ABCD')"
If the query succeeds then you know that:
There is a table called secret_table that can be seen by the user executing this query (even if there is not a user interface that would typically be used to directly interact with that table);
That it has the columns username and password_hash;
That there is a user called Admin; and
That the admin user has a password that hashes to 0123456789ABCD.
You can repeat this to map the structure of the entire database and check for any values in it.

Create constant string for entire database

I'm still new to SQL, so I'm having some little issues to solve.
I'm running a Postgres database in Acqua Data Studio, with some queries that follow the same model.
Some variables in these queries are the same, but may change in the future...
Thinking of an optimized database, it would be faster to change the value of one constant than to open 20+ queries and change the same value in all of them.
Example:
SELECT *
FROM Table AS Default_Configs
LEFT JOIN Table AS Test_Configs
ON Default_Configs.Column1 = 'BLABLABLA'
Imagining 'BLABLABLA' could be 'XXX', how could I make 'BLABLABLA' a constant available to every view that is created following this pattern?
Create a tiny function that serves as "global constant":
CREATE OR REPLACE FUNCTION f_my_constant()
RETURNS text AS
$$SELECT text 'XXX'$$ LANGUAGE sql IMMUTABLE PARALLEL SAFE; -- see below
And use that function instead of 'BLABLABLA' in your queries.
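Applied to the example above, that would be:
SELECT *
FROM Table AS Default_Configs
LEFT JOIN Table AS Test_Configs
ON Default_Configs.Column1 = f_my_constant();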
Be sure to declare the data type correctly and make the function IMMUTABLE (because it is) for better performance with big queries.
In Postgres 9.6 or later, add PARALLEL SAFE so it won't block parallel query plans. The setting isn't valid in older versions.
To change the constant, replace the function by running an updated CREATE OR REPLACE FUNCTION statement. That invalidates query plans using it automatically, so queries are re-planned. It should be safe for concurrent use: transactions starting after the change use the new function. But indexes involving the function have to be rebuilt manually.
Alternatively (especially in pg 9.2 or later), you could set a Customized Option as a "global constant" for the whole cluster, a given database, a given role, etc., and retrieve the value with:
current_setting('constant.blabla')
One limitation: the value is always text and may have to be cast to a target type.
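For example (the setting name constant.blabla is just a placeholder):
-- per session; use ALTER DATABASE ... SET or ALTER ROLE ... SET for broader scope
SET constant.blabla = 'XXX';
SELECT current_setting('constant.blabla');  -- returns text, cast as needed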
Related:
User defined variables in PostgreSQL
Many ways to set it:
How does the search_path influence identifier resolution and the "current schema"

User defined function within stored procedure

Can we create a user-defined function within a stored procedure, and then at the end of the stored procedure drop that custom user-defined function?
You can but it could get messy.
Look at sp_executesql. This will allow you to run arbitrary SQL, including DDL. Creating and using UDFs in this way does seem a bit dangerous -- you'll need to make sure that there aren't any name conflicts with competing threads, and there's no way to get any kind of query optimization.
I'd double check your design to make sure there isn't another solution to this!
Dynamic SQL is the only way.
ALTER PROC ...
AS
...
EXEC ('CREATE FUNCTION tempFunc...')
...
EXEC ('DROP FUNCTION tempFunc')
...
GO
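Filled in, that sketch might look something like this; the names are placeholders, and note that the call itself must also be dynamic SQL because the function does not exist when the procedure is compiled:
CREATE PROC usp_demo
AS
    EXEC ('CREATE FUNCTION dbo.tempFunc (@x int) RETURNS int AS BEGIN RETURN @x + 1 END')
    EXEC ('SELECT dbo.tempFunc(41) AS answer')
    EXEC ('DROP FUNCTION dbo.tempFunc')
GO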
However:
if you have 2 concurrent executions it will fail because tempFunc already exists
if each udf definition is different, then you need random names
if you randomise the name, the rest of the code will have to be dynamic SQL too
a stored proc implies reuse so just persist it
your code will need ddl_admin or db_owner rights to create the udf
...
So... why do you want to do this?

Local Temporary table in Oracle 10 (for the scope of Stored Procedure)

I am new to Oracle. I need to process a large amount of data in a stored proc. I am considering using temporary tables. I am using connection pooling and the application is multi-threaded.
Is there a way to create temporary tables so that a different table instance is created for every call to the stored procedure, and data from multiple stored procedure calls does not mix?
You say you are new to Oracle. I'm guessing you are used to SQL Server, where it is quite common to use temporary tables. Oracle works differently so it is less common, because it is less necessary.
Bear in mind that using a temporary table imposes the following overheads:
read data to populate the temporary table
write temporary table data to file
read data from the temporary table as your process starts
Most of that activity is useless in terms of helping you get stuff done. A better idea is to see if you can do everything in a single action, preferably pure SQL.
Incidentally, your mention of connection pooling raises another issue. A process munging large amounts of data is not a good candidate for running in OLTP mode. You really should consider initiating a background (i.e. asynchronous) process, probably a database job, to run your stored procedure. This is especially true if you want to run this job on a regular basis, because we can use DBMS_SCHEDULER to automate the management of such things.
If you're using transaction-level (rather than session-level) temporary tables, then this may already do what you want... so long as each call only contains a single transaction? (You don't quite provide enough detail to make it clear whether this is the case or not.)
So, to be clear: as long as each call contains only a single transaction, it won't matter that you're using a connection pool, since the data will be cleared out of the temporary table after each COMMIT or ROLLBACK anyway.
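For reference, a transaction-level GTT is one declared with ON COMMIT DELETE ROWS; the table name and columns here are placeholders:
CREATE GLOBAL TEMPORARY TABLE txn_scratch (
    id      integer,
    payload varchar2(100)
) ON COMMIT DELETE ROWS;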
(Another option would be to create a uniquely named temporary table in each call using EXECUTE IMMEDIATE. Not sure how performant this would be though.)
In Oracle, it's almost never necessary to create objects at runtime.
Global Temporary Tables are quite possibly the best solution for your problem, however since you haven't said exactly why you need a temp table, I'd suggest you first check whether a temp table is necessary; half the time you can do with one SQL what you might have thought would require multiple queries.
That said, I have used global temp tables in the past quite successfully in applications that needed to maintain a separate "space" in the table for multiple contexts within the same session. This is done by adding an additional ID column (e.g. "CALL_ID") that is initially set to 1; subsequent calls to the procedure increment this ID. The ID has to be remembered somewhere, e.g. in a package global variable declared in the package body. E.g.:
PACKAGE BODY gtt_ex IS
    last_call_id integer;

    PROCEDURE myproc IS
        l_call_id integer;
    BEGIN
        -- take the next call id for this session
        last_call_id := NVL(last_call_id, 0) + 1;
        l_call_id := last_call_id;
        INSERT INTO my_gtt VALUES (l_call_id, ...);
        ...
        SELECT ... FROM my_gtt WHERE call_id = l_call_id;
    END;
END;
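For reference, my_gtt here would be a session-level global temporary table along these lines (columns are illustrative):
CREATE GLOBAL TEMPORARY TABLE my_gtt (
    call_id integer,
    payload varchar2(100)
) ON COMMIT PRESERVE ROWS;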
You'll find GTTs perform very well even with high concurrency, certainly better than using ordinary tables. Best practice is to design your application so that it never needs to delete the rows from the temp table - since the GTT is automatically cleared when the session ends.
I used a global temporary table recently and it behaved in a very unwanted manner.
I was using the temp table to format some complex data in a procedure call and, once the data was formatted, pass it to the front end (ASP.NET).
On the first call to the procedure I would get the proper data, but any subsequent call would give me the data from the last procedure call in addition to the current one.
I investigated and found the ON COMMIT DELETE ROWS option.
I thought that would fix the problem... guess what? When I used the ON COMMIT DELETE ROWS option, I always got 0 rows back from the database. So I had to go back to the original approach of ON COMMIT PRESERVE ROWS, which preserves the rows even after committing the transaction. That option clears rows from the temp table only after the session is terminated.
Then I found this post and learned about the column to track the call_id of a session.
I implemented that solution and it still didn't fix the problem.
Then I put the following statement in my procedure before starting any processing:
Delete From Temp_table;
The statement above did the trick. My front end was using connection pooling; after each procedure call it committed the transaction but kept the connection in the pool, and a subsequent request would use the same connection, so the database session was not terminated after every call.
Deleting rows from the temp table before starting any processing made it work.
It drove me nuts till I found this solution.