HANA - Passing string variable into WHERE IN() clause in SQL script

Let's suppose I have some SQL script in a scripted calculation view that takes a single-value input parameter and generates a string of multiple inputs for an input parameter of another calculation view.
BEGIN
declare paramStr clob;
params = select foo
from bar
where bar.id = :IP_ID;
select '''' || string_agg(foo, ''', ''') || ''''
into paramStr
from :params;
var_out = select *
from "_SYS_BIC"."somepackage/MULTIPLE_IP_VIEW"(PLACEHOLDER."$$IP_IDS$$" => :paramStr);
END
This works as expected. However, if I change the var_out query and try to use the variable in a where clause
BEGIN
...
var_out = select *
from "_SYS_BIC"."somepackage/MULTIPLE_IP_VIEW"
where "IP_IDS" in(:paramStr);
END
the view will activate, but I get no results from the query. No runtime errors, just an empty result set. When I manually pass in the values to the WHERE IN() clause, everything works fine. It seems like an elementary problem to have, but I can't seem to get it to work. I have even tried using char(39) rather than '''' in my concatenation expression, but no banana :(

Ok, so what you are doing here is trying to make the statement dynamic.
For the IN condition, you seem to hope that once you have filled paramStr it would be handled as a set of parameters.
That's not the case at all.
Let's go with your example from the comment: paramStr ends up holding the text 'ip1','ip2' (quotes included).
What happens when paramStr gets filled into your code is this:
var_out = select *
from "_SYS_BIC"."somepackage/MULTIPLE_IP_VIEW"
where "IP_IDS" in ('''ip1'',''ip2''');
So, instead of looking for records that match IP_IDS = 'ip1' or IP_IDS = 'ip2', you are literally looking for records whose IP_IDS equals the single string 'ip1','ip2'.
One way to work around this is to use the APPLY_FILTER() function.
var_out = select *
from "_SYS_BIC"."somepackage/MULTIPLE_IP_VIEW";
filterStr = ' "IP_IDS" in (''ip1'',''ip2'') ';
var_out_filt = APPLY_FILTER(:var_out, :filterStr) ;
I've written about that some time ago: "On multiple mistakes with IN conditions".
Also, have a look at the documentation for APPLY_FILTER.
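Tying that back to the question, a minimal sketch of building the filter string from the :params table variable instead of hard-coding it might look like this (same made-up object names as in the question, untested):
BEGIN
declare filterStr nvarchar(5000);
params = select foo
from bar
where bar.id = :IP_ID;
-- build the plain string  "IP_IDS" in ('ip1','ip2')
select '"IP_IDS" in (''' || string_agg(foo, ''',''') || ''')'
into filterStr
from :params;
var_all = select *
from "_SYS_BIC"."somepackage/MULTIPLE_IP_VIEW";
var_out = APPLY_FILTER(:var_all, :filterStr);
END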

The SAP note “2315085 – Query with Multi-Value Parameter on Scripted Calculation View Fails with Incorrect Syntax Error“ actually showcases the APPLY_FILTER() approach that failed when I first tried it.
It also presents a UDF_IN_LIST function that converts a long varchar string holding an array of items from the input parameter into a usable in-list predicate.
Unfortunately, I couldn't make it work on SPS 11 rev 111.03, even with the parameter from SAP Note 2457876 available (it turns the string-length-overflow error into a warning): the call failed with a "(range 3) string is too long" exception.
I then ran
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'System') set ('sqlscript', 'typecheck_procedure_input_param') = 'false' WITH RECONFIGURE;
and recompiled the calc view. After that,
select * from :var_tempout where OBJECT_ID in (select I_LIST from "BWOBJDES"."CROSS_AREA::UDF_INLIST_P"(:in_objectids,','));
failed with an "invalid number" exception.
But taking the scripting out of the function and inlining it in the procedure makes it work (see the "working" block below).
At this SPS 11 level, APPLY_FILTER seems to be the only workaround, and it is really hard to say which SPS 12 revision you would have to go to in order to have it working.
FUNCTION "BWOBJDES"."CROSS_AREA::UDF_INLIST_P"(str_input nvarchar(5000),
delimiter nvarchar(10))
RETURNS table ( I_LIST INTEGER ) LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
/********* Begin Function Script ************/
BEGIN
DECLARE cnt int;
DECLARE temp_input nvarchar(128);
DECLARE slice NVARCHAR(10) ARRAY;
temp_input := :str_input;
cnt := 1;
WHILE length(temp_input) > 0 DO
if instr(temp_input, delimiter) > 0 then
slice[:cnt] := substr_before(temp_input,delimiter);
temp_input := substr_after(temp_input,delimiter);
cnt := :cnt + 1;
else
slice[:cnt] := temp_input;
break;
end if;
END WHILE;
tab2 = UNNEST(:slice) AS (I_LIST);
return select I_LIST from :tab2;
END;
CREATE PROCEDURE "MY_SCRIPTED_CV/proc"(IN numbers NVARCHAR(5000), OUT var_out MY_TABLE_TYPE)
LANGUAGE SQLSCRIPT SQL SECURITY DEFINER READS SQL DATA
WITH RESULT VIEW "MY_SCRIPTED_CV" AS
/********* Begin Procedure Script ************/
BEGIN
-- not working
--var_out = select * from MY_TABLE where NUMBER in (select I_LIST from
--UDF_INLIST_P(:numbers,','));
-- working
DECLARE cnt int;
DECLARE temp_input nvarchar(128);
DECLARE slice NVARCHAR(13) ARRAY;
DECLARE delimiter VARCHAR := ',';
temp_input := replace(:numbers, char(39), '');
cnt := 1;
WHILE length(temp_input) > 0 DO
if instr(temp_input, delimiter) > 0 then
slice[:cnt] := substr_before(temp_input,delimiter);
temp_input := substr_after(temp_input,delimiter);
cnt := :cnt + 1;
else
slice[:cnt] := temp_input;
break;
end if;
END WHILE;
l_numbers = UNNEST(:slice) AS (NUMBER);
var_out = SELECT MA.* -- project only the base table columns so the result matches MY_TABLE_TYPE
FROM MAIN AS MA
INNER JOIN :l_numbers AS LN
ON MA.NUMBER = LN.NUMBER;
END;
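For reference, a hypothetical call of the procedure above from the SQL console could look like this; the quoted list is the kind of string a multi-value input parameter typically delivers, and the OUT table parameter is bound with ?:
CALL "MY_SCRIPTED_CV/proc"('''1'',''2'',''3''', ?);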

I know this is quite an old thread, but my findings based on what I read here from Jenova might nevertheless be interesting for others; that's why I am writing them down.
I was faced with the same problem: users can send multiple entries in an input parameter of a calc view I have to optimize. Unluckily I ran into the same problem as Jenova and others before. And no, imho this is not about dynamic view execution or dynamic SQL, but about processing IN lists in a clean way in a scripted calc view or table function when they are sent in via an input parameter.
Lars' proposal to use APPLY_FILTER is not applicable in my case, since I cannot use a complex statement in the APPLY_FILTER function itself. I would have to materialize the whole amount of data in a select, assign that result to APPLY_FILTER and only then execute the filtering. I want the filtering applied in the first step, not after materializing data that does not appear in the filtered result anyway.
So I used an hdbtablefunction to create a function that performs the parameter string cleanup and the transformation into an array, and afterwards returns the result of the UNNEST function.
Since the result of the function is a table, it can be used as a table inside the scripted view or table function - in joins, subselects and so on.
This way I am able to process user input lists as expected, fast and with much less resource consumption.
FUNCTION "_SYS_BIC"."package1::transparam" (ip_string NVARCHAR(500) )
RETURNS table ( "PARAMETER" nvarchar(100))
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER AS
v_test varchar(1000);
IP_DELIMITER VARCHAR(1) := ',';
v_out VARCHAR(100):='';
v_count INTEGER:=1;
v_substr VARCHAR(1000):='';
v_substr2 VARCHAR(1000):='';
id INTEGER array;
val VARCHAR(100) array;
BEGIN
--
v_substr:=:ip_string;
v_substr := REPLACE(:v_substr, '''', '');
v_substr := REPLACE(:v_substr, ' ', '');
while(LOCATE (:v_substr, :ip_delimiter) > 0 ) do
-- find value
v_out := SUBSTR(v_substr, 0, LOCATE (:v_substr, :ip_delimiter) - 1 );
-- out to output
val[v_count]:=v_out;
-- increment counter
v_count:=:v_count+1;
-- new substring for search
v_substr2 := SUBSTR(:v_substr, LOCATE (:v_substr, :ip_delimiter) + 1, LENGTH(:v_substr));
v_substr := v_substr2;
END while;
IF(LOCATE (:v_substr, :ip_delimiter) = 0 AND LENGTH(:v_substr) > 0) THEN
-- no delimiter in string
val[v_count]:=v_substr;
END IF;
-- format output as tables
rst = unnest(:VAL) AS ("PARAMETER");
RETURN SELECT * FROM :rst;
END;
It can be called like this:
select * from "_SYS_BIC"."package1::transparam"('''BLU'',''BLA''');
returning a table with two rows:
PARAMETER
---------
BLU
BLA
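To sketch the join usage mentioned above inside a table function (schema, table and column names here are made up):
var_params = select "PARAMETER" from "_SYS_BIC"."package1::transparam"(:IP_IDS);
var_out = select t.*
from "MY_SCHEMA"."MY_TABLE" as t
inner join :var_params as p
on t."ID" = p."PARAMETER";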

The most comprehensive explanation is here:
https://blogs.sap.com/2019/01/17/passing-multi-value-input-parameter-from-calculation-view-to-table-function-in-sap-hana-step-by-step-guide/
Create a custom function that splits the string into multiple values; then an inner/left outer join can be used to filter.

Related

PL/SQL query with parameter

I am familiar with MSSQL and using a parameter within the query, but I am not sure how I would do this within PL/SQL.
DECLARE
LSITEID NUMBER := 100001;
BEGIN
SELECT * from invoicehead ih
JOIN sitemaster sm on sm.SITEIID = ih.SITEIID
JOIN invoiceline il on il.invoiceIID = ih.invoiceIID
WHERE
ih.StartDate BETWEEN '2015-12-01' AND '2016-03-07'
AND SITEIID IN ( LSITEID)
END;
Right now I am testing this within PL/SQL. But essentially I would be passing in the query with the parameter from an MSSQL Linked Server OPENQUERY.
How can I run the above query in PL/SQL with the parameter?
There are plenty of other resources for finding an answer, e.g. here (Tutorialspoint) or specifically here (plsql-tutorial). But perhaps I have missed your point.
So as not to merely cite links, your query could look like this:
DECLARE
LSITEID integer;
BEGIN
LSITEID := 100001;
-- dostuff
END;
Two things to note: First, in a declare part (as I have learnt it) you should avoid assigning values. Second, if you intend to pass in different parameters you could/should use a procedure.
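If you do go the procedure route, a sketch could look like the following; the procedure name is made up, the table and column names are taken from the question, and returning the rows through a SYS_REFCURSOR is just one option:
CREATE OR REPLACE PROCEDURE get_invoices (
p_siteiid IN NUMBER,
p_result OUT SYS_REFCURSOR)
AS
BEGIN
OPEN p_result FOR
SELECT *
FROM invoicehead ih
JOIN sitemaster sm ON sm.SITEIID = ih.SITEIID
JOIN invoiceline il ON il.invoiceIID = ih.invoiceIID
WHERE ih.StartDate BETWEEN DATE '2015-12-01' AND DATE '2016-03-07'
AND ih.SITEIID = p_siteiid;
END;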
In PL/SQL you just use the name of the argument. In the following example, the argument is P_VALUE; note that the select statement says where dummy = p_value.
DECLARE
FUNCTION dummycount (p_value IN DUAL.dummy%TYPE)
RETURN INTEGER
AS
l_ret INTEGER;
BEGIN
SELECT COUNT (*) c
INTO l_ret
FROM DUAL
WHERE dummy = p_value;
RETURN l_ret;
END dummycount;
BEGIN
DBMS_OUTPUT.put_line ('A: ' || dummycount (p_value => 'A'));
DBMS_OUTPUT.put_line ('X: ' || dummycount (p_value => 'X'));
END;
This results in the following output:
A: 0
X: 1

Delphi - pass table valued parameter to SQL Server stored procedure

I need to pass a table-valued parameter to a stored procedure in SQL Server. How can I handle this in Delphi?
As far as I know there is no simple way to pass table parameters using the components shipped with Delphi.
A workaround would be using a temporary table which can be used to fill a typed table variable.
Assuming your definition would look like this:
CREATE TYPE MyTableType AS TABLE
( ID int
, Text varchar(100) )
GO
CREATE PROCEDURE P_Table
@Tab MyTableType READONLY
AS
BEGIN
SET NOCOUNT ON;
Select * from @Tab -- dummy operation just return the dataset
END
GO
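For reference, this is what the pattern looks like in plain T-SQL before wiring it up from Delphi (the values are just examples):
DECLARE @mytab MyTableType;
INSERT INTO @mytab (ID, Text) VALUES (1, 'First'), (2, 'Second');
EXEC P_Table @Tab = @mytab;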
You could call the procedure like this:
var
i: Integer;
begin
// we create a temporary table since a table variable can only be used for a single call
DummyDataset.Connection.Execute('Create Table #mytemp(ID int,Text varchar(100))');
DummyDataset.CommandText := 'Select * from #mytemp';
DummyDataset.Open;
for i := 0 to 10 do
begin
DummyDataset.Append;
DummyDataset.Fields[0].Value := i;
DummyDataset.Fields[1].Value := Format('A Text %d', [i]);
DummyDataset.Post;
end;
MyDataset.CommandText := 'Declare @mytab as MyTableType '
+ 'Insert into @mytab select * from #mytemp ' // copy data to typed table variable
+ 'EXEC P_Table @Tab = @mytab';
MyDataset.Open;
DummyDataset.Connection.Execute('Drop Table #mytemp');
end
The sample downloadable from http://msftdpprodsamples.codeplex.com/wikipage?title=SS2008%21Readme_Table-Valued%20Parameters is written in C++ but could readily be translated to Delphi.
Once you have translated that code to Delphi, you can use something like the following to make the result set accessible via good ole ADO:
SourcesRecordset := CreateADOObject(CLASS_Recordset) as _Recordset;
RSCon := SourcesRecordset as ADORecordsetConstruction;
RSCon.Rowset := rowset;
LDataSet := TADODataSet.Create(nil);
try
// Only doing the first result set
LDataSet.Recordset := SourcesRecordset;
while not LDataSet.Eof do
begin
//... something
LDataSet.Next;
end;
finally
LDataSet.Free;
end;
Note that CreateADOObject is a private function in Data.Win.ADODB.pas but it's pretty trivial.

Oracle PL SQL: Comparing ref cursor results returned by two stored procs

I was given a stored proc which generates an open cursor which is passed as output to a reporting tool. I re-wrote this stored proc to improve performance. What I'd like to do is to show that the two result sets are the same for a given set of input parameters.
Something that is the equivalent of:
select * from CURSOR_NEW
minus
select * from CURSOR_OLD
union all
select * from CURSOR_OLD
minus
select * from CURSOR_NEW
Each cursor returns several dozen columns from a large subset of tables. Each row has an id value, and a long list of other column values for that id. I would want to check:
Both cursors are returning the same set of ids (I already checked this)
Both cursors have the same list of values for each id they have in common
If it was just one or two columns, I could concatenate them and find a hash and then sum it up over the cursor. Or another way might be to create a parent program that inserted the cursor results into a global temp table and compared the results. But since it's several dozen columns I'm trying to find a less brute force approach to doing the comparison.
Also it would be nice if the solution was scalable for other situations that involved different cursors, so it wouldn't have to be manually re-written each time, since this is a situation I'm running into more often.
I figured out a way to do this. It was a lot more complicated than I expected. I ended up using some DBMS_SQL procedures that allow converting REFCURSORs to defined cursors. Oracle has documentation on it here:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/dynamic.htm#LNPLS00001
After that I concatenated the row values into a string and printed the hash. For bigger cursors, I will change concat_col_vals to use a CLOB to prevent it from overflowing.
p_testCursors returns a simple refcursor for example purposes.
declare
cx_1 sys_refcursor;
c NUMBER;
desctab DBMS_SQL.DESC_TAB;
colcnt NUMBER;
stringvar VARCHAR2(4000);
numvar NUMBER;
datevar DATE;
concat_col_vals varchar2(4000);
col_hash number;
h raw(32767);
n number;
BEGIN
p_testCursors(cx_1);
c := DBMS_SQL.TO_CURSOR_NUMBER(cx_1);
DBMS_SQL.DESCRIBE_COLUMNS(c, colcnt, desctab);
-- Define columns:
FOR i IN 1 .. colcnt LOOP
IF desctab(i).col_type = 2 THEN
DBMS_SQL.DEFINE_COLUMN(c, i, numvar);
ELSIF desctab(i).col_type = 12 THEN
DBMS_SQL.DEFINE_COLUMN(c, i, datevar);
-- statements
ELSE
DBMS_SQL.DEFINE_COLUMN(c, i, stringvar, 4000);
END IF;
END LOOP;
-- Fetch rows with DBMS_SQL package:
WHILE DBMS_SQL.FETCH_ROWS(c) > 0 LOOP
concat_col_vals := '~';
FOR i IN 1 .. colcnt LOOP
IF (desctab(i).col_type = 1) THEN
DBMS_SQL.COLUMN_VALUE(c, i, stringvar);
--Dbms_Output.Put_Line(stringvar);
concat_col_vals := concat_col_vals || '~' || stringvar;
ELSIF (desctab(i).col_type = 2) THEN
DBMS_SQL.COLUMN_VALUE(c, i, numvar);
--Dbms_Output.Put_Line(numvar);
concat_col_vals := concat_col_vals || '~' || to_char(numvar);
ELSIF (desctab(i).col_type = 12) THEN
DBMS_SQL.COLUMN_VALUE(c, i, datevar);
--Dbms_Output.Put_Line(datevar);
concat_col_vals := concat_col_vals || '~' || to_char(datevar);
-- statements
END IF;
END LOOP;
DBMS_OUTPUT.PUT_LINE(concat_col_vals);
col_hash := DBMS_UTILITY.GET_SQL_HASH(concat_col_vals, h, n);
DBMS_OUTPUT.PUT_LINE('Return Value: ' || TO_CHAR(col_hash));
DBMS_OUTPUT.PUT_LINE('Hash: ' || h);
END LOOP;
DBMS_SQL.CLOSE_CURSOR(c);
END;
/
This is not an easy task in Oracle.
You can find very good articles on the dba-oracle web site:
Sql patterns symmetric diff
and Convert set to join sql parameter
If you need this often, you can:
add a "hash column" and always fill it on insert using a trigger, or
for each table in the cursor output get a unique value (create a unique index) and compare only this column with an anti-join
and you can find other possibilities in those articles.
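As a sketch of the anti-join idea, assuming both cursor results have been materialized into (made-up) temp tables with an ID column:
SELECT o.id FROM cursor_old_results o
WHERE NOT EXISTS (SELECT 1 FROM cursor_new_results n WHERE n.id = o.id)
UNION ALL
SELECT n.id FROM cursor_new_results n
WHERE NOT EXISTS (SELECT 1 FROM cursor_old_results o WHERE o.id = n.id);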

Define a function in an SQL Navigator query

I would like to extract a function from a piece of SQL-code which is used multiple times in one query. I'm looking for a functionality which is similar to the following (invented by me) syntax:
with f(x) as (return x+1)
select f(thing1), f(thing2), f(thing3) from things
thing1, thing2, thing3 are integer columns in the table "things" in the example. Also, imagine that f is more complicated than an add-one function.
How do I define a function inside a query?
Declaring a function in the WITH clause of a query is not possible, but according to the information presented at OOW it will be possible in version 12c. So for now you need to create the function as a schema object, whether as a stand-alone function or as part of a package. For example:
create or replace function F(p_p in number)
return number
is
begin
return p_p + 1;
end;
And then call it in a query, ensuring that the data type of a column you are passing in to the function as a parameter is of the same data type as the parameter of the function:
select f(col1)
, f(col2)
, ...
, f(coln)
from your_table
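For what it's worth, the 12c syntax ended up looking very close to what the question invented; it requires Oracle 12.1 or later, and in SQL*Plus the statement is terminated with a slash:
WITH
FUNCTION f (x NUMBER) RETURN NUMBER IS
BEGIN
RETURN x + 1;
END;
SELECT f(thing1), f(thing2), f(thing3)
FROM things
/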
If you are trying to build a function for a dynamic table and its columns, I would write some code like this. (And no, you cannot declare a function in the WITH clause of a query.)
SELECT f ( tablename,columnname1 ),
f ( tablename,columnname2 ),
........
FROM tablename;
Create or replace function f (tableName varchar2, ColumnName varchar2)
Return integer
Is
varTableName varchar2(200);
varColumnName varchar2(200);
varValue integer;
t_cid INTEGER;
t_command VARCHAR2(200);
t_rows INTEGER;
Begin
--Get tableName
varTableName := tableName;
--Get columnName
varColumnName := ColumnName;
t_command := 'SELECT ' || varColumnName || ' FROM ' || varTableName;
--Execute the dynamic SQL statement
t_cid := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(t_cid, t_command, DBMS_SQL.NATIVE);
DBMS_SQL.DEFINE_COLUMN(t_cid, 1, varValue);
t_rows := DBMS_SQL.EXECUTE(t_cid);
--Fetch the first row value into varValue
IF DBMS_SQL.FETCH_ROWS(t_cid) > 0 THEN
DBMS_SQL.COLUMN_VALUE(t_cid, 1, varValue);
END IF;
DBMS_SQL.CLOSE_CURSOR(t_cid);
--then do your x+1 magic here
varValue := varValue + 1;
--then return your value
RETURN varValue;
End;

DROP FUNCTION without knowing the number/type of parameters?

I keep all my functions in a text file with 'CREATE OR REPLACE FUNCTION somefunction'.
So if I add or change some function I just feed the file to psql.
Now if I add or remove parameters of an existing function, it creates an overload with the same name, and to delete the original I need to type in all the parameter types in the exact order, which is kind of tedious.
Is there some kind of wildcard I can use to DROP all functions with a given name so I can just add DROP FUNCTION lines to the top of my file?
Basic query
This query creates all necessary DDL statements:
SELECT 'DROP FUNCTION ' || oid::regprocedure
FROM pg_proc
WHERE proname = 'my_function_name' -- name without schema-qualification
AND pg_function_is_visible(oid); -- restrict to current search_path
Output:
DROP FUNCTION my_function_name(string text, form text, maxlen integer);
DROP FUNCTION my_function_name(string text, form text);
DROP FUNCTION my_function_name(string text);
Execute the commands after checking plausibility.
Pass the function name case-sensitive and with no added double-quotes to match against pg_proc.proname.
The cast to the object identifier type regprocedure (oid::regprocedure), and then to text implicitly, produces function names with argument types, automatically double-quoted and schema-qualified according to the current search_path where needed. No SQL injection possible.
pg_function_is_visible(oid) restricts the selection to functions in the current search_path ("visible"). You may or may not want this.
If you have multiple functions of the same name in multiple schemas, or overloaded functions with various function arguments, all of those will be listed separately. You may want to restrict to specific schema(s) or specific function parameter(s).
Related:
When / how are default value expression functions bound with regard to search_path?
Function
You can build a plpgsql function around this to execute the statements immediately with EXECUTE. For Postgres 9.1 or later:
Careful! It drops your functions!
CREATE OR REPLACE FUNCTION f_delfunc(_name text, OUT functions_dropped int)
LANGUAGE plpgsql AS
$func$
-- drop all functions with given _name in the current search_path, regardless of function parameters
DECLARE
_sql text;
BEGIN
SELECT count(*)::int
, 'DROP FUNCTION ' || string_agg(oid::regprocedure::text, '; DROP FUNCTION ')
FROM pg_catalog.pg_proc
WHERE proname = _name
AND pg_function_is_visible(oid) -- restrict to current search_path
INTO functions_dropped, _sql; -- count only returned if subsequent DROPs succeed
IF functions_dropped > 0 THEN -- only if function(s) found
EXECUTE _sql;
END IF;
END
$func$;
Call:
SELECT f_delfunc('my_function_name');
The function returns the number of functions found and dropped if no exceptions are raised. 0 if none were found.
Further reading:
How does the search_path influence identifier resolution and the "current schema"
Truncating all tables in a Postgres database
PostgreSQL parameterized Order By / Limit in table function
For Postgres versions older than 9.1 or older variants of the function using regproc and pg_get_function_identity_arguments(oid) check the edit history of this answer.
You would need to write a function that took the function name, and looked up each overload with its parameter types from information_schema, then built and executed a DROP for each one.
EDIT: This turned out to be a lot harder than I thought. It looks like information_schema doesn't keep the necessary parameter information in its routines catalog. So you need to use PostgreSQL's supplementary tables pg_proc and pg_type:
CREATE OR REPLACE FUNCTION udf_dropfunction(functionname text)
RETURNS text AS
$BODY$
DECLARE
funcrow RECORD;
numfunctions smallint := 0;
numparameters int;
i int;
paramtext text;
BEGIN
FOR funcrow IN SELECT proargtypes FROM pg_proc WHERE proname = functionname LOOP
--for some reason array_upper is off by one for the oidvector type, hence the +1
numparameters = array_upper(funcrow.proargtypes, 1) + 1;
i = 0;
paramtext = '';
LOOP
IF i < numparameters THEN
IF i > 0 THEN
paramtext = paramtext || ', ';
END IF;
paramtext = paramtext || (SELECT typname FROM pg_type WHERE oid = funcrow.proargtypes[i]);
i = i + 1;
ELSE
EXIT;
END IF;
END LOOP;
EXECUTE 'DROP FUNCTION ' || functionname || '(' || paramtext || ');';
numfunctions = numfunctions + 1;
END LOOP;
RETURN 'Dropped ' || numfunctions || ' functions';
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
I successfully tested this on an overloaded function. It was thrown together pretty fast, but works fine as a utility function. I would recommend testing more before using it in practice, in case I overlooked something.
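A call is then simply:
SELECT udf_dropfunction('my_function_name');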
Improving the original answer in order to take the schema into account, i.e. schema.my_function_name:
select
format('DROP FUNCTION %s(%s);',
p.oid::regproc, pg_get_function_identity_arguments(p.oid))
FROM pg_catalog.pg_proc p
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE
p.oid::regproc::text = 'schema.my_function_name';
Slightly enhanced version of Erwin's answer. It additionally supports the following:
'like' instead of an exact function name match
it can run in 'dry-run' mode and 'trace' the SQL for removing the functions
Code for copy/paste:
/**
* Removes all functions matching given function name mask
*
* @param p_name_mask Mask in SQL 'like' syntax
* @param p_opts Combination of comma|space separated options:
* trace - output SQL to be executed as 'NOTICE'
* dryrun - do not execute generated SQL
* @returns Generated SQL 'drop functions' string
*/
CREATE OR REPLACE FUNCTION mypg_drop_functions(IN p_name_mask text,
IN p_opts text = '')
RETURNS text LANGUAGE plpgsql AS $$
DECLARE
v_trace boolean;
v_dryrun boolean;
v_opts text[];
v_sql text;
BEGIN
if p_opts is null then
v_trace = false;
v_dryrun = false;
else
v_opts = regexp_split_to_array(p_opts, E'(\\s*,\\s*)|(\\s+)');
v_trace = ('trace' = any(v_opts));
v_dryrun = ('dry' = any(v_opts)) or ('dryrun' = any(v_opts));
end if;
select string_agg(format('DROP FUNCTION %s(%s);',
oid::regproc, pg_get_function_identity_arguments(oid)), E'\n')
from pg_proc
where proname like p_name_mask
into v_sql;
if v_sql is not null then
if v_trace then
raise notice E'\n%', v_sql;
end if;
if not v_dryrun then
execute v_sql;
end if;
end if;
return v_sql;
END $$;
select mypg_drop_functions('fn_dosomething_%', 'trace dryrun');
Here is the query I built on top of @Сухой27's solution; it generates SQL statements for dropping all the stored functions in a schema:
WITH f AS (SELECT specific_schema || '.' || ROUTINE_NAME AS func_name
FROM information_schema.routines
WHERE routine_type='FUNCTION' AND specific_schema='a3i')
SELECT
format('DROP FUNCTION %s(%s);',
p.oid::regproc, pg_get_function_identity_arguments(p.oid))
FROM pg_catalog.pg_proc p
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE
p.oid::regproc::text IN (SELECT func_name FROM f);
As of Postgres 10 you can drop functions by name only, as long as the names are unique to their schema. Just place the following declaration at the top of your function file:
drop function if exists my_func;
Documentation here.
Postgres generates an error if more than one function with the same name but different arguments exists when you try to drop the function by name alone. Thus, if you want to delete a single function without affecting the others, you can simply use the following query.
SELECT 'DROP FUNCTION ' || oid::regprocedure
FROM pg_proc
WHERE oid = {$proc_oid}