I have a giant query that uses some text that looks like variables, but I have no idea what they are and I can't figure it out. They aren't global or defined anywhere else in the Oracle package. In particular, the identifier (or whatever it is) called "has_value" below is especially confusing, because it's used in multiple cases for queries across a LOT of tables.
PROCEDURE assemble_default_where(
  i_search_id    IN  search_table.search_id%TYPE,
  o_where_clause OUT VARCHAR2,
  o_from_clause  OUT VARCHAR2,
  o_error_number OUT error_message.error_number%TYPE) IS

  l_base VARCHAR2(30) := 'd';

  CURSOR c_where_clause IS
    SELECT DECODE(sl.parameter_name,
      'track Location', join_operator || ' ' || open_brackets || ' ' || not_operator || ' EXISTS(SELECT 1 FROM track_locations loc WHERE ' || l_base
        || '.plan_number = loc.plan_number AND ' || l_base || '.order_id = loc.order_id AND loc.t_id = NVL('''
        || track_location_rsect_id(has_value) || ''', loc.t_id) AND loc.tstatus = NVL(''' || track_tstatus_id(has_value)
    FROM search_lines sl
    WHERE search_id = i_search_id
    ORDER BY line_no;
I have left out a bit of the query because it's not relevant to my question.
I want to know where join_operator, has_value, and open_brackets come from, and what they are.
There are several options:
Variable - defined in outer block.
Variable - defined in package body.
Variable - defined in package specification.
Column - the same column names could be in many tables.
Function - in the invoker's or definer's schema.
Library - in the invoker's or definer's schema.
Operator - in the invoker's or definer's schema.
Synonym - to a function, operator, or library.
In practice, you would probably have already noticed if it was #1, #2, #3, or #4, and #6 and #7 are very rare. My guess is that it's a function, or a synonym pointing to a function.
To rule out variables, search through the code either in your IDE or with this SQL:
select * from dba_source where lower(text) like '%join_operator%';
To rule out objects, search through all the objects with this SQL:
select * from dba_objects where object_name = 'JOIN_OPERATOR';
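If my guess is right and it turns out to be a synonym, DBA_SYNONYMS shows what it points to; a sketch (adjust the name for the other identifiers):

```sql
-- if JOIN_OPERATOR is a synonym, this reveals the underlying object
select owner, table_owner, table_name, db_link
from dba_synonyms
where synonym_name = 'JOIN_OPERATOR';
```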
UPDATE
PL/Scope and DBA_DEPENDENCIES can also help identify objects.
Sample schema
alter session set plscope_settings = 'IDENTIFIERS:ALL';
create table search_lines(
search_id number, line_no number, parameter_name varchar2(100));
create or replace function join_operator_function return varchar2 is
begin
return 'asdf';
end;
/
create synonym join_operator for join_operator_function;
create or replace PROCEDURE assemble_default_where
is
CURSOR c_where_clause IS
SELECT DECODE(sl.parameter_name, 'track Location', join_operator)
FROM search_lines sl
ORDER BY line_no;
begin
null;
end;
/
PL/SCOPE example
WITH v AS (
SELECT Line,
Col,
INITCAP(NAME) Name,
LOWER(TYPE) Type,
LOWER(USAGE) Usage,
USAGE_ID,
USAGE_CONTEXT_ID
FROM USER_IDENTIFIERS
WHERE Object_Name = 'ASSEMBLE_DEFAULT_WHERE'
AND Object_Type = 'PROCEDURE'
)
SELECT RPAD(LPAD(' ', 2*(Level-1)) ||
Name, 30, '.')||' '||
RPAD(Type, 30)||
RPAD(Usage, 30)
IDENTIFIER_USAGE_CONTEXTS
FROM v
START WITH USAGE_CONTEXT_ID = 0
CONNECT BY PRIOR USAGE_ID = USAGE_CONTEXT_ID
ORDER SIBLINGS BY Line, Col
/
Assemble_Default_Where........ procedure declaration
Assemble_Default_Where...... procedure definition
C_Where_Clause............ cursor declaration
Join_Operator_Function.. function call
DBA_DEPENDENCIES example
select referenced_owner, referenced_name, referenced_type
from dba_dependencies
where owner = user
and name = 'ASSEMBLE_DEFAULT_WHERE';
REFERENCED_OWNER REFERENCED_NAME REFERENCED_TYPE
---------------- --------------- ---------------
SYS STANDARD PACKAGE
SYS SYS_STUB_FOR_PURITY_ANALYSIS PACKAGE
JHELLER_DBA JOIN_OPERATOR SYNONYM
JHELLER_DBA JOIN_OPERATOR_FUNCTION FUNCTION
JHELLER_DBA SEARCH_LINES TABLE
Related
I'd like to find all columns in my Oracle database schema that contain only numeric data but have a non-numeric type. (So basically, column candidates with a probably wrongly chosen data type.)
I have a query for all VARCHAR2 columns:
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM user_tab_cols
WHERE DATA_TYPE = 'VARCHAR2';
Furthermore I have a query to check for any non-numeric data inside a table myTable and a column myColumn:
SELECT 1
FROM myTable
WHERE NOT REGEXP_LIKE(myColumn, '^[[:digit:]]+$');
I'd like to combine both queries so that the first query returns only the rows for which the second query finds nothing.
The main problem is that the first query operates on the meta layer of the data dictionary, where TABLE_NAME and COLUMN_NAME come back as data, while I need that data as identifiers (not as data) in the second query.
In pseudo-SQL, I have something like this in mind:
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM user_tab_cols
WHERE DATA_TYPE = 'VARCHAR2'
AND NOT EXISTS
(SELECT 1 from asIdentifier(TABLE_NAME)
WHERE NOT REGEXP_LIKE(asIdentifier(COLUMN_NAME), '^[[:digit:]]+$'));
Create a function like this:
create or replace function isNumeric(val in VARCHAR2) return INTEGER AS
res NUMBER;
begin
res := TO_NUMBER(val);
RETURN 1;
EXCEPTION
WHEN OTHERS THEN
RETURN 0;
END;
Then you can use it like this:
DECLARE
r integer;
BEGIN
For aCol in (SELECT TABLE_NAME, COLUMN_NAME FROM user_tab_cols WHERE DATA_TYPE = 'VARCHAR2') LOOP
-- What about CHAR and CLOB data types?
execute immediate 'select count(*) from '||aCol.TABLE_NAME||' WHERE isNumeric('||aCol.COLUMN_NAME||') = 0' into r;
if r = 0 then
DBMS_OUTPUT.put_line(aCol.TABLE_NAME ||' '||aCol.COLUMN_NAME ||' contains numeric values only');
end if;
end loop;
end;
Note: the performance of this PL/SQL block will be poor. Hopefully this is a one-time job.
There are two possible approaches: dynamic SQL (DSQL) and XML.
The first one was already demonstrated in another reply, and it's faster.
XML approach just for fun
create or replace function to_number_udf(p in varchar2) return number
deterministic is
pragma udf;
begin
return p * 0;
exception when invalid_number or value_error then return 1;
end to_number_udf;
/
create table t_chk(str1, str2) as
select '1', '2' from dual union all
select '0001.1000', 'helloworld' from dual;
SQL> column owner format a20
SQL> column table_name format a20
SQL> column column_name format a20
SQL> with tabs_to_check as
2 (
3 select 'collection("oradb:/'||owner||'/'||table_name||'")/ROW/'||column_name||'/text()' x,
4 atc.*
5 from all_tab_columns atc
6 where table_name = 'T_CHK'
7 and data_type = 'VARCHAR2'
8 and owner = user
9 )
10 select --+ no_query_transformation
11 owner, table_name, column_name
12 from tabs_to_check ttc, xmltable(x columns "." varchar2(4000)) x
13 group by owner, table_name, column_name
14 having max(to_number_udf(".")) = 0;
OWNER TABLE_NAME COLUMN_NAME
-------------------- -------------------- --------------------
TEST T_CHK STR1
PS: on Oracle 12.2 you can use to_number(... default ... on conversion error) instead of a UDF.
The faster way to check if a string is all digits vs. contains at least one non-digit character is to use the translate function. Alas, due to the non-SQL Standard way Oracle handles empty strings, the form of the function we must use is a little complicated:
translate(input_string, 'z0123456789', 'z')
(z can be any non-digit character; we need it so that the third argument is not null.) This works by translating z to itself and each digit 0 through 9 to nothing. So if the input string was null or all digits, and ONLY in that case, the value returned by the function is null.
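A quick sanity check of the trick against DUAL (illustrative only):

```sql
-- a digits-only input collapses to NULL; any non-digit character survives
select translate('12345', 'z0123456789', 'z') as all_digits,   -- NULL
       translate('12x45', 'z0123456789', 'z') as has_nondigit  -- 'x'
from dual;
```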
In addition: to make the process faster, you can test each column with an EXISTS condition. If a column is not meant to be numeric, then in most cases the EXISTS condition will become true very quickly, so you will have to inspect a very small number of values from such columns.
As I tried to make this work, I ran into numerous side issues. Presumably you want to look in all schemas (except SYS and perhaps SYSTEM), so you need to run the procedure (anonymous block) from an account with SYSDBA privileges. Then I ran into issues with non-standard table and column names (names starting with an underscore and such), which brought to mind identifiers defined in double quotes - a terrible practice.
For illustration, I will use the HR schema - on which the approach worked. You may need to tweak this further; I wasn't able to make it work by changing the line
and owner = 'HR'
to
and owner != 'SYS'
So - with this long intro - here is what I did.
First, in a "normal" user account (my own, named INTRO - I run a very small database, with only one "normal" user plus the Oracle "standard" users like SCOTT, HR, etc.), I created a table to receive the owner name, table name, and column name for all columns that have data type VARCHAR2 and contain only "numeric" values or null (numeric defined the way you did). NOTE: if you then want to really check for all numeric values, you will indeed need a regular expression, or something like what Wernfried has shown; even then, I would use an EXISTS condition rather than a COUNT in the anonymous procedure.
Then I created an anonymous block to find the needed columns. NOTE: You will not have a schema INTRO - so change it everywhere in my code (both in creating the table and in the anonymous block). If the procedure completes successfully, you should be able to query the table. I show that at the end too.
While logged in as SYS (or another user with SYSDBA powers):
create table intro.cols_with_numbers (
owner_name varchar2(128),
table_name varchar2(128),
column_name varchar2(128)
);
declare x number;
begin
execute immediate 'truncate table intro.cols_with_numbers';
for t in ( select owner, table_name, column_name
from dba_tab_columns
where data_type like 'VARCHAR2%'
and owner = 'HR'
)
loop
execute immediate 'select case when exists (
select *
from ' || t.owner || '.' || t.table_name ||
' where translate(' || t.column_name || ',
''z0123456789'', ''z'') is not null
) then 1 end
from dual'
into x;
if x is null then
insert into intro.cols_with_numbers (owner_name, table_name, column_name)
values(t.owner, t.table_name, t.column_name);
end if;
end loop;
end;
/
Run this procedure and then query the table:
select * from intro.cols_with_numbers;
no rows selected
(which means there were no numeric columns in tables in the HR schema, in the wrong data type VARCHAR2 - or at least, no such columns that had only non-negative integer values.) You can test further, by intentionally creating a table with such a column and testing to see it is "caught" by the procedure.
ADDED - Here is what happens when I change the owner from 'HR' to 'SCOTT':
PL/SQL procedure successfully completed.
OWNER_NAME TABLE_NAME COLUMN_NAME
-------------------- -------------------- --------------------
SCOTT BONUS JOB
SCOTT BONUS ENAME
so it seems to work fine (although on other schemas I sometimes run into an error... I'll see if I can figure out what that is).
In this case the table is empty (no rows!) - this is one example of a "false positive" you may find. (More generally, you will get a false positive if everything in a VARCHAR2 column is null - in all rows of the table.)
NOTE also that a column may have only numeric values and still the best data type would be VARCHAR2. This is the case when the values are simply identifiers and are not meant as "numbers" (which we can compare to each other or to fixed values, and/or with which we can do arithmetic). Example - a SSN (Social Security Number) or the equivalent in other countries; the SSN is each person's "official" identifier for doing business with the government. The SSN is numeric (actually, perhaps to accentuate the fact it is NOT supposed to be a "number" despite the name, it is often written with a couple of dashes...)
In a legacy database infrastructure, how do I best find views that access a certain table or column? I'm currently refactoring certain tables (i.e. delete unused columns) and I want to find all views that still rely on those columns and will break, if I remove the columns.
Is there any tool/feature to search through all view definitions in Oracle SQL Developer?
You can use something like the function dependent_views below. Example usage:
select dependent_views('CUSTOMER_NAME', 'CUSTOMERS') list from dual
Output:
LIST
-----------------
SCOTT.V_PERSONS
The function searches for dependent views in ALL_DEPENDENCIES, then searches the TEXT column of ALL_VIEWS for an occurrence of column_name.
Note: because all_dependencies may not contain full data about dependent objects (for instance, when a view was created via execute immediate), my function may not find such objects.
Also, if column_name is a substring of another column name, the function may return too many views.
create or replace function dependent_views
(i_column varchar2, i_table varchar2, i_owner varchar2 default USER)
return varchar2 is
o_ret varchar2(4000) := '';
v_text long := '';
begin
for o in (
select * from all_dependencies
where referenced_name = upper(i_table)
and referenced_owner = upper(i_owner)
and type = 'VIEW')
loop
begin
select text into v_text from all_views
where view_name = o.name and owner = o.owner;
exception when no_data_found then
null;
end;
if upper(v_text) like '%'||upper(i_column)||'%' then
o_ret := o_ret||o.owner||'.'||o.name||' ';
end if;
end loop;
return o_ret;
end dependent_views;
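If substring collisions are a concern (say, CUSTOMER_NAME matching inside CUSTOMER_NAME_2), the LIKE test in the loop could be replaced with a word-boundary regular expression; an untested sketch:

```sql
-- sketch: match i_column only as a whole identifier, not as a substring
if regexp_like(upper(v_text),
               '(^|\W)' || upper(i_column) || '($|\W)') then
  o_ret := o_ret || o.owner || '.' || o.name || ' ';
end if;
```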
how do I best find views that access a certain table
You could query the [USER|ALL|DBA]_DEPENDENCIES view.
SELECT name ,
type ,
referenced_name ,
referenced_type
FROM user_dependencies
WHERE TYPE = 'VIEW'
AND NAME = '<VIEW_NAME>'
AND referenced_name = '<TABLE_NAME>';
To get the result for all the views at once, remove the filter NAME = '<VIEW_NAME>'.
For example,
SQL> column name format a15
SQL> column type format a15
SQL> column referenced_name format a15
SQL> column referenced_type format a15
SQL> SELECT name ,
2 type ,
3 referenced_name ,
4 referenced_type
5 FROM user_dependencies
6 WHERE TYPE = 'VIEW';
NAME TYPE REFERENCED_NAME REFERENCED_TYPE
--------------- --------------- --------------- ---------------
EMP_CUSTOM_VIEW VIEW EMP TABLE
EMP_VIEW VIEW EMP TABLE
SQL>
Since you're searching for all views that access a certain table, this might help:
select
name,
type,
referenced_name,
referenced_type
from user_dependencies
where type = 'VIEW'
and referenced_type = 'TABLE'
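To narrow this down to one specific table, a referenced_name filter can be added; a sketch using the EMP table from the example above:

```sql
select name, type
from user_dependencies
where type = 'VIEW'
  and referenced_type = 'TABLE'
  and referenced_name = 'EMP';  -- the table you plan to change
```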
I have the same logic to check whether a given table exists in my database. On one hand, I have put my logic inside a function; on the other, I have my logic inside an anonymous block. When I invoke the function, I get one answer, and when I run the anonymous block, I get another.
Here is my function:
CREATE OR REPLACE FUNCTION table_exists(in_table_name IN varchar2) RETURN int
is
var_regex varchar2(30) := '(\w*)([^\.])';
var_exists number(1);
BEGIN
SELECT CASE WHEN EXISTS( SELECT 1 FROM all_tab_cols WHERE TABLE_NAME = UPPER( REGEXP_SUBSTR ( in_table_name, var_regex , 1, 2)))
THEN 1 ELSE 0 END INTO var_exists FROM dual;
DBMS_OUTPUT.PUT_LINE('var_exists is: ' || var_exists);
IF (var_exists = 1)
THEN
RETURN var_exists;
ELSE
RETURN var_exists;
END IF;
END;
This is how I invoke this function:
select table_exists('test_schama.test_table') as table_exists from DUAL;
The function returns zero, which means the table doesn't exist. This is not true, because 'test_schama.test_table' does exist.
My anonymous block is as follow:
DECLARE
in_table_name varchar2(100) := 'test_schama.test_table';
var_regex varchar2(30) := '(\w*)([^\.])';
var_exists int;
BEGIN
SELECT CASE WHEN EXISTS( SELECT 1 FROM all_tab_cols WHERE TABLE_NAME = UPPER( REGEXP_SUBSTR ( in_table_name, var_regex , 1, 2)))
THEN 1 ELSE 0 END INTO var_exists FROM dual;
DBMS_OUTPUT.PUT_LINE('var_exists is: ' || var_exists);
END;
var_exists here has the value 1, which means the table exists. That is true.
What I don't understand is why I am getting two different answers from the same exact query.
You're querying all_tab_cols. That shows you all the tables that you have access to, which is not necessarily all the tables in the database. Inside a definer's rights stored function (such as your table_exists), that restricts you to tables the owner of the function has been granted access to directly, not via a role. In an anonymous block, all_tab_cols will show data for any table that you have access to via a role that is enabled in the current session. Since the anonymous block works and the stored function does not, I would assume that the owner of the function does not have access granted on the table directly, but that your current session has an enabled role with that access.
Most likely, you need to either
Use an invoker's rights stored function (in which case the rows in all_tab_cols would be based on the current session's privileges),
Use the dba_tab_cols data dictionary table instead, or
Grant the owner of the stored function at least select access on the table directly not via a role.
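For option 1, the change is a one-line AUTHID clause in the function header; a sketch (the body stays exactly as in the question):

```sql
CREATE OR REPLACE FUNCTION table_exists(in_table_name IN varchar2)
RETURN int
AUTHID CURRENT_USER  -- invoker's rights: all_tab_cols now reflects the
                     -- caller's privileges, including enabled roles
is
  -- ... body unchanged from the question ...
```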
Can I get the data type of each column I selected, instead of the values, using a select statement?
FOR EXAMPLE:
SELECT a.name, a.surname, b.ordernum
FROM customer a
JOIN orders b
ON a.id = b.id
and result should be like this
name | NVARCHAR(100)
surname | NVARCHAR(100)
ordernum| INTEGER
or it can be in row like this, it isn't important:
name | surname | ordernum
NVARCHAR(100) | NVARCHAR(100) | INTEGER
Thanks
I found a not-very-intuitive way to do this by using DUMP()
SELECT DUMP(A.NAME),
DUMP(A.surname),
DUMP(B.ordernum)
FROM customer A
JOIN orders B
ON A.id = B.id
It will return something like:
'Typ=1 Len=2: 0,48' for each column.
Typ=1 means VARCHAR2/NVARCHAR2
Typ=2 means NUMBER/FLOAT
Typ=12 means DATE, etc.
You can refer to the Oracle documentation for the full list (Datatype Code),
or to this for a simple mapping (Oracle Type Code Mappings).
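If you only want the numeric type code rather than the full DUMP output, it can be extracted with REGEXP_SUBSTR; a sketch against the tables from the question:

```sql
-- pull just the 'Typ=' code out of DUMP's return value
SELECT REGEXP_SUBSTR(DUMP(a.name), 'Typ=(\d+)', 1, 1, NULL, 1) AS name_type_code
FROM customer a;
```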
You can query the all_tab_columns view in the database.
SELECT table_name, column_name, data_type, data_length FROM all_tab_columns where table_name = 'CUSTOMER'
I usually create a view and use the DESC command:
CREATE VIEW tmp_view AS
SELECT
a.name
, a.surname
, b.ordernum
FROM customer a
JOIN orders b
ON a.id = b.id
Then, the DESC command will show the type of each field.
DESC tmp_view
I ran into the same situation. As a workaround, I just created a view (if you have privileges), described it, and dropped it later. :)
If you don't have privileges to create a view in Oracle, a "hack" around it is to use MS Access :-(
In MS Access, create a pass-through query with your SQL (but add a where clause to select just one record), create a select query from the pass-through query (very important), selecting all columns (*), then create a make-table query from the select query. When this runs, it will create a table with one record whose data types should "match" Oracle's.
i.e. Passthrough --> Select --> MakeTable --> Table
I am sure there are other better ways, but if you have limited tools and privileges this will work.
Also, if you have Toad for Oracle, you can highlight the statement and press CTRL + F9 and you'll get a nice view of column and their datatypes.
Field data type information is available from client code in ODP.NET, and I expect other Oracle libraries support schema information too. It is straightforward in a C# script: you just need a connection string and the SELECT statement, and you ask for the schema data only. In my testing so far, this solution requires no extra privileges (no CREATE VIEW rights needed), and it resolves types on select expressions. The con is that it adds extra round trips to the database.
The example uses C# 10; you may need to downgrade the syntax. Constants.ContainerConnectionString is a connection string, and Constants.DvcrSyntax can be any SELECT statement.
using System.Data;
using Oracle.ManagedDataAccess.Client;
using OracleSchemaSample;
var connectionBuilder =
new OracleConnectionStringBuilder(Constants.ContainerConnectionString)
{
ConnectionTimeout = 30,
Enlist = "false",
PersistSecurityInfo = true
};
await using var connection = new OracleConnection(connectionBuilder.ConnectionString);
await using var command = new OracleCommand(Constants.DvcrSyntax, connection);
var cts = new CancellationTokenSource();
try
{
await connection.OpenAsync(cts.Token);
connection.ModuleName = "MyUniqueApplicationName";
connection.ClientId = Guid.NewGuid().ToString(); // Tracing identity
await using var schemaReader = await command.ExecuteReaderAsync(CommandBehavior.SchemaOnly, cts.Token);
var columnSchema = await schemaReader.GetColumnSchemaAsync(cts.Token);
foreach (var column in columnSchema)
Console.WriteLine(
$"{column.ColumnOrdinal}\t{column.ColumnName}\t{column.DataType}\t{column.DataTypeName}\t{column.ColumnSize}");
}
catch (Exception exception)
{
Console.WriteLine(exception);
throw;
}
You can use DBMS_SQL.DESCRIBE_COLUMNS2:
SET SERVEROUTPUT ON;
DECLARE
STMT CLOB;
CUR NUMBER;
COLCNT NUMBER;
IDX NUMBER;
COLDESC DBMS_SQL.DESC_TAB2;
BEGIN
CUR := DBMS_SQL.OPEN_CURSOR;
STMT := 'SELECT object_name , to_char(object_id), created FROM DBA_OBJECTS where rownum<10';
SYS.DBMS_SQL.PARSE(CUR, STMT, DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS2(CUR, COLCNT, COLDESC);
DBMS_OUTPUT.PUT_LINE('Statement: ' || STMT);
FOR IDX IN 1 .. COLCNT
LOOP
CASE COLDESC(IDX).col_type
WHEN 2 THEN
DBMS_OUTPUT.PUT_LINE('#' || TO_CHAR(IDX) || ': NUMBER');
WHEN 12 THEN
DBMS_OUTPUT.PUT_LINE('#' || TO_CHAR(IDX) || ': DATE');
WHEN 180 THEN
DBMS_OUTPUT.PUT_LINE('#' || TO_CHAR(IDX) || ': TIMESTAMP');
WHEN 1 THEN
DBMS_OUTPUT.PUT_LINE('#' || TO_CHAR(IDX) || ': VARCHAR2'||':'|| COLDESC(IDX).col_max_len);
WHEN 9 THEN
DBMS_OUTPUT.PUT_LINE('#' || TO_CHAR(IDX) || ': VARCHAR');
-- Insert more cases if you need them
ELSE
DBMS_OUTPUT.PUT_LINE('#' || TO_CHAR(IDX) || ': OTHERS (' || TO_CHAR(COLDESC(IDX).col_type) || ')');
END CASE;
END LOOP;
SYS.DBMS_SQL.CLOSE_CURSOR(CUR);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(SQLERRM(SQLCODE()) || ': ' || DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
SYS.DBMS_SQL.CLOSE_CURSOR(CUR);
END;
/
A full example can be found at the URL below:
https://www.ibm.com/support/knowledgecenter/sk/SSEPGG_9.7.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0055146.html
Is it possible to do the following in Postgres:
SELECT column_name FROM information_schema WHERE table_name = 'somereport' AND data_type = 'integer';
SELECT SUM(column_name[0]), SUM(column_name[1]), SUM(column_name[3]) FROM somereport;
In other words I need to select a group of columns from a table depending on certain criteria, and then sum each of those columns in the table.
I know I can do this in a loop, so I can count each column independently, but obviously that requires a query for each column returned from the information schema query. Eg:
FOR r IN select column_name from information_schema where report_view_name = 'somereport' and data_type = 'integer';
LOOP
SELECT SUM(r.column_name) FROM somereport;
END
This query creates the complete DML statement you are after:
WITH x AS (
SELECT 'public'::text AS _schema -- provide schema name ..
,'somereport'::text AS _tbl -- .. and table name once
)
SELECT 'SELECT ' || string_agg('sum(' || quote_ident(column_name)
|| ') AS sum_' || quote_ident(column_name), ', ')
|| E'\nFROM ' || quote_ident(x._schema) || '.' || quote_ident(x._tbl)
FROM x, information_schema.columns
WHERE table_schema = _schema
AND table_name = _tbl
AND data_type = 'integer'
GROUP BY x._schema, x._tbl;
You can execute it separately or wrap this query in a plpgsql function and run the query automatically with EXECUTE:
Full automation
Tested with PostgreSQL 9.1.4
CREATE OR REPLACE FUNCTION f_get_sums(_schema text, _tbl text)
RETURNS TABLE(names text[], sums bigint[]) AS
$BODY$
BEGIN
RETURN QUERY EXECUTE (
SELECT 'SELECT ''{'
|| string_agg(quote_ident(c.column_name), ', ' ORDER BY c.column_name)
|| '}''::text[],
ARRAY['
|| string_agg('sum(' || quote_ident(c.column_name) || ')'
, ', ' ORDER BY c.column_name)
|| ']
FROM '
|| quote_ident(_schema) || '.' || quote_ident(_tbl)
FROM information_schema.columns c
WHERE table_schema = _schema
AND table_name = _tbl
AND data_type = 'integer'
);
END;
$BODY$
LANGUAGE plpgsql;
Call:
SELECT unnest(names) AS name, unnest (sums) AS col_sum
FROM f_get_sums('public', 'somereport');
Returns:
name | col_sum
---------------+---------
int_col1 | 6614
other_int_col | 8364
third_int_col | 2720642
Explain
The difficulty is to define the RETURN type for the function while the number and names of the returned columns vary. One detail that helps a little: you only want integer columns.
I solved this by forming an array of bigint (sum(int_col) returns bigint). In addition I return an array of column names. Both sorted alphabetically by column name.
In the function call I split up these arrays with unnest() arriving at the handsome format displayed.
The dynamically created and executed query is advanced stuff. Don't get confused by multiple layers of quotes. Basically you have EXECUTE, which takes a text argument containing the SQL query to execute. This text, in turn, is provided by a secondary SQL query that builds the query string of the primary query.
If this is too much at once or plpgsql is rather new for you, start with this related answer where I explain the basics dealing with a much simpler function and provide links to the manual for the major features.
If performance is essential, query the Postgres catalog directly (pg_catalog.pg_attribute) instead of using the standardized (but slow) information_schema.columns. Here is a simple example with pg_attribute.