How to trace a SQL statement which causes an error - sql

My database has many kinds of clients, and sometimes they send a malformed SQL string. Those clients are written in different languages such as C++, C, Java, and .NET, and it is not practical for me to learn all of them.
When an error occurs (ORA-00942, for example), how can I find out what the SQL text was, using only Oracle or some Oracle utilities, if I don't know how to print the SQL text in the client?

AFAIK the "only" option is to trace the whole server. You have to create a special type of trigger: AFTER SERVERERROR ON DATABASE.
See: http://www.red-database-security.com/scripts/oracle_error_trigger.html
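A minimal sketch of such a trigger, assuming a logging table you create yourself (the table, trigger, and column names here are illustrative; ora_sql_txt returns the failing statement split into 64-byte pieces):

```sql
CREATE TABLE error_log (
  log_time    TIMESTAMP DEFAULT SYSTIMESTAMP,
  username    VARCHAR2(128),
  error_code  NUMBER,
  sql_text    CLOB
);

CREATE OR REPLACE TRIGGER trg_log_server_errors
AFTER SERVERERROR ON DATABASE
DECLARE
  v_pieces  ora_name_list_t;
  v_n       BINARY_INTEGER;
  v_stmt    CLOB;
BEGIN
  -- reassemble the failing statement from its 64-byte pieces
  v_n := ora_sql_txt(v_pieces);
  FOR i IN 1 .. NVL(v_n, 0) LOOP
    v_stmt := v_stmt || v_pieces(i);
  END LOOP;
  INSERT INTO error_log (username, error_code, sql_text)
  VALUES (ora_login_user, ora_server_error(1), v_stmt);
END;
/
```

The script linked above does essentially the same thing; check it for production-ready details such as filtering which errors to log.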
There are probably other ways of doing that, but they are too low-level, such as OCI tracing, JDBC tracing, or various alter system set events ... settings.
Summary: a session trace will NOT contain a statement that failed parsing.
More advanced topics:
The JDBC driver supports standard logging.
OCI drivers support an "interceptor" library. You set an environment variable pointing to your own .dll library; the OCI driver loads this library and calls callbacks from it on various events. This is described in the OCI Programming reference, and AFAIK you can also find a sample interceptor library on SourceForge.
.NET tracing is documented here.

Yes, it is very much possible. It is a new feature in SQL*Plus. Read SQL*Plus error logging – New feature release 11.1 to learn more about the feature in depth.
NOTE: SQL*Plus error logging is set OFF by default, so you need to "set errorlogging on" to use the SPERRORLOG table.
The SPERRORLOG table looks like this:
SQL> desc sperrorlog;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 USERNAME                                           VARCHAR2(256)
 TIMESTAMP                                          TIMESTAMP(6)
 SCRIPT                                             VARCHAR2(1024)
 IDENTIFIER                                         VARCHAR2(256)
 MESSAGE                                            CLOB
 STATEMENT                                          CLOB
For example,
SQL> set errorlogging on
SQL>
SQL> SELECT * FROM dula;
SELECT * FROM dula
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL>
SQL> SELECT USERNAME, TIMESTAMP, MESSAGE, STATEMENT FROM sperrorlog;
USERNAME   TIMESTAMP                      MESSAGE                                   STATEMENT
---------- ------------------------------ ----------------------------------------- ------------------
LALIT      10-MAR-15 04.08.13.000000 PM   ORA-00942: table or view does not exist   SELECT * FROM dula
SQL>
So, the statement which had an error is now logged in the SPERRORLOG table.
EDIT
My database has many kinds of clients, and sometimes they send a malformed SQL string. Those clients are written in different languages such as C++, C, Java, and .NET.
The above-mentioned solution is only applicable to scripts executed via SQL*Plus. If the queries are executed through an application client, then to find the complete information you could trace the entire session that the application uses to interact with the database.
You could have a look at this article for an example on how to generate a trace file.
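For example, enabling a trace for a single application session might look like this (the sid/serial# values are placeholders you would look up in v$session first):

```sql
-- find the session to trace
SELECT sid, serial#, username, program
FROM   v$session
WHERE  username = 'APP_USER';

-- enable SQL trace (with waits and bind values) for that session
EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 4567, waits => TRUE, binds => TRUE);

-- ... let the client reproduce the problem ...

EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 4567);
```

The trace file is written on the database server (see v$diag_info for its location on 11g and later) and can be formatted with tkprof.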

Related

How to get the Oracle sql Id in the application

I have a .NET web service that makes some dynamically generated SQL calls against Oracle, and they are performing badly in production. The DBAs keep asking for the SQL IDs to tune the queries. They can use the OEM tool to find the slow-performing query and get the SQL ID. But I was wondering if there is a way to know the SQL ID and log it, so that I can retrieve it and give it to the DBAs for tuning.
Is this something that can be achieved in .NET?
Query the V$SQL dynamic performance view to get the SQL ID.
More on V$SQL:
https://docs.oracle.com/cd/B19306_01/server.102/b14237/dynviews_2113.htm#REFRN30246
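For example, if the application tags its statements with a distinctive comment, the SQL ID can be fetched back with a query along these lines (the comment tag is an illustrative convention, not a requirement):

```sql
SELECT sql_id, child_number, plan_hash_value, elapsed_time, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* ORDER_SEARCH */%';
```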
The dbms_application_info package is very useful for instrumenting your queries.
Prior to running the processing logic from the app layer, set the module/action to identify your module.
DBMS_APPLICATION_INFO.set_module(module_name => 'add_order',
action_name => 'processing orders');
After that, set the client_info with a marker that indicates what processing is going on, just before running the SQL.
Eg:
exec dbms_application_info.set_client_info('starting load from staging');
--Run the query
insert into dest_table select * from staging;
update dest_table set last_updated=sysdate;
exec dbms_application_info.set_client_info('updated the last_updated column');
delete from dest_table where order_value<0;
exec dbms_application_info.set_client_info('deleted -ve orders');
While this runs, we can look at v$session/v$sql to see where the processing is currently taking place:
SELECT sid,
serial#,
username,
osuser,
module,
action,
client_info
FROM v$session
WHERE module='add_order'
SELECT *
FROM v$sql
WHERE module='add_order'
Have a look at this link:
https://oracle-base.com/articles/8i/dbms_application_info
If the application can capture sufficient information to identify the session in v$session, you can query it from another session to grab the value of sql_id, or else query the v$sql_monitor view if you are licensed (requires Enterprise Edition and the Diagnostics and Tuning option). Use dbms_application_info to tag activity for better tracking.
Also you can configure database services if you haven't already, so that applications connect to a specific service rather than a generic one, and this will appear in v$session.service_name and be reported in OEM etc.
If it's practical to capture the session details from the same session immediately after the poorly-performing SQL statement completes (which it may not be, if the connection times out for example), you might try querying the prev_ details from v$session:
select s.prev_sql_id
, s.prev_child_number
, s.prev_exec_start
, s.prev_exec_id
, p.sql_text as prev_sql
, p.plan_hash_value as prev_plan
from v$session s
left join v$sql p on p.sql_id = s.prev_sql_id and p.child_number = s.prev_child_number
where s.audsid = sys_context('userenv', 'sessionid')

How to use SET OPTION within a DB2 stored procedure

I read (and tried) that I cannot use WITH UR in DB2 stored procedures. I am told that I can use SET OPTION to achieve the same. However, when I implement it in my stored procedure, it fails to compile (I tried moving it to different locations; same error). My questions are:
Can I really not use WITH UR after my SELECT statements within a procedure?
Why is my stored procedure failing to compile with the below error message?
Here is a simplified version of my code:
CREATE OR REPLACE PROCEDURE MySchema.MySampleProcedure()
DYNAMIC RESULT SETS 1
LANGUAGE SQL
SET OPTION COMMIT=*CHG
BEGIN
DECLARE GLOBAL TEMPORARY TABLE TEMP_TABLE AS (
SELECT 'testValue' as "Col Name"
) WITH DATA
BEGIN
DECLARE exitCursor CURSOR WITH RETURN FOR
SELECT *
FROM SESSION.TEMP_TABLE;
OPEN exitCursor;
END;
END
#
Error Message:
SQL0104N An unexpected token "SET OPTION COMMIT=*CHG" was found
following " LANGUAGE SQL
Here is code/error when I use WITH UR
CREATE OR REPLACE PROCEDURE MySchema.MySampleProcedure()
LANGUAGE SQL
DYNAMIC RESULT SETS 1
--#SET TERMINATOR #
BEGIN
DECLARE GLOBAL TEMPORARY TABLE TEMP_TABLE AS (
SELECT UTI AS "Trade ID" FROM XYZ WITH UR
) WITH DATA;
BEGIN
DECLARE exitCursor CURSOR WITH RETURN FOR
SELECT *
FROM SESSION.TEMP_TABLE;
OPEN exitCursor;
END;
END
#
Line 9 is where the DECLARE GLOBAL TEMPORARY ... statement is.
DB21034E The command was processed as an SQL statement because it was
not a valid Command Line Processor command. During SQL processing it
returned: SQL0109N The statement or command was not processed because
the following clause is not supported in the context where it is
used: "WITH ISOLATION USE AND KEEP". LINE NUMBER=9. SQLSTATE=42601
Specifying the isolation level:
For static SQL:
If an isolation-clause is specified in the statement, the value of that clause is used.
If an isolation-clause is not specified in the statement, the isolation level that was specified for the package when the package was bound to the database is used.
You need to bind the routine package with UR, since your DECLARE GTT statement is static. Before CREATE OR REPLACE use the following in the same session:
CALL SET_ROUTINE_OPTS('ISOLATION UR')
P.S.: If you want to run your routine more than once in the same session without an error, use the additional WITH REPLACE option of DECLARE.
If your Db2 server runs on Linux/Unix/Windows (Db2-LUW), then there is no such statement as SET OPTION COMMIT=*CHG, and so Db2 will throw an exception for that invalid syntax.
It is important to only use the matching Db2 Knowledge Centre for your Db2 platform and your Db2-version. Don't use Db2-Z/OS documentation for Db2-LUW development. The syntax and functionalities differ per platform and per version.
A Db2-LUW SQL PL procedure can use with ur in its internal queries, and if you are getting an error then something else is wrong. You have to use with ur with the correct syntax, however, i.e. in a statement that supports this clause. In your example you get the error because the clause is not valid in the depicted context. You can achieve the desired result in other ways, one of them being to populate the table in a statement separate from the declaration (e.g. insert into session.temp_table("Trade ID") select uti from xyz with ur;); other ways are also possible.
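A sketch of that workaround, based on the procedure from the question (the column type for UTI is an assumption here; adjust it to your schema):

```sql
CREATE OR REPLACE PROCEDURE MySchema.MySampleProcedure()
  DYNAMIC RESULT SETS 1
  LANGUAGE SQL
BEGIN
  -- declare the GTT without embedding the query (WITH REPLACE allows re-runs)
  DECLARE GLOBAL TEMPORARY TABLE TEMP_TABLE ("Trade ID" VARCHAR(52))
    ON COMMIT PRESERVE ROWS NOT LOGGED WITH REPLACE;

  -- populate it in a separate statement, where WITH UR is a valid clause
  INSERT INTO SESSION.TEMP_TABLE ("Trade ID")
    SELECT UTI FROM XYZ WITH UR;

  BEGIN
    DECLARE exitCursor CURSOR WITH RETURN FOR
      SELECT * FROM SESSION.TEMP_TABLE;
    OPEN exitCursor;
  END;
END
#
```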
One reason to use the online Db2 Knowledge Centre documentation is that it includes sample programs, including sample SQL PL procedures, which are also available in source-code form in the samples directory of your Db2-LUW server, in addition to being available on github. It is wise to study these and get them working for you.

Show errors in sql plus

I have a file myFile.sql that contains a list of scripts to compile:
#"Directory\package1.sql"
#"Directory\package2.sql"
#"Directory\package3.sql"
#"Directory\package4.sql"
I have the following script:
SPOOL Directory\Upgrade.log
#"Directory\myFile.sql"
SPOOL OFF
Some packages in myFile.sql have errors, but in Upgrade.log I do not have the details of the errors; I only have the message Warning: Package body created with compilation errors.
How can I add the error detail without adding SHOW ERR after each line in MyFile.sql ?
In upgrade.log I want have this:
Package1 created
Warning Package body created with compilation errors.
**Error detail1**
Package2 created
Warning Package body created with compilation errors.
**Error detail2**
I need a hook in SQL*Plus to show errors automatically after each package creation if there is an error.
Thanks for your help.
One method is to query the dictionary views USER_ERRORS or ALL_ERRORS.
From the documentation:
ALL_ERRORS describes the current errors on the stored objects accessible to the current user.
USER_ERRORS gives the same information for objects owned by the current user.
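For example, after running the whole script you can list every outstanding compilation error in one pass:

```sql
SELECT name, type, line, position, text
FROM   user_errors
ORDER  BY name, type, sequence;
```

You could append this single query at the end of myFile.sql instead of adding SHOW ERR after each line.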
From Oracle 11.1 onward, you could use the SQLPlus error logging feature. You can read more about SQL*Plus error logging – New feature release 11.1.
SQL*Plus error logging is set OFF by default. So, you need to set errorlogging on to use the SPERRORLOG table.
Demo:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> desc sperrorlog;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 USERNAME                                           VARCHAR2(256)
 TIMESTAMP                                          TIMESTAMP(6)
 SCRIPT                                             VARCHAR2(1024)
 IDENTIFIER                                         VARCHAR2(256)
 MESSAGE                                            CLOB
 STATEMENT                                          CLOB
SQL> truncate table sperrorlog;
Table truncated.
SQL> set errorlogging on;
SQL> selct * from dual;
SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.
SQL> select timestamp, username, script, statement, message from sperrorlog;
TIMESTAMP    USERNAME  STATEMENT           MESSAGE
-----------  --------  ------------------  ------------------------------------------------------------------------
17-APR-2020  SCOTT     selct * from dual;  SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.
Similarly, you can capture PL/SQL errors too; those are logged with the PLS error code, just as the SQL*Plus error above was logged with SP2.

Which type of binding does PL/SQL use?

I came across the following question.
PL/SQL uses which of the following
(A) No Binding
(B) Early Binding
(C) Late Binding
(D) Deferred Binding
But could not find any satisfying answers.
Can anybody give an explanation for this?
(E) All of the Above
This is a ridiculous multiple choice question. The terms early, late, and deferred binding are ambiguous. And PL/SQL can be run in many different ways, including in SQL.
Here are my (arguably incorrect) definitions of the choices:
No binding - Variables have no types.
Early Binding - Variable types are fixed at compile time.
Late Binding - Variable types are flexible and may be set at run time.
Deferred Binding - Multiple variables types are defined at compile time, but only one of them is chosen at run time.
Now we have to match those choices to different PL/SQL contexts: static SQL and PL/SQL, empty anonymous blocks, remote procedures, dynamic SQL and PL/SQL, adaptive cursor sharing, FILTER operations, object-oriented PL/SQL, ANY* types, and I'm probably missing some more.
(A) No binding
An empty anonymous block doesn't have any variables, so nothing is bound. I'm not sure this really fits the definition of no binding; it seems like an edge case. In some languages there is always an object and something must always be bound, but not in PL/SQL.
(B) Early Binding
Regular SQL and PL/SQL use early binding - variables are given a type and they must stick to it. Type mismatches will either throw a compiler error or require implicit conversion.
Remote procedure calls with REMOTE_DEPENDENCIES_MODE set to "TIMESTAMP" are arguably early binding. The timestamp is set at compile time, when everything is checked. It is still checked at run time, but it's a simple and fast check.
(C) Late Binding
Dynamic SQL and PL/SQL use late binding because the code is not even compiled until run time. This applies both to DBMS_SQL and execute immediate.
Object-Oriented PL/SQL uses late binding. The type is set at compile time but a different subtype may be used at run time.
ANYTYPE, ANYDATA, and ANYDATASET also use late binding since they can be created at run time, or retrieved and executed at run time.
Remote procedure calls with REMOTE_DEPENDENCIES_MODE set to "SIGNATURE" are arguably late binding. The signature is checked at both compile time and run time, and allows a tiny bit of flexibility in types.
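As a quick illustration of the dynamic SQL case (the table name below is deliberately one that need not exist at compile time):

```sql
-- Compiles successfully even if SOME_TABLE does not exist yet:
-- name resolution is deferred until the EXECUTE IMMEDIATE actually runs.
CREATE OR REPLACE PROCEDURE late_binding_demo IS
  v_cnt NUMBER;
BEGIN
  EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM some_table' INTO v_cnt;
  DBMS_OUTPUT.put_line(v_cnt);
END;
/
```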
(D) Deferred Binding
Some Oracle SQL features create multiple code paths but only execute one of them. Adaptive cursor sharing and FILTER operations will create multiple ways to run the same SQL statement, and the appropriate version is chosen at run time.
Invoker's Rights and Definer's Rights
Invoker's rights and definer's rights also complicate this question. But I think that ultimately they don't make a difference, and that both of them are still early binding. The compiler still decides the type at compile time. Although you can use invoker's rights to stealthily change a type at run time it will only generate an error because it doesn't match the expected type.
For example, let's say there are two schemas that have the same table and column names, but different types:
create table user1.test_table(a number);
insert into user1.test_table values(1);
create table user2.test_table(a date);
insert into user2.test_table values(sysdate);
If you create this function on USER1 it looks like the type of V_VALUE is dynamic and can change with the user.
create or replace function user1.test_function return varchar2 authid current_user is
v_value test_table.a%type;
begin
select a into v_value from test_table;
return to_char(v_value);
end;
/
The code compiles using the types from USER1 and works fine when run by USER1. However, when USER2 runs it this error is generated: ORA-00932: inconsistent datatypes: expected NUMBER got DATE.
This leads me to believe that invoker's and definer's rights do not affect the binding. They both use early binding in static SQL and PL/SQL.
You can find the answer in very old documentation related to Oracle 8:
https://docs.oracle.com/cd/A58617_01/server.804/a58236/05_ora.htm
Efficiency versus Flexibility
Before a PL/SQL program can be executed, it must be compiled. The
PL/SQL compiler resolves references to Oracle schema objects by
looking up their definitions in the data dictionary. Then, the
compiler assigns storage addresses to program variables that will hold
Oracle data so that Oracle can look up the addresses at run time. This
process is called binding.
How a database language implements binding affects runtime efficiency
and flexibility. Binding at compile time, called static or early
binding, increases efficiency because the definitions of schema
objects are looked up then, not at run time. On the other hand,
binding at run time, called dynamic or late binding, increases
flexibility because the definitions of schema objects can remain
unknown until then.
Designed primarily for high-speed transaction processing, PL/SQL
increases efficiency by bundling SQL statements and avoiding runtime
compilation. Unlike SQL, which is compiled and executed
statement-by-statement at run time (late binding), PL/SQL is processed
into machine-readable p-code at compile time (early binding). At run
time, the PL/SQL engine simply executes the p-code.
But Oracle in subsequent versions removed the whole chapter Interaction with Oracle from the documentation.
So, according to the above, the answer is: (B) Early Binding - for sure for Oracle version 8, and perhaps in further versions.
edit: I just rephrased the answer to get to the point a bit quicker.
short answer
PL uses early binding and sometimes late binding (e.g. binding is late when using the object-oriented capabilities of PL/SQL and the polymorphism they feature)
SQL uses deferred binding: the resolution of symbols (table names, columns, etc.) happens during a postponed compilation that occurs at runtime. This step involves parsing and compiling SQL text into executable instructions (the "execution plan").
Deferred binding is at the core of the SQL engine and makes it extremely adaptable to changing distributions of data; it is a core feature that everybody should be aware of, so my best guess is that the question asked for it, unless by "PL/SQL" they meant "PL" (as Oracle docs often do).
The example below hopefully makes the point immediately clear. We have two users ("demo" and "scott") that own a table and a procedure each, with the same names.
demo
invoked_proc
x_table
scott
invoked_proc
x_table
If we create the procedure below in the schema "demo", this is how Oracle would resolve the symbols during compilation and during execution
compilation
"invoked_proc": demo.invoked_proc
"x_table": demo.x_table
execution when logged as "demo": the same
execution when logged as "scott"
"invoked_proc": demo.invoked_proc <- resolves as per compilation
"x_table": scott.x_table <- resolved differently
Here is the procedure; the fact that we are using invoker's rights is not fundamental, it just makes the matter more evident:
CREATE or replace PROCEDURE demo.invoker_proc
AUTHID CURRENT_USER
IS
n number := 1;
BEGIN
invoked_proc();
select id into n
from X_TABLE where id = n;
END;
/
Long answer
All types of binding are involved (b, c, d). Amongst those, deferred binding is probably the most remarkable, so if they asked me to choose just one answer, I'd definitely go for "d" (deferred binding) when speaking of SQL, and "b" (early binding) when dealing with PL.
We should probably start by saying that terminology is important, and that possibly the answer provided by krokodilko is the most authoritative, given the docs it points at (even if that reference is potentially obsolete).
However, I firmly believe the point of the whole matter is that in Oracle, the PL engine and the SQL engine are almost completely set apart and resolve names in totally different ways. While the former usually involves some form of early or late binding (mostly, early binding), the latter always implies deferred binding: SQL statements are "compiled" into executable plans when executed, not at compile time. The semantic resolution of the symbols is deferred until the query is run, so the names are resolved according to the current context (primarily, the current user). This is also the type of binding used when issuing an execute immediate statement, dbms_sql, or dbms_job.submit invoked with NO_PARSE = true.
When Oracle encounters the text of an SQL statement, even when it is part of a compiled PL unit (procedure, function, trigger...), it performs a syntactic analysis of its symbols first, then a semantic analysis, and finally "compiles" the execution plan (determines the procedural steps of the algorithm that will actually resolve the query).
Depending on the situation it can skip part of this analysis; however, all of this fundamentally happens at runtime, not at compile time. This allows the engine to adapt, for instance, to the latest statistics it has gathered from the data (this is what the Cost Based Optimizer strives to do), and is fundamentally different from what the Rule Based Optimizer used to do in earlier versions (queries were "compiled" at compile time, and the algorithm was known in advance and fixed until the next re-compilation).
The only pre-baked product of the compilation of a PL unit is the resolution of its direct dependencies, which occurs at compile time (information which is stored in the DIANA code) and enables the engine to detect when it is time to re-compile a PL unit due to external changes (for instance, changes in the structure of a table mentioned in an SQL statement). Note that this preemptive mechanism falls short when using invoker's rights, precisely because Oracle cannot know what the name will reference later, during the execution.
All of this is dramatically evident when we use invoker's rights to run a procedure.
Consider the aforementioned case, in which two users each have a procedure and a table in their respective schemas, with identical names (the tables happen to be identical except that scott's table has an index on it)
CONNECT demo/demo
CREATE PROCEDURE invoked_proc IS
BEGIN
DBMS_OUTPUT.put_line ('invoked_proc owned by demo');
END;
/
GRANT execute on invoked_proc to public;
create table X_TABLE (id number);
insert into x_table values(1);
commit;
CONNECT scott/tiger
CREATE PROCEDURE invoked_proc IS
BEGIN
DBMS_OUTPUT.put_line ('invoked_proc owned by scott');
END;
/
create table X_TABLE (id number);
create index X_INDEX ON X_TABLE(id);
insert into x_table values(1);
commit;
and consider what happens when you compile a third procedure (on a single schema) that references those names and which is executed with invoker's rights:
CREATE or replace PROCEDURE invoker_proc
AUTHID CURRENT_USER
IS
n number := 1;
BEGIN
invoked_proc();
select id into n
from X_TABLE where id = n;
END;
/
What can we say about the symbols that have been resolved by the compiler?
We can quickly infer them by looking at the dependencies of INVOKER_PROC:
select owner, name,
referenced_owner, referenced_name
from dba_dependencies
where name = 'INVOKER_PROC';
OWNER NAME REFERENCED_OWNER REFERENCED_NAME
------- ------------- ------------------ -----------------
DEMO INVOKER_PROC DEMO X_TABLE
DEMO INVOKER_PROC DEMO INVOKED_PROC
So, both the procedure and the table identified by the compiler belong to DEMO, the user to whom the procedure belongs.
As for the plan that underlies the sql query, nothing has been determined yet:
Select substr(sql_text,1,40),
sql_id,
plan_hash_value,
parsing_schema_name,
child_number
from v$sql
where sql_text like '%X_TABLE%'
;
no relevant records found...
However, things start to differ when the SQL engine is actually invoked, which doesn't happen until runtime.
connect demo/demo
EXEC demo.invoker_proc
> invoked_proc owned by demo
and looking at the plans we see that a new plan is around
Select substr(sql_text,1,40),
sql_id,
plan_hash_value,
parsing_schema_name,
child_number
from v$sql
where sql_text like '%X_TABLE%'
;
SUBSTR(SQL_TEXT,1,40) SQL_ID PLAN_HASH_VALUE PARSING_SCHEMA_NAME CHILD_NUMBER
-------------------------------------- ------------- --------------- ------------------- ------------
SELECT ID FROM X_TABLE WHERE ID = :B1 01a77qj300y18 1220025608 DEMO 0
If we run the same procedure from scott's session and look at the plans, we see that the same invoked_proc has been called, the one owned by DEMO (it outputs "invoked_proc owned by demo").
So the name resolution of the PL part carried out during compilation still holds true (early binding). The same is not quite true for the SQL part:
connect scott/tiger
EXEC demo.invoker_proc;
> invoked_proc owned by demo
SUBSTR(SQL_TEXT,1,40) SQL_ID PLAN_HASH_VALUE PARSING_SCHEMA_NAME CHILD_NUMBER
-------------------------------------- ------------- --------------- ------------------- ------------
SELECT ID FROM X_TABLE WHERE ID = :B1 01a77qj300y18 1220025608 DEMO 0
SELECT ID FROM X_TABLE WHERE ID = :B1 01a77qj300y18 3707869577 SCOTT 1
We see that it has created another plan for the new query, with a different child_number. The second plan references the other table, not the one determined during compilation (deferred binding). This is confirmed if we look at the content of the plans; the first plan involves a plain table, on which only a full scan can be performed:
select plan_table_output
from table(dbms_xplan.display_cursor(
SQL_ID => '01a77qj300y18'
,CURSOR_CHILD_NO => 0
));
SQL_ID 01a77qj300y18, child number 0
-------------------------------------
SELECT ID FROM X_TABLE WHERE ID = :B1
Plan hash value: 1220025608
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
|* 1 | TABLE ACCESS FULL| X_TABLE | 1 | 13 | 4 (0)| 00:00:01 |
-----------------------------------------------------------------------------
while the second table, which has an index, allows Oracle to use a better code path (an index range scan in this case)
select plan_table_output
from table(dbms_xplan.display_cursor(
SQL_ID => '01a77qj300y18'
,CURSOR_CHILD_NO => 1
));
SQL_ID 01a77qj300y18, child number 1
-------------------------------------
SELECT ID FROM X_TABLE WHERE ID = :B1
Plan hash value: 3707869577
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 1 (100)| |
|* 1 | INDEX RANGE SCAN| X_INDEX | 1 | 13 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------
The takeaway is that PL usually involves early or late binding, while SQL is always deferred.
There are further considerations that can come into play, like the usage of BASELINES (precompiled plans that are permanently stored and can be chosen or not depending on what the optimizer believes is cheaper) or ADAPTIVE CURSORS (alternative plans stored in the library cache that are chosen at runtime); these involve slightly different concepts. However, given the lack of consensus on the term "deferred binding", possibly all of them can be considered part of the "deferred binding" umbrella (I am curious to know the opinion of whoever wrote the original quiz).
For instance, this is not in contrast with what I see in .NET: deferred binding seems to be a topic loosely related to code that is reflexively generated and compiled at runtime. GWT also claims to use "deferred binding", meaning that alternative code segments are compiled and only one is chosen at runtime (to quote them, "In essence, deferred binding is the GWT answer to Java reflection"; see here).
From the question posted, I understand it asks which binding type PL/SQL uses in general. In that case, option (B) Early Binding is the answer, according to Oracle.
Further,
PL - Stored Procedures - Early Binding.
PL - Remote Procedures - Late Binding.
Dynamic SQL - Late Binding.
It's (B) Early Binding, which is used at compile time, and (C) Late Binding, which is used at run time.
Before a PL/SQL program can be executed, it must be compiled. The PL/SQL compiler resolves references to Oracle objects by looking up their definitions in the data dictionary. Then, the compiler assigns storage addresses to program variables that will hold Oracle data so that Oracle can look up the addresses at run time. This process is called binding.
How a database language implements binding affects run-time efficiency and flexibility.
Binding at compile time, called static or early binding, increases efficiency because the definitions of database objects are looked up then, not at run time.
On the other hand, binding at run time, called dynamic or late binding, increases flexibility because the definitions of database objects can remain unknown until then.

Make trigger behavior depend on query

My goal is to make trigger behavior depend on some client identifier.
For example I execute a query
begin;
<specify-some-client-identifier>
insert into some_table
values('value');
commit;
And I have trigger function executing before insert:
NEW.some_field := some_func(<some-client-identifier-specified-above>)
So, how do I <specify-some-client-identifier> and then get <some-client-identifier-specified-above>?
You basically need some kind of variable in SQL. This is possible in multiple ways:
using GUCs
using table with variables
using temp table with variables
using %_SHARED in plperl functions
All of this is possible. If you're interested in implementation details and/or a comparison, check this blogpost (just in case it wasn't obvious from the domain: it's my blog).
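A minimal sketch of the GUC approach (the setting name myapp.client_id, the column some_field, and some_func are illustrative, taken from the question):

```sql
-- client side: set a custom setting scoped to the transaction
BEGIN;
SET LOCAL myapp.client_id = 'client-42';
INSERT INTO some_table VALUES ('value');
COMMIT;

-- trigger side: read the setting back (the second argument of
-- current_setting makes it return NULL instead of erroring when unset)
CREATE OR REPLACE FUNCTION set_client_field() RETURNS trigger AS $$
BEGIN
  NEW.some_field := some_func(current_setting('myapp.client_id', true));
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER some_table_client_id
  BEFORE INSERT ON some_table
  FOR EACH ROW EXECUTE PROCEDURE set_client_field();
```

Note that the two-argument form of current_setting requires PostgreSQL 9.6 or later; on older versions you would need to preset the GUC or catch the error.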
You will find this prior answer informative. There I explain how to pass an application-defined username through so it is visible to PostgreSQL functions and triggers.
You can also use the application_name GUC, which can be set by most client drivers or explicitly by the application. Depending on your purposes this may be sufficient.
Finally, you can examine pg_stat_activity to get info about the current client by looking it up by pg_backend_pid(). This will give you a client IP and port if TCP/IP is being used.
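For example, from within the session itself:

```sql
SELECT client_addr, client_port, application_name, usename
FROM   pg_stat_activity
WHERE  pid = pg_backend_pid();
```

(On PostgreSQL versions before 9.2 the column is named procpid rather than pid.)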
Of course, there's also current_user if you log in as particular users at the database level.
As usual, #depesz points out useful options I hadn't thought of: using shared context within PL/Perl, in particular. You can do the same thing in PL/Python. In both cases you'll pay the startup overhead of a full procedural-language interpreter and the function-call cost of accessing it, so it probably only makes sense to do this if you're already using PL/Perl or PL/Python.