How do I use a dynamic role password in Postgres?

I have a role in postgres as follows:
create role admin login password 'some_password';
What I'd like instead is:
create role admin login (select current_setting('custom.ADMIN_PASSWORD'));
But this fails with the error:
ERROR: syntax error at or near "("
LINE 2: (SELECT ...
I expected this to work, because it works in the following example:
select public.register_account(
email := (SELECT current_setting('custom.SOME_EMAIL')),
password := (SELECT current_setting('custom.SOME_PASSWORD'))
);
How can I use current_setting() to apply a role password?
Bonus Points: Why does my first example fail, while my second succeeds?

The first example fails because DDL statements such as CREATE ROLE aren't your general everyday SQL: their grammar only accepts literal values, not expressions or subqueries calculated on the fly. Your second example works because register_account() is an ordinary function call, and function arguments are expressions that get evaluated before the call is made. In general the database engine prefers DDL and other structural changes to be explicit rather than computed.
You can, however, get around this restriction by using a DO block, essentially an inline function. Combined with EXECUTE and the format() function, you can go dynamic to your heart's content. Be warned, though, that with great power comes great responsibility: dynamic SQL like this should be avoided unless you truly have no other alternative, since it short-circuits much of the parser's validation. Simple mistakes become a lot harder to see and fix, and because this is a structural change to the database rather than just another row of data, the effect when bugs arise is far more serious. Statements like CREATE ROLE do not allow dynamic shenanigans by default precisely for this reason.
All that said, this will get you going.
DO $$
BEGIN
  -- %L quotes the value as a literal, so passwords containing quotes stay safe
  EXECUTE format('CREATE ROLE admin LOGIN PASSWORD %L',
                 current_setting('custom.ADMIN_PASSWORD'));
END;
$$ LANGUAGE plpgsql;
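For completeness, here is a minimal sketch of how the custom setting can be supplied before the DO block runs (the setting name custom.ADMIN_PASSWORD comes from the question; the value is a placeholder):
-- session-wide:
SET custom.ADMIN_PASSWORD = 'some_password';
-- or scoped to a single transaction:
BEGIN;
SET LOCAL custom.ADMIN_PASSWORD = 'some_password';
-- ... run the DO block here ...
COMMIT;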

Related

AWS Redshift: Can I use a case statement to create a new user if the user doesn't already exist?

I'm trying to automate user creation within AWS. However, if I just write the user creation scripts, they will fail if re-run and users already exist.
I'm working in AWS Redshift.
I'd love to be able to do something like
CREATE USER IF NOT EXISTS usr_name
password '<random_secure_password>'
NOCREATEDB
NOCREATEUSER
;
however that doesn't seem possible.
Then I found CASE statements but it doesn't seem like CASE statements can work for me either.
i.e.
CASE WHEN
SELECT count(*) FROM pg_user WHERE usename = 'usr_name' = 0
THEN
CREATE USER usr_name
password '<random_secure_password>'
NOCREATEDB
NOCREATEUSER
END
Would this work? (Not a superuser so I can't test it myself)
If not, any ideas? Anything helps, thanks in advance.
If you're using psql you can use the \gexec metacommand:
\t on
BEGIN;
SELECT CASE WHEN (SELECT count(*) FROM pg_user WHERE usename = 'nonesuch') = 0
THEN 'CREATE USER nonesuch PASSWORD DISABLE'
END
\gexec
SELECT CASE WHEN (SELECT count(*) FROM pg_user WHERE usename = 'nonesuch') = 0
THEN 'CREATE USER nonesuch PASSWORD DISABLE'
ELSE 'SELECT ''user already exists, doing nothing'''
END
\gexec
ROLLBACK;
result:
BEGIN
CREATE USER
user already exists, doing nothing
ROLLBACK
https://www.postgresql.org/docs/9.6/app-psql.html (note you can't use format() as in the example since it's not implemented in Redshift)
As far as I know there is no mechanism to do this in Redshift itself (at the moment), and making this transition to a more cloud-oriented model trips up many companies I've worked with. Many on-prem databases are expected to be their own operating world where everything must be done inside the database via SQL, whether or not it is a data operation: what I call the "universal database" model. Redshift is part of the larger ecosystem of AWS cloud solutions, where many of these administrative/management/flexibility operations are best done at the cloud layer, not in the database. The "database for data" model.
You didn't explain why you need to test for the existence of a user and create them if they are not there, or why this decision is being made in SQL. I expect that some amount of 'this is how we have done it in the past' is in play. What you are looking to do could be a Lambda (with or without a Step Function), could likely be done in your ETL solution, or could even be written as a bash script. It is actually easy, and I'd advise you to think about doing it as part of a solution-level architecture, not as a point database operation.
Now you may rightly ask, 'if this is easy, why can't Redshift do it?' Fair point, and one I've been asked many times. One answer is that Redshift is a cloud-based, large-data, analytic warehouse, and as such it is designed to operate in that context. Another is that if enough large clients show the need and demand the functionality, it will be added (AWS does react to meet general needs). Remember that there are thousands of places new SQL command options could be added, and not all of them will be added to Redshift; enhancement requests of this type are made against almost every database, frequently.
My advice is to take a step back and look at how user and user-rights management should work for your solution, not just for the database, and then move to an architecture that manages these rights at the appropriate layer (whatever you decide that to be). Redshift users can be integrated with IAM, which can also be used to control access to other systems and other databases. I know that this kind of change takes work and time to complete (and can impact organizational roles), so until then I'd look at your existing database control systems (ETL, vacuum/analyze launcher, metrics collector, etc.) and see which can meet your near-term needs.
Create a stored procedure that takes the user name as a parameter and does the existence check inside. You can call this stored procedure in your deployment step. Only database creation cannot be done inside a block; a user can be created within a stored procedure.
CREATE OR REPLACE PROCEDURE sp_create_user(i_username varchar)
AS $$
DECLARE
  t_user_count BIGINT;
BEGIN
  SELECT count(1)
    INTO t_user_count
    FROM pg_user
   WHERE LOWER(usename) = LOWER(i_username);

  IF t_user_count > 0 THEN
    RAISE INFO 'User % already exists', i_username;
  ELSE
    -- CREATE USER cannot take the name from a variable directly,
    -- so build the statement dynamically
    EXECUTE 'CREATE USER ' || quote_ident(i_username)
         || ' WITH PASSWORD ''password'' NOCREATEDB NOCREATEUSER';
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    RAISE EXCEPTION 'Error while creating user %: %', i_username, SQLERRM;
END;
$$
LANGUAGE plpgsql;
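The procedure can then be called from your deployment step, for example (the user name is just a placeholder):
CALL sp_create_user('usr_name');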

How to demonstrate SQL injection in where clause?

I want to demonstrate the insecurity of some webservices that we have. These send unsanitized user input into SELECT statements against an Oracle database.
SQL injection on SELECT statements is possible (through the WHERE clause); however, I am having a hard time demonstrating it, as the same parameter gets placed into other queries as well during the same webservice call.
E.g:
' or client_id = 999'--
will exploit the first query, but since the same WS request runs other SQL SELECTs, it will return an Oracle error on the next query because client_id is referred to through a table alias there.
I am looking for something more convincing than just getting an ORA error back, such as managing to drop a table in the process. However, I do not think this is possible from a SELECT statement.
Any ideas how I can cause some data to change, or maybe get sensitive data to be included as part of an ORA error?
It's not very easy to change data, but it's still possible. A function created with PRAGMA AUTONOMOUS_TRANSACTION can contain DML and may be called in a WHERE clause. For instance,
CREATE OR REPLACE FUNCTION test_funct RETURN INT
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  DELETE FROM test_del;
  COMMIT;
  RETURN 0;
END;
-- and then
SELECT NULL FROM dual WHERE test_funct() = 1;
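Tying that back to the question, the injected parameter just has to call the function; a hypothetical sketch, assuming test_funct already exists in a schema the vulnerable query can see and that the query filters on client_id:
-- value supplied for the injectable parameter:
' or test_funct() = 0 --
-- so the application's query effectively becomes:
select * from clients where client_id = '' or test_funct() = 0 --' ...
-- (clients is a stand-in for whatever table the webservice actually queries)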
Another option is to inject a huge subquery into the WHERE clause, which in turn may cause a serious performance problem on the server.
You do not need a custom function, you can use a sub-query:
" or client_id = (SELECT 999 FROM secret_table WHERE username = 'Admin' AND password_hash = '0123456789ABCD')"
If the query succeeds then you know that:
There is a table called secret_table that can be seen by the user executing this query (even if there is not a user interface that would typically be used to directly interact with that table);
That it has the columns username and password_hash;
That there is a user called Admin; and
That the admin user has a password that hashes to 0123456789ABCD.
You can repeat this to map out the structure of the entire database and to check for any values in it.

Make trigger behavior depend on query

My goal is to make trigger behavior depend on some client identifier.
For example, I execute a query:
begin;
<specify-some-client-identifier>
insert into some_table
values ('value');
commit;
And I have a trigger function executing before insert:
NEW.some_field := some_func(<some-client-identifier-specified-above>)
So, how do I <specify-some-client-identifier> and get <some-client-identifier-specified-above>?
You basically need some kind of variable in SQL. It is possible to do this in multiple ways:
using GUCs
using table with variables
using temp table with variables
using %_SHARED in plperl functions
All of these are possible. If you're interested in implementation details and/or a comparison, check this blogpost (just in case it wasn't obvious from the domain: it's my blog).
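As a rough sketch of the GUC approach (the setting name myapp.client_id is hypothetical; some_table, some_field and some_func come from the question), the client sets a custom setting and the trigger function reads it back with current_setting(); on modern PostgreSQL any two-part setting name works without extra configuration:
-- client side: set a custom setting scoped to the transaction
begin;
set local myapp.client_id = 'client-42';
insert into some_table values ('value');
commit;

-- trigger side: read the setting back in a BEFORE INSERT trigger function
create or replace function apply_client_id() returns trigger as $$
begin
    new.some_field := some_func(current_setting('myapp.client_id'));
    return new;
end;
$$ language plpgsql;

create trigger apply_client_id_trg
    before insert on some_table
    for each row execute procedure apply_client_id();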
You will find this prior answer informative. There I explain how to pass an application-defined username through so it is visible to PostgreSQL functions and triggers.
You can also use the application_name GUC, which can be set by most client drivers or explicitly by the application. Depending on your purposes this may be sufficient.
Finally, you can examine pg_stat_activity to get info about the current client by looking it up by pg_backend_pid(). This will give you a client IP and port if TCP/IP is being used.
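For instance, a lookup along these lines (column names as in recent PostgreSQL versions; older releases used procpid instead of pid):
select client_addr, client_port, application_name
  from pg_stat_activity
 where pid = pg_backend_pid();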
Of course, there's also current_user if you log in as particular users at the database level.
As usual, #depesz points out useful options I hadn't thought of, too - using shared context within PL/Perl, in particular. You can do the same thing in PL/Python. In both cases you'll pay the startup overhead of a full procedural language interpreter and the function call costs of accessing it, so it probably only makes sense to do this if you're already using PL/Perl or PL/Python.

Prompt user in PLSQL

My assignment is to write a PL/SQL module to insert data into a database. Upon a certain condition, it may need additional information and should prompt the user for one more detail. This should be done directly in PL/SQL; the wording is straight from the assignment.
I've researched the topic and found that some people say this cannot be done in PL/SQL. But the ACCEPT ... PROMPT command does exist:
ACCEPT v_string PROMPT 'Enter your age: ';
While this works directly from SQL*Plus, it does not work in PL/SQL, where it gives me this error:
PLS-00103: Encountered the symbol "V_STRING" when expecting one of the following: := . ( # % ;
Can anyone provide some insight as to how I'm supposed to ask the user from PL/SQL, only when a certain condition is true (the condition is checked after fetching something else from the DB)? To clarify, I only need help on how to accept input.
Thanks for your time.
There is a trick that will allow you to do something like this, but (a) it's a bit of a hack, (b) you need to be logged into the database server itself, and (c) it only works on Linux (and perhaps other flavours of Unix).
Generally, it's not possible to ask for user input in PL/SQL, especially if you're connecting to a database on a remote machine. Either your assignment is wrong or you've misunderstood it.
PL/SQL programs are designed to run on the database server, so it doesn't make sense to ask the user for input during them. Using the ACCEPT command, SQL*Plus can ask the user for input while running a script client-side. SQL*Plus will then substitute in the value entered before sending SQL or PL/SQL to the database.
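A minimal SQL*Plus sketch of that substitution (the age prompt and the check against 18 are purely illustrative):
SET SERVEROUTPUT ON
ACCEPT v_age PROMPT 'Enter your age: '

BEGIN
  -- &v_age is replaced by SQL*Plus before the block is sent to the server
  IF &v_age >= 18 THEN
    DBMS_OUTPUT.PUT_LINE('Adult');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Minor');
  END IF;
END;
/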
Well, since it's not really a part of SQL but rather of the developer tools you use, here are the ones I know:
SQL*Plus: &Variable
SQL Developer: You can make a procedure or you can use :Variable to be prompted to insert a parameter
I checked and tried the following, which might be helpful to you. It's just an example I tried; you can implement your own logic.
declare
  c1 varchar2(50);
begin
  c1 := '&enter_value';
  dbms_output.put_line(c1);
exception
  when others then
    dbms_output.put_line(sqlerrm);
end;
'&' will prompt the user for input. Once the user enters a value, it is assigned to the variable and you can use the variable wherever you want.
Hopefully this helps you.
You cannot get user input in pure PL/SQL. It is a server-side language to express business/database logic.
The use of "&" to get input is a "sqlplus" feature and not a PL/SQL strategy. If it was an "assignment" then you need to tell the teacher that it is an invalid assignment - or maybe it was a trick question!

SQL Server Synonyms and Concurrency Safety With Dynamic Table Names

I am working with some commercial schemas, which have a set of similar tables that differ only in language suffix, e.g.:
Products_en
Products_fr
Products_de
I also have several stored procedures which I am using to access these to perform some administrative functions, and I have opted to use synonyms since there is a lot of code, and writing everything as dynamic SQL is just painful:
declare @lang varchar(50) = 'en'
if object_id('dbo.ProductsTable', 'SN') is not null drop synonym dbo.ProductsTable
exec('create synonym dbo.ProductsTable for dbo.Products_' + @lang)
/* Call the synonym table */
select top 10 * from dbo.ProductsTable
update ProductsTable set a = 'b'
My question is: how does SQL Server treat synonyms when it comes to concurrent access? My fear is that a procedure could start, then a second one could come along and change the table the synonym points to halfway through, causing major issues. I could wrap everything in a BEGIN TRAN and COMMIT TRAN, which should theoretically remove the risk of two processes changing a synonym, however the documentation is scarce on this matter and I cannot get a definitive answer.
Just to note, although this system is concurrent, it is not high traffic, so the performance hits of using synonyms/transactions are not really an issue here.
Thanks for any suggestions.
Your fear is correct. Synonyms are not intended to be used in this way. Wrapping it in a transaction (not sure what isolation level would be required) might solve the issue, but only by making the system single-user.
If I were dealing with this I would probably have gone with dynamic SQL, because I am familiar with it. However, having thought about it, I wonder if schemas could solve your problem.
If you created a schema for each language and then had a table called Products in each schema, your stored proc could reference an unqualified table name and SQL should resolve the reference to the table in the default schema of the current user. You'd then need either to change which account your application authenticates as to determine which schema it uses, or to use EXECUTE AS in a stored proc to decide which schema is the default.
I haven't tested this schema idea; I may not have thought of everything, and I don't know enough about your application to know whether it is actually workable in your case. Let us know if you decide to try it.