Make trigger behavior depend on query - sql

My goal is to make trigger behavior depend on some client identifier.
For example, I execute a query:
begin;
<specify-some-client-identifier>
insert into some_table
values('value')
commit;
And I have trigger function executing before insert:
NEW.some_field := some_func(<some-client-identifier-specified-above>)
So, how do I <specify-some-client-identifier> and get <some-client-identifier-specified-above>?

You basically need some kind of variables in SQL. There are multiple ways to do it:
using GUCs
using table with variables
using temp table with variables
using %_SHARED in plperl functions
All of these are possible. If you're interested in implementation details and/or a comparison, check this blog post (in case it wasn't obvious from the domain, it's my blog).
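The GUC approach, for instance, can be sketched like this (assuming PostgreSQL 9.2 or later, where arbitrary two-part parameter names work without custom_variable_classes; myapp.client_id, some_field, and some_func are placeholder names matching the question):

```sql
-- Trigger function that reads the custom parameter set by the client.
create or replace function fill_client_field() returns trigger as $$
begin
    -- current_setting() raises an error if the parameter was never set
    NEW.some_field := some_func(current_setting('myapp.client_id'));
    return NEW;
end;
$$ language plpgsql;

create trigger some_table_before_insert
    before insert on some_table
    for each row execute procedure fill_client_field();

-- Client side: set the identifier inside the transaction.
-- The third argument 'true' makes the setting local to this transaction.
begin;
select set_config('myapp.client_id', 'client-42', true);
insert into some_table values ('value');
commit;
```

Because the setting is transaction-local, concurrent sessions cannot see or clobber each other's identifiers.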

You will find this prior answer informative. There I explain how to pass an application-defined username through so it is visible to PostgreSQL functions and triggers.
You can also use the application_name GUC, which can be set by most client drivers or explicitly by the application. Depending on your purposes this may be sufficient.
Finally, you can examine pg_stat_activity to get info about the current client by looking it up by pg_backend_pid(). This will give you a client IP and port if TCP/IP is being used.
Of course, there's also current_user if you log in as particular users at the database level.
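The pg_stat_activity lookup mentioned above might look like this (column names as of PostgreSQL 9.2; on older versions the pid column is called procpid):

```sql
-- Find the current session's own entry in pg_stat_activity.
-- client_addr/client_port are null for Unix-socket connections.
select usename, client_addr, client_port, application_name
  from pg_stat_activity
 where pid = pg_backend_pid();
```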
As usual, #depesz points out useful options I hadn't thought of, too - using shared context within PL/Perl, in particular. You can do the same thing in PL/Python. In both cases you'll pay the startup overhead of a full procedural language interpreter and the function call costs of accessing it, so it probably only makes sense to do this if you're already using PL/Perl or PL/Python.

Related

Autonomous transaction analogue in ABAP

I'm trying to commit a DML update to a database table while the main program is still running, without committing the main transaction, since there may be errors later and it might need to be rolled back; the internal (saved) updates should stay, though.
Like in the Oracle autonomous transactions.
CALL FUNCTION ... STARTING NEW TASK ... or SUBMIT ... AND RETURN don't work, as they affect the main transaction.
Is there a way to start a nested database LUW and commit it without interrupting the main LUW?
I am not aware of a way to do this with Open SQL. But when you are using the ADBC framework, each instance of the class CL_SQL_CONNECTION operates within a separate database LUW.
I would generally not recommend using ADBC unless you have to, because:
You are now writing SQL statements as strings, which means you don't have compile-time syntax checking.
You can't put variables into the SQL code anymore. (OK, you can, but you shouldn't, because you are probably creating SQL injection vulnerabilities that way.) You need to pass all the variables using statement->set_param.
You are now writing Native SQL, which means you might inadvertently write SQL that can't be ported to other database backends.
You can create a separate function module for saving your changes and call it with STARTING NEW TASK, like below.
CALL FUNCTION 'ZFUNCTION' STARTING NEW TASK 'SAVECHANGES'
  EXPORTING
    param = value.

How can I allow students to inject a DROP TABLE into this page? [duplicate]

I am learning MySQL now, and one of the subjects it touches is security when dealing with user input; one concern is the injection attack. I tried to reproduce the attack the book demonstrated, appending a statement like $query = "select * from temp_table; drop table temp_table", which I ran with mysqli_query($connection, $query). Nothing happened. I changed to mysqli_multi_query() and found it executed both statements. Finally I found that mysqli_query() only runs one statement each time.
My question is: if I use mysqli_query(), theoretically speaking, does the system not need to worry about injection of additional statements? Or is there still some other way users can run an additional statement even when the server uses mysqli_query()?
It's true that the basic mysqli_query() will only run one statement. So you don't have to worry that an SQL injection attack will trick your application into running multiple statements.
But one statement can include a subquery, or a SELECT... UNION SELECT....
One statement can read data it isn't intended to read. Or cause a huge sort that is intended to overwhelm your server as a denial-of-service attack.
Or it can simply be an error, not a malicious attack at all.
SELECT * FROM Users WHERE last_name = 'O'Reilly'; -- whoops!
The solutions to SQL injection are pretty simple, and easy to follow. I don't understand why so many developers look for excuses not to write safe code.
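For illustration, the same query written with a placeholder (shown here as a MySQL server-side prepared statement; from PHP, mysqli_prepare() with bind_param works the same way) handles the O'Reilly value without any quoting trouble:

```sql
-- The query text contains no user data, only a ? placeholder.
PREPARE stmt FROM 'SELECT * FROM Users WHERE last_name = ?';

-- The value is passed as data, never parsed as SQL text,
-- so the embedded quote cannot break out of the string.
SET @name = 'O''Reilly';
EXECUTE stmt USING @name;

DEALLOCATE PREPARE stmt;
```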

Case insensitive search using ALTER SESSION on ODP.NET/EF connections?

I've got some EF/LINQ statements that need to be case-insensitive text searches, but our Oracle database is case-sensitive. How can I execute the necessary ALTER SESSION statement at the connection/command level so that it affects the subsequent same-context calls?
Command I think I need to run (OTN Thread)
ALTER SESSION SET NLS_SORT=BINARY_CI
I'm aware of both Database.ExecuteSqlCommand and Database.Connection.CreateCommand as methods, but I can't figure out the 'when'. If I try to do this on the context after creation but before the LINQ, I have to manually open and close the connection, and then it seems to be in a different transaction than the LINQ and doesn't seem to apply.
Technically this is not a solution to your question of how to inject ALTER SESSION SET NLS_SORT=BINARY_CI into a query, but it may help with your case-insensitive search: just use .ToLower().
The first option would be to ask the DBA to add a logon trigger to the DB account you are using. The disadvantage there is twofold: one, it will be set for every command; and two, the DBA will laugh at you for not simply doing the Oracle de facto "upper" on everything.
These guys seemed to pull it off using ExecuteStoreCommand on the context. I'm not a fan of EF, so I can't help much here, but I'd guess you'd need to execute your LINQ query inside that same context:
http://blogs.planetsoftware.com.au/paul/archive/2012/07/31/ef4-part-10-database-agnostic-linq-to-entities-part-2.aspx
You may be able to use one of the "executing" methods in the command interception feature in EF:
http://msdn.microsoft.com/en-us/data/dn469464#BuildingBlocks

SQL Server Synonyms and Concurrency Safety With Dynamic Table Names

I am working with some commercial schemas, which have a set of similar tables that differ only in language suffix, e.g.:
Products_en
Products_fr
Products_de
I also have several stored procedures that I am using to access these to perform some administrative functions, and I have opted to use synonyms, since there is a lot of code and writing everything as dynamic SQL is just painful:
declare @lang varchar(50) = 'en'
if object_id('dbo.ProductsTable', 'SN') is not null drop synonym dbo.ProductsTable
exec('create synonym dbo.ProductsTable for dbo.Products_' + @lang)
/* Call the synonym table */
select top 10 * from dbo.ProductsTable
update ProductsTable set a = 'b'
My question is: how does SQL Server treat synonyms when it comes to concurrent access? My fear is that one procedure could start, then a second could come along and change the table the synonym points to halfway through, causing major issues. I could wrap everything in a BEGIN TRAN and COMMIT TRAN, which should theoretically remove the risk of two processes changing a synonym, but the documentation is scarce on this matter and I cannot get a definitive answer.
Just to note, although this system is concurrent, it is not high traffic, so the performance hits of using synonyms/transactions are not really an issue here.
Thanks for any suggestions.
Your fear is correct. Synonyms are not intended to be used this way. Wrapping it in a transaction (not sure what isolation level would be required) might solve the issue, but only by making the system effectively single-user.
If I were dealing with this, I would probably have gone with dynamic SQL, because I am familiar with it. However, having thought about it, I wonder if schemas could solve your problem.
If you created a schema for each language and had a table called Products in each schema, your stored proc could then reference an unqualified table name, and SQL would resolve the reference to the table in the default schema of the current user. You'd then need to either change which account your application authenticates as, or use EXECUTE AS in the stored proc, to decide which schema is the default.
I haven't tested this schema idea, I may not have thought of everything and I don't know enough about your application to know if it is actually workable in your case. Let us know if you decide to try it.
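A minimal sketch of that schema idea (all names are illustrative, and untested per the caveat above):

```sql
-- One schema per language, each holding its own Products table
-- (columns are placeholders standing in for the real Products_* layout).
create schema lang_en;
create schema lang_fr;
create table lang_en.Products (Id int, Name nvarchar(100));
create table lang_fr.Products (Id int, Name nvarchar(100));

-- A loginless user whose default schema decides what "Products" means.
create user fr_user without login with default_schema = lang_fr;
grant select, update on schema::lang_fr to fr_user;

-- Inside the proc: impersonate to pick the schema, then use
-- the unqualified name.
execute as user = 'fr_user';
select top 10 * from Products;  -- resolves to lang_fr.Products
revert;
```

Unlike the shared synonym, name resolution here is per-session, so concurrent callers can't step on each other.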

MySQL Trigger only for a certain mysql user

I'm trying to find out if a specific MySQL User is still in use in our system (and what queries it is executing).
So I thought of writing a trigger that would kick in anytime user X executes a query, and it would log the query in a log table.
How can I do that?
I know how to write a query for a specific table, but not for a specific user (any table).
Thanks
You could branch your trigger function on USER().
The easiest approach would be to have the trigger always fire, but only log if the user is X.
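Such a trigger could be sketched like this (the audit table, the watched account 'x', and some_table are placeholders; note that a trigger only fires on writes to the table it is attached to, so this cannot see SELECTs):

```sql
create table user_audit (
    ts      timestamp    default current_timestamp,
    account varchar(255),
    action  varchar(50)
);

delimiter //
create trigger log_user_x before insert on some_table
for each row
begin
    -- USER() returns the authenticated 'user@host' of the session
    if user() like 'x@%' then
        insert into user_audit (account, action)
        values (user(), 'INSERT on some_table');
    end if;
end//
delimiter ;
```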
I would look at these options:
A) Write an audit plugin, which filters events based on the user name.
For simplicity, the user name can be hard coded in the plugin itself,
or for elegance, it can be configured by a plugin variable, in case this problem happens again.
See
http://dev.mysql.com/doc/refman/5.5/en/writing-audit-plugins.html
B) Investigate the --init-connect server option.
For example, call a stored procedure, check the value of user() / current_user(),
and write a trace to a log (insert into a table) if a connection from the user was seen.
See
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_init_connect
This is probably the closest thing to a connect trigger.
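An init_connect sketch (the log table and the 'old_app' account are placeholders; keep in mind init_connect is not executed for users with the SUPER privilege, and an error in it makes non-SUPER connections fail):

```sql
create table test.connection_log (
    ts      timestamp default current_timestamp,
    account varchar(255)
);

-- Run at every (non-SUPER) connect; logs a row only for the old account.
set global init_connect =
    "insert into test.connection_log (account)
     select current_user() from dual
     where current_user() like 'old_app@%'";
```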
C) Use the performance schema instrumentation.
This assumes 5.6.
Use table performance_schema.setup_instruments to enable only the statement instrumentation.
Use table performance_schema.setup_actors to only instrument sessions for this user.
Then, after the system has been running for a while, look at activity for this user in the following tables:
table performance_schema.users will tell if there was some activity at all
table performance_schema.events_statements_history_long will show the last queries executed
table performance_schema.events_statements_summary_by_user will show aggregate statistics about each statement type (SELECT, INSERT, ...) executed by this user.
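The setup for option C could be sketched as follows (assuming MySQL 5.6, with 'old_app' as the example user):

```sql
-- Enable only the statement instrumentation.
update performance_schema.setup_instruments
   set enabled = 'YES', timed = 'YES'
 where name like 'statement/%';
update performance_schema.setup_instruments
   set enabled = 'NO'
 where name not like 'statement/%';

-- Instrument only sessions for this user:
-- replace the default catch-all row with a targeted one.
delete from performance_schema.setup_actors;
insert into performance_schema.setup_actors (host, user, role)
values ('%', 'old_app', '%');

-- Make sure the history consumer is collecting.
update performance_schema.setup_consumers
   set enabled = 'YES'
 where name = 'events_statements_history_long';

-- Later: what did the user run?
select event_name, sql_text
  from performance_schema.events_statements_history_long;
```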
Assuming you have a user defined as 'old_app'@'%', a likely follow-up question will be to find out where (which host(s)) this old application is still connecting from.
performance_schema.accounts will show just that: if traffic for this user is seen, it will show each username @ hostname source of traffic.
There are statistics aggregated by account also, look for '%_by_account%' tables.
See
http://dev.mysql.com/doc/refman/5.6/en/performance-schema.html
There are also other ways you could approach this problem, for example using MySQL Proxy.
In the proxy you can do interesting things, from logging to transforming queries and pattern matching (check this link also for details on how to test/develop the scripts):
-- set the username to watch for
local log_user = 'username'

function read_query(packet)
  if proxy.connection.client.username == log_user and string.byte(packet) == proxy.COM_QUERY then
    local log_file = '/var/log/mysql-proxy/mysql-' .. log_user .. '.log'
    local fh = io.open(log_file, "a+")
    -- skip the leading command byte; the rest of the packet is the query text
    local query = string.sub(packet, 2)
    fh:write(string.format("%s %6d -- %s\n",
      os.date('%Y-%m-%d %H:%M:%S'),
      proxy.connection.server["thread_id"],
      query))
    fh:flush()
    fh:close()  -- close the handle so descriptors aren't leaked per query
  end
end
The above has been tested and does what it is supposed to (although this is a simple variant: it does not log success or failure, and it only logs proxy.COM_QUERY; see the list of all constants for what is skipped, and adjust to your needs).
Yeah, fire away, but use whatever mechanism you have (cookies, session) to identify the user, and log only if that specific user (user ID, class) matches the credentials you're interested in.