How to let an Execute SQL task in an SSIS package decide whether to continue package execution, based on whether a specific SQL Agent job is running.
This is an easy way of letting the package decide whether to proceed based on whether a specific SQL Agent job is running at the time the package is invoked.
To do so:
Create an Execute SQL task at the start of your package.
Give it a query that asks [msdb] for the execution status of the SQL Agent job you are interested in.
Like so:
USE [msdb];

-- Returns the job name if the job is currently running (it started today
-- and has not yet stopped); returns an empty string otherwise.
SELECT ISNULL(MAX([sj].[name]), '') AS [JobRunning]
FROM [msdb].[dbo].[sysjobactivity] AS [sja]
INNER JOIN [msdb].[dbo].[sysjobs] AS [sj]
    ON [sja].[job_id] = [sj].[job_id]
WHERE [sja].[start_execution_date] IS NOT NULL
    AND [sja].[stop_execution_date] IS NULL
    AND CAST([sja].[start_execution_date] AS DATE) = CAST(GETDATE() AS DATE)
    AND ISNULL([sj].[name], '') LIKE '%NameOfSQLAgentJobThatNeedsToNotBeRunning%';
Make sure to set the task's ResultSet property to "Single row" and to create a package-scoped variable to hold the returned value.
Then add the first step of the package logic, whatever that might be, and connect the two.
When you have connected the two, edit the Precedence Constraint: set its evaluation operation to "Expression and Constraint", leave the constraint value at Success, and set the expression to:
@[User::JobRunning] != "NameOfSQLAgentJobThatNeedsToNotBeRunning"
And that's it.
Save, build, deploy :)
I'm trying to improve the current auditing I have on one of my databases. Currently this is done with Access data macros; however, I use VB.NET as the front end.
Most of the updates use a data adapter, with the following call to update the backend:
CurrentDataAdapter.Update
For the purposes of inserting the information into an audit table, I would like to be able to list the SQL commands that take place with this. Using the command text just gives a single SQL command with the parameter placeholders:
CurrentDataAdapter.UpdateCommand.CommandText
Gives
UPDATE [Table] SET F1=@P1 WHERE ID=@P2
However, I'm more after a list like:
UPDATE [Table] SET F1=a WHERE ID=1
UPDATE [Table] SET F2=b WHERE ID=2
UPDATE [Table] SET F3=c WHERE ID=3
Is this possible? (Multiple SQL statements in one command are not supported with an Access backend.)
Many thanks
There is a SQL Server 2012 database that is used by three different applications. In that database there is a table that contains ~500k rows, and for some mysterious reason this table gets emptied every now and then. I think this is possibly caused by:
A delete query without a where clause
A delete query in a loop gone wild
I am trying to locate the cause of this issue by reviewing code, but no joy so far. I need an alternate strategy. I think I can use triggers to detect what/why all rows get deleted, but I am not sure how to go about this. So:
Can I use triggers to check if a query is attempting to delete all rows?
Can I use triggers to log the problematic query and the application that issues that query?
Can I use triggers to log such actions into a text file/database table/email?
Is there a better way?
You can use Extended Events to monitor your system.
A simple event session can monitor for delete and truncate statements.
When these events are raised, the details are written to a file; you can configure the session to collect more data for each delete statement.
Here is the script used; modify the output file path as needed:
CREATE EVENT SESSION [CheckDelete] ON SERVER
-- Capture completed statements containing 'delete' or 'truncate',
-- along with the connection id and host name that issued them.
ADD EVENT sqlserver.sql_statement_completed (
    SET collect_statement = (1)
    ACTION (sqlserver.client_connection_id, sqlserver.client_hostname)
    WHERE ([sqlserver].[like_i_sql_unicode_string]([statement], N'%delete%')
        OR [sqlserver].[like_i_sql_unicode_string]([statement], N'%truncate%')))
-- Write the captured events to a file target.
ADD TARGET package0.event_file (SET filename = N'C:\temp\CheckDelete.xel', max_file_size = (50))
WITH (MAX_MEMORY = 4096 KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_EVENT_SIZE = 0 KB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = OFF)
GO
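The session is created in a stopped state, so it still has to be started before it captures anything. A minimal sketch of starting it and of reading the captured events back from the file target (the path matches the one defined above):

-- Start the session.
ALTER EVENT SESSION [CheckDelete] ON SERVER STATE = START;

-- Later: read the captured events back from the .xel file.
SELECT CAST(event_data AS XML) AS event_xml
FROM sys.fn_xe_file_target_read_file(N'C:\temp\CheckDelete*.xel', NULL, NULL, NULL);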
This is a possibility that may help you. It creates a trigger (here on dbo.Activities) that sends an email when a process DELETEs more than 100 records. I'd modify the message to include some useful data, like:
Process ID (@@SPID)
Host (HOST_NAME())
Name of app (APP_NAME())
And possibly the entire query (a sketch of such a message body follows the trigger below)
CREATE TRIGGER Table1MassDeleteTrigger
ON dbo.Activities
FOR DELETE
AS
    -- Count the rows removed by the statement that fired this trigger.
    DECLARE @DeleteCount INT = (SELECT COUNT(*) FROM deleted)
    IF (@DeleteCount > 100)
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'MailProfileName',
            @recipients = 'admin@yourcompany.com',
            @body = 'Something is deleting all your data!',
            @subject = 'Oops!';
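A minimal sketch of the richer message body suggested above; it would be declared inside the trigger and passed as @body in place of the hard-coded string (the wording is illustrative):

-- Compose a more informative message body inside the trigger.
DECLARE @Body NVARCHAR(MAX) =
      N'Mass delete detected. Rows deleted: ' + CAST(@DeleteCount AS NVARCHAR(10))
    + N'; SPID: ' + CAST(@@SPID AS NVARCHAR(10))
    + N'; Host: ' + ISNULL(HOST_NAME(), N'unknown')
    + N'; App: ' + ISNULL(APP_NAME(), N'unknown');

Capturing the entire offending query from inside a trigger is trickier; DBCC INPUTBUFFER(@@SPID) shows the last batch for the session, but storing its output would need an INSERT ... EXEC workaround.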
I am using SQL Server 2005. For one of our customers, we run a script every time we set up a new database. The script defines what information remains and what information is deleted from the database; we use a master database to set 'typical' default information. I have been asked to add a delete statement to the script, with a 'test' so that the delete statement automatically stops running after 1 January 2011. We don't want any other part of the script affected; just the one statement. Does anyone know how to structure the syntax for this kind of request?
Thank you.
IF GETDATE() < '20110101'
BEGIN
    --Deletez
END

You may need to cast, but I don't think so: CAST('20110101' AS DATETIME)
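Putting it together, a minimal sketch with the cast applied (dbo.SomeTable and its predicate are illustrative):

-- The delete only runs before 1 January 2011; after that date the IF is
-- false and the rest of the script is unaffected.
IF GETDATE() < CAST('20110101' AS DATETIME)
BEGIN
    DELETE FROM dbo.SomeTable
    WHERE SomeColumn = 'value to purge';
END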
I'm trying to find out whether a specific MySQL user is still in use in our system (and what queries it is executing).
So I thought of writing a trigger that would kick in any time user X executes a query, and it would log the query in a log table.
How can I do that?
I know how to write a trigger for a specific table, but not for a specific user (on any table).
Thanks
You could branch your trigger function on USER().
The easiest approach would be to have the trigger always fire, but only log if the user is X. For example:
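A minimal sketch, assuming a watched table some_table and a log table audit_log that you create yourself (all object names here are illustrative):

-- The trigger always fires, but the INSERT only happens for user X.
-- USER() returns 'user@host', so match it with LIKE.
CREATE TRIGGER log_user_x_deletes
AFTER DELETE ON some_table
FOR EACH ROW
INSERT INTO audit_log (logged_at, logged_user, detail)
SELECT NOW(), USER(), CONCAT('DELETE on some_table, id=', OLD.id)
FROM DUAL
WHERE USER() LIKE 'x@%';

The same pattern works for INSERT and UPDATE triggers. Note that a trigger only sees changes on its own table, not arbitrary queries, so it cannot cover the "any table" requirement by itself.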
I would look at these options:
A) Write an audit plugin, which filters events based on the user name.
For simplicity, the user name can be hard coded in the plugin itself,
or for elegance, it can be configured by a plugin variable, in case this problem happens again.
See
http://dev.mysql.com/doc/refman/5.5/en/writing-audit-plugins.html
B) Investigate the --init-connect server option.
For example, call a stored procedure, check the value of user() / current_user(),
and write a trace to a log (insert into a table) if a connection from the user was seen.
See
http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_init_connect
This is probably the closest thing to a connect trigger.
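A hedged sketch of option B; the audit schema and table are things you would create yourself, and the names are illustrative. Two caveats: init_connect is not executed for users with the SUPER privilege, and the connecting user needs INSERT privilege on the log table (a failing init_connect statement kills the connection).

-- A table to record sightings of the suspect user.
CREATE DATABASE IF NOT EXISTS audit;
CREATE TABLE audit.connection_log (
    connected_at DATETIME,
    who          VARCHAR(128)
);

-- Runs on every new (non-SUPER) connection; only logs the watched user.
SET GLOBAL init_connect =
    "INSERT INTO audit.connection_log (connected_at, who)
     SELECT NOW(), CURRENT_USER() FROM DUAL
     WHERE CURRENT_USER() LIKE 'old_app@%'";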
C) Use the performance schema instrumentation.
This assumes 5.6.
Use table performance_schema.setup_instruments to enable only the statement instrumentation.
Use table performance_schema.setup_actors to instrument only sessions for this user (a setup sketch follows the reference link below).
Then, after the system has been running for a while, look at activity for this user in the following tables:
table performance_schema.users will tell you if there was any activity at all
table performance_schema.events_statements_history_long will show the last queries executed
table performance_schema.events_statements_summary_by_user_by_event_name will show aggregate statistics for each statement type (SELECT, INSERT, ...) executed by this user.
Assuming you have a user defined as 'old_app'@'%', a likely follow-up question will be to find out where (which host(s)) this old application is still connecting from.
performance_schema.accounts will show just that: if traffic for this user is seen, it will show each username @ hostname source of traffic.
There are statistics aggregated by account also; look for the '%_by_account%' tables.
See
http://dev.mysql.com/doc/refman/5.6/en/performance-schema.html
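A minimal sketch of the setup steps for option C (the user name old_app is taken from the example above; the host patterns are illustrative):

-- Enable only the statement instruments.
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'statement/%';

-- Make sure the long statement history consumer is collecting.
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME = 'events_statements_history_long';

-- Replace the default match-all row so only old_app sessions are instrumented.
TRUNCATE TABLE performance_schema.setup_actors;
INSERT INTO performance_schema.setup_actors (HOST, USER, ROLE)
VALUES ('%', 'old_app', '%');

-- Later, inspect the collected data:
SELECT * FROM performance_schema.users WHERE USER = 'old_app';
SELECT SQL_TEXT FROM performance_schema.events_statements_history_long;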
There are also other ways you could approach this problem, for example using MySQL Proxy.
In the proxy you could do interesting things, from logging to transforming queries and pattern matching (the MySQL Proxy documentation covers how to test and develop the scripts):
-- set the username to watch
local log_user = 'username'

function read_query( packet )
    -- only handle COM_QUERY packets issued by the watched user
    if proxy.connection.client.username == log_user and string.byte(packet) == proxy.COM_QUERY then
        local log_file = '/var/log/mysql-proxy/mysql-' .. log_user .. '.log'
        local fh = io.open(log_file, "a+")
        -- strip the leading command byte to get the query text
        local query = string.sub(packet, 2)
        fh:write( string.format("%s %6d -- %s \n",
            os.date('%Y-%m-%d %H:%M:%S'),
            proxy.connection.server["thread_id"],
            query) )
        fh:flush()
        fh:close() -- avoid leaking a file handle per query
    end
end
The above has been tested and does what it is supposed to, although it is a simple variant: it does not log success or failure, and it only logs proxy.COM_QUERY packets. See the list of proxy constants for what is skipped, and adjust to your needs.
Yeah, fire away, but use whatever system you have to identify the user (cookies, session) and log only if the specific user (user ID, class) matches your credentials.