I have an instance with almost 1000 jobs.
We have deletions occurring in one of our tables that shouldn't be happening, so I put a trigger in place:
CREATE TRIGGER trg_myTable_AfterDelete ON myTable   -- trigger/table names are placeholders
AFTER DELETE
AS
    INSERT INTO myTableHistory (DeletedDate, DeletedBy)
    SELECT GETDATE(), USER_ID() FROM Deleted;

    EXEC msdb.dbo.sp_send_dbmail @recipients = 'me@me.com', @body = 'trigger exec', @subject = 'trigger exec';
I set this up on Friday night, and I received emails at:
6:39am on Saturday
6:35am on Sunday
6:35am today (Monday)
(meaning a deletion occurred at each of those times).
So I need to do a very specific search (which I don't know how to do).
I need to find a step from any job that started between 5:35am and 6:35am (Monday) and finished between 6:35am and 7:35am (Monday).
Or the same window on Sunday…
This site has the answer, by step and by job, great!
https://www.mssqltips.com/sqlservertip/2850/querying-sql-server-agent-job-history-data/
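For completeness, a query along the lines of that article might look like the sketch below. It assumes the msdb.dbo.agent_datetime helper function that ships in msdb, and the date literal is only an example placeholder for "Monday":

-- Sketch: job steps that started in a given one-hour window; the end time is
-- derived from run_duration (stored as HHMMSS) so the finish window can be checked too.
SELECT  j.name AS job_name,
        h.step_id,
        h.step_name,
        msdb.dbo.agent_datetime(h.run_date, h.run_time) AS step_start,
        DATEADD(SECOND,
                (h.run_duration / 10000) * 3600      -- hours
              + (h.run_duration / 100 % 100) * 60    -- minutes
              + (h.run_duration % 100),              -- seconds
                msdb.dbo.agent_datetime(h.run_date, h.run_time)) AS step_end
FROM    msdb.dbo.sysjobhistory AS h
JOIN    msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE   h.step_id > 0   -- step rows only, not the job outcome row
  AND   msdb.dbo.agent_datetime(h.run_date, h.run_time)
            BETWEEN '20230102 05:35' AND '20230102 06:35'   -- example date
ORDER BY step_start;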
I have configured Database Mail, operators, and such on my SQL managed instance, to receive an email when a job fails.
In the email, we get something like this "The yyy_job failed on step 3".
But my question is... Is there a way to add the error message to the body of the email? I've been searching for this, but can't find a suitable answer.
Thank you in advance
As far as I know there's no way to add further details to the email notifications when a job fails.
The only way is to implement your own notification process.
https://www.sqlshack.com/reporting-and-alerting-on-job-failure-in-sql-server/
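As a starting point for such a process, the failed step's error text can be read from the job history. A sketch (using the example job name from the question; run_status = 0 marks a failed step):

SELECT TOP (1)
        j.name   AS job_name,
        h.step_id,
        h.step_name,
        h.message            -- the step's error text
FROM    msdb.dbo.sysjobhistory AS h
JOIN    msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE   j.name = N'yyy_job'   -- example name from the question
  AND   h.run_status = 0      -- 0 = failed
  AND   h.step_id > 0         -- step rows, not the job outcome row
ORDER BY h.instance_id DESC;

That result could then be fed into the @body or @query of sp_send_dbmail in your own notification step.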
We have a similar setup: a SQL Server Agent job that consists of several steps.
I configured it in such a way that we receive a notification email when the job starts and another email when it finishes. There are two versions of the final email - one for success, another for failure.
At the end of the job there are two final steps called "Email OK" and "Email FAIL". Note how each of the steps has its "On Success" and "On Failure" actions configured.
This is what the "Email OK" and "Email FAIL" steps look like in our case:
In my case I simply use different subjects for the emails, so it is easy to filter them in the email client.
You can write any extra T-SQL code to execute a query against msdb.dbo.sysjobhistory and include the relevant result in the email.
I will not write a complete query here, but I imagine it would look similar to my sketch below. If you need help with that, ask another question.
This is how you can use msdb.dbo.sp_send_dbmail to include the result of some query into the email text:
EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'ABC'
    ,@recipients = 'abc@example.com'
    ,@subject = 'Some subject line'
    ,@body = @VarBody
    ,@body_format = 'TEXT'
    ,@importance = 'NORMAL'
    ,@sensitivity = 'NORMAL'
    ,@query = N'
        -- show latest entry in the log for your job
        SELECT TOP(1)
            message, ...
        FROM
            msdb.dbo.sysjobhistory
        WHERE
            job_id = ''your job ID''
        ORDER BY
            instance_id DESC;
        '
    ,@execute_query_database = 'msdb'
;
Have a look at the documentation for the full list of parameters for sp_send_dbmail. The example above inlines the query result; you can also attach it as a separate file.
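For the attachment variant, roughly the same call works with the attachment parameters added; a minimal sketch (reusing the @VarBody variable from above; the file name and query are illustrative):

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = 'ABC'
    ,@recipients = 'abc@example.com'
    ,@subject = 'Some subject line'
    ,@body = @VarBody
    ,@query = N'SELECT TOP(1) message FROM msdb.dbo.sysjobhistory ORDER BY instance_id DESC;'
    ,@execute_query_database = 'msdb'
    ,@attach_query_result_as_file = 1            -- attach instead of inlining
    ,@query_attachment_filename = 'job_history.txt'
    ,@query_result_separator = ';';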
I want to know what happens when a procedure is executed through a job and, before it finishes, it is time for the job to call the next execution of the procedure. Here is the job I created:
DECLARE
X NUMBER;
BEGIN
SYS.DBMS_JOB.SUBMIT
(
job => x
,what => 'BEGIN PKG_DISTRIBUIDOR_SCHEDULER.PRC_DISTRIBUYE_TRANSACCIONES(5000); END;'
,next_date => to_date(sysdate,'dd/mm/yyyy hh24:mi:ss')
,interval => 'SYSDATE+30/86400'
,no_parse => FALSE
);
DBMS_OUTPUT.PUT_LINE('Job Number is: ' || to_char(x));
COMMIT;
END;
As you can see, the job is executed every 30 seconds. So if my procedure (PRC_DISTRIBUYE_TRANSACCIONES) takes more than 30 seconds, what does the job do in this case?
If you use the old, deprecated jobs, i.e. DBMS_JOB:
The starting time for the next execution is determined when the current job finishes.
If you specify an interval of SYSDATE+30/86400, it does not mean "the job runs every 30 seconds."
It means "the next job starts 30 seconds after the previous job has finished."
If you use Scheduler jobs, i.e. DBMS_SCHEDULER:
Immediately after a job starts, the repeat_interval (e.g. FREQ=SECONDLY;INTERVAL=30) is evaluated to determine the next scheduled execution time of the job. While this might arrive while the job is still running, a new instance of the job does not start until the current one completes. See About Setting the Repeat Interval
So it means: if a job lasts longer than 30 seconds, the next run will start immediately after the previous run has finished.
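For illustration, a DBMS_SCHEDULER job equivalent to the one in the question might be created like this (the job name is arbitrary; a sketch, not a drop-in replacement):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'JOB_DISTRIBUYE_TRANSACCIONES',   -- arbitrary name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN PKG_DISTRIBUIDOR_SCHEDULER.PRC_DISTRIBUYE_TRANSACCIONES(5000); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=30',
    enabled         => TRUE);
END;
/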
Nothing happens!
Only when the anonymous PL/SQL block inside the "what" parameter finishes is the next date calculated, according to the interval parameter.
I have a strange issue with msdb.dbo.sp_send_dbmail within a stored procedure (code simplified):
SET NOCOUNT ON;
SET XACT_ABORT ON;
-- ...
SET @msg = N'...';
SET @filename = N'...';
EXEC msdb.dbo.sp_send_dbmail
      @recipients = 'mailaddr@example.com'
    , @blind_copy_recipients = 'another_mailaddr@example.com'
    , @from_address = 'senders_mail@example.com'
    , @subject = N'...'
    , @body = @msg
    , @query = N'SET NOCOUNT ON; SELECT <something> FROM <a_view>;'
    , @execute_query_database = N'<same database the proc resides in>'
    , @query_result_width = 8000
    , @attach_query_result_as_file = 1
    , @query_attachment_filename = @filename
    , @query_result_header = 1
    , @query_result_separator = ';'
    , @query_result_no_padding = 1
    , @exclude_query_output = 1;
I have a database user which is a member of db_datareader and has EXECUTE permission on the stored procedure. Furthermore, the assigned server login is mapped to msdb and is a member of the DatabaseMailUserRole. There is only a single mail profile, which is public and flagged as the default.
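For reference, the msdb side of that setup can be scripted roughly like this (login and user names are placeholders; on older versions sp_addrolemember replaces ALTER ROLE):

USE msdb;
GO
CREATE USER [app_user] FOR LOGIN [app_login];            -- placeholder names
ALTER ROLE DatabaseMailUserRole ADD MEMBER [app_user];
GO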
Everything works fine from SSMS: connect with the user's credentials and execute the stored procedure. Great!
The first oddity: if I'm logged in as sysadmin and try EXECUTE AS, it doesn't work. Okay, I found something that points out that there are issues with this.
But the main issue is this: I call the procedure from a 3rd party Java application which connects using the jTDS driver (don't know if this is important). The application executes the procedure ... and nothing else happens (no log entries, the task freezes).
In the activity monitor I see the following:
Process:      <...>
Database:     master (??? I've never connected to this db!)
Task State:   Running
Wait Type:    PREEMPTIVE_OS_GETPROCADDRESS
Head Blocker: 1
To make things worse, I cannot kill this process. If I try, the Command column in the activity monitor shows only KILLED/ROLLBACK.
KILL <PROCESS-ID> shows:
spid <...>: Transaction rollback in progress. Estimated rollback completion: 0% Estimated time left: 0 seconds.
I have to restart the whole instance to get rid of the process.
What is happening here?
Finally I found the answer here: blocking from xp_sysmail_format_query waittype of preemptive_os_getprocaddress
It seems that the Java application opens a transaction explicitly (there are 2 calls to stored procedures consecutively). After setting the Auto-Commit option of the database adapter to on, everything works fine.
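For anyone hitting the same symptom, a quick way to confirm that the calling session is still holding a transaction open is to query the transaction DMVs (a sketch; requires VIEW SERVER STATE permission):

-- Sessions with an open user transaction, oldest first; the jTDS connection
-- that called sp_send_dbmail without committing should show up here.
SELECT  s.session_id,
        s.host_name,
        s.program_name,
        at.transaction_begin_time
FROM    sys.dm_tran_session_transactions AS st
JOIN    sys.dm_exec_sessions            AS s  ON s.session_id = st.session_id
JOIN    sys.dm_tran_active_transactions AS at ON at.transaction_id = st.transaction_id
ORDER BY at.transaction_begin_time;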
I have a stored procedure that calls a SQL job, but the job fails if 2 users call the stored procedure at the same time, so here is what I want to do:
check if sql job is running first
only execute the 2nd call if the first call finishes.
I have seen some examples where you can find out if a job is running, but I can't seem to find out how to put the 2nd call on hold and only execute it when the first one completes.
DECLARE @job_name NVARCHAR(MAX) = 'mySQLJob'
EXEC msdb.dbo.sp_start_job @job_name = @job_name
The error I am getting is something like "job is already running".
sp_help_job can give you this information
EXEC msdb..sp_help_job @job_name = 'mySQLJob', @job_aspect = 'JOB'
current_execution_status Values
- 1 Executing
- 2 Waiting For Thread
- 3 Between Retries
- 4 Idle
- 5 Suspended
- 6 Obsolete
- 7 PerformingCompletionActions
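Building on that, one way to hold the 2nd call until the job is idle is to poll before starting it. Capturing sp_help_job's output with INSERT ... EXEC tends to fail with a nested INSERT EXEC error, so this sketch polls msdb.dbo.sysjobactivity instead (job name as in the question; the 10-second delay is arbitrary):

-- Wait while the most recent Agent session shows the job as started but not stopped.
WHILE EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobactivity AS a
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = a.job_id
    WHERE j.name = N'mySQLJob'
      AND a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
      AND a.start_execution_date IS NOT NULL
      AND a.stop_execution_date IS NULL)
BEGIN
    WAITFOR DELAY '00:00:10';   -- poll every 10 seconds
END;

EXEC msdb.dbo.sp_start_job @job_name = N'mySQLJob';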
Essentially I have a job which runs in BIDS and as a stand-alone package, but when it runs under SQL Server Agent it doesn't complete properly (no error messages though).
The job steps are:
1) Delete all rows from table;
2) Use a For Each Loop to fill up the table from Excel spreadsheets;
3) Clean up table.
I've tried this MS page (steps 1 & 2) but didn't see any need to start changing the server-side security.
I also tried this page on SQLServerCentral.com, with no resolution.
How can I get error logging or a fix?
Note I've reposted this from Server Fault as it's one of those questions that's not pure admin or programming.
I have logged in as the proxy account I'm running this under, and the job runs stand-alone but complains that the Excel tables are empty?
Here's how I managed tracking "returned state" from an SSIS package called via a SQL Agent job. If we're lucky, some of this may apply to your system.
Job calls a stored procedure
Procedure builds a DTEXEC call (with a dozen or more parameters)
Procedure calls xp_cmdshell, with the call as a parameter (@Command)
SSIS package runs
"local" SSIS variable is initialized to 1
If an error is raised, SSIS "flow" passes to a step that sets that local variable to 0
In a final step, use Expressions to set SSIS property "ForceExecutionResult" to that local variable (1 = Success, 0 = Failure)
Full form of the SSIS call stores the returned value like so:
EXECUTE @ReturnValue = master.dbo.xp_cmdshell @Command
...and then it gets messy, as you can get a host of values returned from SSIS. I logged actions and activity in a DB table while going through the SSIS steps and consult that to try to work things out (which is where @Description below comes from). Here's the relevant code and comments:
-- Evaluate the DTEXEC return code
SET @Message = case
    when @ReturnValue = 1 and @Description <> 'SSIS Package' then 'SSIS Package execution was stopped or interrupted before it completed'
    when @ReturnValue in (0,1) then '' -- Package success or failure is logged within the package
    when @ReturnValue = 3 then 'DTEXEC exit code 3, package interrupted'
    when @ReturnValue in (4,5,6) then 'DTEXEC exit code ' + cast(@ReturnValue as varchar(10)) + ', package could not be run'
    else 'DTEXEC exit code ' + isnull(cast(@ReturnValue as varchar(10)), '<NULL>') + ' is an unknown and unanticipated value'
end
-- Oddball case: if the cmd.exe process is killed, the return value is 1, but the process will continue anyway
-- and could finish 100% successfully... and @ReturnValue will equal 1. If you can figure out how,
-- write a check for this in here.
That last references the "what if, while SSIS is running, some admin joker kills the CMD session (from, say, taskmanager) because the process is running too long" situation. We've never had it happen--that I know of--but they were uber-paranoid when I was writing this so I had to look into it...
Why not use the logging built into SSIS? We send our logs to a database table and then parse them out to another table in a more user-friendly format, and can see every step of every package that was run. And every error.
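If the SSIS log provider for SQL Server is enabled on the package, the log lands in a table (dbo.sysssislog on 2008+, dbo.sysdtslog90 on 2005) and the errors can be pulled out with a query roughly like this (a sketch, run against the logging database):

SELECT  starttime,
        source,        -- task or package name
        event,         -- OnError, OnTaskFailed, ...
        message
FROM    dbo.sysssislog
WHERE   event IN ('OnError', 'OnTaskFailed')
ORDER BY starttime DESC;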
I did fix this eventually, thanks for the suggestions.
Basically I logged into Windows with the proxy user account I was running under and started to see errors like:
"The For each file enumerator is empty"
I copied the project files across and started testing; it turned out that I'd still left a file path (N:/) in the properties of the For Each Loop box, although I'd changed the connection properties. It's easier once you've got error conditions to work with. I also had to recreate the variable mapping.
No wonder people just recreate the whole package.
Now fixed and working!