Email alert when field meets certain condition - sql-server-2012

Using SQL Server Management Studio 2012
I would like an email sent to me every time a certain field in my table is greater than zero. This is not something that happens often but I would like to be alerted when it does.
I only want to be emailed about the new insert, not any of the previous ones. In the example below, taking Column3 as the field of interest, Harry is the newest insert and his Column3 is greater than zero. This is when I would like to be alerted. As you can see, Jack's is greater than zero too, but let's assume this is an older entry, so I don't want it to appear in the email.
Name Department Column3 Column4 Column5
Harry HR 2 ABC DEF
James Sport 0 ABC DEF
Jack Finance 1 ABC DEF
I'm relatively new to the email functionality in SQL Server but understand the basics below:
use Database
go
begin
    Execute msdb.dbo.sp_send_dbmail
        @recipients = 'emailaddress',
        @query = 'select Name, department, Column3 from mytable
                  where Column3 > 0'
end

A trigger that checks the values in the INSERTED table and executes sp_send_dbmail when a value exceeds the specified number can be used for this. An example trigger is below. Using sp_send_dbmail requires properly configuring Database Mail if you haven't already; more details on Database Mail and setting it up can be found here. A cursor is used to send an email for each new row that was added/updated. Since you only want data from new or updated rows, this is obtained from the INSERTED table instead of a query, then used to build the @body parameter of sp_send_dbmail.
Note that CONCAT is used as a safeguard against NULLs: if multiple strings are concatenated with the + operator and one of them is NULL, the entire concatenated string will be NULL, whereas with CONCAT the non-NULL strings are still preserved. A table variable is used to initially capture the values from the INSERTED table, which are later fed into the cursor. The INSERTED table captures values from both INSERT and UPDATE operations. From your question, it seems you still wanted the new value added to the table, so an AFTER trigger was used.
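As a quick illustration of that NULL behaviour (a standalone snippet, separate from the trigger below):
SELECT 'Name ' + NULL + ' added a value'         -- + makes the whole result NULL
SELECT CONCAT('Name ', NULL, ' added a value')   -- CONCAT keeps the non-NULL parts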
CREATE TRIGGER dbo.TestTrigger
ON YourSchema.YourTable -- the trigger must be created in the same database as the table
AFTER INSERT, UPDATE
AS
BEGIN
    DECLARE @NewColumn3 INT;
    DECLARE @NewName VARCHAR(25);
    DECLARE @NewDepartment VARCHAR(25);
    DECLARE @Tab TABLE (COLUMN3 INT, NAME VARCHAR(25), DEPARTMENT VARCHAR(25));
    DECLARE @Message VARCHAR(2000);

    INSERT INTO @Tab
    SELECT COLUMN3, NAME, DEPARTMENT
    FROM INSERTED;

    IF EXISTS (SELECT COLUMN3 FROM @Tab WHERE COLUMN3 > 0)
    BEGIN
        --make sure to only add data from rows where COLUMN3 > 0
        DECLARE EmailCursor CURSOR FOR
        SELECT COLUMN3, NAME, DEPARTMENT FROM @Tab WHERE COLUMN3 > 0;

        OPEN EmailCursor;
        FETCH NEXT FROM EmailCursor
        INTO @NewColumn3, @NewName, @NewDepartment;

        --while there are still rows
        WHILE (@@FETCH_STATUS = 0)
        BEGIN
            --use CONCAT to avoid a NULL value voiding the message
            SET @Message = CONCAT('Name ', @NewName, ' from department ', @NewDepartment, ' added a value of ', @NewColumn3);

            EXEC msdb.dbo.sp_send_dbmail
                @profile_name = 'YourProfileName',
                @recipients = 'EmailAddress@domain.com',
                @body = @Message,
                @subject = 'Email Subject';

            FETCH NEXT FROM EmailCursor
            INTO @NewColumn3, @NewName, @NewDepartment;
        END

        CLOSE EmailCursor;
        DEALLOCATE EmailCursor;
    END
END
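If Database Mail isn't configured yet, the server-side setup looks roughly like this (a sketch; the account name, profile name, addresses, and SMTP server are placeholders, not values from the question):
-- enable the Database Mail feature
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

-- create a mail account and profile, then link them (placeholder values)
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name    = 'YourMailAccount',
    @email_address   = 'alerts@yourdomain.com',
    @mailserver_name = 'smtp.yourdomain.com';

EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = 'YourProfileName';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name    = 'YourProfileName',
    @account_name    = 'YourMailAccount',
    @sequence_number = 1;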

I am looking for a quite similar answer using VBA instead of SQL. It could be a good solution; however, I still need to understand the condition bit.
Trigger email alert on excel based on date review
Would you consider switching to VBA?

Related

Read columns in SQL tables which are the result of another query

I need to check that all primary key columns do have all values in uppercase.
So, I have a first request which returns me the table-field pairs which are part of PK.
SELECT table_name, field_name FROM dico WHERE pkey > 0;
(dico is some table which gives that information. No need to look it up in the SQL Schema…)
And, for all those pairs tx/fx listed from that first query above, I need to look for values which would not be uppercased.
SELECT DISTINCT 't1', 'f1', f1 FROM t1 WHERE f1 <> UPPER(f1) UNION ALL
SELECT DISTINCT 't2', 'f2', f2 FROM t2 WHERE f2 <> UPPER(f2) UNION ALL
...
SELECT DISTINCT 'tn', 'fn', fn FROM tn WHERE fn <> UPPER(fn);
(I'm putting the table name and field name as "strings" in the output, so that I know from where the wrong values are coming.)
As you see, I do have the code for both requests, but I do not know how to combine them (if possible, in a generic way that would work for both SQL Server and Oracle).
Can you give me some idea on how to finish that?
One way that I could think of is to use a statement block that contains a loop.
Unfortunately, the structure of a statement block will be different for every database system (the one for SQL Server will be different from the one for Oracle).
I wrote an example using SQL Server further below (fiddle link is at: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=85cd786adf32247da1aa73c0341d1b72).
Just in case the dynamic query gets very long (possibly longer than the limit of varchar, which is 8000 characters), SQL Server has varchar(max), which can hold up to 2 GB (https://learn.microsoft.com/en-us/sql/t-sql/data-types/char-and-varchar-transact-sql?view=sql-server-ver15). This can be used for @DynamicQuery, replacing VARCHAR(3000) in the example below (a modified/alternative fiddle link, just to show that the data type really exists and can be used, is at: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=7fbb5d130aad35e682d8ce7ffaf09ede).
Please note that the example is not using your exact queries because I do not have access to the exact same data as the one you have (e.g. I cannot test the example using dico table because I do not have access to that table).
However, I made the example so that it uses a similar basic structure of logic from your queries, so that later on it can be customised to suit your exact need/scenario (e.g. by changing the table names and field names to match the ones that you use, as well as by adding the WHERE clause as you need).
In the example, your 1st query will be run immediately and the result will be handled by a cursor.
After that, a loop (using WHILE statement/structure) will loop through the cursor for the result of the 1st query to dynamically build the 2nd query (inserting the table names and the field names from the 1st query).
Note that at this point, the 2nd query is still being built, not being run yet.
Eventually, after the loop has finished, the resulting/compiled 2nd query will be run/executed (using the EXEC command).
-- START of test data creation.
create table TableA
( message varchar(200)
);
insert into TableA([message]) values ('abc');
insert into TableA([message]) values ('def');
create table TableB
( message varchar(200)
);
insert into TableB([message]) values ('ghi');
insert into TableB([message]) values ('jkl');
-- END of test data creation.
-- START of dynamic SQL
declare @TableAndFieldDetails CURSOR
declare @TableName VARCHAR(50)
declare @FieldName VARCHAR(50)
declare @DynamicQuery VARCHAR(3000) = ''
begin
    SET @TableAndFieldDetails = CURSOR FOR
    -- START of the 1st query
    SELECT [INFORMATION_SCHEMA].COLUMNS.TABLE_NAME,
           [INFORMATION_SCHEMA].COLUMNS.COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE INFORMATION_SCHEMA.COLUMNS.COLUMN_NAME LIKE '%message%'
    -- END of the 1st query

    -- START of dynamically building the 2nd query
    OPEN @TableAndFieldDetails
    FETCH NEXT FROM @TableAndFieldDetails INTO @TableName, @FieldName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @DynamicQuery <> ''
        BEGIN
            SET @DynamicQuery += ' UNION ALL '
        END
        -- The one line right below is each individual part/element of the 2nd query
        SET @DynamicQuery += 'SELECT ''' + @TableName + ''', ''' + @FieldName + ''', ' + @FieldName + ' FROM ' + @TableName
        FETCH NEXT FROM @TableAndFieldDetails INTO @TableName, @FieldName
    END
    CLOSE @TableAndFieldDetails
    DEALLOCATE @TableAndFieldDetails
    -- END of dynamically building the 2nd query

    EXEC (@DynamicQuery)
end
-- END of dynamic SQL

SQL cursor performance/Alternative?

I currently have two tables, Table1 and Table2, with the structure below. As you can see, Table1 contains multiple rows per FK value; the FK column is a foreign key to Table2's ID column, and Table2 has only one row per ID, holding the most recent value from Table1 (ordered by Table1's ID column).
Table 1
ID FK END_DTTM
1 1 01/01/2000
2 1 01/01/2005
3 1 01/01/2012
4 1 01/01/2100
5 2 01/01/1999
6 2 01/01/2100
7 3 01/01/2100
Table 2
ID END_DTTM
1 01/01/2100
2 01/01/2100
3 01/01/2100
The business requirement is to track every update in Table2 so that point-in-time data can be retrieved. To achieve this I am using SQL Server 2016 and temporal tables, where every update to Table2 automatically creates a version in the history table.
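For reference, a system-versioned (temporal) table of that shape could be declared like this (a sketch, with an assumed history table name):
CREATE TABLE dbo.Table2
(
    ID INT NOT NULL PRIMARY KEY,
    END_DTTM DATETIME2 NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Table2_History));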
To achieve the insert/update process I am currently using a cursor, which is terribly slow: it processes around 71,000 rows in 30 minutes, and the table has around 60 million rows! The cursor query is as follows:
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        DECLARE @ID AS int;
        DECLARE @FK AS int;
        DECLARE @END_DTTM AS datetime2;
        DECLARE @SelectCursor AS CURSOR;

        SET @SelectCursor = CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR
        SELECT [ID], [FK], [END_DTTM] FROM TABLE1 ORDER BY FK, ID;

        OPEN @SelectCursor;
        FETCH NEXT FROM @SelectCursor INTO @ID, @FK, @END_DTTM;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            UPDATE TABLE2
            SET END_DTTM = @END_DTTM
            WHERE ID = @FK;

            IF @@ROWCOUNT = 0
            BEGIN
                INSERT Table2
                (
                    ID, END_DTTM
                )
                VALUES (
                    @FK, @END_DTTM
                );
            END

            FETCH NEXT FROM @SelectCursor INTO @ID, @FK, @END_DTTM;
        END
        CLOSE @SelectCursor;
        DEALLOCATE @SelectCursor;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;

        DECLARE @ErrorNumber INT = ERROR_NUMBER();
        DECLARE @ErrorLine INT = ERROR_LINE();
        DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE();
        DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
        DECLARE @ErrorState INT = ERROR_STATE();

        PRINT 'Actual error number: ' + CAST(@ErrorNumber AS VARCHAR(10));
        PRINT 'Actual line number: ' + CAST(@ErrorLine AS VARCHAR(10));
        PRINT 'Actual message: ' + CAST(@ErrorMessage AS VARCHAR(4000));
        PRINT 'Actual severity: ' + CAST(@ErrorSeverity AS VARCHAR(10));
        PRINT 'Actual state: ' + CAST(@ErrorState AS VARCHAR(10));

        INSERT INTO ERROR_LOG
        (
            SOURCE_PRIMARY_KEY,
            ERROR_CODE,
            ERROR_COLUMN,
            ERROR_DESCRIPTION
        )
        VALUES
        (
            NULL,
            @ErrorNumber,
            @ErrorState,
            @ErrorMessage
        );

        THROW;
        -- RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);
    END CATCH
END;
I tried using a CTE but I didn't see any performance gain with it; in fact it was a tad slower than the cursor itself.
Is there a better way to achieve the above using set-based operations, while still processing every row from Table1 and updating Table2, so that the temporal table picks up the updates and tracks the changes?
I am not sure how you are running the update SQL, but I'll describe the process I use to track changes in Oracle.
I set up a trigger on the table that I want to audit. I create another table with the same columns, one set prefixed with OLD_ and the other prefixed with NEW_. In an Oracle trigger, you can reference the new row and the old row, so I run an insert into the audit table with the old and new values, the DML action type, and the timestamp. Additionally, I add the database user and, if possible, the application user that requested the change.
In the current RAC cluster, and on our ancient 9i AIX server, I could never notice any performance degradation.
Additionally, if the transaction is rolled back, it won't insert the audit record, as the trigger runs inside the transaction.
Don't let people tell you NOT to use SQL triggers. While you don't want to do "crazy" things with triggers (like running queries or calling web services), this is the perfect application for a trigger. (I normally use them to add a last-updated date to a row; I don't trust the application layer for accurate information.)
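In SQL Server the same pattern can be sketched with the inserted and deleted pseudo-tables inside a trigger. This is only a sketch with hypothetical table and column names (Table2/Table2_Audit, matching the question's shape), not a drop-in implementation:
CREATE TABLE dbo.Table2_Audit
(
    ID INT,
    OLD_END_DTTM DATETIME2,
    NEW_END_DTTM DATETIME2,
    AUDIT_ACTION VARCHAR(10),
    AUDIT_TS DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    AUDIT_USER SYSNAME NOT NULL DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER dbo.trg_Table2_Audit
ON dbo.Table2
AFTER UPDATE
AS
BEGIN
    -- one audit row per updated row: old and new value side by side
    INSERT INTO dbo.Table2_Audit (ID, OLD_END_DTTM, NEW_END_DTTM, AUDIT_ACTION)
    SELECT d.ID, d.END_DTTM, i.END_DTTM, 'UPDATE'
    FROM deleted d
    JOIN inserted i ON i.ID = d.ID;
END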
Oh, is that all you want?
insert into table2(ID, END_DTTM)
select fk, max(END_DTTM)
from table1 t1
group by fk;
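If the aim is to replace the cursor entirely with set-based statements, a sketch (using the Table1/Table2 names from the question, with the latest Table1 row per FK winning) might look like the following. Note that a single set-based UPDATE records only one version per row in the temporal history, unlike the row-by-row cursor:
-- update existing Table2 rows to the most recent Table1 value per FK
;WITH Latest AS
(
    SELECT FK, END_DTTM,
           ROW_NUMBER() OVER (PARTITION BY FK ORDER BY ID DESC) AS rn
    FROM Table1
)
UPDATE t2
SET t2.END_DTTM = l.END_DTTM
FROM Table2 t2
JOIN Latest l ON l.FK = t2.ID AND l.rn = 1;

-- insert FK values that do not exist in Table2 yet
INSERT INTO Table2 (ID, END_DTTM)
SELECT t1.FK, t1.END_DTTM
FROM Table1 t1
WHERE t1.ID = (SELECT MAX(x.ID) FROM Table1 x WHERE x.FK = t1.FK)
  AND NOT EXISTS (SELECT 1 FROM Table2 t2 WHERE t2.ID = t1.FK);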

removing duplicates from table without using temporary table

I've a table(TableA) with contents like this:
Col1
-----
A
B
B
B
C
C
D
I want to remove just the duplicate values without using a temporary table in Microsoft SQL Server. Can anyone help me?
the final table should look like this:
Col1
-----
A
B
C
D
thanks :)
WITH TableWithKey AS (
SELECT ROW_NUMBER() OVER (ORDER BY Col1) As id, Col1 As val
FROM TableA
)
DELETE FROM TableWithKey WHERE id NOT IN
(
SELECT MIN(id) FROM TableWithKey
GROUP BY val
)
Can you use the ROW_NUMBER() function (http://msdn.microsoft.com/en-us/library/ms186734.aspx) to partition by the columns you're looking for dupes on, and delete the rows where the row number isn't 1?
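Spelling that out against the TableA example above (a sketch):
WITH Dupes AS
(
    SELECT Col1,
           ROW_NUMBER() OVER (PARTITION BY Col1 ORDER BY Col1) AS rn
    FROM TableA
)
DELETE FROM Dupes WHERE rn > 1;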
I completely agree that having a unique identifier will save you a lot of time.
But if you can't use one (or if this is purely hypothetical), here's an alternative: determine the number of rows to delete (the count of each distinct value minus 1), then loop through and delete TOP X for each distinct value.
Note that I'm not responsible for the number of kittens that are killed every time you use dynamic SQL.
declare @name varchar(50)
declare @sql varchar(max)
declare @numberToDelete varchar(10)

declare List cursor for
select name, COUNT(name)-1 from #names group by name

OPEN List
FETCH NEXT FROM List
INTO @name, @numberToDelete
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @numberToDelete > 0
    BEGIN
        set @sql = 'delete top(' + @numberToDelete + ') from #names where name=''' + @name + ''''
        print @sql
        exec(@sql)
    END
    FETCH NEXT FROM List INTO @name, @numberToDelete
END
CLOSE List
DEALLOCATE List
Another alternative would be to create a view with a generated identity. That way you could map the values to a unique identifier (allowing for a conventional delete) without making a permanent addition to your table.
Select the grouped data into a temp table, then truncate the original and move the data back into it.
As a second solution (I am not sure it will work, but you can try): open the table directly in SQL Server Management Studio and use CTRL + DEL on selected rows to delete them. That is going to be extremely slow because you need to delete every single row by hand.
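A sketch of that first route (note it does use a temporary table, which the question wanted to avoid, so it's here only for completeness):
SELECT Col1 INTO #Dedup FROM TableA GROUP BY Col1;  -- one row per distinct value
TRUNCATE TABLE TableA;
INSERT INTO TableA (Col1) SELECT Col1 FROM #Dedup;
DROP TABLE #Dedup;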
You can remove duplicate rows using a cursor and DELETE .. WHERE CURRENT OF.
CREATE TABLE Client ([name] varchar(100))
INSERT Client VALUES('Bob')
INSERT Client VALUES('Alice')
INSERT Client VALUES('Bob')
GO
DECLARE @history TABLE (name varchar(100) not null)
DECLARE @cursor CURSOR, @name varchar(100)
SET @cursor = CURSOR FOR SELECT name FROM Client
OPEN @cursor
FETCH NEXT FROM @cursor INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @name IN (SELECT name FROM @history)
        DELETE Client WHERE CURRENT OF @cursor
    ELSE
        INSERT @history VALUES (@name)
    FETCH NEXT FROM @cursor INTO @name
END

T-SQL EXEC and scope

Let's say I have a stored procedure with this in its body:
EXEC ('INSERT INTO ' + quotename(@table) + ' blah...')
SELECT IDENT_CURRENT(@table)
Is IDENT_CURRENT() guaranteed to get the identity of that row INSERTed in the EXEC? IDENT_CURRENT() "returns the last identity value generated for a specific table in any session and any scope", but the scope is different within the EXEC than the stored procedure, right?
I want to make sure that if the stored procedure is being called multiple times at once, the correct identity is SELECTed.
EDIT: Or do I need to do both the INSERT and SELECT within the EXEC, like so:
declare @insert nvarchar(max)
set @insert =
    'INSERT INTO ' + quotename(@table) + ' blah...; ' +
    'SELECT IDENT_CURRENT(''' + @table + ''')'
EXEC (@insert)
And if that's the case, how do I SELECT the result of the EXEC if I want to continue with more code in T-SQL? Like this (although it's obviously not correct):
declare @insert nvarchar(max)
set @insert =
    'INSERT INTO ' + quotename(@table) + ' blah...; ' +
    'SELECT IDENT_CURRENT(''' + @table + ''')'
declare @ident int
set @ident = EXEC (@insert)
-- more code
SELECT * FROM blah
UPDATE: In the very first snippet, if I SELECT SCOPE_IDENTITY() instead of using IDENT_CURRENT(), NULL is returned by the SELECT. :(
Try
EXEC ('INSERT INTO ' + quotename(@table) + ' blah...; SELECT @@IDENTITY')
or better, according to this
EXEC ('INSERT INTO ' + quotename(@table) + ' blah...; SELECT SCOPE_IDENTITY()')
According to Microsoft's T-SQL docs:
IDENT_CURRENT is similar to the SQL Server 2000 identity functions SCOPE_IDENTITY and @@IDENTITY. All three functions return last-generated identity values. However, the scope and session on which last is defined in each of these functions differ:
IDENT_CURRENT returns the last identity value generated for a specific table in any session and any scope.
@@IDENTITY returns the last identity value generated for any table in the current session, across all scopes.
SCOPE_IDENTITY returns the last identity value generated for any table in the current session and the current scope.
So I would say, no, IDENT_CURRENT does not guarantee to give you back the right value. It could be the last IDENTITY value inserted in a different session.
I would make sure to use SCOPE_IDENTITY instead - that should work reliably.
Marc
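To get the value back out of the dynamic batch into the calling T-SQL, one option is sp_executesql with an OUTPUT parameter. A sketch, with the INSERT body elided exactly as in the question:
DECLARE @sql NVARCHAR(MAX);
DECLARE @ident INT;

SET @sql = N'INSERT INTO ' + QUOTENAME(@table) + N' blah...; '
         + N'SELECT @newId = SCOPE_IDENTITY();';

-- SCOPE_IDENTITY() is evaluated in the same scope as the dynamic INSERT
EXEC sp_executesql @sql, N'@newId INT OUTPUT', @newId = @ident OUTPUT;

SELECT @ident;  -- the identity generated by the dynamic INSERT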
http://blog.sqlauthority.com/2009/03/24/sql-server-2008-scope_identity-bug-with-multi-processor-parallel-plan-and-solution/
There is a bug in SCOPE_IDENTITY() (see the link above), so I have switched my stored procedures over to the methodology used to retrieve default values from an insert:
declare @Id bigint, @Guid uniqueidentifier
declare @TheNewIds table (Id bigint, Guid uniqueidentifier)
insert [dbo].[TestTable] output inserted.Id, inserted.Guid into @TheNewIds
values (default);
select @Id = [Id], @Guid = [Guid] from @TheNewIds;
I think SCOPE_IDENTITY() is what you're looking for; it will give you the most recent identity in the current scope.
I'd like to chip in my favourite solution, using the OUTPUT keyword. Since INSERT can support multiple rows at a time, we may want to know all the identities inserted. Here goes:
-- source table
if object_id('Source') is not null drop table Source
create table Source
(
Value datetime
)
-- populate source
insert Source select getdate()
waitfor delay '00:00:00.100'
insert Source select getdate()
waitfor delay '00:00:00.100'
insert Source select getdate()
select * from Source -- test
-- destination table
if object_id('Destination') is null
create table Destination
(
Id int identity(1, 1),
Value datetime
)
-- tracking table to keep all generated Id by insertion of table Destination
if object_id('tempdb..#Track') is null
create table #Track
(
Id int
)
else delete #Track
-- copy source into destination, track the Id using OUTPUT
insert Destination output inserted.Id into #Track select Value from Source
select Id from #Track -- list out all generated Ids
Go ahead and run this multiple times to get a feel for how it works.

Dynamic cursor used in a block in TSQL?

I have the following T-SQL code:
-- 1. define a cursor
DECLARE c_Temp CURSOR FOR
SELECT name FROM employees;
DECLARE @name varchar(100);
-- 2. open it
OPEN c_Temp;
-- 3. first fetch
FETCH NEXT FROM c_Temp INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    print @name;
    FETCH NEXT FROM c_Temp INTO @name; -- fetch again in a loop
END
-- 4. close it
....
I use the name value only in a loop block. Here I have to
define a cursor variable,
open it,
fetch twice and
close it.
In PL/SQL, the loop can be like this:
FOR rRec IN (SELECT name FROM employees) LOOP
DBMS_OUTPUT.put_line(rRec.name);
END LOOP;
It is much simpler than my T-SQL code. There is no need to define a cursor: it is created dynamically and is accessible within the loop block (much like a C# for loop). Is there something similar in T-SQL?
Something along these lines might work for you, although it depends on having an ID column or some other unique identifier
Declare @au_id Varchar(20)
Select @au_id = Min(au_id) from authors
While @au_id IS NOT NULL
Begin
    Select au_id, au_lname, au_fname from authors Where au_id = @au_id
    Select @au_id = min(au_id) from authors where au_id > @au_id
End
Cursors are evil in SQL Server as they can really degrade performance; my favoured approach is to use a table variable (>= SQL Server 2005) with an auto-incrementing ID column:
Declare @LoopTable as table (
    ID int identity(1,1),
    column1 varchar(10),
    column2 datetime
)

insert into @LoopTable (column1, column2)
select name, startdate from employees

declare @count int
declare @max int
select @max = max(ID) from @LoopTable
select @count = 1

while @count <= @max
begin
    --do something here using row number '@count' from @LoopTable
    set @count = @count + 1
end
It looks pretty long-winded; however, it works in any situation and should be far more lightweight than a cursor.
Since you are coming from an Oracle background where cursors are used frequently, you may not be aware that in SQL Server cursors are performance killers. Depending on what you are actually doing (surely not just printing the variable), there may be a much faster set-based solution.
In some cases, it's also possible to use a trick like this one:
DECLARE @name VARCHAR(MAX)
SELECT @name = ISNULL(@name + CHAR(13) + CHAR(10), '') + name
FROM employees
PRINT @name
For a list of employee names.
It can also be used to make a comma-separated string; just replace + CHAR(13) + CHAR(10) with + ', '.
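For example, the comma-separated variant; and, on SQL Server 2017 or later, STRING_AGG produces the same list directly:
DECLARE @names VARCHAR(MAX)
SELECT @names = ISNULL(@names + ', ', '') + name
FROM employees
PRINT @names

-- SQL Server 2017+ alternative
SELECT STRING_AGG(name, ', ') FROM employees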
Why not simply return the recordset using a SELECT statement? I assume the objective is to copy and paste the values in the UI (based on the fact that you are simply printing the output)? In Management Studio you can copy and paste from the grid, or press Ctrl+T and then run the query to return the results as part of the Messages tab in plain text.
If you were to run this via an application, the application wouldn't be able to access the printed statements, as they are not returned within a recordset.