"Trigger" on frequent changes with time range? - sql

I have a need to monitor a particular table in a SQL database, and get notified (email ideally) if there are a large number of changes within any 30 second period (for example). Unsure if this is best done in SQL Server, or by some other method.
I've been hunting and can't seem to find anything relevant and thought I'd ask here to get pointed in the right direction.
So logically, something like this:
If # of Updates or Inserts exceeds 500 within any 30 second timeframe, send email to x#xxxxxx.com
Any thoughts on how best to attack this would be greatly appreciated.

I'd split this into separate concerns.
Firstly, how do you detect changes? The easiest approach is a "last updated" column; you can then run a select query that counts the rows changed in the last 30 seconds and compare that count to your threshold.
The second question is how you execute that query and send your emails. Your question title mentions triggers - these are specific SQL Server objects. I have a personal dislike of triggers - they are "side effects", and tend to make code hard to understand, and lead to entertaining bugs. In your case, they're probably a bad solution - you want triggers to be incredibly fast, as they hold up your data changes while they execute. Sending an email is probably slower than you want.
The simplest option is probably to use an agent job, scheduled to run at whatever frequency you need. Again, performance may be a challenge here - having a high-load query run every 30 seconds may slow down your database dramatically.
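As a rough illustration of that job step (the table and column names here are hypothetical, and Database Mail must already be configured for sp_send_dbmail to work):

DECLARE @changes int;

SELECT @changes = COUNT(*)
FROM dbo.MyTable                                         -- assumed monitored table
WHERE LastUpdated >= DATEADD(SECOND, -30, GETDATE());    -- assumed "last updated" column

IF @changes > 500
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'MonitoringProfile',   -- assumed Database Mail profile
        @recipients   = 'x#xxxxxx.com',        -- placeholder address from the question
        @subject      = 'High change rate detected',
        @body         = 'More than 500 changes in the last 30 seconds.';
END;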

If you are using any audit/log table which keeps track of transactions (Insert, Update or Delete) on that particular table, then it should be fairly easy.
Use a SQL Server Agent job to execute two tasks (a sketch follows below):
Execute a script to count the number of transactions recorded in that audit/log table over the last 30 seconds.
If it exceeds 500, run a PowerShell script or .exe file to send the email.
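For the first task, a minimal sketch of the check (the audit table and column names are hypothetical); raising an error makes the step fail, so the job's "on failure" action can launch the PowerShell email step:

DECLARE @cnt int;

SELECT @cnt = COUNT(*)
FROM dbo.MyTable_Audit                          -- assumed audit/log table
WHERE ChangeDate >= DATEADD(SECOND, -30, GETDATE())
  AND ChangeType IN ('I', 'U');                 -- assumed insert/update flag column

IF @cnt > 500
    RAISERROR('Change threshold exceeded: more than 500 changes in 30 seconds.', 16, 1);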

Related

Running an SQL query in background

I'm trying to update a modest dataset of 60k records with a value which takes a little time to compute. From a small trial run of 6k records in the production environment, it took 4 minutes to complete, so the full execution should take around 40 minutes.
However this trial run showed that there were SQL timeouts occurring on user requests when accessing data in related tables (but not necessarily on the actual rows which were being updated).
My question is: is there a way of running non-urgent queries as a background operation in SQL Server without causing timeouts or locking tables for extended periods of time? The new value in the column being updated is not essential to return during this period; if a request happened to come in for one of those rows, returning the old value would be perfectly acceptable rather than locking the set until the update is complete. (I'm not sure of the ins and outs of how this works; obviously I do want to prevent data corruption, so perhaps there is a way of queuing any additional changes in the background.)
This is possibly a situation where the NOLOCK hint is appropriate. You can read about SQL Server isolation levels in the documentation. And Googling "SQL Server NOLOCK" will give you plenty of material on why you should not over-use the construct.
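For illustration, the hint goes on the table reference (the table and column names here are made up); it reads without taking shared locks, at the cost of possibly seeing uncommitted data:

SELECT OrderId, Total
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerId = 42;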
I might also investigate whether you need a SQL query to compute the values. A single query that takes 4 minutes on 6k records... well, that is a long time. You might want to consider reading the data into an application (say, using Python, R, or whatever) and doing the data manipulation there. It may also be possible to speed up the query processing itself.

Efficiency - looping greater than the sum of its parts?

This is a question of concept, and I am just moving from MS Access to SQL Server for stability and scalability.
I need to maintain a database that pulls from another server daily. Due to the possibility (and probability) of record changes on the other server, I have to pull using a 10 day rolling window with the expectation that anything older than 10 days will not change by policy.
The pull is in stages, getting just the records within a date range on the initial pass, then moving to the other tables one at a time to pull relevant and relational data.
I have written a script that works with date range variables. If I set the range to 10 days, it pulls everything in about 8 hours. In a test to see if looping might be better, I had the script loop one day at a time, starting at today minus 10 and continuing while the date is less than today; it took 16 hours to do 3 days.
Being new, I am curious about the logical reason why the looping-by-date approach is so much slower. My thought was that looping would mitigate the impact on the other server, but maybe that isn't the case.
The code works perfectly in both cases with the only difference being looped or all at once.
Thanks for any insight on this!
Since you are pulling from another server, you are probably using a linked server. Also, since it's the most intuitive way to do it, you are probably doing something like this:
select somefields
from ServerName.DatabaseName.Owner.TableName
where whatever
With this syntax, SQL Server brings the entire contents of the remote table over to the local server first, and then applies the WHERE clause. If the remote table has a lot of data, it takes a long time to transfer it.
When you ran your original query, the data was transferred once. When you set up your loop, it was transferred more than once. That's why it took more time.
To speed this up with linked servers, use OPENQUERY, which executes the query on the remote server and transfers only the result set back.
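A minimal sketch, reusing the placeholder names from the example above:

select somefields
from openquery(ServerName, 'select somefields from DatabaseName.Owner.TableName where whatever')

The string passed to OPENQUERY is executed entirely on the remote server, so only the filtered rows travel over the link.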

SQL Server - On-Insert Trigger vs. Per-Minute Job

My question is mainly concerned with "what's best for performance", but kinda "philosophically" speaking as well (if it makes a difference)... so let's jump right in.
[TableA].[ColumnB] stores a value that needs to exist in [TableC].[ColumnD]. Right off the bat, no answers involving Foreign-keys - just assume that they're "not allowed" in this environment for whatever reason.
But due to "circumstances x,y,z", [TableA].[ColumnB] sometimes gets values that do not exist in [TableC].[ColumnD], because, let's say, [TableA] gets populated from an object that exists in running code as a "serialized blob", an in-memory representation of the data, and the [ColumnB] values got populated before those values were deleted from [TableC].[ColumnD] by some other process. ANYWAY, this is for example's sake, so don't get bogged down in the "why does this condition happen", just accept that it does.
To "fix" the problem, which method is best of these two: 1. make a Trigger that fires on-INSERT on [TableA], to Update [ColumnB] to the value that it should be (and assume I have a "mapping" of bad-to-good values). Or, 2. run a scheduled-Job every hour/minute/whatever that runs Update queries to change all possible "bad" values to their corresponding "good" values.
To put it more generally, what's better for performance and/or what is best practice: a Trigger, or a periodic Scheduled-Job? In context, let's say [TableA] is typically on the order of hundreds of thousands of rows, with Inserts happening 10-100 records at-a-time, as frequently as every few minutes to as rarely as a few times per day.
On-insert.
Triggers are like callbacks: they're more logically sound, and they spread any lag across every query. With continual checks (polling, or cron jobs), you end up with more severe moments of lag every now and then. In almost all cases, triggers/callbacks are the better way to go, as 1 ms of lag added to every query is better than 100 ms of lag at seemingly random intervals.
Use of triggers is generally discouraged, but your load is light and your case seems like a natural trigger case. Consider using an INSTEAD OF trigger to avoid two operations on the same row (a single insert instead of an insert followed by an update). It may be the simplest and most reliable solution, as long as the code in the trigger is written carefully so that it won't cause the whole operation to fail.
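Something like this minimal sketch, assuming a hypothetical dbo.ValueMap table that holds the bad-to-good mapping, and showing only ColumnB for brevity:

CREATE TRIGGER trg_TableA_MapValues
ON dbo.TableA
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Insert the incoming rows, substituting the mapped value where one exists.
    INSERT INTO dbo.TableA (ColumnB /* , other columns ... */)
    SELECT COALESCE(m.GoodValue, i.ColumnB)
    FROM inserted AS i
    LEFT JOIN dbo.ValueMap AS m   -- assumed mapping table: BadValue -> GoodValue
        ON m.BadValue = i.ColumnB;
END;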
Since you are considering a batch job, you are not concerned with timing issues; i.e., it's OK with your application that the tables may be out of sync for a minute or even an hour. That's the major difference from the trigger approach, which will guarantee that the tables are in sync all the time. Potential timing issues would make me uncomfortable. On the plus side, you won't be at risk of crashing the original insert operation with your trigger.
If you go this route, please consider the Change Tracking feature. Change tracking will indicate which rows have been inserted since the last time you checked, so you won't have to scan the whole table for new records. Alternatively, if your TableA has an IDENTITY primary or unique key, you can implement a similar design without the change tracking functionality.
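A rough sketch of the change tracking route, assuming a database called MyDatabase and a primary key column on TableA called Id (both hypothetical names):

-- One-time setup: enable change tracking on the database and the table.
ALTER DATABASE MyDatabase SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.TableA ENABLE CHANGE_TRACKING;

-- In the scheduled job: read only the rows inserted since the last run.
DECLARE @last_sync bigint = 0;   -- persist this value between runs, e.g. in a control table
SELECT t.*
FROM CHANGETABLE(CHANGES dbo.TableA, @last_sync) AS ct
JOIN dbo.TableA AS t ON t.Id = ct.Id
WHERE ct.SYS_CHANGE_OPERATION = 'I';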
Triggers are best for both performance and practice, as they maintain referential integrity as well as allowing the server to optimise for performance.
You didn't say what version of SQL Server you were using, but if it's 2008+, you can use Change Data Capture to keep track of data changes to your "primary" table. Then, periodically, you can run a batch over the change table and do whatever processing is required over that small set.
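If that option is available, enabling it is roughly this (the schema and table names are the question's placeholders; CDC also requires SQL Server Agent to be running and, on older versions, Enterprise or Developer edition):

EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'TableA',
    @role_name     = NULL;

-- Changes are then recorded in cdc.dbo_TableA_CT, which the periodic batch can read.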

Postgres: How to fire multiple queries at the same time?

I have one procedure which updates record values, and I want to fire it against all records in a table (over 30k records); procedure execution time is from 2 up to 10 seconds, because it depends on network load.
Now I'm doing UPDATE table SET field = procedure_name(paramns); but with that amount of records it takes up to 40 min to process the whole table.
Now I'm using 4 different connections which fork to the background and fire the query with a WHERE clause set to iterate over the modulo of the row ids ( WHERE id_field % 4 = ) to speed this up, and this works well and cuts the table-populate time down to ~10 mins.
But I want to avoid using cron, shell jobs and multiple connections for this. I know that it can be done with libpq, but is there a way to fire up a query (4 different non-blocking queries) and not wait until it finishes executing, within a single connection?
Or can anyone point me to some clues on how to write that function using Postgres internals, or simply in C, and bind it as a stored procedure?
Cheers, Darius
I've got a sure answer for this question - IF you will share with us what your ab workout is!!! I'm getting fat by the minute and I need answers myself...
OK I'll answer anyway.
If you are updating one table, on one database server, in 40 minutes 'single threaded' and in 10 minutes with 4 threads, the bottleneck is not the database server; otherwise it would be bogged down in I/O. If you are executing a bunch of UPDATEs, one call per record, the network round-trip time is killing you.
I'm pretty sure this is the case, and not an I/O bottleneck on the DB or the possibility that procedure_name(paramns); is taking a long time. (If the procedure itself were taking 2-10 seconds, it would take something like 2,500 minutes to do 30k records.) The reason I am sure is that starting 4 concurrent processes cuts the time to a quarter, so in particular it is not an I/O issue on the DB server.
This might be the one excuse for putting business logic in an SP on the server. Optimization unfortunately means breaking the rules; the consequence is difficult maintenance. But, duh!!
However, the best solution would be to get this set up to use 'bulk update' queries. That might mean you have to take several strange and unintuitive steps such as this:
This will require a lot of modification if multiple users can run it concurrently.
Refactor the system so procedure_name(paramns) can get all the data it needs to process all records via a select statement. You may need to use creative joins. If it's an SP, of course, you are now moving the logic to the client.
Have the program create an XML or other importable flat-file format with the PK of each record to update and the new field value or values. Write all the updates to this file instead of executing them on the DB.
Have a temp table on the database that matches the layout of this flat file.
Run an import on the database: clear the temp table and import the file.
Do an update joining the temp table to the table being updated, e.g. UPDATE mytbl SET myval = mytemp.newval FROM mytemp WHERE mytbl.pk = mytemp.pk (a fuller sketch of these last steps follows below).
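Putting the temp-table steps together, a rough Postgres sketch with hypothetical table, column, and file names:

-- Staging table matching the flat-file layout.
CREATE UNLOGGED TABLE IF NOT EXISTS mytemp (pk integer PRIMARY KEY, newval text);

-- Clear it and import the file produced by the client program
-- (the path is a placeholder; use \copy from psql for a client-side file).
TRUNCATE mytemp;
COPY mytemp (pk, newval) FROM '/path/to/updates.csv' WITH (FORMAT csv);

-- Apply every update in one set-based statement.
UPDATE mytbl
SET myval = mytemp.newval
FROM mytemp
WHERE mytbl.pk = mytemp.pk;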
You can try some of these things 'by hand' first before you bother coding, to see if it's worth the speed increase.
If possible, you can still put this all in an SP!
I'm not making any guarantees, especially as I look down at my ever-fattening belly, but, this has the potential to melt your update job down to under a minute.
It is possible to update multiple rows at once. Below is an example in Postgres:
UPDATE table_name
SET column_name = temp.column_name
FROM (VALUES
    (<id1>, <value1>),
    (<id2>, <value2>),
    (<id3>, <value3>)
) AS temp (id, column_name)
WHERE table_name.id = temp.id;
PHP has some functions for asynchronous queries:
pg_send_execute()
pg_send_prepare()
pg_send_query()
pg_send_query_params()
No idea about other programming languages; you'll have to dig into the manuals.
I think you can't. A single connection can handle only a single query at a time. It's described in the libpq documentation, in the chapter "Asynchronous Command Processing":
"After successfully calling PQsendQuery, call PQgetResult one or more times to obtain the results. PQsendQuery cannot be called again (on the same connection) until PQgetResult has returned a null pointer, indicating that the command is done."

SQL SERVER Procedure Inconsistent Performance

I am working on a SQL Job which involves 5 procs, a few while loops and a lot of Inserts and Updates.
This job processes around 75000 records.
Now, the job works fine for 10,000-20,000 records at a speed of around 500/min. After around 20,000 records, execution just dies: it loads around 3,000 records every 30 mins and stays at that speed.
I was suspecting the network, but don't know for sure. These kinds of jobs are difficult to analyze through SQL Performance Monitor, and I'm not very sure where to start.
Also, there is a single cursor in one of the procs, which executes for very few records.
Any suggestions on how to speed this process up on the full-size data set?
I would check if your updates are within a transaction. If they are, it could explain why it dies after a certain amount of "modified" data. You might check how large your "tempdb" gets as an indicator.
I have also seen cases where, during long-running transactions, the database would die when there was other activity at the same time, again because of transactionality and improper isolation levels.
If you can split your job into independent, non-overlapping chunks, you might want to do so: for example, doing the job in chunks by dates or by ID ranges of the "root" objects.
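A minimal sketch of chunking by ID range, assuming a hypothetical dbo.Staging table with an integer Id key and a hypothetical dbo.ProcessChunk procedure; each chunk commits separately, so locks and log growth stay small:

DECLARE @from int = 1, @chunk int = 5000, @max int;
SELECT @max = MAX(Id) FROM dbo.Staging;

WHILE @from <= @max
BEGIN
    BEGIN TRANSACTION;
        EXEC dbo.ProcessChunk @FromId = @from, @ToId = @from + @chunk - 1;
    COMMIT;
    SET @from = @from + @chunk;
END;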
I suspect your whole process is flawed. I import a datafile that contains 20,000,000 records and hits many more tables and does some very complex processing in less time than you are describing for 75000 records. Remember looping is every bit as bad as using cursors.
I think if you set this up as an SSIS package you might be surprised to find the whole thing can run in just a few minutes.
With your current set-up, consider whether you are running out of room in the temp database, or whether it is trying to grow and can't grow fast enough. Also consider whether, at the time the slowdown starts, some other job is running that might be causing blocking. And get rid of the loops and process things in a set-based manner.
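For instance, an "update what exists, insert what doesn't" pass over a staging table can be done for all rows at once rather than record by record; a sketch with hypothetical staging and target table names:

-- Update rows that already exist in the target.
UPDATE c
SET c.Status = t.Status
FROM dbo.Customers AS c
JOIN #TempCustomers AS t ON t.CustomerId = c.CustomerId;

-- Insert the rows that don't exist yet.
INSERT INTO dbo.Customers (CustomerId, Status)
SELECT t.CustomerId, t.Status
FROM #TempCustomers AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.Customers AS c WHERE c.CustomerId = t.CustomerId);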
Okay...so here's what I am doing in steps:
Loading a file in a TEMP table, just an intermediary.
Do some validations on all records using SET-Based transactions.
Actual Processing Starts NOW.
TRANSACTION BEGIN HERE......
LOOP STARTS HERE
a. Pick records based on the TEMP table's PK (say, Customer A).
b. Retrieve data from existing tables (e.g. employer information)
c. Validate information received/retrieved.
d. Check if the record already exists - UPDATE; else INSERT. (THIS HAPPENS IN A SEPARATE PROCEDURE)
e. Find ALL Customer A family members (PROCESS ALL IN ANOTHER **LOOP** - SEPARATE PROC)
f. Update status for Customer A and his family members.
LOOP ENDS HERE
TRANSACTION ENDS HERE