How would I go about updating a table with statistics every 15 minutes? I currently have a scheduled job that updates another table with the same statistics, but on a daily basis at 8:00 and 17:00. Now I would like to use similar logic to update this one every 15 minutes. My existing job uses SQL Server Agent to execute the stored procedure twice a day. Should I just reuse the same logic and change the rate to every 15 minutes in the Agent schedule?
So this is the code for the main insert stored procedure that checks and inserts the data; it basically checks the activity between 17:00 and 8:00, as well as the counts of various parameters at 8:00.
If Convert(varchar(50), DATEPART(HH, getdate())) = '8'
begin
    declare @datumstring as Varchar(30)
    declare @Datum as Datetime
    -- build "yesterday at 17:00" as the start of the window
    set @Datum = Dateadd(dd, -1, GETDATE())
    set @datumstring = convert(varchar, @Datum, 102)
    set @datumstring = @datumstring + ' 17:00'
    set @Datum = CONVERT(datetime, @datumstring)
Then comes the insert, which mostly consists of a lot of subselects; it's not important here, but I'll post one of them so you can see how I use the date conditions.
(select COUNT(*) from table b
 where b.ob = 4 and b.t = 7 and Time
 between @Datum and GETDATE() and b.Name = LoginName)
Could I use similar logic with time manipulation via variables, or should I just make a similar insert into a table and have that stored procedure run every 15 minutes?
Cheers
As I understand it, you need something that will execute a particular code block every 15 minutes.
So why not try the WAITFOR keyword in SQL Server? Put your code in:
WHILE (1 = 1)
BEGIN
WAITFOR DELAY '00:15'
-- CODE TO BE EXECUTED
END
Or you can also use WAITFOR TIME, which waits until a specific time of day:
WHILE (1 = 1)
BEGIN
WAITFOR TIME '00:15'
-- CODE TO BE EXECUTED
END
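Since the job already lives in SQL Server Agent, the simplest route is usually just to attach a schedule that repeats every 15 minutes, instead of (or alongside) the 8:00/17:00 one. A rough sketch, assuming the existing job is called 'UpdateStatistics' (the name is a placeholder):
EXEC msdb.dbo.sp_add_jobschedule
    @job_name             = N'UpdateStatistics',  -- placeholder: your existing Agent job
    @name                 = N'Every 15 minutes',
    @freq_type            = 4,                    -- daily
    @freq_interval        = 1,
    @freq_subday_type     = 4,                    -- repeat every N minutes
    @freq_subday_interval = 15,
    @active_start_time    = 000000;               -- starting from midnight
The same schedule can be set up through the job's schedule dialog in SSMS without any T-SQL.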
If you want to do it on the server, use a cron job; or if you want to do it on the client side, use JavaScript or jQuery AJAX:
$.ajax({
url: "test.html",
context: document.body
}).done(function() {
$( this ).addClass( "done" );
});
My question is a bit tricky, because it's mostly a logical problem.
I've tried to optimize my app's speed by reading everything into memory, and afterwards reading only those records which changed since the "last read", i.e. the greatest timestamp among the records loaded last time.
The FirebirdSQL database engine does not allow a field to be updated directly in an "After Trigger", so the obvious approach is to use "before update or insert" triggers to set the field: new.last_changed = current_timestamp;
The problem:
As it turns out, this is a totally WRONG method, because those triggers fire while the transaction is running (when the statement executes), not when it commits!
So if one transaction takes longer than another, its saved "last changed time" can be lower than that of a short transaction that started and finished in between.
1st tr.: 13:00:01.400 ............................. Commit  << this record will be skipped!
2nd tr.: 13:00:01.500 ...... Commit                          << reading of data will happen here.
The next read will be >= 13:00:01.500
I've tried:
rewriting all triggers so they fire AFTER and call an UPDATE orders SET ... << but this causes circular, self-calling, endless trigger events.
Would a SET_CONTEXT lock interfere with multi-row updates and nested triggers?
(I do not see how this method could work well when running multiple updates in the same transaction.)
What is the common solution for all this?
Edit1:
What I want to happen is to read only those records from the DB that actually changed since the last read. For that to happen, I need the engine to update records AFTER COMMIT. (Not during it, "in the middle".)
This trigger is NOT good, because it fires at the moment of the change (not after the commit):
alter trigger SYNC_ORDERS active after insert or update position 999 AS
declare variable N timestamp;
begin
    N = cast('NOW' as timestamp);
    if (new.last_changed <> :N) then
        update ORDERS set last_changed = :N where ID = new.ID;
end
And from the application I do:
Query1.SQL.Text := 'SELECT * FROM orders WHERE last_changed >= ' + DateTimeToStr( latest_record );
Query1.Open;
latest_record := Query1.FieldByName('last_changed').asDateTime;
.. this code will list only the record committed in the 2nd transaction (earlier) and never the one from the first, longer-running transaction (committed later).
Edit2:
It seems I have the same question as here... , but specifically for FirebirdSQL.
There are not really any good solutions there, but it gave me an idea:
- What if I create an extra table and log changes earlier than 5 minutes there per table?
- Before each SQL query, I would first ask that table for any changes, ordered by the growing ID!
- Delete lines older than 23 hours
ID TableID Changed
===========================
1 5 2019.11.27 19:36:21
2 5 2019.11.27 19:31:19
Edit3:
As Arioch already suggested, one solution is to:
- create a "logger table", filled by a BEFORE INSERT OR UPDATE trigger on every table,
- and update its "last_changed" sequence from the ON TRANSACTION COMMIT trigger.
But would the following not be a better approach?
- adding a last_sequence INT64 DEFAULT NULL column to every table
- creating a global generator LAST_GEN
- updating every table's NULL rows with gen_id(LAST_GEN,1) inside the ON TRANSACTION COMMIT trigger
- setting them back to NULL in every BEFORE INSERT OR UPDATE trigger
So basically switching the last_sequence column of a record to:
NULL > 1 > NULL > 34 ... every time it gets modified.
This way I:
- do not have to fill the DB with log data,
- and I can query the tables directly with WHERE last_sequence > 1.
There is no need to pre-query the "logger table" first.
I'm just afraid: WHAT happens if the ON TRANSACTION COMMIT trigger tries to update a last_sequence field while a second transaction's ON BEFORE trigger is locking the record (of another table)?
Can this happen at all?
The final solution is based on the idea that:
- Each table's BEFORE INSERT OR UPDATE trigger can push the time of the transaction into a context variable: RDB$SET_CONTEXT('USER_TRANSACTION', 'table31', current_timestamp);
- The global ON TRANSACTION COMMIT trigger can insert a sequence + time into a "logging table" if it finds such a context variable.
- It can even take care of "daylight saving changes" and "intervals" by logging only "big" time differences, like >= 1 minute, to reduce the number of records.
- A stored procedure can ease and speed up the calculation of the 'LAST_QUERY_TIME' for each query.
Example:
1.)
create trigger ORDERS_BI active before insert or update position 0 AS
BEGIN
    IF (NEW.ID IS NULL) THEN
        NEW.ID = GEN_ID(GEN_ORDERS, 1);
    -- remember, in a transaction-level context variable, that this table was touched
    RDB$SET_CONTEXT('USER_TRANSACTION', 'orders_table', current_timestamp);
END
2, 3.)
create trigger TRG_SYNC_AFTER_COMMIT ACTIVE ON transaction commit POSITION 1 as
declare variable N TIMESTAMP;
declare variable T VARCHAR(255);
begin
    N = cast('NOW' as timestamp);
    T = RDB$GET_CONTEXT('USER_TRANSACTION', 'orders_table');
    if (:T is not null) then begin
        if (:N < :T) then T = :N;  -- system time changed, e.g. daylight saving -1 hour
        if (datediff(second from :T to :N) > 60) then  -- more than 1 min. passed
            insert into "SYNC_PAST_TIMES" (ID, TABLE_NUMBER, TRG_START, SYNC_TIME, C_USER)
            values (GEN_ID(GEN_SYNC_PAST_TIMES, 1), 31, cast(:T as timestamp), :N, CURRENT_USER);
    end;
    -- other tables too:
    T = RDB$GET_CONTEXT('USER_TRANSACTION', 'details_table');
    -- ...
    when any do EXIT;
end
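The SYNC_PAST_TIMES table itself is not shown above. A minimal DDL sketch that is consistent with the insert in the commit trigger (the column types are assumptions) could be:
create table SYNC_PAST_TIMES (
    ID            BIGINT NOT NULL PRIMARY KEY,
    TABLE_NUMBER  SMALLINT,      -- which table the change belongs to (31 = orders in the example)
    TRG_START     TIMESTAMP,     -- time pushed by the BEFORE trigger
    SYNC_TIME     TIMESTAMP,     -- time of the ON TRANSACTION COMMIT trigger
    C_USER        VARCHAR(31)
);
create generator GEN_SYNC_PAST_TIMES;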
Edit1:
It is possible to speed up reading the "last-time-changed" value from our SYNC_PAST_TIMES table with the help of a stored procedure. Logically, you have to keep both the ID (PT_ID) and the time (PT_TM) in memory in your program, so you can pass them in when calling it for each table.
CREATE PROCEDURE SP_LAST_MODIF_TIME (
    TABLE_NUMBER SMALLINT,
    LAST_PASTTIME_ID BIGINT,
    LAST_PASTTIME TIMESTAMP)
RETURNS (
    PT_ID BIGINT,
    PT_TM TIMESTAMP)
AS
declare variable TEMP_TIME TIMESTAMP;
declare variable TBL SMALLINT;
begin
    PT_TM = :LAST_PASTTIME;
    FOR SELECT p.ID, p.SYNC_TIME, p.TABLE_NUMBER FROM SYNC_PAST_TIMES p
        WHERE (p.ID > :LAST_PASTTIME_ID)
        ORDER by p.ID ASC
        INTO PT_ID, TEMP_TIME, TBL DO  -- PT_ID immediately gets an increasing value
    begin
        if (:TBL = :TABLE_NUMBER) then
            if (:TEMP_TIME < :PT_TM) then
                PT_TM = :TEMP_TIME;  -- searching for the smallest
    end
    if (:PT_ID IS NULL) then begin
        PT_ID = :LAST_PASTTIME_ID;
        PT_TM = :LAST_PASTTIME;
    end
    suspend;
END
You can use this procedure by including it in your select, using the WITH .. AS syntax:
with UTLS as (select first 1 PT_ID, PT_TM from SP_LAST_MODIF_TIME (55, -- TABLE_NUMBER
0, '1899.12.30 00:00:06.000') ) -- last PT_ID, PT_TM from your APP
select first 1000 u.PT_ID, current_timestamp as NOWWW, r.*
from UTLS u, "Orders" r
where (r.SYNC_TIME >= u.PT_TM);
Using FIRST 1000 is a must, to prevent reading the whole table if all values change at once.
Upgrading the SQL, adding a new column, etc. makes SYNC_TIME change to NOW at the same moment on all rows of the table.
You may adjust it per table individually, just like the interval of seconds used to monitor changes. Add a check to your APP for how to handle the case where the new data reaches 1000 lines at once...
I have a table in SQL Server with end_date and start_date columns. I want to send an email to the user seven days before and one day before the licence expires. How?
I want to send email to the user before seven days and one day before
that licence is expiring.
select * from mytable
where
    (CAST(end_date as DATE) = CAST(DateAdd(DD, 1, GETDATE()) as DATE))
    OR
    (CAST(end_date as DATE) = CAST(DateAdd(DD, 7, GETDATE()) as DATE))
So you could break that down into two separate queries if you're emailing out notices of expiration with different wording.
You do need to run this each day, either with a scheduler, or as other posters have pointed out using an agent job.
I haven't tried this, so you may need to adjust the CASTs; I'm not sure what your data types are from the question. It's usually better to post the table, or parts of it, so we can better answer the question.
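If you go the Agent route, the daily job step could be a small batch that runs the check and mails the result. A rough sketch, assuming Database Mail is configured; the mail profile, recipient and the user_name column are placeholders:
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MailProfile',            -- placeholder: your Database Mail profile
    @recipients   = 'licensing@example.com',  -- placeholder
    @subject      = 'Licences expiring soon',
    @query        = 'select user_name, end_date from MyDatabase.dbo.mytable
                     where CAST(end_date as DATE) = CAST(DateAdd(DD, 1, GETDATE()) as DATE)
                        or CAST(end_date as DATE) = CAST(DateAdd(DD, 7, GETDATE()) as DATE)';
Scheduling that once a day covers both the seven-day and the one-day notice; split it into two steps if you want different wording for each.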
If you want a poor man's scheduler, which is a pretty bad way to implement this, you can do something like this, borrowed from here:
CREATE PROCEDURE MyBackgroundTask
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- The interval between cleanup attempts
    declare @timeToRun nvarchar(50)
    set @timeToRun = '03:33:33'

    while 1 = 1
    begin
        waitfor time @timeToRun
        begin
            execute [MyDatabaseName].[dbo].[MyDatabaseStoredProcedure];
        end
    end
END
I have a table with a date field, and I want to execute a certain query when that date reaches the current date. Is there any way to do that in SQL Server?
One way, and arguably the good way, is a SQL Server Agent job; the second is a trigger. But a trigger is not suitable here and is not advisable to use.
A SQL job is an automated process: just give it a date-time and day, and it will execute automatically. In the job you call the sp, which first checks the date and then does the processing you want.
First create a job; please check these links:
http://www.databasedesign-resource.com/sql-server-jobs.html
http://forums.asp.net/p/1364020/2826971.aspx
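For reference, the job can also be created directly in T-SQL instead of through the SSMS wizard. A rough sketch (the job, step and schedule names and the database are placeholders; the command calls the procedure created below):
USE msdb;
EXEC dbo.sp_add_job         @job_name = N'CheckDateJob';
EXEC dbo.sp_add_jobstep     @job_name = N'CheckDateJob',
                            @step_name = N'Run date check',
                            @subsystem = N'TSQL',
                            @database_name = N'MyDatabase',
                            @command = N'EXEC dbo.spname;';
EXEC dbo.sp_add_jobschedule @job_name = N'CheckDateJob',
                            @name = N'Daily',
                            @freq_type = 4,              -- daily
                            @freq_interval = 1,
                            @active_start_time = 080000; -- 08:00
EXEC dbo.sp_add_jobserver   @job_name = N'CheckDateJob';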
Now create an sp which is executed by the job, like this (this is a sample example):
create procedure spname
as
begin
    declare @currentdate datetime = getdate()
    declare @tempdateToCheck datetime = (select datecolumn from tablename where .... /* give condition, if any */)

    if (@tempdateToCheck >= @currentdate)
    begin
        -- execute the statements you want, like:
        insert tablename ....   -- insert statement
        update tablename ....   -- update statement

        declare @tempvariable1 int  -- any datatype and any number of variables you want
        select @tempvariable1 = columnname from tablename
        update tablename1 set columnname = @tempvariable1 where condition
    end
end
You can create a SQL Server Agent job to check the date column every day and, once the date is reached, execute the query.
I'm a novice at SQL Server and I have a task to create a trigger that will insert/update a customer's status to 'Blocked' if a payment is overdue.
How could I check to say something like the following within a trigger?
if getdate() > dateDue
then update status = 'Blocked'
end if
Many thanks for your help in advance
Here's an implementation of Martin's suggestion to create a non-persisted computed column:
ALTER TABLE dbo.YourTable
ADD Status AS CASE WHEN DueDate < GETDATE() THEN 'Blocked' ELSE 'Not Blocked' END
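Querying it then works like any other column; since the CASE expression is evaluated at read time, the status is always current:
SELECT *
FROM dbo.YourTable
WHERE Status = 'Blocked';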
I don't have time to really test this, so there might be some problems / syntax issues, but here's something that should give you an idea of how to go about it. Basically, your trigger should fire whenever the value of "dateDue" is changed. It should iterate through the "inserted" rows in case more than one record was updated, and for each record in "inserted", if the new "dateDue" value is earlier than the current time, you update that record and set its status to 'Blocked'.
CREATE TRIGGER myTriggerName
ON myTable
AFTER INSERT, UPDATE
AS
IF UPDATE(dateDue)
BEGIN
    DECLARE @currPk INT
    DECLARE @currDateDue DATETIME
    DECLARE @today DATETIME

    DECLARE inserted_Cursor CURSOR FOR
        SELECT myTableID, dateDue, GETDATE() FROM Inserted

    OPEN inserted_Cursor;
    FETCH NEXT FROM inserted_Cursor INTO @currPk, @currDateDue, @today
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF (@currDateDue < @today)
            UPDATE myTable SET status = 'Blocked' WHERE myTableID = @currPk
        FETCH NEXT FROM inserted_Cursor INTO @currPk, @currDateDue, @today
    END;
    CLOSE inserted_Cursor;
    DEALLOCATE inserted_Cursor;
END;
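For what it's worth, the same check can usually be done set-based, without a cursor. A sketch using the same assumed table and column names as above, as an alternative to the cursor version:
CREATE TRIGGER myTriggerName
ON myTable
AFTER INSERT, UPDATE
AS
IF UPDATE(dateDue)
BEGIN
    SET NOCOUNT ON;
    -- block every inserted/updated row whose due date is already in the past
    UPDATE t
    SET t.status = 'Blocked'
    FROM myTable t
    INNER JOIN inserted i ON i.myTableID = t.myTableID
    WHERE i.dateDue < GETDATE();
END;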
If you want this status updated when dueDate becomes earlier than today, as opposed to only when the dueDate of the record is modified, you should schedule a stored procedure via SQL Server Agent and have it run a simple update that sets the status for any records where dueDate < today. You can run this nightly, every hour, or whatever you need.
If you don't want to run Agent, you could do it with a Windows service that you write code for (more of a pain to set up), or even a batch file that runs from a Windows Task, but obviously Agent is the most convenient way to do this.
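The statement such a scheduled procedure would run is just the same update applied to the whole table, for example (a sketch with the same assumed names as the trigger example):
UPDATE myTable
SET status = 'Blocked'
WHERE dateDue < GETDATE()
  AND status <> 'Blocked';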
I am using SQL Server. I'm writing a stored procedure that executes a series of queries, and I want to log the execution time of every query. Is it possible? Please help.
Example for using a logging table:
create procedure procedure_name
as
begin
    declare @start_date datetime = getdate(),
            @execution_time_in_seconds int

    /* your procedure code here */

    set @execution_time_in_seconds = datediff(SECOND, @start_date, getdate())

    -- write the result to your logging table
    insert into logging_table (execution_time_column)
    values (@execution_time_in_seconds)
end
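Since the question is about every query in the series, the same pattern can be repeated per statement. A sketch, assuming a log table with a step label and a duration column (both names are illustrative):
declare @t0 datetime

set @t0 = getdate()
-- query 1 here
insert into logging_table (step_name, execution_ms)
values ('query 1', datediff(MILLISECOND, @t0, getdate()))

set @t0 = getdate()
-- query 2 here
insert into logging_table (step_name, execution_ms)
values ('query 2', datediff(MILLISECOND, @t0, getdate()))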
The engine already keeps execution stats in sys.dm_exec_query_stats. Before you add heavy logging, like inserts into a log table inside your procedure, consider what you can extract from these stats. They contain values for:
execution count
execution time (elapsed)
work time (non-blocked actual CPU time across all CPUs in parallel queries)
logical reads/writes
physical reads
number of rows returned
This kind of information is significantly richer and more useful for performance investigation than what you would log with a naive approach. Most metrics contain the min, max and total value (and with the execution count you also have the average). You can immediately get a clue which queries are expensive (the ones with a large average elapsed time), which queries block often (elapsed time much higher than work time), which cause many writes or reads, which return large results, etc.
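A typical way to read these per-statement stats is to join the DMV to the statement text. A sketch (column availability can vary a little by SQL Server version):
SELECT TOP 20
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.execution_count,
    qs.total_elapsed_time,   -- microseconds
    qs.total_worker_time,    -- CPU, microseconds
    qs.total_logical_reads,
    qs.total_physical_reads
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;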
You can keep track of timestamps via CURRENT_TIMESTAMP: log one before and one after each statement you want to measure, with meaningful messages indicating what started and finished when, and compare them afterwards. Or, if you want to see the timings interactively, you could use SET STATISTICS TIME ON and SET STATISTICS TIME OFF; that is the one I use in the query analyser.
Depending on what exactly you want, you need to figure out where to store these messages for logging, like a table or something else.
use PRINT & GETDATE to log the execution time info
example:
DECLARE @Time1 DATETIME
DECLARE @Time2 DATETIME
DECLARE @Time3 DATETIME
DECLARE @STR_TIME1 VARCHAR(255)
DECLARE @STR_TIME2 VARCHAR(255)
DECLARE @STR_TIME3 VARCHAR(255)

--{ execute query 1 }
SET @Time1 = GETDATE()
--{ execute query 2 }
SET @Time2 = GETDATE()
--{ execute query 3 }
SET @Time3 = GETDATE()

SET @STR_TIME1 = CONVERT(varchar(255), @Time1, 109)
SET @STR_TIME2 = CONVERT(varchar(255), @Time2, 109)
SET @STR_TIME3 = CONVERT(varchar(255), @Time3, 109)

PRINT 'T1 is ' + @STR_TIME1
PRINT 'T2 is ' + @STR_TIME2
PRINT 'T3 is ' + @STR_TIME3
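To turn those timestamps into actual durations, DATEDIFF can be printed as well, for example:
PRINT 'Query 2 took ' + CAST(DATEDIFF(MILLISECOND, @Time1, @Time2) AS varchar(20)) + ' ms'
PRINT 'Query 3 took ' + CAST(DATEDIFF(MILLISECOND, @Time2, @Time3) AS varchar(20)) + ' ms'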