Context: I have a SQL Server database engine called DB4, which updates all of its databases from another database engine called DB5 through SQL Server Agent every 5 hours. I don't have access to DB5, but I have been told that DB5 also updates from somewhere else.
Problem: Sometimes the two database engines update their databases simultaneously, so DB4 cannot update completely.
Question: Is there any way to detect whether DB5 is updating? Then I could write logic into the SQL Server Agent jobs along the lines of: if DB5 is not updating, then update DB4; otherwise do nothing.
PS: The DB4 update is performed by many Agent jobs. Somebody wrote many scripts in those jobs. Basically, the scripts follow this format:
TRUNCATE Table_Name
INSERT INTO Table_Name
SELECT field_name1,field_name2 ......
FROM DB5.database_name.table_Name
By DB4 and DB5 you presumably mean servers, not databases; the names here are confusing.
Nevertheless, if you have access to DB4 and DB4 is selecting from DB5, then DB5 is a linked server registered in DB4 with a user who has query permission on DB5's databases and MAY also have insert/update/delete/create-object permissions. If so, you can create a table in DB5.database_name as follows:
CREATE TABLE DB5.database_name.dbo.Table_Flag(Uflag bit NOT NULL)
GO
INSERT INTO DB5.database_name.dbo.Table_Flag(Uflag) values (0)
then you can create a trigger on the table being updated in the DB5 database that sets Uflag to 1 whenever rows are inserted, updated, or deleted.
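A minimal sketch of such a trigger (the source table name is a placeholder; this assumes the single-row Table_Flag created above):

```sql
-- Hypothetical example: fires on the source table inside DB5's database
CREATE TRIGGER TR_Source_Table_SetFlag
ON dbo.Source_Table
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Mark that new changes exist; DB4's job resets this to 0 after copying
    UPDATE dbo.Table_Flag SET Uflag = 1;
END
```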
Then you can modify the job in DB4 as:
declare @count int
set @count = (select count(*) from DB5.database_name.dbo.Table_Flag where Uflag = 1)
if (@count > 0)
begin
TRUNCATE TABLE Table_Name
INSERT INTO Table_Name
SELECT field_name1, field_name2 ......
FROM DB5.database_name.table_Name
UPDATE DB5.database_name.dbo.Table_Flag SET Uflag = 0
end
Using SAS Enterprise Guide Version 7.1 64 bit.
I cannot create base tables in SQL Server, so I can only create temporary tables for data processing.
I am also pulling data over from a SAS data set into a global temporary table (the SAS data set is a Teradata table pushed to SAS in step 2). I need to update the holiday table.
I also cannot see in the log how many records are updated (if there is a way to get it, that would be helpful, since then I would know the code is working). I have sastraceloc set, but it is not showing update counts for a table.
I finally need to update the ##t1 table created in step 1 (step 4) with fields from the ##holidays table (step 2); how do I code this? Am I using the execute / PROC SQL combination correctly?
Step #1
LIBNAME tmpdt ODBC DATAsrc=datasrcname INSERTBUFF=32767
USER='uid' PASSWORD="pwd" connection=shared;
LIBNAME loc '/c/folder/data';
PROC SQL exec noerrorstop;
CONNECT TO odbc as test(DSN=datasrcname USER='uid' PASSWORD="pwd"
connection=shared);
execute
(create table ##t1 (
id int,
name varchar(50),
address varchar(100)
))by test;
-- end of 1
step #2
data tmpdt.'##holidays'n;
set loc.holidayexpns;
run;
step #3
proc sql;
connect using tmpdt;
execute
(
update u SET
fee=0,
month=5
FROM tmpdt.##holidays u
)by tmpdt;
step #4
PROC SQL exec noerrorstop;
CONNECT TO odbc as test(DSN=datasrcname USER='uid' PASSWORD="pwd"
connection=shared);
execute
(
update xyz
set a.fee=b.fee,
a.month=b.month
from tmpdt.##holidays h
join ##t1 xyz on
h.id=xyz.id
)by test;
The EXECUTE native query will not understand the SAS libref tmpdt.; tmpdt will only resolve on the SQL Server side if your login happens to have access to a catalog of the same name there. Did you mean to indicate tempdb?
Per documentation
Any return code or message that is generated by the DBMS is available in the macro variables SQLXRC and SQLXMSG after the statement completes.
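Under that assumption, step 4's pass-through update could be sketched so that both tables are referenced by their SQL Server names rather than through the SAS libref (the join column id is an assumption carried over from step 1):

```sas
PROC SQL exec noerrorstop;
CONNECT TO odbc as test(DSN=datasrcname USER='uid' PASSWORD="pwd"
connection=shared);
/* Everything inside execute(...) is native T-SQL, so use the SQL Server
   names ##t1 and ##holidays, not the SAS libref tmpdt. */
execute
(
update xyz
set fee = h.fee,
    month = h.month
from ##t1 xyz
join ##holidays h on h.id = xyz.id
)by test;
/* Surface the DBMS return code and message, per the documentation above */
%put &sqlxrc &sqlxmsg;
quit;
```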
I have a database on SQL Server 2008 R2.
On that database, a delete query over 400 million records has been running for 4 days, but I need to reboot the machine. How can I force it to commit whatever has been deleted so far? I want to keep the deletions the running query has already made rather than have them rolled back.
The problem is that the query is still running and will not complete before the server reboot.
Note: I have not set any isolation level or BEGIN/END TRANSACTION for the query. The query is running in SSMS.
If the machine reboots or I cancel the query, the database will go into recovery mode and will keep recovering for the next 2 days; then I would need to re-run the delete, which would cost me another 4 days.
I would really appreciate any suggestion, help, or guidance on this.
I am a novice user of SQL Server.
There is no way to stop SQL Server from trying to bring the database into a transactionally consistent state. Every single statement is implicitly a transaction itself (if not part of an outer transaction) and executes either completely or not at all. So whether you cancel the query, disconnect, or reboot the server, SQL Server will use the transaction log to write the original values back to the updated data pages.
Next time you delete that many rows, don't do it all at once. Divide the job into smaller chunks (I always use 5,000 as a magic number, meaning I delete 5,000 rows at a time in a loop) to minimize transaction log use and locking.
set rowcount 5000
delete from TableName
while @@rowcount = 5000
    delete from TableName
set rowcount 0
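On SQL Server 2008 and later, SET ROWCOUNT is deprecated for data-modification statements; an equivalent sketch using DELETE TOP (the table name and predicate are placeholders):

```sql
-- Delete in 5,000-row batches; each batch commits as its own
-- implicit transaction, keeping the log and lock footprint small
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.TableName
    WHERE category = 1;  -- hypothetical predicate

    IF @@ROWCOUNT < 5000 BREAK;  -- last batch done
END
```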
If you are deleting that many rows, you may have better luck with TRUNCATE, which removes all rows from a table very efficiently. However, I'm assuming you would like to keep some of the records. The stored procedure below backs up the data you want to keep into a temp table, truncates the table, then re-inserts the saved records. This can clean a huge table very quickly.
Note that TRUNCATE doesn't play well with foreign key constraints, so you may need to drop those and recreate them after the clean.
CREATE PROCEDURE [dbo].[deleteTableFast] (
@TableName VARCHAR(100),
@WhereClause varchar(1000))
AS
BEGIN
-- input:
-- table name: the table to clean
-- where clause: the where clause of the records to KEEP
declare @tempTableName varchar(100);
set @tempTableName = @TableName+'_temp_to_truncate';
-- error checking
if exists (SELECT [Table_Name] FROM Information_Schema.TABLES WHERE [TABLE_NAME] = @tempTableName) begin
print 'ERROR: temp table already exists ... exiting'
return
end
if not exists (SELECT [Table_Name] FROM Information_Schema.TABLES WHERE [TABLE_NAME] = @TableName) begin
print 'ERROR: table does not exist ... exiting'
return
end
-- save wanted records via a temp table to be able to truncate
exec ('select * into '+@tempTableName+' from '+@TableName+' WHERE '+@WhereClause);
exec ('truncate table '+@TableName);
exec ('insert into '+@TableName+' select * from '+@tempTableName);
exec ('drop table '+@tempTableName);
end
GO
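A hypothetical call (the table name and predicate are made up) that keeps only recent rows and truncates everything else away:

```sql
-- Rows matching @WhereClause are KEPT; all others are removed
EXEC dbo.deleteTableFast
    @TableName   = 'BigTable',
    @WhereClause = 'CreatedDate >= ''2020-01-01''';
```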
You must first understand the D (Durability) in ACID before you can understand why a database goes into recovery mode.
Generally speaking, you should avoid long-running SQL if possible. Long-running SQL means more lock time on resources, a larger transaction log, and a huge rollback time when it fails.
Consider dividing your task by some id or time range. For example, if you want to insert a large volume of data from TableSrc into TableTarget, you can write the query like:
DECLARE @BatchCount INT = 1000;
DECLARE @Id INT = 0;
DECLARE @Max INT = ...;
WHILE @Id < @Max
BEGIN
    INSERT INTO TableTarget
    SELECT *
    FROM TableSrc
    WHERE PrimaryKey >= @Id AND PrimaryKey < @Id + @BatchCount;
    SET @Id = @Id + @BatchCount;
END
It's ugly, more code, and more error prone. But it's the only way I know to deal with huge data volumes.
I'm trying to write a query that can run on different servers. One way I'm trying to detect which server I'm on is the presence of a certain linked server (i.e. Server1 will have a link to Server2 and vice versa).
The trouble is, I can't get SQL Server to ignore/skip the code that runs against the non-existent linked server. There are two nearly identical sections of code, one which uses the linked Server1 and one which does not (because it's already running on Server1).
drop table #origdates
if exists(select 1 from sys.servers where name = N'Server1')
BEGIN
Select * into #origdates from openquery([Server1],'Select accounts, dates from table1')
END
if not exists(select 1 from sys.servers where name = N'Server1')
BEGIN
Select accounts, dates into #origdates from table1
END
If I execute the individual sections, everything is fine; the code either executes or not as specified. But the moment I run the entire thing together, it's as if the server ignores the if exists section, with an error like:
Could not find server 'Server1' in sys.servers. Verify that the correct server name was specified. If necessary, execute the stored procedure sp_addlinkedserver to add the server to sys.servers.
The reason I'm doing this is so I don't have to maintain two identical scripts with two separate beginning sections.
Using ELSE in place of the second if not exists line results in the server complaining that the #origdates table already exists, even if a DROP TABLE command is issued right before the SELECT INTO.
Using different table names brings the error back to the 'Could not find server' message, despite the fact that it's not even supposed to be executing that code at all...
Try this. SQL Server is trying to validate the OPENQUERY at compile time, but it can't because [Server1] is not a valid linked server; hiding the OPENQUERY inside a variable defers that validation until the EXEC runs.
Note that you need to pass FROM db.owner.table inside an OPENQUERY, not just FROM table.
declare @sql nvarchar(max)
if object_id('tempdb..#origdates') is not null
drop table #origdates
create table #origdates (accounts int, dates datetime)
if exists(select 1 from sys.servers where name = N'Server1')
BEGIN
set @sql='insert into #origdates Select * from openquery([Server1],''select accounts, dates from db.dbo.table1'')'
exec(@sql)
END
END
else
BEGIN
insert into #origdates Select accounts, dates from table1
END
I have a database table from which some data matching a certain condition are lost at a specific time daily, as if a statement like this were performed:
DELETE FROM table WHERE category = 1
I'd like to list all delete actions on this table through a SQL script, so I can know exactly how records are deleted: by which statement, which user, and at what time.
Does anyone have such a script? Or has anyone had a similar case and can advise?
The SQL version is SQL Server 2008 Enterprise Edition.
If this is just a short-term debugging issue, the easiest way to address it is probably to run SQL Server Profiler with filters set to capture the data you're interested in. No code changes that way.
For best performance, run SQL Profiler on a machine other than the DB server if you can.
Use an AFTER DELETE trigger on the table to log deletions into another table, along with the user and the time it was performed.
Using some advanced tricks you can extract the query text which deleted the rows, though I'm not sure how reliable that is inside a trigger.
The trigger might look like this:
CREATE TABLE YourLogTable
(
ID int identity primary key,
Date datetime NOT NULL DEFAULT GETDATE(),
[User] nvarchar(128) NOT NULL DEFAULT suser_sname(),
[SqlText] NVARCHAR(MAX),
[any other interesting columns from deleted rows]
)
GO
CREATE TRIGGER [TR.AD#YourTable]
ON YourTable
AFTER DELETE
AS
BEGIN
SET NOCOUNT ON;
DECLARE @sqlText NVARCHAR(MAX)
SELECT @sqlText = txt.text
FROM sys.dm_exec_connections c
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) txt
WHERE c.session_id = @@SPID
INSERT YourLogTable([SqlText], [any other interesting columns from deleted rows])
SELECT @sqlText, [any other interesting columns from deleted rows]
FROM DELETED
END
With SQL Server 2008, how can I detect if a record is locked?
EDIT:
I need to know this so I can notify the user that the record is not accessible because it is locked.
In most circumstances with SQL 2008 you can do something like:
if exists(select 0 from table with (nolock) where id = @id)
and not exists(select 0 from table with (readpast) where id = @id)
begin
-- Record is locked! Do something.
end
If that is not enough (that is, you need to detect table-level locks as well), use the NOWAIT table hint, which throws an error immediately if there's a lock.
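A sketch of that NOWAIT variant (the table and column names are placeholders). NOWAIT behaves like SET LOCK_TIMEOUT 0 for that one table reference: instead of waiting on a conflicting lock, SQL Server raises error 1222 ("Lock request time out period exceeded"), which can be caught and surfaced to the user:

```sql
DECLARE @id int = 42;  -- hypothetical key value

BEGIN TRY
    -- Raises error 1222 immediately if a row- or table-level lock blocks us
    SELECT 0 FROM dbo.MyTable WITH (NOWAIT) WHERE id = @id;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222
        PRINT 'Record is locked! Notify the user.';
END CATCH
```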