MSSQL Automatic Merge Database - sql

I have a PC with an MSSQL database in which 800 variables are populated every second. I need that database to merge/back up to a second database on another server PC at least every 10 minutes. Additionally, the first database needs to be wiped clean once per week to save local drive space, so that only one week's worth of data is stored on it at any given time; the second database keeps everything intact and is never cleared, only added to by the merges that occur every 10 minutes.
To my knowledge, this rules out database mirroring, since the first database will be wiped every week. From what I have gathered, that means I need scheduled merges running every 10 minutes.
I will readily admit I know next to nothing about SQL. So my two questions are:
How do I set up scheduled merges from one database to another at 10-minute intervals?
How do I set a database to be scheduled/scripted so that it gets cleared every week?
(Note: both databases are running on MS SQL Server 2012 Standard.)

Assuming you can create a linked server on server A that connects to server B (here's a guide), create a trigger on your table, for example table1:
CREATE TRIGGER trigger1
ON table1
AFTER INSERT
AS
    INSERT INTO ServerB.databaseB.dbo.table1
    SELECT *
    FROM inserted;
More on triggers here.
For part 2, you can schedule a job to truncate the table on whatever schedule you would like. How to create a scheduled job.
The trigger only fires on inserts, so deleting rows on server A does nothing to the table on server B.
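For the weekly wipe, a SQL Server Agent job is the usual tool. A minimal sketch, assuming the local table is dbo.table1 in a database called databaseA (the job, schedule, and database names here are made up; adjust the day and time to suit):
USE msdb;
GO
EXEC dbo.sp_add_job
     @job_name = N'Weekly purge of table1';
EXEC dbo.sp_add_jobstep
     @job_name      = N'Weekly purge of table1',
     @step_name     = N'Truncate table1',
     @subsystem     = N'TSQL',
     @database_name = N'databaseA',
     @command       = N'TRUNCATE TABLE dbo.table1;';
EXEC dbo.sp_add_jobschedule
     @job_name = N'Weekly purge of table1',
     @name     = N'Every Sunday at 01:00',
     @freq_type = 8,               -- weekly
     @freq_interval = 1,           -- Sunday
     @freq_recurrence_factor = 1,  -- every week
     @active_start_time = 10000;   -- 01:00:00
EXEC dbo.sp_add_jobserver
     @job_name = N'Weekly purge of table1';
GO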

How is the purging/deleting of the data happening, via a stored proc? If so, you could also try transactional replication, and replicate the execution of that particular stored proc, but dummy the proc on the subscriber, so when the proc gets replicated and executed on the subscriber, nothing will get deleted/purged.
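A minimal sketch of that idea, assuming the purge really is done by a stored procedure (all object names below are invented): publish the execution of the proc, then replace its body on the subscriber so the replicated call becomes a no-op.
-- On the publisher: the real purge proc, whose execution is replicated.
CREATE PROCEDURE dbo.usp_PurgeOldData
AS
    DELETE FROM dbo.Readings
    WHERE  ReadingTime < DATEADD(DAY, -7, GETDATE());
GO

-- On the subscriber: overwrite the replicated proc with a dummy,
-- so the purge call arrives but deletes nothing and history is kept.
ALTER PROCEDURE dbo.usp_PurgeOldData
AS
    RETURN;
GO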

Related

How do I lock out writes to a specific table while several queries execute?

I have a table set up in my SQL Server that keeps track of inventory items (in another database) that have changed. This table is fed by several different triggers. Every 15 minutes a scheduled task runs a batch file that executes a number of different queries that send updates on the items flagged in this table to several ecommerce websites. The last query in the batch file resets the flags.
As you can imagine, there is potential to lose changes if an item is flagged while this batch file is running. I have worked around this by replaying the last 25 hours of updates every 24 hours, just in case that scenario happens. It works, but IMO it is kind of clumsy.
What I would like to do is delay any writes to this table until my script finishes, and resets the flags on all the rows that were flagged when the script started. Then allow all of these delayed writes to happen.
I've looked into doing this with table hints (TABLOCK) but this seems to be limited to one query--unless I'm misunderstanding what I have read, which is certainly possible. I have several that run in succession. TIA.
Alex
Could you modify your script into a stored procedure that extracts all the data into a temporary table, using a SELECT statement that takes a lock on the production table? You could then release the lock on the main table and do all your processing in the temporary table (or a permanent table built for the purpose), away from the live system. It will be slower and put more load on your SQL box, but speed shouldn't be an issue if you are working from a point-in-time snapshot.
If that option is not applicable, then maybe you could play with wrapping the whole thing in a transaction and putting a table lock on your production table with the first SELECT statement.
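A rough sketch of that first approach (table and column names are placeholders, not your actual schema): hold an exclusive lock only long enough to snapshot the flag table and reset the flags, then process the snapshot at leisure.
BEGIN TRANSACTION;

-- Point-in-time copy of the flagged rows; TABLOCKX keeps writers out
-- until the transaction commits.
SELECT *
INTO   #ChangedItems
FROM   dbo.ItemChanges WITH (TABLOCKX, HOLDLOCK);

-- Reset the flags on exactly the rows we captured.
UPDATE c
SET    c.Flagged = 0
FROM   dbo.ItemChanges AS c
JOIN   #ChangedItems   AS s ON s.ItemID = c.ItemID;

COMMIT TRANSACTION;   -- writers are released here

-- The ecommerce update queries can now work from #ChangedItems
-- without blocking further inserts into dbo.ItemChanges.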
Good luck mate

Best way to do a long running schema change (or data update) in MS Sql Server?

I need to alter the size of a column on a large table (millions of rows). It will be set to an nvarchar(n) rather than nvarchar(max), so from what I understand it should not be a long-running change. But since I will be doing this in production, I wanted to understand the ramifications in case it does take long.
Should I just hit F5 from SSMS like I execute normal queries? What happens if my machine crashes? Or goes to sleep? What's the general best practice for doing long running updates? Should it be scheduled as a job on the server maybe?
Thanks
Please DO NOT just hit F5. I did this once and lost all the data in the table. Depending on the change, the script that is generated for you actually stores the data in memory, drops the table, creates the new one with the change you want, and repopulates the data from memory. In my case, however, one of the changes was adding a unique constraint, so the repopulation failed, and once the statement finished the data held in memory was gone. That left me with a new, empty table.
I would create the table you are changing, with the change(s) you want, as a new table. Then SELECT * INTO the new table, and rename the tables in a single statement. If there is potential for data to be entered into the table while this is running and that is an issue, you may want to lock the table.
Depending on the size of the table and the duration of the statement, you may want to save the locking and renaming for later: after the initial population of the new table, do a differential population of the new data and then rename the tables.
Sorry for the long post.
Edit:
Also, if the connection times out because of the duration, run the insert statement locally on the DB server. You could also create a job and run that, though it is essentially the same thing.
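A minimal sketch of the copy-and-swap approach described above (the table, column names, and new size are assumptions, not your actual schema):
-- 1) Create the new table with the changed column definition.
CREATE TABLE dbo.BigTable_New
(
    Id    int           NOT NULL PRIMARY KEY,
    Notes nvarchar(500) NULL      -- was nvarchar(max)
    -- ...remaining columns as in dbo.BigTable...
);

-- 2) Copy the data across (run locally on the server if it is slow).
INSERT INTO dbo.BigTable_New (Id, Notes)
SELECT Id, LEFT(Notes, 500)
FROM   dbo.BigTable;

-- 3) Swap the tables in one short step; keep the old table until verified.
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.BigTable',     'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
COMMIT TRANSACTION;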

Query that falls back to different table if linked server query fails

Our test database is linked to a database owned by another department within our company. Whenever they bring their database down (like when refreshing with production data) our application goes down as well. The only thing we are doing with their database is we have a view that selects from one of their tables and we join to this view in a number of queries.
Ideally, whenever their system goes down, I'd like our view to pull from a backup of their table that exists in our database. It has slightly stale data, but at least we would be able to continue working. I thought of using a TRY...CATCH in the view or in a SQL function, but it is not supported in those. A stored procedure might work, except that you can't join to the results of a stored procedure in queries, can you?
How can I make my SELECT statements fall back to a backup table when the linked server's table is unavailable?
So what I ended up doing was to create a SQL Server Agent job that calls sp_testlinkedserver in a TRY...CATCH every few minutes and if it's down we alter the view to point to our backup table and if it's up, we alter it to point to the "live" data again. We also track the previous state so we only alter the view if the state has changed. It works pretty slick.
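A cut-down sketch of that job step (the server, database, view, and table names are all placeholders):
BEGIN TRY
    -- Throws if the linked server cannot be reached.
    EXEC sp_testlinkedserver N'OtherDeptServer';

    -- Link is up: point the view at the live remote table.
    EXEC ('ALTER VIEW dbo.vw_TheirData AS
           SELECT * FROM OtherDeptServer.TheirDb.dbo.TheirTable;');
END TRY
BEGIN CATCH
    -- Link is down: fall back to the local, slightly stale copy.
    EXEC ('ALTER VIEW dbo.vw_TheirData AS
           SELECT * FROM dbo.TheirTable_Backup;');
END CATCH;
-- The real job also remembers the previous state, so the view is only
-- altered when the link's availability actually changes.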

SQL Server - mirror some columns from tables to another database on the same server without replication

I have a SQL Server 2012 Web Edition (11.0.5058.0) instance on a VPS which hosts two databases. I would like to mirror a couple of columns from 3 tables to the second database, but I don't have transactional replication installed.
So I have a Staff table in the source database - I just want the staff_code and unique_id - and I have an Activity table - I just need the activity_code, description and unique_id, etc.
What is the best way to go about this - would that be triggers? The data is not regularly updated, possibly once a week, but I would still like the synchronisation to be fast if possible.
The data in the source database may be deleted, updated or inserted, by another application, so I want to ensure the data in my database reflects that information correctly.
Thanks for any suggestions!
UPDATED: Table comparison example:
SELECT CASE
         WHEN NOT EXISTS
              ( SELECT [COLUMN1], [COLUMN2], [UNIQUE_ID] FROM [SOURCE-DATABASE].[dbo].[SOURCE-TABLE]
                EXCEPT
                SELECT [COLUMN1], [COLUMN2], [UNIQUE_ID] FROM [DESTINATION-DATABASE].[dbo].[DESTINATION-TABLE] )
          AND NOT EXISTS
              ( SELECT [COLUMN1], [COLUMN2], [UNIQUE_ID] FROM [DESTINATION-DATABASE].[dbo].[DESTINATION-TABLE]
                EXCEPT
                SELECT [COLUMN1], [COLUMN2], [UNIQUE_ID] FROM [SOURCE-DATABASE].[dbo].[SOURCE-TABLE] )
         THEN 'True'
         ELSE 'False' -- grab new or updated data
       END AS result;
As long as the two databases can be connected (e.g. can you do a SELECT * FROM SecondDB.dbo.Activity?), then I would just:
- set up a query (stand-alone, or in a stored procedure) that checks whether or not the data on the source has changed
- update the second database using normal SELECT, INSERT, UPDATE and possibly DELETE statements
- set up that query/stored procedure with a SQL Server Agent job to run at regular intervals, e.g. once every night, once every week - whatever works for you
I don't think triggers would be a good choice here - triggers should be kept very small, lean, and fast - and "replicating" to another database sounds like too much processing work for a nimble trigger. Also, if your triggers take a long time to complete, the calling application has to wait for that whole time - not good for your application's performance!
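For the update step of such a job, a MERGE keyed on unique_id keeps the destination copy in line with inserts, updates, and deletes on the source. A minimal sketch for the Staff example, reusing the bracketed database placeholders from the question (the column list is only the two columns mentioned):
MERGE [DESTINATION-DATABASE].dbo.Staff AS dest
USING (SELECT unique_id, staff_code
       FROM   [SOURCE-DATABASE].dbo.Staff) AS src
   ON dest.unique_id = src.unique_id
WHEN MATCHED AND dest.staff_code <> src.staff_code THEN
    UPDATE SET dest.staff_code = src.staff_code
WHEN NOT MATCHED BY TARGET THEN
    INSERT (unique_id, staff_code)
    VALUES (src.unique_id, src.staff_code)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;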

Delete data from SQL Server after five days

I want to write a trigger or function that runs automatically and deletes data that is 5 days old. I have a date column in the table that stores the current date. The program should run automatically and delete such data.
I am using SQL Server 2008.
Better to create a job that runs every night to delete the old data.
A trigger is not a good solution for this case, because every INSERT or UPDATE would invoke the trigger and slow you down.
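The job step itself can be a single statement; a sketch, assuming the table is dbo.MyTable and the column that stores the insert date is called CreatedDate (both names are placeholders):
-- Run nightly from a SQL Server Agent job; removes rows older than 5 days.
DELETE FROM dbo.MyTable
WHERE  CreatedDate < DATEADD(DAY, -5, GETDATE());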