Query that falls back to a different table if the linked server query fails

Our test database is linked to a database owned by another department within our company. Whenever they bring their database down (like when refreshing with production data) our application goes down as well. The only thing we are doing with their database is we have a view that selects from one of their tables and we join to this view in a number of queries.
Ideally, whenever their system goes down, I'd like our view to pull from a backup of their table that exists in our database. It has slightly stale data, but at least we would be able to keep working. I thought of using TRY...CATCH in the view or in a SQL function, but it is not supported in either. A stored procedure might work, except that you can't join to the results of a stored procedure in queries, can you?
How can I make my SELECT statements fall back to a backup table when the linked server's table is unavailable?

What I ended up doing was creating a SQL Server Agent job that calls sp_testlinkedserver inside a TRY...CATCH every few minutes; if the linked server is down, we alter the view to point to our backup table, and if it's back up, we alter it to point to the "live" data again. We also track the previous state so we only alter the view when the state has changed. It works pretty slick.
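A minimal sketch of that job step, assuming a linked server named LinkedSrv and placeholder view/table names (none of these identifiers are from the original setup). Because ALTER VIEW must be the only statement in its batch, it has to go through sp_executesql:

BEGIN TRY
    -- Throws if the linked server cannot be reached.
    EXEC sp_testlinkedserver N'LinkedSrv';
    -- Reachable: point the view at the live remote table.
    EXEC sp_executesql N'ALTER VIEW dbo.DeptData AS
        SELECT * FROM LinkedSrv.DeptDb.dbo.DeptTable;';
END TRY
BEGIN CATCH
    -- Unreachable: fall back to the local, slightly stale backup copy.
    EXEC sp_executesql N'ALTER VIEW dbo.DeptData AS
        SELECT * FROM dbo.DeptTable_Backup;';
END CATCH;

The real job would first consult the recorded previous state and skip the ALTER when nothing has changed.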

Related

Execute an INSERT, UPDATE and DELETE against a SQL Server Database Snapshot

According to http://blogs.msdn.com/b/sqlcat/archive/2011/10/17/updating-a-database-snapshot.aspx I should be able to successfully execute an INSERT, UPDATE and DELETE against a Database Snapshot.
The idea is to create a view of the table before you create the snapshot, then create the snapshot and perform the updates through that view in the snapshot.
I have tried this on my SQL Server 2014 (v12.0.2269) and I still get the error
Failed to update database "Snapshot2015_07" because the database is read-only.
The reason I am keen for this to work is that financials need to be frozen at a particular date, but need to be updated if errors are found in the snapshot.
Has anyone had success recently doing this?
I know there are alternatives like AutoAudit, but it is a lot of work to implement for 1-2 updates/deletes on a database with multiple tables of 5 million+ rows.
The view has to specify the database name (which is the original database name, not the snapshot database name), along with the schema and table name. Ensure the view you created specifies those three parts of the fully qualified object name.
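A hedged sketch of that setup, using placeholder names (FinanceDb as the source database and dbo.Ledger as the table; Snapshot2015_07 is the snapshot from the error message above):

-- In the source database, create the view using the three-part name:
CREATE VIEW dbo.vLedger
AS
SELECT * FROM FinanceDb.dbo.Ledger;  -- database.schema.table of the ORIGINAL
GO
-- An update issued against the snapshot's copy of the view resolves back
-- to the writable source database instead of the read-only snapshot:
UPDATE Snapshot2015_07.dbo.vLedger
SET Amount = 0.00
WHERE LedgerId = 42;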

MSSQL Automatic Merge Database

I have a PC that has a MSSQL database with 800 variables being populated every second. I need that database to merge/backup to a second database on another server PC at least every 10 minutes. Additionally, the first database needs to be wiped clean once per week, in order to save local drive space, so that only 1 week's worth of data is stored on that first database at any given time; meanwhile, the second database keeps everything intact and never gets cleared, only being added upon by the merges that occur every 10 minutes.
To my knowledge, this means I cannot rely on database mirroring, since the first one will be wiped every week. So from what I have gathered, this means I have to have scheduled merges going on every 10 minutes.
I will readily admit I know next to nothing about SQL. So my two questions are:
How do I set up scheduled merges to occur from one database to another in 10 minute frequencies?
How do I set a database to be scheduled/scripted so that it gets cleared every week?
(Note: both databases are running on MS SQL Server 2012 Standard.)
Assuming you can create a linked server on server A that connects to server B (here's a guide), you can then create a trigger on your table, for example table1:
CREATE TRIGGER trigger1
ON table1
AFTER INSERT
AS
    -- Copy each newly inserted row to the matching table on server B.
    -- SELECT * assumes both tables have identical columns in the same order.
    INSERT INTO ServerB.databaseB.dbo.table1
    SELECT *
    FROM inserted;
More on triggers here.
For part 2, you can schedule a job to truncate the table on whatever schedule you would like. How to create a scheduled job.
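A hedged sketch of that weekly job using msdb's job procedures (job, database and schedule names are placeholders; the linked guide covers the full options). TRUNCATE assumes no foreign keys reference table1; use DELETE otherwise:

USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'WeeklyWipe_table1';
EXEC dbo.sp_add_jobstep
    @job_name      = N'WeeklyWipe_table1',
    @step_name     = N'Truncate table1',
    @subsystem     = N'TSQL',
    @database_name = N'databaseA',
    @command       = N'TRUNCATE TABLE dbo.table1;';
EXEC dbo.sp_add_jobschedule
    @job_name               = N'WeeklyWipe_table1',
    @name                   = N'Weekly - Sunday 01:00',
    @freq_type              = 8,       -- weekly
    @freq_interval          = 1,       -- Sunday
    @freq_recurrence_factor = 1,       -- every week
    @active_start_time      = 010000;  -- 01:00:00
EXEC dbo.sp_add_jobserver @job_name = N'WeeklyWipe_table1';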
The trigger only fires on Inserts so deleting rows does nothing to the table on server B.
How is the purging/deleting of the data happening, via a stored proc? If so, you could also try transactional replication, and replicate the execution of that particular stored proc, but dummy the proc on the subscriber, so when the proc gets replicated and executed on the subscriber, nothing will get deleted/purged.
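For example, if the purge runs through a proc like this (the name and the LoggedAt column are hypothetical), the subscriber's copy can simply be dummied out:

-- On the publisher (server A): the real purge.
CREATE PROCEDURE dbo.PurgeOldRows
AS
    DELETE FROM dbo.table1
    WHERE LoggedAt < DATEADD(DAY, -7, SYSDATETIME());
GO

-- On the subscriber (server B): same name, no-op body, so the replicated
-- execution deletes nothing there.
CREATE PROCEDURE dbo.PurgeOldRows
AS
    RETURN;
GO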

SQL Server - mirror some columns from tables to another database on the same server without replication

I have a SQL Server 2012 Web Edition (11.0.5058.0) instance on a VPS which hosts two databases. I would like to mirror a couple of columns from 3 tables to the second database, but I don't have transactional replication installed.
So I have a Staff table in the source database, from which I just want the staff_code and unique_id; I have an Activity table, from which I just need the activity_code, description and unique_id, etc.
What is the best way to go about this - would that be triggers? The data is not regularly updated, possibly once a week, but I would still like the synchronisation to be fast if possible.
The data in the source database may be deleted, updated or inserted, by another application, so I want to ensure the data in my database reflects that information correctly.
Thanks for any suggestions!
UPDATED: Table comparison example:
SELECT CASE
         WHEN NOT EXISTS
           ( SELECT [COLUMN1],[COLUMN2],[UNIQUE_ID] FROM [SOURCE-DATABASE].[dbo].[SOURCE-TABLE]
             EXCEPT
             SELECT [COLUMN1],[COLUMN2],[UNIQUE_ID] FROM [DESTINATION-DATABASE].[dbo].[DESTINATION-TABLE] )
         AND NOT EXISTS
           ( SELECT [COLUMN1],[COLUMN2],[UNIQUE_ID] FROM [DESTINATION-DATABASE].[dbo].[DESTINATION-TABLE]
             EXCEPT
             SELECT [COLUMN1],[COLUMN2],[UNIQUE_ID] FROM [SOURCE-DATABASE].[dbo].[SOURCE-TABLE] )
         THEN 'True'
         ELSE 'False'  -- tables differ: grab new or updated data
       END AS result;
As long as the two databases can be connected (e.g. can you do a SELECT * FROM SecondDB.dbo.Activity?), then I would just:
- set up a query (stand-alone, or in a stored procedure) that checks whether or not the data on the source has changed
- update the second database using normal SELECT, INSERT, UPDATE and possibly DELETE statements (a MERGE sketch follows this list)
- set up that query/stored procedure as a SQL Server Agent job to run at regular intervals, e.g. once every night, once every week - whatever works for you
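A hedged sketch of that sync as a single MERGE, using the poster's Staff columns and placeholder database names:

MERGE SecondDB.dbo.Staff AS dst
USING (SELECT staff_code, unique_id
       FROM SourceDB.dbo.Staff) AS src
    ON dst.unique_id = src.unique_id
WHEN MATCHED AND dst.staff_code <> src.staff_code THEN
    UPDATE SET staff_code = src.staff_code
WHEN NOT MATCHED BY TARGET THEN
    INSERT (staff_code, unique_id)
    VALUES (src.staff_code, src.unique_id)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;  -- mirror deletions from the source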
I don't think triggers would be a good choice here. Triggers should be kept very small, lean and fast, and "replicating" to another database sounds like too much processing work for a nimble trigger (also, if your triggers take a long time to complete, the calling application has to wait that whole time - not good for your application performance!).

Can I restore the content of a table after deleting all the rows inside it?

I am very new to Microsoft SQL Server and I am not so into databases.
Yesterday I made an error and deleted all the rows in the wrong table (I should have deleted the records in another table).
So now it is very important for me to restore, in some way, all the deleted records in that table (only these records and not the whole DB, if that is somehow possible).
For completeness, the table is named dbo.VulnerabilityWorkaround and has the following fields:
Id: int not null (is the PK)
Description: varchar(max), not null
I think that SQL Server retains the information related to deleted records in a log file (or in something like it, maybe a DB table... I don't know).
Can I in some way restore my original dbo.VulnerabilityWorkaround with a query or something like it?
There is the transaction log, but whether it can be used depends on the backup strategy of the database instance (it requires the FULL recovery model and log backups), and you would have to fire up a restore operation.
Other than restoring a previous backup, I don't think you have many options.
Since you just need one table, it could be easier to restore the backup to a different server and then copy/move only the data you need using SSIS or Bulk Import/Export.
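A hedged sketch of that final copy, assuming the backup has been restored as a database named RestoredDb reachable from the original instance (if Id is an IDENTITY column, wrap the insert in SET IDENTITY_INSERT dbo.VulnerabilityWorkaround ON/OFF):

-- Copy just the one emptied table back from the restored copy.
INSERT INTO dbo.VulnerabilityWorkaround (Id, Description)
SELECT Id, Description
FROM RestoredDb.dbo.VulnerabilityWorkaround;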

Why does a temp table work but not a permanent table?

I've written a SQL query for a report that creates a permanent table and then performs a bunch of inserts and updates to get all the data, according to company policy. It runs fine in SQL Server Management Studio and in Crystal Reports 2008 on my machine. However, when I schedule it to run on the server with SAP BusinessObjects Central Management Console, it fails with the error "Associated statement not prepared."
I have found that changing this permanent table to be a temp table makes the query work. Why would this be?
Some research shows that this error is sometimes sent instead of the true error. Other people reporting it talk of foreign key and (I would also assume) duplicate key errors.
Things I would check:
- Does your permanent table have any unique constraints that might be violated? Or any foreign key constraints?
- Are you creating indexes on the table after it has been created?
- Are you creating any views over this permanent table?
- What happens if the table already exists before the job is run?
- What happens to the table if the job fails?
- Are there any intermediate steps (such as within a stored procedure) that might involve additional temp or permanent tables?
ETA: Also check what schema the permanent table belongs to: is it usually created with "dbo"? Are you specifying that explicitly? Is there any chance that there might be a permissions problem?
That is often a generic error. Are you able to run it on the server as the account that it is scheduled to run as? It is most likely a permission error or constraint issue.
Assuming you really need a regular table, why is it not possible to create the permanent table once, versus creating it every time you run the query?
Recreating a regular user table each time the query runs does not seem right. But to make it work, you may try to recreate the table in a separate batch or query (e.g. put GO in the script, which splits it into separate batches).
Regarding why it happens, I'm thinking about statement caching. The server compiles the query and stores the result for some time in case the same query has to run again. So it's my speculation that it tries to run the compiled query, which refers to the table you have already dropped and recreated under the same name. The name is the same, but physically it's a new table. You could be hitting a bug in the server this way. Just speculation; it could be a different kind of problem.
Without seeing code it's a guess, but since you are creating a permanent table every time you run the report, I assume you must be dropping the table at some point? (Or you'd have a LOT of tables building up over time.)
I suggest a couple of angles to consider:
1) Make certain to prefix table names (perhaps by a session ID or something) if you are concerned about concurrency/locking issues and the like, so each report run has a table exclusive to itself.
2) If you are dropping the table at the end, instead adjust your logic to leave the table be. Write code that drops it when you (re)start the operation (a sketch follows). It's possible the report is clinging to the table and you are destroying it prematurely.
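A hedged sketch of suggestion 2), with hypothetical table and column names; the GO separators also keep the DDL in its own batch, as suggested above:

-- Drop any leftover copy at the START of the run, not at the end.
IF OBJECT_ID('dbo.ReportData', 'U') IS NOT NULL
    DROP TABLE dbo.ReportData;
GO
CREATE TABLE dbo.ReportData
(
    ReportId   int           NOT NULL,
    ReportDate date          NOT NULL,
    Amount     decimal(18,2) NULL
);
GO
-- ...the report's inserts and updates then populate dbo.ReportData.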