I inherited a SQL Server where we put data into a table (call it TableA) in a database (DB-A). I can see that TableA in another database on the same server (DB-B) gets the same data right away.
Any ideas how this is implemented? I have been trying to trace it, but so far no luck. Does anyone have an idea?
At this stage I am not sure if it's replication; that is just a guess.
It could be replication or it could be a trigger on the source table that is moving the data over.
Perhaps it is transactional replication? You should be able to go to the replication area and see if there are subscribers or publishers.
Either that or you have linked servers, and triggers are copying the data.
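A quick way to narrow down both possibilities is a sketch like this (DB-A, DB-B and TableA are the placeholder names from the question; exact property names are my assumption):
-- 1 means the database participates in replication
SELECT DATABASEPROPERTYEX('DB-A', 'IsPublished')  AS db_a_is_published,
       DATABASEPROPERTYEX('DB-B', 'IsSubscribed') AS db_b_is_subscribed;

-- any triggers defined on the source table? (run in DB-A)
SELECT name FROM sys.triggers WHERE parent_id = OBJECT_ID('dbo.TableA');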
This is most likely happening by use of either a synonym or cross-database view. Check to see if the "table" on the other database really is a table. If it IS a table, then they've set up transactional replication between the two databases.
-- run this in DB-B; SYNONYM or VIEW means it is not a base table, USER_TABLE means it is
select type_desc from sys.objects where name = 'name_on_database_b'
Related
I have 10 SQL Servers. On every server there is a catalog MASTER_DATA. This catalog has a table called Employees. Whenever there are changes in the employee info, the MASTER_DATA catalog on the CENTRAL server gets updated.
Now what I have to do is cascade the changes in the Employees table to the MASTER_DATA catalogs on all the servers. After this, the same changes need to be cascaded to all the other catalogs (other than the MASTER_DATA catalog) on all the servers.
I have the following options to do this:
SSIS Packages
Replication
Plain Old TSQL Queries
What would be the best way to do this? Also, are there any other ways to do the same?
Based on the information you have provided, and assuming that by "catalog" you mean "database", this seems like an ideal use-case for transactional replication.
Your CENTRAL.MASTER_DATA database would be the publisher; all the other databases would be subscribers.
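For reference, a heavily trimmed sketch of what the publication side looks like in T-SQL (the publication and server names are placeholders; a real setup also needs a distributor and the snapshot/log reader agents, and most people drive it through the SSMS replication wizards instead):
-- on the CENTRAL server, enable MASTER_DATA for transactional publishing
EXEC sp_replicationdboption @dbname = 'MASTER_DATA', @optname = 'publish', @value = 'true';

-- publish the Employees table
EXEC sp_addpublication @publication = 'MasterDataPub', @status = 'active';
EXEC sp_addarticle @publication = 'MasterDataPub', @article = 'Employees',
    @source_owner = 'dbo', @source_object = 'Employees';

-- one subscription per target server/database
EXEC sp_addsubscription @publication = 'MasterDataPub',
    @subscriber = 'SERVER01', @destination_db = 'MASTER_DATA';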
It's not clear from your description why the second tier of duplication is required, i.e. why each non-MASTER_DATA database needs its own copy of the Employees table. Is there a reason not to have queries refer to the local MASTER_DATA copy of the data? You could use a synonym in the non-MASTER_DATA databases to avoid having to change your queries.
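The synonym approach is a one-liner per database (a sketch using the names from the question; requires SQL Server 2005 or later):
-- run in each non-MASTER_DATA database on a given server
CREATE SYNONYM dbo.Employees FOR MASTER_DATA.dbo.Employees;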
I've been told that an RDBMS (SQL Server in this case) makes use of the temporary database (tempdb) to do some of its internal work, for instance when a SELECT COUNT(column) FROM foo query is performed.
What kind of queries / statements trigger the use of the temporary database?
Background:
We are currently about to change the collation on our application database, but we have been told there might be problems if that database makes use of the temporary database, because the two will have different collations; the temporary database is already being used by other applications, so its collation has to stay as it is.
So we want to identify what kinds of queries may trigger tempdb usage and see if they'll run into any problems (an example of the sort of conflict we're worried about is sketched below).
I've found this about when tempdb is used:
http://msdn.microsoft.com/en-us/library/ms190768.aspx
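To make the collation concern concrete, here is a sketch of the kind of statement that can break; dbo.Customers and its column are hypothetical, and the failure assumes the application database and tempdb end up with different collations:
CREATE TABLE #names (name varchar(50));   -- a plain #temp column inherits tempdb's collation

INSERT INTO #names VALUES ('Smith');

SELECT c.CustomerName
FROM dbo.Customers AS c                   -- hypothetical table in the application database
JOIN #names AS n
  ON c.CustomerName = n.name;             -- can fail with "Cannot resolve the collation conflict..."

-- a common workaround is to force one side's collation explicitly:
--   ON c.CustomerName = n.name COLLATE DATABASE_DEFAULT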
In SQL Server, on the subscription side, how can you tell whether a table is part of a replication subscription?
Any ideas?
I'm not sure there's a simple answer to this; it probably varies based on the type of replication, and you may have to rely on heuristics to answer it.
For snapshot replication, I'm unable to think of anything that would give the game away. Obviously, the presence of the replication tables (e.g. MSreplication_objects) tells you that replication is occurring within the database, but there aren't any specific clues about tables, so far as I'm aware.
For transactional replication (non-updating), you may be able to go via MSreplication_objects (which will list some stored procedures) and then use sys.sql_dependencies to locate the tables these relate to.
For transactional replication (with updating subscribers), you can look in MSsubscription_articles (or look for the presence of the subscription-updating triggers against the table).
For merge replication, you can look in sysmergearticles, but you'd also have to look in sysmergesubscriptions to determine that you're on the subscription side.
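For the merge case, a minimal sketch of the checks described above (exact column layouts vary by version, so treat this as a starting point):
-- articles this database knows about
SELECT name FROM dbo.sysmergearticles;

-- rows here describing a remote publisher suggest you are on the subscription side
SELECT * FROM dbo.sysmergesubscriptions;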
Go to the subscriber database and check for the table dbo.MSreplication_subscriptions. If the database is a subscriber, you will find this table. Also, to find out the articles, use this in the subscribed database:
SELECT publisher, publisher_db, publication, article
FROM dbo.MSreplication_objects
I used Damien the Unbeliever's idea (+1) to produce this code, which worked for me:
SELECT DISTINCT
    ot.object_id
   ,ot.schema_id
   ,r.publisher
   ,r.publisher_db
   ,r.publication
   ,r.article
FROM dbo.MSreplication_objects r
INNER JOIN sys.objects so                      --replication stored procedures
    ON r.object_name = so.name
   AND so.type = 'P'
INNER JOIN sys.sql_dependencies dp
    ON so.object_id = dp.object_id
INNER JOIN sys.objects ot                      --the replicated tables themselves
    ON dp.referenced_major_id = ot.object_id
   AND r.article = ot.name
The simplest way would be to create a linked server to the main (distributor) server and query the table [distribution].[dbo].[MSarticles]:
select * from [distribution].[dbo].[MSarticles]
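From the subscriber, that would look something like this ([DistributorServer] stands in for whatever you name the linked server, and the selected column names are my assumption):
SELECT a.publisher_db, a.article, a.destination_object
FROM [DistributorServer].[distribution].[dbo].[MSarticles] AS a;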
Take a look at DATABASEPROPERTYEX. It has an 'IsSubscribed' option that should do what you want it to do.
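For example (note that this reports at the database level rather than per table):
-- 1 = the current database is a replication subscriber, 0 = it is not
SELECT DATABASEPROPERTYEX(DB_NAME(), 'IsSubscribed') AS IsSubscribed;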
I have a SQL table from which data is being deleted, and nobody knows how it is being deleted. I added a trigger, so I know the time it happens, but no jobs are running then that would delete the data. I also added a trigger that fires whenever rows are deleted from this table and inserts the deleted rows plus SYSTEM_USER into a log table, but that doesn't help much. Is there anything better I can do to find out who and how the data gets deleted? Would it be possible to get the server ID or something? Thanks for any advice.
Sorry: I am using SQL Server 2000.
Update 1: It's important to find out how the data gets deleted - preferably I would like to know the DTS package or SQL statement that is being executed.
Just a guess, but do you have cascading deletes set up via one of the parent tables (those referenced by this table's foreign keys)? If so, deleting the parent row also removes the entries in the child table.
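On SQL Server 2000 you could check for that with something like this ('dbo.YourTable' is a placeholder for the table that is losing rows):
-- foreign keys on the table that were created with ON DELETE CASCADE
SELECT name AS constraint_name
FROM sysobjects
WHERE xtype = 'F'
  AND parent_obj = OBJECT_ID('dbo.YourTable')
  AND OBJECTPROPERTY(id, 'CnstIsDeleteCascade') = 1;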
If the recovery mode is set to "Full", you can check the logs.
Beyond that, remove any DELETE grants on the table. If it still happens, whoever is doing it has dbo/sysadmin-level access - so change the password...
Try logging all transactions for the time being, even if it hurts performance. MS offers SQL Profiler, including options for Express versions if needed. With it, you should be able to log transactions. As an alternative to Profiler, you can use the trace_build stored procedure to dump activity into trace files, then just Ctrl-F for any instance of the word 'delete' or other similar keywords. For more info, see this SO page:
Logging ALL Queries on a SQL Server 2008 Express Database?
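As a rough sketch of what such a server-side trace might look like (the output path is a placeholder; event 12 is SQL:BatchCompleted, trace columns 1 and 11 are TextData and LoginName):
DECLARE @TraceID int, @maxsize bigint, @on bit;
SELECT @maxsize = 50, @on = 1;

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\delete_hunt', @maxsize;

-- capture statement text and login name for completed batches
EXEC sp_trace_setevent  @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent  @TraceID, 12, 11, @on;

-- only keep batches that contain the word DELETE
EXEC sp_trace_setfilter @TraceID, 1, 0, 6, N'%DELETE%';

EXEC sp_trace_setstatus @TraceID, 1;   -- start the trace; stop with status 0, close with 2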
Also, and this may sound stupid, but investigate the possibility that what you are seeing is not deletes at all. Instead, check whether records are simply being 'updated', 'replaced if already exists', 'upserted', or whatever you like to call it. In MySQL this is the 'INSERT ... ON DUPLICATE KEY UPDATE' statement; I'm not sure of the MSSQL variant.
What recovery model is your database in? If it is Full, Red Gate's Log Rescue is free and works against SQL 2000, which might help you retrieve the deleted data. The overview video does appear to show a user column.
Or you could roll your own query against fn_dblog
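For example (fn_dblog is undocumented, its columns and operation names vary between versions, and on SQL Server 2000 it needs the :: prefix):
SELECT *
FROM ::fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS';   -- row-delete log records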
Change all your passwords. Give as few people delete access as possible.
I accidentally deleted some rows from a table, but I do have daily database backups.
How can I restore only the deleted records from the backup database?
Thanks in advance
You didn't say which database server, so I can't be sure this will work, but I believe the syntax on MSSQL would be:
UPDATE livefile
SET livefile.bloodtypefield = oldfile.bloodtypefield
FROM [hospital].[dbo].[tblPatientFile] livefile
INNER JOIN [hospitalRapor].[dbo].[tblPatientFile] oldfile ON livefile.patientid = oldfile.patientid
I highly recommend running this on a test database first to make sure it has the results you want. You will of course need a user who has access to both databases, and depending on whether you have triggers etc. defined, it may take a long time to run on 400k rows.
I take it you have a restore of the database on the same server, in which case, assuming all the previous data was correct, you could do this, although you would overwrite any updates to the blood type that have been made since your error.
I would suggest you also back up your 'incorrect' database before you go any further, so that any additional mistakes can be undone easily and you can at least return to the initial 'error' state instead of compounding problems.
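If rows were deleted outright (rather than a column being overwritten), a variation on the same join can copy them back from the restored database. This is only a sketch: it assumes the two tables have identical column lists and that patientid identifies a row.
INSERT INTO [hospital].[dbo].[tblPatientFile]
SELECT oldfile.*
FROM [hospitalRapor].[dbo].[tblPatientFile] oldfile
WHERE NOT EXISTS (SELECT 1
                  FROM [hospital].[dbo].[tblPatientFile] livefile
                  WHERE livefile.patientid = oldfile.patientid);
-- if patientid is an IDENTITY column you will also need SET IDENTITY_INSERT ... ON
-- and an explicit column list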