What are the ways in which we can cascade data to multiple catalogs? - sql-server-2005

I have 10 SQL Servers. On every server there is a catalog MASTER_DATA, which has a table called Employees. Whenever the employee info changes, it gets updated on the CENTRAL server in the MASTER_DATA catalog.
Now what I have to do is cascade the changes in the Employees table to the MASTER_DATA catalogs on all servers. After that, the same changes need to be cascaded to all the other catalogs (other than the MASTER_DATA catalog) on all the servers.
I have the following options to do this:
SSIS Packages
Replication
Plain Old TSQL Queries
What would be the best way to do this? Also, are there any other ways to do the same?

Based on the information you have provided, and assuming that by "catalog" you mean "database", this seems like an ideal use-case for transactional replication.
Your CENTRAL.MASTER_DATA database would be the publisher; all the other databases would be subscribers.
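For reference, a minimal T-SQL sketch of the publication side, assuming the Distributor is already configured and using made-up names (EmployeesPub, SERVER02). In practice you would likely use the SSMS replication wizards, and each push subscription also needs an agent (sp_addpushsubscription_agent):
-- Run in CENTRAL's MASTER_DATA database: enable it for publishing
EXEC sp_replicationdboption @dbname = N'MASTER_DATA',
    @optname = N'publish', @value = N'true';
-- Create a transactional publication (the name is illustrative)
EXEC sp_addpublication @publication = N'EmployeesPub',
    @repl_freq = N'continuous', @status = N'active';
-- Publish the Employees table as an article
EXEC sp_addarticle @publication = N'EmployeesPub',
    @article = N'Employees', @source_object = N'Employees';
-- Add a push subscription for each of the other nine servers
EXEC sp_addsubscription @publication = N'EmployeesPub',
    @subscriber = N'SERVER02', @destination_db = N'MASTER_DATA',
    @subscription_type = N'Push';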
It's not clear from your description why the second tier of duplication is required, i.e. why each non-MASTER_DATA database needs its own copy of the Employees table. Is there a reason not to have queries refer to the local MASTER_DATA copy of the data? You could use a synonym in the non-MASTER_DATA databases to avoid having to change your queries, as sketched below.
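A minimal sketch of the synonym idea, assuming all the databases live on the same instance:
-- Run in each non-MASTER_DATA database
CREATE SYNONYM dbo.Employees FOR MASTER_DATA.dbo.Employees;
-- Existing queries keep working, but now read the replicated copy:
SELECT * FROM dbo.Employees;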

Related

SQL operations to Database catalog

Are we able to perform SQL operations like INSERT, UPDATE, DELETE on the database catalog? (This is more a theory question than a practical one.)
If a database supports INFORMATION_SCHEMA and provides mechanisms for altering the database catalog, then yes, you can use SQL operations normally.
For example, in PostgreSQL documentation you can read:
The system catalogs are the place where a relational database management system stores schema metadata, such as information about tables and columns, and internal bookkeeping information. PostgreSQL's system catalogs are regular tables. You can drop and recreate the tables, add columns, insert and update values, and severely mess up your system that way. Normally, one should not change the system catalogs by hand, there are always SQL commands to do that. (For example, CREATE DATABASE inserts a row into the pg_database catalog — and actually creates the database on disk.)
So, you change the catalog indirectly by creating a new database. Nonetheless, with PostgreSQL you can also change the catalog directly, using SQL commands like DROP, INSERT, UPDATE and so on, as in the sketch below.
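For example (PostgreSQL, superuser only, and at your own risk; the supported route is the equivalent ALTER DATABASE command):
-- The catalog reads like any other table...
SELECT datname, datallowconn FROM pg_database;
-- ...and can be written directly, e.g. to block new connections
-- to a database ('somedb' is a placeholder name):
UPDATE pg_database SET datallowconn = false WHERE datname = 'somedb';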
Some RDBMSs, such as Oracle Database, IBM DB2, SQLite, or Sybase ASE, don't provide such a possibility. Others, such as MySQL, provide INFORMATION_SCHEMA but make it read-only, so you can't do anything crazy. The MySQL documentation reads:
Although you can select INFORMATION_SCHEMA as the default database with a USE statement, you can only read the contents of tables, not perform INSERT, UPDATE, or DELETE operations on them.
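So in MySQL a query like the following works (the schema name 'mydb' is a placeholder), while the matching write would be rejected:
-- Reading INFORMATION_SCHEMA is fine...
SELECT TABLE_NAME, ENGINE
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'mydb';
-- ...but INSERT, UPDATE, or DELETE against it fails with an error.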

Query SQL Server 2000 for table creation and alteration queries

I'm looking for a way to get all table creation and alteration queries attached to a database, in SQL Server 2000. Is this stored in a system table, or is there a built-in method to recreate them?
Goal: to extract the schema for customizable backups.
My research so far turned up nothing. My Google-Fu is weak...
Note that I don't know of a way to specify which filegroup a stored procedure is on (other than the default). So what you may consider, in order to at least keep the script repository backup small, is the following (a T-SQL sketch follows the list):
1. Create a filegroup called non_data_objects, and make it the default (instead of PRIMARY).
2. Create a filegroup for each set of tables, and create those tables there.
3. Back up each set of tables by filegroup, and always include a backup of non_data_objects so that you have the current set of procedures, functions, etc. that belong to those tables (even though you'll also get the others). Because the filegroup from step 1 will only contain the metadata for non-data objects, it should be relatively small.
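A rough sketch of those steps, with made-up database, file, and filegroup names:
-- Step 1: the metadata filegroup, made the default
ALTER DATABASE MyDb ADD FILEGROUP non_data_objects;
ALTER DATABASE MyDb ADD FILE
    (NAME = N'non_data_objects_1',
     FILENAME = N'C:\Data\MyDb_non_data_objects_1.ndf')
TO FILEGROUP non_data_objects;
ALTER DATABASE MyDb MODIFY FILEGROUP non_data_objects DEFAULT;
-- Step 2: one filegroup per set of tables (repeat per set)
ALTER DATABASE MyDb ADD FILEGROUP customer_tables;
-- Step 3: back up a set together with the metadata filegroup
BACKUP DATABASE MyDb
    FILEGROUP = N'non_data_objects',
    FILEGROUP = N'customer_tables'
TO DISK = N'C:\Backups\MyDb_customer_set.bak';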
You might also consider just using a different database for each set of tables. Other than using three-part naming in your scripts that need to reference the different sets, there really is no performance difference. And this makes your backup/recovery plan much simpler.

Can you please tell the importance of the default databases provided by SQL Server?

In SQL Server Enterprise Manager, there are some default databases provided, like tempdb, etc. What is the significance of those databases?
TempDB is used for temporary work in SQL Server. Any time you create a temp table, that storage is done inside of TempDB. Here is a very good article from MSDN.
Here are some points referenced from the MSDN:
The tempdb system database is a global resource that is available to all users connected to the instance of SQL Server and is used to hold the following:
• Temporary user objects that are explicitly created, such as: global or local temporary tables, temporary stored procedures, table variables, or cursors.
• Internal objects that are created by the SQL Server Database Engine, for example, work tables to store intermediate results for spools or sorting.
• Row versions that are generated by data modification transactions in a database that uses read-committed using row versioning isolation or snapshot isolation transactions.
• Row versions that are generated by data modification transactions for features, such as: online index operations, Multiple Active Result Sets (MARS), and AFTER triggers.
Operations within tempdb are minimally logged. This enables transactions to be rolled back. tempdb is re-created every time SQL Server is started so that the system always starts with a clean copy of the database. Temporary tables and stored procedures are dropped automatically on disconnect, and no connections are active when the system is shut down. Therefore, there is never anything in tempdb to be saved from one session of SQL Server to another. Backup and restore operations are not allowed on tempdb.
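A quick illustration (the table name is made up): a local temp table you create is backed by an object in tempdb.
CREATE TABLE #Scratch (Id int, Note varchar(50));
-- The backing object is visible in tempdb (its name is suffixed
-- internally to keep sessions apart):
SELECT name FROM tempdb.sys.objects WHERE name LIKE '#Scratch%';
DROP TABLE #Scratch;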
There is also the master database (holds information about all databases), the Model database, and MSDB (stores information on the SQL Agent, DTS, jobs, etc.).
More info here as well
MASTER - This keeps all server-level information, and meta-data about all databases on the server. Don't lose this :)
MSDB - Holds information about SQL Agent jobs and job run history
TEMPDB - Used as a temporary "work space" for temporary tables and lots of other stuff (like sorting and grouping)
MODEL - When you create a new, blank database, it makes a copy of MODEL as a template
DISTRIBUTION - (You will only see this on servers where you have set up replication) Holds records pending for replication.
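On SQL Server 2005 you can list these directly; the first four database IDs are always the system databases (DISTRIBUTION only shows up once replication is configured):
SELECT name, database_id FROM sys.databases WHERE database_id <= 4;
-- master (1), tempdb (2), model (3), msdb (4)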
SQL Server uses tempdb to store internal objects such as the intermediate results of a query. You can get more details here.

SQL Server replication question

I inherited this SQL Server where we put data into a table (call it TableA) in a database (DB-A). I can see that TableA in another database on the same server (DB-B) gets the same data right away.
Any ideas how this is implemented? I have tried looking at a trace, but so far no luck. Does anyone have an idea?
At this stage I am not sure if it's replication; that is just a guess.
It could be replication or it could be a trigger on the source table that is moving the data over.
Perhaps it is transactional replication? You should be able to go to the replication area and see if there are subscribers or publishers.
Either that or you have linked servers, and triggers are copying the data.
This is most likely happening by use of either a synonym or cross-database view. Check to see if the "table" on the other database really is a table. If it IS a table, then they've set up transactional replication between the two databases.
SELECT type_desc FROM sys.objects WHERE name = 'name_on_database_b';
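A few more diagnostic queries along the same lines, assuming the table is dbo.TableA (adjust the names to suit):
-- Run in DB-A: is a trigger on the source table copying rows over?
SELECT t.name FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.TableA');
-- Is either database published for replication?
SELECT name, is_published FROM sys.databases
WHERE name IN ('DB-A', 'DB-B');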

Suggestions for implementing audit tables in SQL Server?

One simple method I've used in the past is basically just creating a second table whose structure mirrors the one I want to audit, and then creating an update/delete trigger on the main table. Before a record is updated/deleted, its current state is saved to the audit table via the trigger (sketched in code below).
While effective, the data in the audit table is not the most useful or simple to report off of. I'm wondering if anyone has a better method for auditing data changes?
There shouldn't be too many updates of these records, but it is highly sensitive information, so it is important to the customer that all changes are audited and easily reported on.
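For reference, a minimal sketch of the trigger approach described in the question, using a hypothetical dbo.Customer table and a mirror dbo.Customer_Audit table with the same columns plus an AuditDate:
CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change rows for both updates and deletes
    INSERT INTO dbo.Customer_Audit (CustomerId, Name, Email, AuditDate)
    SELECT CustomerId, Name, Email, GETDATE()
    FROM deleted;
END;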
How much writing vs. reading of this table (or tables) do you expect?
I've used a single audit table, with columns for Table, Column, OldValue, NewValue, User, and ChangeDateTime. It's generic enough to work with any other changes in the DB, and while a LOT of data got written to that table, reports on that data were sparse enough that they could be run at low-use periods of the day.
Added:
If the amount of data vs. reporting is a concern, the audit table could be replicated to a read-only database server, allowing you to run reports whenever necessary without bogging down the master server from doing their work.
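One possible shape for the generic audit table described in this answer (the column names follow the description; the types and defaults are assumptions):
CREATE TABLE dbo.AuditLog (
    AuditId        int IDENTITY(1,1) PRIMARY KEY,
    TableName      sysname       NOT NULL,
    ColumnName     sysname       NOT NULL,
    OldValue       nvarchar(max) NULL,
    NewValue       nvarchar(max) NULL,
    ChangedBy      sysname       NOT NULL DEFAULT SUSER_SNAME(),
    ChangeDateTime datetime      NOT NULL DEFAULT GETDATE()
);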
We are using a two-table design for this.
One table holds data about the transaction (database, table name, schema, column, the application that triggered the transaction, the host name of the login that started it, date, number of affected rows, and a couple more).
The second table is only used to store the data changes, so that we can undo changes if needed and report on old/new values.
Another option is to use a third-party tool for this, such as ApexSQL Audit, or the Change Data Capture feature in SQL Server (sketched below).
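A minimal sketch of enabling Change Data Capture (available from SQL Server 2008; the database and table names are illustrative):
USE MyDb;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Customer',
    @role_name     = NULL;   -- no gating role for this sketch
-- Changes then become readable through the generated cdc.* objects.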
I have found these two links useful:
Using CLR and single audit table.
Creating a generic audit trigger with SQL 2005 CLR
Using triggers and separate audit table for each table being audited.
How do I audit changes to SQL Server data?
Are there any built-in audit packages? Oracle has a nice package, which will even send audit changes off to a separate server outside the access of any bad guy who is modifying the SQL.
Their example is awesome... it shows how to alert on anybody modifying the audit tables.
OmniAudit might be a good solution for your needs. I've never used it before because I'm quite happy writing my own audit routines, but it sounds good.
I use the approach described by Greg in his answer and populate the audit table with a stored procedure called from the table triggers.