We have a shop floor database OPERATION that replicates selected data to a database BUSINESS that is used for reporting. The data in OPERATION is deleted daily by the third-party shop floor application, so in order to retain the data in BUSINESS I've set the article property DELETE delivery format to Do not replicate DELETE statements.
This works well, but occasionally somebody wants something extra/different to be replicated. Depending on the nature of the change to the publication, it may prompt for reinitialization of the snapshot, which would of course blow away the data on BUSINESS (as I sadly discovered one day).
What's the best way around this?
I would suggest you implement an ETL process instead of replication.
You can use SSIS to extract data out of the OPERATION database and copy it to the BUSINESS database. In the SSIS package you have full control over the logic. For example, you can append the data to the existing data in BUSINESS. You can use MERGE to insert new records and modify existing ones (this way it is safe to run it repeatedly, as unchanged data would not be overwritten).
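A minimal sketch of such a MERGE, assuming hypothetical table and column names (ShopOrders, OrderId, Quantity, OrderStatus are invented for illustration) and that both databases live on the same instance; otherwise the SSIS data flow or a linked server would stage the source rows first:
-- Hypothetical tables: OPERATION.dbo.ShopOrders is the source,
-- BUSINESS.dbo.ShopOrders is the reporting copy.
MERGE BUSINESS.dbo.ShopOrders AS tgt
USING OPERATION.dbo.ShopOrders AS src
    ON tgt.OrderId = src.OrderId
WHEN MATCHED AND (tgt.Quantity <> src.Quantity OR tgt.OrderStatus <> src.OrderStatus) THEN
    UPDATE SET tgt.Quantity    = src.Quantity,
               tgt.OrderStatus = src.OrderStatus
WHEN NOT MATCHED BY TARGET THEN
    INSERT (OrderId, Quantity, OrderStatus)
    VALUES (src.OrderId, src.Quantity, src.OrderStatus);
-- Deliberately no WHEN NOT MATCHED BY SOURCE clause: rows the shop floor
-- application has already deleted from OPERATION are left untouched in
-- BUSINESS, preserving the history.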
If someone requests additional data, you would just write a new SSIS package to transfer the additional data without affecting your main process.
SSIS can be scheduled to run from a SQL agent job (use dtexec for example).
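As a rough sketch of that scheduling (the job name and package path below are made up for illustration), the package can be called from a CmdExec job step via dtexec:
-- Hypothetical job and package names, for illustration only.
USE msdb;
EXEC dbo.sp_add_job @job_name = N'Copy OPERATION to BUSINESS';
EXEC dbo.sp_add_jobstep
    @job_name  = N'Copy OPERATION to BUSINESS',
    @step_name = N'Run SSIS package',
    @subsystem = N'CmdExec',
    @command   = N'dtexec /FILE "D:\SSIS\CopyOperationToBusiness.dtsx"';
EXEC dbo.sp_add_jobserver @job_name = N'Copy OPERATION to BUSINESS';
-- A schedule (e.g. nightly) would then be attached with sp_add_schedule / sp_attach_schedule.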
I am trying to find an ideal way to automatically copy new records from one database to another (the databases have different structures). I achieved it by writing VBS scripts that copy the data from one to the other, triggered from another application that passes arguments to the script. But I ran into problems when there were more than 100 triggers, i.e. 100 wscript processes trying to access the database at once, and they couldn't complete the task.
I want to find a simpler solution inside SQL Server. I have read about triggers, stored procedures run from SQL Server Agent, replication, etc. The requirement is that I have to copy records to another database periodically, or whenever a new record is inserted.
Which method will suit me the best?
You can use CDC (Change Data Capture) for this. Create an SSIS package using CDC and run that package periodically through a SQL Server Agent job. CDC will store all the changes to the table and apply them to the destination table when you run the package. Please follow the link below.
http://sqlmag.com/sql-server-integration-services/combining-cdc-and-ssis-incremental-data-loads
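To give an idea of what enabling CDC on the source involves before the package is built (schema and table names below are placeholders, and CDC requires Enterprise Edition on older versions):
-- Enable CDC at the database level (requires sysadmin).
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for one source table (placeholder names).
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'SourceTable',
    @role_name     = NULL;   -- NULL = no gating role

-- The changes are then exposed through cdc.fn_cdc_get_all_changes_dbo_SourceTable,
-- which the SSIS package reads on each run.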
The word "periodically" in your question suggests that you should go for jobs. You can schedule jobs in SQL Server using SQL Server Agent and assign a schedule; the job will then run your script at the assigned frequency.
PrabirS: Change Data Capture
This is a good option, because it uses the transaction log to create something similar to the Command Query Responsibility Segregation (CQRS) pattern.
Alok Gupta: A SQL Job that runs in the SQL Agent
This too is a good option, provided you have something like a modified-date column so you can filter for the altered data. You can create a stored procedure and let it run regularly from SQL Server Agent.
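A minimal sketch of such a procedure, assuming hypothetical objects (dbo.SourceTable, Reporting.dbo.TargetTable, a ModifiedDate column and a one-row dbo.SyncWatermark table, none of which come from the question):
-- Hypothetical objects: dbo.SourceTable, Reporting.dbo.TargetTable,
-- and dbo.SyncWatermark(LastCopied datetime2) holding a single row.
CREATE PROCEDURE dbo.CopyChangedRows
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @since datetime2 = (SELECT LastCopied FROM dbo.SyncWatermark);
    DECLARE @now   datetime2 = SYSDATETIME();

    -- Copy rows that changed since the last run; new rows only here,
    -- changed existing rows could be handled the same way with an UPDATE or MERGE.
    INSERT INTO Reporting.dbo.TargetTable (Id, Col1, Col2, ModifiedDate)
    SELECT s.Id, s.Col1, s.Col2, s.ModifiedDate
    FROM dbo.SourceTable AS s
    WHERE s.ModifiedDate >= @since
      AND s.ModifiedDate <  @now
      AND NOT EXISTS (SELECT 1 FROM Reporting.dbo.TargetTable AS t WHERE t.Id = s.Id);

    UPDATE dbo.SyncWatermark SET LastCopied = @now;
END;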
A third option could be triggers (the change will happen in the same transaction).
This option is useful for auditing and logging, but you should definitely avoid writing business logic in triggers, as triggers are more or less hidden and fire without being called directly (similar to CDC, actually). I actually created a trigger about half a year ago that captured the data and inserted it somewhere else in XML format, because the columns in the original table could change over time (multiple projects using the same database(s)).
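Roughly what that looked like, with all names invented for the sketch: a generic capture table plus a trigger that serializes the affected rows as XML, so the log survives schema changes on the source table.
-- Hypothetical generic capture table.
CREATE TABLE dbo.CapturedChanges
(
    Id          int IDENTITY PRIMARY KEY,
    SourceTable sysname   NOT NULL,
    CapturedAt  datetime2 NOT NULL DEFAULT SYSDATETIME(),
    RowData     xml       NOT NULL
);
GO
CREATE TRIGGER trg_SomeTable_Capture
ON dbo.SomeTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM inserted) RETURN;

    -- Serialize the whole set of affected rows in one statement (set-based, not row-by-row).
    INSERT INTO dbo.CapturedChanges (SourceTable, RowData)
    SELECT N'dbo.SomeTable',
           (SELECT * FROM inserted FOR XML PATH('row'), ROOT('rows'), TYPE);
END;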
-Edit-
By the way, your question more or less suggests a lack of a clear design pattern and that the technique used is not the main problem. You could read up on how an ETL layer is built, or try to implement a "separation of concerns". Note: it is hard to tell whether this is the case, but given how you formulated your question, an unclear design is something that pops into my mind as a possible problem.
I hope someone can give me advice or point me to some reading on this. I generate business reports for my team. We host a subscription website, so we need to track several things, sometimes on a daily basis, and a lot of SQL queries are involved. The problem is that querying a large volume of information from the live database slows down our website or causes timeouts.
My current solution requires me to run bcp scripts daily that copy new rows to a backup database (which I use purely for reports). Then I use an application I made to generate reports from there. The output is ultimately one or several Excel files (for the benefit of the business teams; it's easier for them to read). There are several problems with this temporary solution, though:
It only adds new rows; updates to existing rows are not copied, and
It doesn't seem very efficient.
Is there another way to do this? My main concern is that the generation or the querying should not slow down our site.
I can think of three options for you, each of which could have various implementation methods. The first one is Azure SQL Data Sync Services, the second is the AS COPY OF operation, and the third rides on top of a backup.
The Sync Services are a good option if you need more real-time reporting capability; meaning if you need to run your reports multiple times a day, at just about any time, and you need your data as close to real time as you can get it. Sync Services could have a performance impact on your primary database because it runs off triggers, but with this option you can choose what to sync; in other words, you can replicate a filtered set of data, which minimizes the performance impact. Then you can report off the synced database. Another important shortcoming of this approach is that you would end up maintaining a sync service; if your primary database schema changes, you may need to recreate some or all of the sync configuration.
The second option, AS COPY OF, is simply a database copy operation which essentially gives you a clone of your primary database. Depending on the size of the database, this could take some time, so testing is key. However, if you are producing a morning report of yesterday's activities and having the very latest data is not as important, then you could run the AS COPY OF operation on a schedule after hours (or when activity on your database is lowest) and run your report on the secondary database. You may need to build a small script, or use third-party tools, to help you automate this. There would be little to no performance impact on your primary database. In addition, the AS COPY OF operation provides transactional consistency, if this is important to you.
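For reference, the copy itself is a single statement run in the master database of the target Azure SQL server (the server and database names here are placeholders), and progress can be watched in sys.dm_database_copies:
-- Run in the master database of the server that will host the copy.
CREATE DATABASE ReportingCopy AS COPY OF myserver.ProductionDb;

-- Monitor copy progress (percent_complete) until the copy finishes.
SELECT * FROM sys.dm_database_copies;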
The third option could be to use a backup mechanism (such as the Azure Export or Azure backup tools) and restore the latest backup before running your reports. This has the advantage of leveraging your backup strategy without much additional effort.
I have an interesting issue and requirement for a large multi-schema database.
- The database is around 130 GB in size.
- It is a multi-schema database; each customer has a schema.
- We currently have 102,247 tables in the system.
- Microsoft SQL Server 2008 R2
This is due to customisation requirements of customers, all using a single defined front end.
The issue we have is that our database backups become astronomical and getting a database restore done for retrieval of lost/missing/incorrect data is a nightmare. The initial product did not have defined audit trails and we don't have 'changes' to data stored, we simply have 1 version of data.
Getting lost data back basically means restoring a full 130 GB backup and applying differentials/transaction log backups to get to the data.
We want to introduce a 'changeset' for each important table within each schema: essentially holding a set of the data, then any modified/different data as it is saved, every X minutes. This will have to be a SQL job initially, but I want to know what would be the best method.
Essentially I would run a script to create the 'backup' tables in each schema for the tables we wish to keep backed up.
Then run a job every X minutes to cycle through each schema and insert the current data, then new/changed data as it spots a change (based on the ModifiedDate of the row). It will retain this changelog for around a month before overwriting itself.
We would still have our larger backups, but we wouldn't need to keep as long a retention period. My point is: what is the best and most efficient method of checking for changed data and performing the insert?
My gut feeling would be:
INSERT INTO BACKUP_table (UniqueId, col1, col2, col3)
SELECT UniqueId, col1, col2, col3
FROM [table]
WHERE ModifiedDate >= DATEADD(MINUTE, -90, CURRENT_TIMESTAMP)
*rough SQL
This would have to run in a loop to go through all schemas (a rough sketch of such a loop is below). A number of tables won't have changed data.
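Purely illustrative, with invented table and column names, the per-schema loop I have in mind would look something like this:
-- Illustrative only: iterate over customer schemas and copy recently
-- modified rows of one table into its BACKUP_ counterpart.
DECLARE @schema sysname, @sql nvarchar(max);
DECLARE schema_cursor CURSOR FOR
    SELECT name FROM sys.schemas
    WHERE name NOT IN (N'dbo', N'sys', N'guest', N'INFORMATION_SCHEMA')
      AND name NOT LIKE N'db[_]%';

OPEN schema_cursor;
FETCH NEXT FROM schema_cursor INTO @schema;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'INSERT INTO ' + QUOTENAME(@schema) + N'.BACKUP_table (col1, col2, col3)
                 SELECT col1, col2, col3
                 FROM ' + QUOTENAME(@schema) + N'.[table]
                 WHERE ModifiedDate >= DATEADD(MINUTE, -90, SYSDATETIME());';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM schema_cursor INTO @schema;
END
CLOSE schema_cursor;
DEALLOCATE schema_cursor;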
Is this even a good method?
What does SO think?
My first response would be to consider keeping each customer in their own database instead of their own schema within a massive database. The key benefits to doing this are:
much less stress on the metadata for a single database
you can perform backups for each customer on whatever schedule you like
when a certain customer has high activity you can move them easily
I managed such a system for several years at my previous job and managing 500 databases was no more complex than managing 10, and the only difference to your applications is the database part of the connection string (which is actually easier to make queries adapt to than a schema prefix).
If you're really committed to keeping everyone in a single database, then what you can consider doing is storing your important tables inside each schema within their own filegroup, and moving everything out of the primary filegroup. Now you can back up those filegroups independently and, based solely on the full primary backup and a piecemeal restore of the individual filegroup backup, you can bring just that customer's schema online in another location and retrieve the data you're after (maybe copying it over to the primary database using import/export, BCP, or simple DML queries), without having to restore the entire database. Moving all user data out of the primary filegroup minimizes the time it takes to restore that initial backup and get on to restoring the specific customer's filegroup. While this makes your backup/recovery strategy a little more complex, it does achieve what you're after, I believe.
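A rough outline of the shape of that backup and piecemeal restore (database, filegroup, and file names are placeholders; under the full recovery model the relevant transaction log backups must also be restored at each stage, so treat this as a sketch to test, not a finished script):
-- Back up the primary filegroup and one customer's filegroup separately.
BACKUP DATABASE BigMultiTenant FILEGROUP = N'PRIMARY'
    TO DISK = N'D:\Backups\BMT_primary.bak';
BACKUP DATABASE BigMultiTenant FILEGROUP = N'CustomerA_FG'
    TO DISK = N'D:\Backups\BMT_CustomerA.bak';

-- Piecemeal restore in another location: bring PRIMARY back first...
RESTORE DATABASE BMT_Recovery FILEGROUP = N'PRIMARY'
    FROM DISK = N'D:\Backups\BMT_primary.bak'
    WITH PARTIAL, NORECOVERY;
-- ...replay the required log backups and recover, then restore just the
-- customer's filegroup (plus its log backups) the same way.
RESTORE DATABASE BMT_Recovery FILEGROUP = N'CustomerA_FG'
    FROM DISK = N'D:\Backups\BMT_CustomerA.bak'
    WITH NORECOVERY;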
Another option is to use a custom log shipping implementation with an intentional delay. We did this for a while by shipping our logs to a reporting server, but waiting 12 hours before applying them. This gave us protection from customers shooting themselves in the foot and then requiring a restore - if they contacted us within 12 hours of their mistake, we likely already had the "before-screw-up" data online on the reporting server, making it trivial to fix it on the primary server. It also doubled as a reporting server for reports looking at data older than 12 hours, taking substantial load away from the primary server.
You can also consider Change Data Capture, but you will obviously need to test the performance and the impact on the rest of your workload. This solution will also depend on the edition of SQL Server you're using, since CDC is not available in Standard, Web, Workgroup, etc.
I have a database with 50 tables and I want to log user requests, such as inserts, updates or deletes, on all the tables in the database. I could also create a trigger for each request type.
What is the best way to do this from a performance perspective or is there a better way to track this?
You can also create audit tables which are populated by triggers (and which allow much more flexibility than Change Data Capture). The critical component is to capture sets of data, not to work row-by-row. It does add some overhead, yes, but if you write the triggers correctly, it isn't that much. Be sure to capture who (including which application, if you have multiple applications hitting the database) and when, as well as the old and new values. Set up one audit table per table you want audited (too much locking if you use only one audit table). And at the time you set up your system, write the code to get data back from a bad transaction or set of transactions. That makes it easier to recover when something does go wrong and you need to revert.

We use two tables per audited table: one contains the information about the process that made the changes (name of the application, date, user, etc. and an audit ID), the other contains the details about what was changed (old and new values, the ID of the record affected and the column affected). This structure lets us use the same layout for every table being audited, allows the tables to change without having to change the audit tables, and makes it easy to script the audit tables for new tables.

It is also easy for us to see which records were changed at the same time or in the same process, or to find out which of the many applications that touch our database was responsible for bad data, as well as who in particular was responsible. This helps us track down application bugs and find out why the data was changed the way it was in some cases. It also makes it easier to track down all the data affected by a broken process rather than just the records we knew about.
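A stripped-down sketch of that two-table layout and a set-based trigger, with all table and column names invented for illustration (a real implementation would cover deletes and more columns):
-- Header: one row per auditing event (who/when/which application).
CREATE TABLE dbo.Audit_Header
(
    AuditId   int IDENTITY PRIMARY KEY,
    TableName sysname       NOT NULL,
    AppName   nvarchar(128) NOT NULL DEFAULT APP_NAME(),
    UserName  nvarchar(128) NOT NULL DEFAULT SUSER_SNAME(),
    AuditDate datetime2     NOT NULL DEFAULT SYSDATETIME()
);
-- Detail: one row per changed column per affected record.
CREATE TABLE dbo.Audit_Detail
(
    AuditId    int           NOT NULL REFERENCES dbo.Audit_Header (AuditId),
    RecordId   int           NOT NULL,
    ColumnName sysname       NOT NULL,
    OldValue   nvarchar(max) NULL,
    NewValue   nvarchar(max) NULL
);
GO
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM inserted) RETURN;

    DECLARE @AuditId int;
    INSERT INTO dbo.Audit_Header (TableName) VALUES (N'dbo.Orders');
    SET @AuditId = SCOPE_IDENTITY();

    -- Set-based: one pass over all affected rows, one insert per audited column.
    INSERT INTO dbo.Audit_Detail (AuditId, RecordId, ColumnName, OldValue, NewValue)
    SELECT @AuditId, i.OrderId, N'OrderStatus', d.OrderStatus, i.OrderStatus
    FROM inserted AS i
    JOIN deleted  AS d ON d.OrderId = i.OrderId
    WHERE ISNULL(d.OrderStatus, N'') <> ISNULL(i.OrderStatus, N'');
END;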
If you have Enterprise Edition, look into Change Data Capture. If you don't have Enterprise and aren't interested in capturing the historical values of the columns that change, look into Change Tracking.
See Comparing Change Data Capture and Change Tracking to understand the differences between the two.
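If Change Tracking is the better fit, enabling it and reading changes looks roughly like this (the database, table, and key names are placeholders, and the version value would be persisted between runs):
-- Enable at the database level, then per table (available in all editions).
ALTER DATABASE MyDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Later, pull everything changed since the version saved from the previous pull.
DECLARE @last_sync_version bigint = 0;  -- persist this between runs
SELECT ct.SYS_CHANGE_OPERATION, ct.OrderId, o.*
FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct
LEFT JOIN dbo.Orders AS o ON o.OrderId = ct.OrderId;

-- Remember the current version for the next pull.
SELECT CHANGE_TRACKING_CURRENT_VERSION();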
Assuming all requests to insert, update and/or delete data go through some middle-tier data access layer, I would suggest you do your logging there. This is where we do all of ours. It is much simpler than trying to extract the actual insert/delete/update statements out of SQL Server.
If you want to do auditing of data, you can look into Change Data Capture (CDC). But this requires the Enterprise Edition.
I've been tasked with hooking our product into another third-party product. One of the things I need to do is mimic some of the third-party product's functionality when adding new "projects", which can touch several database tables. Is there any way to add some kind of global hook to a database that would record all changes made to data?
I'd like to add the hook, create a project using the third-party application, then check out what all tables were affected.
I know it's more than just new rows as well; I've come across a number of count fields that appear to be incremented for new projects, and I worry that there might be other records that are modified on a new project insert, not just new rows being added.
Thanks for any help
~Prescott
I can think of the following ways you can track changes:
Run SQL Server Profiler, which will capture all queries that run on the server. You can filter these by database, schema, a set of tables, etc.
Use a 3rd-party transaction log reader. This is a much less intrusive process. You have to ensure that the database is set to the FULL recovery model.
Make sure the log will not be reused:
the database is in full recovery mode (true full, with an initial backup)
the log backup maintenance tasks are suspended for the duration of the test
Then:
write down the current database LSN
run your 3rd-party project creation
check the newly added log information with select * from ::fn_dblog(oldcurrentLSN, NULL); (a rough sketch is below)
All write operations will appear in the log. From the physical operation (allocation unit ID) you can get to the logical operation (object ID).
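A rough sketch of that workflow using the undocumented fn_dblog function (the column names are the ones it returns; the LSN value and operation filter are illustrative, and none of this should be run against production without testing):
-- 1. Note the latest LSN before the test.
SELECT MAX([Current LSN]) AS StartLsn FROM fn_dblog(NULL, NULL);

-- 2. Create the project in the 3rd-party application now.

-- 3. List everything written to the log after that point
--    (paste the StartLsn captured in step 1 in place of the placeholder).
SELECT [Current LSN], Operation, Context, AllocUnitName, [Transaction ID]
FROM fn_dblog(NULL, NULL)
WHERE [Current LSN] > '00000000:00000000:0000'   -- placeholder: StartLsn from step 1
  AND Operation IN (N'LOP_INSERT_ROWS', N'LOP_MODIFY_ROW', N'LOP_DELETE_ROWS');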
Now that being said, you should probably have a decent understanding of the 3rd party schema and data model if you plan to interact with it straight at the database level. If you are planning to update the 3rd party tool and you don't even know what tables to update, you'll more than likely end up corrupting its data.