Using SQL Server Management Studio 2017:
My local database receives pushed updates to its tables. When a specific column (or two) of a table gets updated, I want to be notified and be able to grab that specific row's data for use/consumption.
I am a novice at SQL code. I have done some reading on Change Tracking in SQL Server, but is that overkill for such a simple task? Speed is key.
EDIT:
I am using SQL Server Express - CDC not supported...
I will be using C# to call procedures.
You can use triggers for that purpose, but take into account that they can degrade the performance of operations on the table.
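If you go that route, a minimal sketch of such a trigger might look like this. It assumes a hypothetical table dbo.Readings with key column ID and watched columns Status and Value, plus a notification table dbo.ChangeQueue that your C# code can poll:

CREATE TRIGGER trg_Readings_Notify ON dbo.Readings
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- fire only when one of the watched columns appeared in the UPDATE statement
    IF UPDATE(Status) OR UPDATE(Value)
        INSERT INTO dbo.ChangeQueue (ID, Status, Value, ChangedAt)
        SELECT i.ID, i.Status, i.Value, GETDATE()
        FROM inserted i
        JOIN deleted d ON i.ID = d.ID
        -- and only for rows whose values actually changed
        WHERE ISNULL(i.Status, '') <> ISNULL(d.Status, '')
           OR ISNULL(i.Value, '') <> ISNULL(d.Value, '');
END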
Related
Is there a way to check whether a SQL Server table (or, even better, a page in that table) was modified since a certain moment? For example, a SQL differential backup uses dirty flags to know which parts of the data were changed since the last backup, and resets these flags after a successful backup.
Is there any way to get this functionality from MS SQL Server? I.e., if I want to cache certain aggregate values on a database table which sometimes changes, how would I know when to invalidate the cache? Or is the only way to do it programmatically, keeping track of this while writing to the database?
I am using C# .NET 4.5 to access SQL Server 2008 R2 through NHibernate.
I suggest you think about your problem in terms of application-layer data caching instead of SQL Server low-level data pages. You can use SqlDependency or query notifications in your C# code to get notified of changes to the underlying data. Note that this requires Service Broker to be enabled in the SQL Server database, and there are some restrictions on the queries that qualify for notification.
See http://www.codeproject.com/Articles/529016/NHibernate-Second-Level-Caching-Implementation for an example of using this with NHibernate.
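Enabling Service Broker, if it isn't already on, is a one-liner (MyDb is a placeholder for your database name):

ALTER DATABASE MyDb SET ENABLE_BROKER;
-- if other connections hold the database open, this variant may be needed:
-- ALTER DATABASE MyDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;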
We have an old database with a poorly thought-out table structure, virtually no relationships set up, and no naming scheme. I've created a new database with a clean relational data structure that implements proper design practices.
I'm looking for advice on different methods to migrate the old data over to the new format. This will require a lot of data re-shaping which won't be fun. The data is heavily accessed and the challenge will be to keep both databases in sync for all relevant data (accounts, important services etc).
I thought triggers might be the way to go here, but maybe there is a different method that I am unaware of (maybe MS Sync Framework, or a code-level data adapter, which would be more work because there is so much data-access code spread all over the place: classic ASP and .NET, across dozens of projects). The database in question is SQL Server 2005, running in SQL Server 2000 compatibility mode.
I think the way to go is to write a stored procedure in the new database which pulls your delta changes (only the modifications made between the last run and the moment the stored procedure runs), and to schedule that stored procedure in a SQL Agent job.
Configure the SQL Agent job to run every 15 minutes and let the data sync in.
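As a rough sketch of that stored procedure (it assumes the source table has a LastModified datetime column and that the high-water mark of the previous run is kept in a small watermark table; all names here are hypothetical):

CREATE PROCEDURE dbo.PullDeltaChanges
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @LastRun datetime, @Now datetime;
    SELECT @LastRun = LastRunAt FROM dbo.SyncWatermark;
    SET @Now = GETDATE();

    -- re-shape and copy only the rows touched since the last run
    INSERT INTO dbo.NewCustomer (CustomerId, FullName)
    SELECT c.Id, c.FirstName + ' ' + c.LastName
    FROM OldDb.dbo.Customer c
    WHERE c.LastModified > @LastRun AND c.LastModified <= @Now;

    UPDATE dbo.SyncWatermark SET LastRunAt = @Now;
END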
Disadvantages of using triggers in this scenario
Triggers will reduce performance, because SQL Server executes the trigger code along with the update/insert/delete statement as part of the same operation, every time. For example, if your trigger code takes 2 seconds to execute and the update statement alone takes 2 seconds, the update time doubles to 4 seconds with the trigger in place. Employing triggers in this case could therefore become a huge performance bottleneck.
I'm dealing with the same situation at my work, and I'm currently writing an application to do the migration. The original database has no established relationships, so it's really like a set of disconnected spreadsheets. By building my own application, I'm able to migrate the data using newly-established foreign keys, and assign data-specific defaults in place of nulls.
I have users entering data in SharePoint (Running on SQL Server), but my application to view that data will be an Oracle Apex app running on Oracle, obviously. How do I have the data be pushed into the Oracle db automatically?
First off, are you sure that you need to replicate the data to Oracle? Oracle Heterogeneous Services allows you to create a database link in Oracle that connects to a non-Oracle database using ODBC (assuming you use the Transparent Gateway for ODBC which is free). Your APEX application could then query and report on data that is in SQL Server by issuing queries that run over the database link. Tim Hall has a good article (though it's a bit dated and some of the components have been renamed, the general approach is still the same) on configuring Heterogeneous Services.
If you do need to replicate the data, you can create materialized views in Oracle that query the objects in SQL Server using the database link you created with Heterogeneous Services and schedule those materialized views to refresh on a regular basis. The materialized views will need to do a complete refresh, though, which means that every row will need to be copied from SQL Server to Oracle every time there is a refresh. That generally limits the frequency with which you can realistically have refreshes happen. If you need the data to be replicated to the Oracle database and you need to send incremental changes so that the Oracle side doesn't lag too far behind, you can use Streams from a non-Oracle database to an Oracle database but that involves a lot more work.
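A sketch of that setup on the Oracle side (the link name, DSN, credentials and table names are hypothetical, and the ODBC agent must already be configured in Heterogeneous Services):

CREATE DATABASE LINK sqlserver_link
  CONNECT TO "sp_user" IDENTIFIED BY "sp_password"
  USING 'sqlserver_dsn';

-- complete refresh once an hour; SQL Server identifiers must be quoted over the link
CREATE MATERIALIZED VIEW sharepoint_items
  REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1/24
  AS SELECT * FROM "dbo"."UserData"@sqlserver_link;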
In SQL Server you can set up linked servers that allow you to view data from other DBs. You might see if Oracle has something similar, if not the same. Alternatively, you could use SQL Server Integration Services (SSIS) to push the data over to an Oracle table. Unfortunately I only know how to set up linked servers in SQL Server and I don't have enough experience with SSIS to tell you how to do that, but those are the first two options I can think of that you might explore further.
Here's a link I found that might be helpful as well: http://www.dba-oracle.com/t_connecting_sql_server_oracle.htm
There's no way I know of to do it "automatically" that will work across DBMSes. ETL tools like SQL Server Integration Services might help, but there's going to be a loading delay (as they will have to poll for changes). You could build update triggers on the SharePoint database tables, but that's going to turn into a support nightmare.
I am writing code to migrate data from our live Access database to a new Sql Server database which has a different schema with a reorganized structure. This Sql Server database will be used with a new version of our application in development.
I've been writing migration code in C# that calls SQL Server and Access and transforms the data as required. I migrated, for the first time, a table whose entries are related to new entries in another table that I had not migrated recently, and that caused an error because the corresponding record could not be found in SQL Server.
So my SQL Server production table has data only up to 1/14/09, and I'm continuing to migrate more tables from Access. I want to write an update method that can figure out what is new in Access and hasn't yet been reflected in SQL Server.
My current idea is to write a query on the SQL side which does SELECT Max(RunDate) FROM ProductionRuns, to give me the latest date in that field in the table. On the Access side, I would write a query that does SELECT * FROM ProductionRuns WHERE RunDate > ?, where the parameter is that max date found in SQL Server, and perform my translation step in code, and then insert the new data in Sql Server.
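Concretely, I picture something like this (the parameter binding on the Access side is only sketched; it would be supplied from my C# code):

-- on the SQL Server side: find the high-water mark
SELECT MAX(RunDate) FROM ProductionRuns;

-- on the Access side, with ? bound to the result of the query above
SELECT * FROM ProductionRuns WHERE RunDate > ?;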
What I'm wondering is, do I have the syntax right for getting the latest date in that Sql Server table? And is there a better way to do this kind of migration of a live database?
Edit: What I've done is make a copy of the current live database, which I can migrate without worrying about changes and use for testing during development; then I can migrate the latest data whenever the new database and application go live.
I personally would divide the process into two steps.
1. Create an exact copy of the Access DB in SQL Server and copy all the data into it.
2. Copy the data from this temporary SQL Server DB to your destination database.
That way you can write a set of SQL statements to accomplish the second step.
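For example, once the raw Access data sits in the staging database, step 2 becomes plain INSERT ... SELECT re-shaping (all names here are hypothetical):

INSERT INTO NewDb.dbo.Customer (CustomerId, FullName, Phone)
SELECT o.cust_id,
       LTRIM(RTRIM(o.first_nm)) + ' ' + LTRIM(RTRIM(o.last_nm)),
       o.phone_no
FROM StagingDb.dbo.old_customer o;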
Alternatively, use SSIS.
Generally, when you convert data to a new database that will take its place in production, you shut out all users of the database for a period of time, run the migration, and turn on the new database. This ensures no changes to the data are made during the conversion. Of course, I never would have done this using C# either. Data migration is a database task and should be done in SSIS (or DTS if you have an older version of SQL Server).
If the database you are converting to is just in development, I would create a backup of the Access database and load the data from there, to test the data-loading process and to get the data in so you can do the application development. Then, when it is time to do the real load, you just close the real database to users and load from it. If you are trying to keep both in sync while you develop, well, I wouldn't do that, but if you must, make a nightly backup of the file and load it first thing in the morning using your process.
You may want to look at investing in a tool like SQL Data Compare.
I believe it has support for access databases too, and you can download a trial.
If you are happy with your C# code but it fails because of the constraints in your destination database, you can temporarily disable them and then re-enable them after you copy the whole lot.
I am assuming that your destination database is a brand-new DB with no data, and is not used by anyone while the transfer happens.
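For a single table the pattern looks like this (dbo.MyTable is a placeholder):

ALTER TABLE dbo.MyTable NOCHECK CONSTRAINT ALL;   -- disable FK/check constraints

-- ... run the C# copy here ...

ALTER TABLE dbo.MyTable WITH CHECK CHECK CONSTRAINT ALL;   -- re-enable and re-validate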
It sounds like you have two problems:
You're migrating data from one database to another.
You're changing your schema.
Doing either of these things is tricky if you are trying to migrate the data while people are using the data.
The simplest approach is to migrate the data based on a static copy of the data, and to queue updates to that data from the moment you capture the static copy. I don't know how easy this is in Access, but in SQL Server or Oracle you can use the redo/transaction logs for this, or a manual solution using triggers. The poor man's way of doing this is to create triggers on all the relevant tables that log the primary keys of the records that have changed. Then, after the old database is shut off, you can iterate over those keys, fetch those records from the old database, and put them into the new database. Just copy the whole record; if the record was deleted, delete it from the new database.
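A sketch of such a poor man's trigger, assuming a hypothetical dbo.Orders table keyed by OrderID and a dbo.ChangedKeys log table:

CREATE TRIGGER trg_Orders_TrackKeys ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Now datetime;
    SET @Now = GETDATE();
    -- keys show up in "inserted" for inserts/updates and in "deleted" for deletes/updates
    INSERT INTO dbo.ChangedKeys (OrderID, ChangedAt)
    SELECT OrderID, @Now FROM inserted
    UNION
    SELECT OrderID, @Now FROM deleted;
END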
Your problem is compounded by the fact that you can't simply copy the data; you have to transform it. This means you probably have to shut down both databases and re-migrate the records based on the change list. It will take a lot of planning to ensure you get things right, and I'd recommend writing a testing script that can validate that the resulting data is correct.
Also, I'd ensure that the migration code runs inside one of the databases if possible. Otherwise you are copying the data twice, and this will significantly harm performance.
I would like to log changes made to all fields in a table to another table. This will be used to keep a history of all the changes made to that table (Your basic change log table).
What is the best way to do it in SQL Server 2005?
I am going to assume the logic will be placed in some Triggers.
What is a good way to loop through all the fields checking for a change without hard coding all the fields?
As you can see from my questions, example code would be veeery much appreciated.
I noticed SQL Server 2008 has a new feature called Change Data Capture (CDC). (Here is a nice Channel9 video on CDC). This is similar to what we are looking for except we are using SQL Server 2005, already have a Log Table layout in-place and are also logging the user that made the changes. I also find it hard to justify writing out the before and after image of the whole record when one field might change.
Our current log file structure in place has a column for the Field Name, Old Data, New Data.
Thanks in advance and have a nice day.
Updated 12/22/08: I did some more research and found these two answers on Live Search QnA
You can create a trigger to do this. See
How do I audit changes to sql server data.
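For instance, a minimal per-column sketch matching our log layout (the table, key and column names are hypothetical, and the block would be repeated for each audited field):

CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ChangeLog (TableName, FieldName, OldData, NewData, ChangedBy, ChangedAt)
    SELECT 'MyTable', 'Phone', d.Phone, i.Phone, SUSER_SNAME(), GETDATE()
    FROM inserted i
    JOIN deleted d ON i.ID = d.ID
    WHERE ISNULL(d.Phone, '') <> ISNULL(i.Phone, '');
END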
You can use triggers to log the data changes into the log tables. You can also purchase Log Explorer from www.lumigent.com and use it to read the transaction log to see which user made a change. The database needs to be in full recovery mode for this option, however.
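Putting the database in full recovery mode, if needed, is a one-liner (MyDb is a placeholder):

ALTER DATABASE MyDb SET RECOVERY FULL;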
Updated 12/23/08: I also wanted a clean way to compare what changed and this looked like the reverse of a PIVOT, which I found out in SQL is called UNPIVOT. I am now leaning towards a Trigger using UNPIVOT on the INSERTED and DELETED tables. I was curious if this was already done so I am going through a search on "unpivot deleted inserted".
Posting Using update function from an after trigger had some different ideas but I still believe UNPIVOT is going to be the route to go.
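Here is roughly the shape I am picturing for the trigger body: a sketch with hypothetical key ID and columns Name and Phone. Note that UNPIVOT silently drops rows whose value is NULL, so NULL-to-value transitions would need extra handling:

-- inside an AFTER UPDATE trigger:
INSERT INTO dbo.ChangeLog (FieldName, OldData, NewData)
SELECT d.FieldName, d.OldData, i.NewData
FROM (SELECT ID, FieldName, NewData
      FROM (SELECT ID, CAST(Name AS nvarchar(4000)) AS Name,
                       CAST(Phone AS nvarchar(4000)) AS Phone
            FROM inserted) p
      UNPIVOT (NewData FOR FieldName IN (Name, Phone)) u) i
JOIN (SELECT ID, FieldName, OldData
      FROM (SELECT ID, CAST(Name AS nvarchar(4000)) AS Name,
                       CAST(Phone AS nvarchar(4000)) AS Phone
            FROM deleted) p
      UNPIVOT (OldData FOR FieldName IN (Name, Phone)) u) d
  ON i.ID = d.ID AND i.FieldName = d.FieldName
WHERE i.NewData <> d.OldData;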
Quite late but hopefully it will be useful for other readers…
Below is a modification of my answer I posted last week on a similar topic.
Short answer is that there is no “right” solution that would fit all. It depends on the requirements and the system being audited.
Triggers
Advantages: relatively easy to implement, and a lot of flexibility over what is audited and how the audit data is stored, because you have full control
Disadvantages: it gets messy when you have a lot of tables and even more triggers. Maintenance can get heavy unless there is some third-party tool to help. Also, depending on the database, it can cause a performance impact.
Creating audit triggers in SQL Server
Log changes to database table with trigger
CDC
Advantages: Very easy to implement, natively supported
Disadvantages: only available in Enterprise edition, and not very robust – if you change the schema, your data will be lost. I wouldn't recommend this for keeping a long-term audit trail.
Reading transaction log
Advantages: all you need to do is to put the database in full recovery mode and all info will be stored in transaction log
Disadvantages: You need a third party log reader in order to read this effectively
Read the log file (*.LDF) in sql server 2008
SQL Server Transaction Log Explorer/Analyzer
Third party tools
I’ve worked with several auditing tools from ApexSQL, but there are also good tools from Idera (Compliance Manager) and Krell Software (OmniAudit).
ApexSQL Audit – trigger-based auditing tool. Generates and manages auditing triggers
ApexSQL Log – Allows auditing by reading transaction log
Under SQL '05 you actually don't need to use triggers. Just take a look at the OUTPUT clause. OUTPUT works with inserts, updates, and deletes.
For example:

-- the OUTPUT ... INTO target must exist first (column types here are illustrative)
CREATE TABLE #TempTable (description varchar(50), phone varchar(20));

INSERT INTO mytable (description, phone)
OUTPUT INSERTED.description, INSERTED.phone INTO #TempTable
VALUES ('blah', '1231231234');
Then you can do whatever you want with the #TempTable, such as inserting those records into a logging table.
As a side note, this is an extremely easy way of capturing the value of an identity field.
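For instance (assuming mytable has an identity column named id, hypothetical here):

DECLARE @NewIds TABLE (id int);

INSERT INTO mytable (description, phone)
OUTPUT INSERTED.id INTO @NewIds
VALUES ('blah', '1231231234');

SELECT id FROM @NewIds;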
You can use Log Rescue. It is much the same as Log Explorer, but it is free.
It can view the history of each row in any table, with logging info on user, action and time.
And you can undo to any version of a row without setting the database to full recovery mode.