Finding deployment data to a SQL Database

In an effort to maintain versions of the databases we have in our CMDB, I have to obtain the versions of some databases deployed to our servers by a third party company.
Is there a system table, view or procedure that allows me to view information regarding recent deployments (code changes from an update script) to a SQL database?

You have three options.
First, you can build your own logging based on a table and a DDL trigger, which will log each change to any procedure, table, etc.
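A rough sketch of that approach could look like this (the log table, trigger and column names are made up for illustration, not anything SQL Server provides out of the box):
-- logging table for DDL changes (illustrative names)
create table dbo.DdlChangeLog (
    LogId       int identity(1,1) primary key,
    EventType   nvarchar(128),
    ObjectName  nvarchar(256),
    LoginName   nvarchar(256),
    EventDate   datetime default getdate(),
    TsqlCommand nvarchar(max)
);
go
-- database-scoped DDL trigger that writes one row per DDL event
create trigger trg_LogDdlChanges
on database
for ddl_database_level_events
as
begin
    declare @e xml;
    set @e = eventdata();
    insert into dbo.DdlChangeLog (EventType, ObjectName, LoginName, TsqlCommand)
    values (
        @e.value('(/EVENT_INSTANCE/EventType)[1]',               'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]',              'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/LoginName)[1]',               'nvarchar(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
end;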
Second, you can track changes using one of these system catalog views:
select * from sys.all_sql_modules -- get the source code of each module (and track it)
select * from sys.objects         -- see which object was created or modified on which date
Third, you can reverse engineer changes in the recent past by reading the trace log of the SQL Server itself and filtering for create/drop events (this requires sysadmin permission).
-- Get the path of the current server trace file
select *
from sys.fn_trace_getinfo(NULL)
where property = 2
  and traceid = 1
-- Copy the path returned by the query above and paste it here
select *
from sys.fn_trace_gettable('[PASTE PATH HERE!]', -1)
where EventClass in (46, 47) -- Object:Created / Object:Deleted
Hopefully one of these solutions is helpful for you.
By the way: another idea, if your workflow allows it, is to use SSDT to create deployment packages and keep track of your changes that way.
Best regards,
Ionic

Related

Remove All Permission In SSRS 2008

I must remove all the permissions on all SSRS 2008 reports and leave only one group. Is there any way, via a script in PS, VB or T-SQL, to perform this task?
I can see 2 ways of doing it:
The recommended (supported) way
Go through all reports and restore the parent security.
This can take a lot of time depending on the number of reports you have.
The unsupported way
This should do what you want without too much work, but is quite risky.
Backup your ReportServer DB (important)
Apply the permissions you want on the root in the web interface
Go into the Catalog table and look for the PolicyID of the corresponding entry (it should be the first row, with almost all other columns NULL and PolicyRoot = 1)
Execute the following query:
update [dbo].[Catalog] set [PolicyID] = <YourRootPolicyID>
(Optional) Clean the PolicyUserRole table, which maps a user to a role and a policy:
delete from [dbo].[PolicyUserRole] where [PolicyID] <> <YourRootPolicyID>
(Optional) Clean the Policies table, which holds the list of policies (= security settings):
delete from [dbo].[Policies] where [PolicyID] <> <YourRootPolicyID>
All your items will now have the same security settings.
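If it helps, the database-side steps above can be rolled into a single script along these lines (a sketch against the ReportServer database, assuming the standard Catalog schema in which PolicyRoot = 1 marks the root entry; take the backup and set the root permissions in the web interface first):
-- PolicyID is assumed to be a uniqueidentifier in the ReportServer schema
declare @RootPolicyID uniqueidentifier;
select @RootPolicyID = [PolicyID] from [dbo].[Catalog] where [PolicyRoot] = 1;

update [dbo].[Catalog] set [PolicyID] = @RootPolicyID;
delete from [dbo].[PolicyUserRole] where [PolicyID] <> @RootPolicyID;
delete from [dbo].[Policies] where [PolicyID] <> @RootPolicyID;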

Query that falls back to different table if linked server query fails

Our test database is linked to a database owned by another department within our company. Whenever they bring their database down (like when refreshing with production data) our application goes down as well. The only thing we are doing with their database is we have a view that selects from one of their tables and we join to this view in a number of queries.
Ideally, whenever their system goes down, I'd like our view to pull from a backup of their table that exists in our database. It has slightly stale data, but at least we would be able to continue working. I thought of using a TRY...CATCH in the view or in a SQL function, but they are not supported in those. A stored procedure might work, except that you can't join to the results of a stored procedure in queries, can you?
How can I make my SELECT statements fall back to a backup table when the linked server's table is unavailable?
So what I ended up doing was to create a SQL Server Agent job that calls sp_testlinkedserver inside a TRY...CATCH every few minutes. If the linked server is down, we alter the view to point to our backup table; when it comes back up, we alter it to point to the "live" data again. We also track the previous state, so we only alter the view when the state has changed. It works pretty slick.
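For anyone wanting to do the same, the job step could look roughly like this (the view, linked server and table names are invented for the example; the real job also remembers the previous state so the ALTER only runs when it changes):
begin try
    -- throws if the linked server cannot be reached
    exec sp_testlinkedserver N'LinkedSrv';
    -- linked server is up: point the view at the live table
    exec('alter view dbo.vTheirData as
          select col1, col2 from LinkedSrv.TheirDb.dbo.TheirTable;');
end try
begin catch
    -- linked server is down: fall back to the local backup copy
    exec('alter view dbo.vTheirData as
          select col1, col2 from dbo.TheirTable_Backup;');
end catch;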

SQL Server - mirror some columns from tables to another database on the same server without replication

I have a SQL Server 2012 Web Edition (11.0.5058.0) instance on a VPS which hosts two databases. I would like to mirror a couple of columns from 3 tables to the second database, but I don't have transactional replication installed.
So I have a Staff table in the source database - I just want the staff_code and unique_id - and I have an Activity table - I just need the activity_code, description and unique_id, etc.
What is the best way to go about this - would that be triggers? The data is not regularly updated, possibly once a week, but I would still like the synchronisation to be fast if possible.
The data in the source database may be deleted, updated or inserted, by another application, so I want to ensure the data in my database reflects that information correctly.
Thanks for any suggestions!
UPDATED: Table comparison example:
SELECT CASE WHEN NOT EXISTS
    ( SELECT [COLUMN1],[COLUMN2],[UNIQUE_ID] FROM [SOURCE-DATABASE].[dbo].[SOURCE-TABLE]
      EXCEPT
      SELECT [COLUMN1],[COLUMN2],[UNIQUE_ID] FROM [DESTINATION-DATABASE].[dbo].[DESTINATION-TABLE]
    )
    AND NOT EXISTS
    ( SELECT [COLUMN1],[COLUMN2],[UNIQUE_ID] FROM [DESTINATION-DATABASE].[dbo].[DESTINATION-TABLE]
      EXCEPT
      SELECT [COLUMN1],[COLUMN2],[UNIQUE_ID] FROM [SOURCE-DATABASE].[dbo].[SOURCE-TABLE]
    )
    THEN 'True'
    ELSE 'False' -- tables differ: grab new or updated data
END AS result;
As long as the two databases can be connected (e.g. can you do a SELECT * FROM SecondDB.dbo.Activity?), then I would just:
set up a query (stand-alone, or in a stored procedure) that checks whether or not the data on the source has changed
update the second database using normal SELECT, INSERT, UPDATE and possibly DELETE statements (a sketch follows after this list)
set up that query/stored procedure as a SQL Server Agent job that runs at regular intervals, e.g. once every night or once every week - whatever works for you
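For the Staff example from the question, that update step could be sketched like this (a single MERGE stands in for the separate INSERT/UPDATE/DELETE statements; the bracketed database names are the same placeholders used above, and unique_id is assumed to be the key):
merge [DESTINATION-DATABASE].[dbo].[Staff] as dst
using (
    select staff_code, unique_id
    from [SOURCE-DATABASE].[dbo].[Staff]
) as src
    on dst.unique_id = src.unique_id
when matched and isnull(dst.staff_code, '') <> isnull(src.staff_code, '')
    then update set dst.staff_code = src.staff_code   -- changed rows
when not matched by target
    then insert (staff_code, unique_id) values (src.staff_code, src.unique_id)  -- new rows
when not matched by source
    then delete;                                        -- rows removed at the source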
I don't think triggers would be a good choice here. Triggers should be kept very small, lean and fast, and "replicating" to another database sounds like too much processing work for a nimble trigger. Also, if your triggers take a long time to complete, the calling application has to wait for that whole time - not good for your application performance!

SQL 2008 audit - show data deleted, etc

I'm using SQL 2008 and have DELETE, UPDATE & INSERT auditing enabled on table XYZ. It works great other than when I query the data:
SELECT * FROM sys.fn_get_audit_file('H:\SQLAudits\*', default, default)
It doesn't actually show me what was deleted or inserted or updated, only that a deletion, etc ... occurred. The statement column of the above query shows this snippet:
delete [dbo].[XYZ] where ([Name] = #0)
I want it to show me what the value of #0 is. Is there a way of doing this?
From what I've found about it, SQL Server 2008's "auditing" feature is very lacking. It does not act as a traditional data audit trail, where you store a new row every time something changes (via Triggers), with complete information such as the user who made the change. It more or less just tells you something has changed without much detail. I really wish SQL Server would include full data audit trail features.
When creating a Database Audit Specification, you select the operations for the Audit Action Type: INSERT, UPDATE, DELETE.
This results in logs saying that a Select, Insert, Update or Delete occurred, but the individual values can never be seen.
The SQL Server Audit tool is very powerful, however, it was never designed to record data changes (eg. col1 was changed from 'fred' to 'santa' in table 'dummy' in db 'test' by 'sa').
For this you will need Change Data Capture (http://msdn.microsoft.com/en-us/library/bb522489.aspx).
Cheers,
Mark
You can monitor the DELETE statements using SQL Server Profiler. You will be able to see the changes.
Another way to monitor is using the CDC (Change Data Capture) feature in SQL Server. This feature will let you monitor changes in the tables.
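As a rough sketch, enabling CDC for the table from the question would look something like this (on SQL Server 2008 this needs Enterprise or Developer edition and a running SQL Server Agent; the database name is made up):
use MyDatabase;
go
exec sys.sp_cdc_enable_db;
go
exec sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'XYZ',
     @role_name     = null;
go
-- changed rows, including before/after values for updates, then land in
-- cdc.dbo_XYZ_CT and can be queried via cdc.fn_cdc_get_all_changes_dbo_XYZ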
Finally, there are related third-party tools such as ApexSQL Trigger.

Need to alter column types in production database (SQL Server 2005)

I need help writing a TSQL script to modify two columns' data type.
We are changing two columns:
uniqueidentifier -> varchar(36) (this column has a primary key constraint)
xml -> nvarchar(4000)
My main concern is production deployment of the script...
The table is actively used by a public website that gets thousands of hits per hour. Consequently, we need the script to run quickly, without affecting service on the front end. Also, we need to be able to automatically rollback the transaction if an error occurs.
Fortunately, the table only contains about 25 rows, so I am guessing the update will be quick.
This database is SQL Server 2005.
(FYI - the type changes are required because of a 3rd-party tool which is not compatible with SQL Server's xml and uniqueidentifier types. We've already tested the change in dev and there are no functional issues with the change.)
As David said, executing a script in a production database without taking a backup or stopping the site is not the best idea. That said, if you want to change only one table with a small number of rows, you can prepare a script to:
Begin a transaction
Create a new table with the final structure you want
Copy the data from the original table to the new table
Rename the old table to, for example, original_name_old
Rename the new table to original_table_name
Commit the transaction
This will end with a table that is named like the original one but has the new structure you want, and in addition you keep the original table under a backup name; if you need to roll back the change, you can create a script that simply drops the new table and renames the original one back. If the table has foreign keys the script will be a little more complicated, but it is still possible without much work.
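A sketch of that approach for the two columns from the question could look like this (table, column and constraint names are invented; any error makes the CATCH block roll the whole transaction back):
begin try
    begin transaction;

    -- 1. new table with the target structure
    create table dbo.MyTable_new (
        Id      varchar(36)    not null constraint PK_MyTable_new primary key,
        Payload nvarchar(4000) null
    );

    -- 2. copy the data, converting uniqueidentifier -> varchar(36) and xml -> nvarchar(4000)
    insert into dbo.MyTable_new (Id, Payload)
    select convert(varchar(36), Id), convert(nvarchar(4000), Payload)
    from dbo.MyTable;

    -- 3. swap the names; the old table stays around as a rollback option
    exec sp_rename 'dbo.MyTable', 'MyTable_old';
    exec sp_rename 'dbo.MyTable_new', 'MyTable';

    commit transaction;
end try
begin catch
    if @@trancount > 0 rollback transaction;

    declare @msg nvarchar(2048);
    select @msg = error_message();
    raiserror(@msg, 16, 1);
end catch;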
Consequently, we need the script to run quickly, without affecting service on the front end.
This is just an opinion, but it's based on experience: That's a bad idea. It's better to have a short, (pre-announced if possible) scheduled downtime than to take the risk.
The only exception is if you really don't care if the data in these tables gets corrupted, and you can be down for an extended period.
In this situation, based on the types of changes you're making and the testing you've already performed, it sounds like the risk is very minimal, since you've tested the changes and you SHOULD be able to do it safely, but nothing is guaranteed.
First, you need to have a fall-back plan in case something goes wrong. The short version of a MINIMAL reasonable plan would include:
Shut down the website
Make a backup of the database
Run your script
Test the DB for integrity
Bring the website back online
It would be very unwise to attempt such an update while the website is live. You run the risk of being down for an extended period if something goes wrong.
A GOOD plan would also have you testing this against a copy of the database and a copy of the website (a test/staging environment) first and then taking the steps outlined above for the live server update. You have already done this. Kudos to you!
There are even better methods for making such an update, but the trade-off of down time for safety is a no-brainer in most cases.
And if you absolutely need to do this live, then you might consider this:
1) Build an offline version of the table with the new datatypes and copied data.
2) Build all the required keys and indexes on the offline tables.
3) Swap the tables out in a transaction; you could rename the old table to something else as an emergency backup.
sp_help 'sp_rename'
But TEST all of this FIRST in a prod-like environment. Make sure your backups are up to date. AND do this when you are least busy.