Recently we decided to tune the database of our online application. The database is big and has a lot of unwanted objects, so as a first step in cleaning it up we decided to remove the obsolete/unwanted tables. We have the list of unwanted tables in the DB. Now we have to test that the application runs as before with only the required tables. For that we need to make sure the application is not referring to any of the obsolete/unwanted tables. Is there any way to mark the tables as obsolete, so that the application won't refer to them?
One way of achieving your purpose that I can think of is renaming the tables you want to remove (add _Old to the table name). Once you run your application you can see everywhere it breaks, and you get a chance to decide whether to keep each table or not. If you revert the table name (remove _Old) it will work as before.
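A minimal sketch on SQL Server (sp_rename is the built-in way to do the rename; dbo.OrderArchive is a made-up table name):

    -- Rename the suspect table out of the way.
    EXEC sp_rename 'dbo.OrderArchive', 'OrderArchive_Old';

    -- If the application breaks and you decide to keep the table, revert the rename.
    EXEC sp_rename 'dbo.OrderArchive_Old', 'OrderArchive';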
You can revoke permissions on them so the queries fail. If you have stored procedures you can look at the dependencies.
By revoking permissions on a table, every time the application tries to access the obsolete table an exception will be thrown, and there should be a record of this in the database and application logs.
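A sketch of both ideas on SQL Server (the table name and the application's login are hypothetical; DENY is used here because it blocks access even if it was granted through a role):

    -- Block the application's account from touching the obsolete table.
    DENY SELECT, INSERT, UPDATE, DELETE ON dbo.ObsoleteTable TO AppUser;

    -- List stored procedures, views, etc. that still reference the table.
    SELECT OBJECT_NAME(referencing_id) AS referencing_object
    FROM sys.sql_expression_dependencies
    WHERE referenced_entity_name = 'ObsoleteTable';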
I may just be stupid and missing something simple, but how do I make it so that when I publish my SSDT project, it either empties or drops the tables in my DB without actually setting the 'CreateNewDatabase' setting to true in my publishing profile?
I have some post-deploy inserts that run every time I publish, which results in duplicate rows every time.
Just to clarify: every time you deploy, you want to clear out the contents of all tables, because you have a load of insert statements in the post-deploy scripts to create the data you want and so you need the tables to be empty first?
If that is right, then that isn't the way SSDT is normally used. Typically the important bit in a database is the data, so you wouldn't want to clear every table on publish, and there isn't anything built in for it other than create new database.
That being said, if you don't need to keep any data then you are in a great position; most of the problems we have come from trying to make sure we don't delete any data by accident :)
There are a couple of approaches. You could change your insert statements to merge statements: there is a proc called sp_generate_merge that you can get from GitHub which generates a merge statement for you. This makes your tables look like the data in your post-deploy scripts and is my preferred approach.
If tables have more than around 2,000 rows then merge statements might not be right, so I would just do a delete or truncate of the table before inserting my data.
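A rough sketch of what the post-deploy script could contain for a small lookup table (the table and column names are made up; the MERGE is hand-written here, but it is similar in spirit to what sp_generate_merge produces):

    -- Option 1: clear the table, then re-insert the reference rows.
    DELETE FROM dbo.StatusLookup;
    INSERT INTO dbo.StatusLookup (StatusId, StatusName)
    VALUES (1, N'Open'), (2, N'Closed');

    -- Option 2: a MERGE that makes the table match the source rows exactly.
    MERGE dbo.StatusLookup AS target
    USING (VALUES (1, N'Open'), (2, N'Closed')) AS source (StatusId, StatusName)
        ON target.StatusId = source.StatusId
    WHEN MATCHED THEN
        UPDATE SET StatusName = source.StatusName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;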
Hope I got the question right :)
Ed
I am attempting to use SQL Schema Compare in Visual Studio 2013/15 and am running into the problem that excluding tables from the delete removes them from being processed at all.
The issue is that the tables it is trying to delete are customer-made tables, so when we sync our version against their databases it asks to delete them. We do not want to delete them, but some of their tables have constraints on ours, so when it attempts to CCDR it fails due to table constraints. Is there a way to add the table to be re-created like the rest of them, without writing scripts for each client to do what SQL Schema Compare already does, just for those few tables?
Red Gate's SQL Compare does this somehow, but it's hidden from us so I'm not quite sure how it's achieved. Excluding doesn't delete, but it doesn't error on the script either.
UPDATE:
The option "Drop constraints not in source" does not appear to work correctly. It does drop some, however there are others that it just does not drop the constraints. In red-gate's tool, when we compared I found how to get the SQL from it, and their product doesn't say the table needs to be updated at all, while Visual Studio's does. They seem to work almost identical, but the tables that fail are the ones that shouldn't be update at all (read below)
Update 2:
Another problem I've found is that "Ignore column collation" also doesn't work correctly: tables that shouldn't be getting dropped are being flagged as needing an update even though only the order of the columns has changed, not the actual columns or data, which makes this feel like more of a bug report than anything.
My suggestion with these types of advanced data calculations is to not use Visual Studio. Put the logic on the SQL engine and write the code for this in SQL. Due to the multi-user locking issues of a SQL engine, these types of processes are prone to fail when the wrong combinations of user actions happen at the same time. The Visual Studio tool cannot deal with the data-locking issues caused by records changing the way the SQL engine can. Even if you get this to work, it will only be safe to run in single-user mode.
It is a nice tool to use, easier than writing SQL, but there are huge reliability and consistency risks in going down this path.
I don't know if this will help, but I've found this paragraph on the following page: https://msdn.microsoft.com/en-us/library/hh272690(v=vs.103).aspx
The update will fail because our change involves changing a column from NOT NULL to NULL and as a result causes data loss. If you want to proceed with the update, click on the Options button (the fifth one from the left) on the toolbar for the Schema Compare and uncheck the block incremental deployment if data loss option.
I have a structured database and software to handle it, and I want to set up a demo version based off of a simple template version. I'm reading through some resources on temporary tables, but I have questions.
What is the best way to go about cloning a "temporary" database while keeping a clean list of databases?
From what I've seen, there are two ways to do this - temporary local versions that are terminated at the end of the session, and tables that are stored in the database until deleted by the client or me.
I think I would prefer the 2nd option, because I would like to be able to see what they do with it. However, I do not want to add a ton of throw-away databases and clutter my system.
How can I a) schedule these for deletion after, say, 30 days and b) if possible, keep these all under one umbrella? In other words, is there a way to keep them out of my main list of databases and grouped by themselves?
To solve (b), I've thought about having one database and then serving up the information using a unique ID for the user and 'faux indexes' so that it appears as 1, 2, 3 instead of 556, 557, 558. I'm unsure how I could solve (a), other than adding a date column and a protected flag and having a script that runs daily and deletes anything over 30 days old that isn't protected.
I apologize for the open-ended question, but the resources I've found are a bit ambiguous.
These aren't true temp tables in the sense that your DBMS knows them. What you're looking for is a way to have a demo copy of your database, probably with a cut-down data set. It's really no different from having any other non-production copy of your database.
Do not do this on your production database server.
Script the creation of your database schema. Depending on the DBMS you're using, this may be pretty easy. If you've got a good development/deployment/maintenance process for your system, this should already exist.
Create your database on the non-production server using the script(s) generated in the previous step. Use an easily-identifiable naming convention, like starting the database name with demo.
Load any data required into the tables.
Point the demo version of your app (that's running on your non-production servers) at this new database.
Create a script/process/job which looks at your database server and drops any databases that match your demo DB naming convention and were created more than 30 days ago.
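As an illustration only (assuming SQL Server and the "demo" name prefix suggested above; adjust to your environment), the cleanup job could be as simple as:

    -- Drop every database whose name starts with 'demo' and that is older than 30 days.
    -- Assumes no open connections; in practice you may need to set each database to
    -- SINGLE_USER WITH ROLLBACK IMMEDIATE before dropping it.
    DECLARE @name sysname, @sql nvarchar(max);
    DECLARE demo_cursor CURSOR FOR
        SELECT name
        FROM sys.databases
        WHERE name LIKE 'demo%'
          AND create_date < DATEADD(DAY, -30, GETDATE());
    OPEN demo_cursor;
    FETCH NEXT FROM demo_cursor INTO @name;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'DROP DATABASE ' + QUOTENAME(@name) + N';';
        EXEC sp_executesql @sql;
        FETCH NEXT FROM demo_cursor INTO @name;
    END
    CLOSE demo_cursor;
    DEALLOCATE demo_cursor;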
Without details about your actual environment, people can't give concrete examples/sample code/instructions.
If you cannot run a second, independent database server for these demos, then you will have to make do with your production server. This is still a bad idea because of potential security exposures and performance impact on your production database (constrained resources).
Create a complete copy of your database (or at least the schema, with a reduced data set) for each demo.
Create a unique set of credentials for each of these demo databases. This account should have access to only its demo database.
Configure the demo instance(s) of your application to connect to the demo database.
Here's why I'm pushing so hard for separate databases: If you keep copying your "demo" tables within the database, you will have to update your application code to point at those tables each time you do a new demo. Once you start doing this, you're taking a big risk with your demos - the code you keep changing isn't really the application you're running in production anymore. And if you miss one of those changes, you'll get unexpected results at best, and mangling of your production data at worst.
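For the per-demo credentials, a sketch on SQL Server (all names are hypothetical, and the ALTER ROLE syntax assumes SQL Server 2012 or later):

    -- One login/user pair per demo database, with access to that database only.
    CREATE LOGIN demo_client1_login WITH PASSWORD = 'ChangeThisDemoPassword!1';
    GO
    USE demo_client1;
    GO
    CREATE USER demo_client1_user FOR LOGIN demo_client1_login;
    ALTER ROLE db_datareader ADD MEMBER demo_client1_user;
    ALTER ROLE db_datawriter ADD MEMBER demo_client1_user;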
I was wondering what the best practices are for writing SQL scripts to set up databases for production and/or development. For instance:
Should I include the CREATE DATABASE statement?
Should I create users for the database in the same script?
Is it correct to disable FK checks before executing the body of the script?
May I include the whole script in a transaction?
Is it better to generate one script per database than one script for all of them?
Thanks!
The problem with your question is that it is hard to answer, as it depends on the way the scripts are used and what you are trying to achieve. You also don't say which DB server you are using, as there are tools provided which can make some tasks easier.
Taking your points in order, here are some suggestions, which will probably be very different to everyone else's :)
Should I include the CREATE DATABASE statement?
What alternative are you thinking of using? If your question is whether you should put the CREATE DATABASE statement in the same script as the table creation, it depends. When developing a DB I use a separate create-DB script, as I have a script to drop all objects and so I don't need to create the database again.
Should I create users for the database in the same script?
I wouldn't, simply because the users may well change when your schema has not. You might as well manage those changes in a smaller script.
Is it correct to disable FK checks before executing the body of the script?
If you are importing data in an attempt to recover the database then you may well have to, if you are using auto-increment IDs and want to keep the same values. Also, you may end up importing the tables "out of order" and not want checks performed.
May I include the whole script in a transaction?
Yes, you can, but again it depends on the type of script you are running. If you are importing data after rebuilding a DB then the whole import should work or fail. However, your transaction log is going to be huge during the import.
Is it better to generate one script per database than one script for all of them?
Again, for maintenance purposes it's probably better to keep them separate.
This probably depends on what kind of database it is and how it is used and deployed. I am developing an n-tier standard application that is deployed at many different customer sites.
I do not add a CREATE DATABASE statement in the script. Creating the database is part of the installation script, which allows the user to choose the server, database name and collation.
I have no knowledge about the users at my customers' sites, so I don't add create-user statements either; the only user that needs access to the database is the user executing the middle-tier application.
I do not disable FK checks. I need them to protect the consistency of the database, even if it is I who wrote the body scripts. I use FKs to catch my errors.
I do not include the entire script in one transaction. I require the users to take a backup of the DB before they run any upgrade scripts. When creating a new database there is nothing to protect, so running in a transaction is unnecessary. For upgrades there are sometimes extensive changes to the DB; a couple of years ago we switched from varchar to nvarchar in about 250 tables, which is not something you would like to do in one transaction.
I would recommend that you generate one script per database and version-control the scripts separately.
Direct answers; please ask if you need me to expand on any point.
* Should I include the CREATE DATABASE statement?
Normally I would include it since you are creating and owning the database.
* Should I create users for the database in the same script?
This is also a good idea, especially if your application uses specific users.
* Is it correct to disable FK checks before executing the body of the script?
If the script includes data population, then it helps to disable FK checks so that the insert order is not too important; otherwise you can end up with complex scripts that insert (without the FK link), create the FK record, then update the FK column (see the sketch after this list).
* May I include the whole script in a transaction?
This is normally not a good idea, especially if data population is included, as the transaction can become quite unwieldy. Since you are creating the database, just drop it and start again if something goes awry.
* Is it better to generate one script per database than one script for all of them?
One per database is my recommendation so that they are isolated and easier to troubleshoot if the need arises.
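Here is the sketch on disabling FK checks during data population mentioned above (assuming SQL Server; the table name is hypothetical):

    -- Temporarily stop checking this table's foreign keys while loading data out of order.
    ALTER TABLE dbo.OrderDetail NOCHECK CONSTRAINT ALL;

    -- ... run the data population here ...

    -- Re-enable the constraints and re-validate the data that was loaded.
    ALTER TABLE dbo.OrderDetail WITH CHECK CHECK CONSTRAINT ALL;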
For development purposes it's a good idea to create one script per database object (one script for each table, stored procedure, etc). If you check them into your source control system that way then developers can check out individual objects and you can easily keep track of versions and know what changed and when.
When you deploy you may want to combine the changes for each release into one single script. Tools like Red Gate SQL compare or Visual Studio Team System will help you do that.
Should I include the CREATE DATABASE statement?
Should I create users for the database in the same script?
That depends on your DBMS and your customer.
In an Oracle environment you will probably never be allowed to do such a thing (mainly because in the Oracle world a "database" is something completely different than e.g. in the PostgreSQL or MySQL world).
Sometimes the customer will have a DBA that won't let you create databases (or schemas or users - depending on the DBMS in use). So you will need to supply that information to the DBA in order for him/her to prepare the environment for your script.
May I include the whole script in a transaction?
That totally depends on the DBMS that you are using.
Some DBMS don't support transactional DDL and will implicitly commit any transaction when you execute a DDL statement, so you need to consider the order of your installation script.
For populating the tables with data I would definitely try to do that in a single transaction, but again this depends on your DBMS.
Some DBMS are faster if you commit only once or very seldom (Oracle and PostgreSQL fall into this category) but will slow down if you commit more often.
Other DBMS handle smaller but more frequent transactions better and will slow down if the transactions get too big (SQL Server and MySQL tend to lean in that direction).
The best practices will differ considerably depending on whether it is a first-time set-up or a new version being pushed. For the first-time set-up, yes, you need create-database and create-table scripts. For a new version, you need to script only the changes from the previous version, so no create database and no create table unless it is a new table; instead you need alter table statements because you don't want to lose the existing data. I usually write stored procs, functions and views with a drop-and-create statement, as dropping those objects doesn't generally affect the underlying data.
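A sketch of that drop-and-create pattern for a stored procedure, so the same versioned script can be re-run safely (the procedure, table, and columns are hypothetical):

    -- Drop the procedure if it already exists, then recreate it.
    IF OBJECT_ID('dbo.usp_GetActiveCustomers', 'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetActiveCustomers;
    GO
    CREATE PROCEDURE dbo.usp_GetActiveCustomers
    AS
    BEGIN
        SELECT CustomerId, CustomerName
        FROM dbo.Customer
        WHERE IsActive = 1;
    END
    GO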
I find it best to create all database changes with scripts that are stored in source control under the version. So if a client is new, you run the create version 1.0 scripts, then apply all the other versions in order. If a client is just upgrading from version 1.2 to version 1.3, then you run just the scripts in version 1.3 source control repository. This would also include scripts to populate or add records to lookup tables.
For transactions, you may want to break them up into several chunks so as not to leave a prod database locked in one long transaction.
We also write reversal scripts to return to the old version if need be. This makes life easier if you have a part of a change that causes unanticipated problems on prod (usually performance issues).
I am writing a trigger to audit updates and deletes in tables. I am using SQL Server 2008
My questions are:
Is there a way to find out what action is being taken on a record without going through the selection phase of the deleted and inserted tables?
Another question is: if a record is being deleted, how do I record in the audit table the user who is performing the delete? (NOTE: the application connects to the database with a general connection string and a fixed user; I need the user who is logged into either a web app or a Windows app.)
Please help?
For part one, you can either set up separate triggers or have one trigger that checks the special tables INSERTED and DELETED to discriminate between updates and deletes.
For part two, there's no way around it in this case, you're going to have to get that username to the database somehow via your web/windows app. Unfortunately you can't communicate with the trigger itself, and with a generic connection string the DB doesn't have any idea who it's dealing with.
I've found that it can be helpful to add a "LastModifiedBy" column to the tables that you plan to audit so that you can store that info on the original tables themselves. Then your trigger just copies that info into the audit table. This is also nice because if you only need to know who the last person to touch something was you don't have to look in the audit table at all, just check that one column.
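To make that concrete, here is a rough sketch of such a trigger (the table, its LastModifiedBy column, and the audit table are all hypothetical). It works from the full inserted/deleted sets, so it handles multi-row updates and deletes:

    CREATE TRIGGER trg_Customer_Audit
    ON dbo.Customer
    AFTER UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Updated rows appear in both inserted and deleted.
        INSERT INTO dbo.CustomerAudit (CustomerId, Action, ChangedBy, ChangedAt)
        SELECT i.CustomerId, 'UPDATE', i.LastModifiedBy, GETDATE()
        FROM inserted i
        JOIN deleted d ON d.CustomerId = i.CustomerId;

        -- Deleted rows appear only in deleted.
        INSERT INTO dbo.CustomerAudit (CustomerId, Action, ChangedBy, ChangedAt)
        SELECT d.CustomerId, 'DELETE', d.LastModifiedBy, GETDATE()
        FROM deleted d
        WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.CustomerId = d.CustomerId);
    END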
Consider this: if you don't actually delete records but instead add a field to mark them as deleted, you can get the user from the last-modified column. If you want to actually delete records, you can then have a nightly job that deletes them in a batch, not one at a time. This could even be set up to flag if too many records are being deleted, and not run.
The easiest way to do this so that nothing breaks is to rename the table, add the column IsDeleted as a bit field, and then create a view with the name the table was originally called. The view selects all the records where IsDeleted is null.
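A sketch of that rename-plus-view approach (dbo.Invoice and its columns are made up; list your table's real columns in the view):

    -- Rename the real table and add the soft-delete flag.
    EXEC sp_rename 'dbo.Invoice', 'InvoiceBase';
    GO
    ALTER TABLE dbo.InvoiceBase ADD IsDeleted bit NULL;
    GO
    -- Recreate the old name as a view that hides deleted rows (and the flag itself).
    CREATE VIEW dbo.Invoice
    AS
    SELECT InvoiceId, CustomerId, InvoiceDate
    FROM dbo.InvoiceBase
    WHERE IsDeleted IS NULL;
    GO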
Don't let anyone talk you out of using triggers for this. You don't want people who are making unauthorized changes to be able to escape the auditing. With a trigger (and with no one except a production DBA having rights to alter the table in any way), no one except the DBA can delete without being audited. In a typical system with no stored procedures to limit direct table access, all too many people can usually affect a table directly, opening it wide for fraud. People committing fraud do not typically use the application they are supposed to use to change the data. You must protect data at the database level.
When you write your triggers, make sure they can handle multi-row inserts/updates/deletes. Triggers operate on the whole set of data, not one row at a time.
As roufamatic said, you can either set up triggers specific to each action or you can check against the INSERTED and DELETED tables.
As for the deleting user, it is possible to pass that information into the trigger as long as the code in your application handles it. I encountered this requirement about a year ago with a client, and the solution that I came up with was to use SET CONTEXT_INFO and CONTEXT_INFO() to pass the user name along. All of our database access was through stored procedures, so I just needed to add a line or two of code to the delete stored procedures to SET CONTEXT_INFO, and then I changed the delete triggers to get the user from CONTEXT_INFO(). The user name had to be passed as a parameter from the application, of course. If you aren't using stored procedures you might be able to just do the SET CONTEXT_INFO in the application; I don't know how connection pooling might affect that method. Obviously, if someone does a delete outside of the application there wouldn't be a record of that unless you also separately captured the login name (e.g. SUSER_SNAME()) in your trigger (probably a good idea, although it wasn't necessary for our audit log, which was more for reporting than security).
There was a little bit of trickiness because CONTEXT_INFO is a binary string, but it didn't take long to get that all sorted out.
I'm afraid that I don't have any of the code handy since it was for a past client. If you run into any problems after going through the help for CONTEXT_INFO and SET CONTEXT_INFO then feel free to post here and I'll see what I can remember.
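A rough reconstruction of the shape of that approach (the parameters, table, and column names are hypothetical, and the zero-byte padding handling is a defensive guess):

    -- In the delete stored procedure: the application passes @UserName as a parameter.
    DECLARE @ctx varbinary(128) = CAST(@UserName AS varbinary(128));
    SET CONTEXT_INFO @ctx;
    DELETE FROM dbo.Customer WHERE CustomerId = @CustomerId;

    -- In the delete trigger: read the value back, strip any padding,
    -- and fall back to the SQL login if nothing was set.
    DECLARE @AuditUser nvarchar(64) =
        COALESCE(NULLIF(REPLACE(CAST(CONTEXT_INFO() AS nvarchar(64)), NCHAR(0), N''), N''), SUSER_SNAME());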
To find out what action is being taken you can use the INSERTED and DELETED tables to compare before and after values. There is no magic way to tell which user of a web app has made a change. The usual method is to have a modified-by column in your table and have the web app code populate this with the relevant username.