Include but not Delete SQL Schema Compare - sql

I am attempting to use SQL Schema Compare in Visual Studio 2013/15 and am running into the problem that excluding tables from the delete action removes them from being processed at all.
The issue is that the tables it is trying to delete are customer-made tables, so when we sync our version against their databases it asks to delete them. We do not want to delete them, but some of their tables have constraints on ours, so when it attempts to CCDR it fails due to table constraints. Is there a way to include the table so it is re-created like the rest of them, without writing scripts for each client to do what SQL Schema Compare already does, just for those few tables?
Red Gate's SQL Compare does this somehow, but it's hidden from us, so we're not quite sure how it's achieved. Excluding a table there doesn't delete it, but it doesn't cause an error in the script either.
UPDATE:
The option "Drop constraints not in source" does not appear to work correctly. It does drop some, however there are others that it just does not drop the constraints. In red-gate's tool, when we compared I found how to get the SQL from it, and their product doesn't say the table needs to be updated at all, while Visual Studio's does. They seem to work almost identical, but the tables that fail are the ones that shouldn't be update at all (read below)
Update 2:
Another problem I've found is that "Ignore column collation" also doesn't work correctly: tables that shouldn't be dropped are reported as needing updates even though only the order of the columns has changed, not the actual columns or data, which makes this feel more like a bug report than anything.

My suggestion for these kinds of advanced data operations is not to use Visual Studio. Put the logic on the SQL engine and write the code for this in SQL. Because of the multi-user locking behaviour of a SQL engine, these types of processes are prone to fail when the wrong combination of user actions happens at the same time. The Visual Studio tool cannot deal with the locking issues caused by records changing the way the SQL engine can. Even if you get this to work, it will only be safe to run in single-user mode.
It is a nice tool to use, easier than writing SQL, but there are huge reliability and consistency risks in going down this path.

I don't know if this will help, but I've found this paragraph on the following page: https://msdn.microsoft.com/en-us/library/hh272690(v=vs.103).aspx
The update will fail because our change involves changing a column from NOT NULL to NULL and as a result causes data loss. If you want to proceed with the update, click on the Options button (the fifth one from the left) on the toolbar for the Schema Compare and uncheck the block incremental deployment if data loss option.
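For reference, the manual equivalent of what "Drop constraints not in source" should be doing for the customer tables is roughly the following. This is only a sketch with made-up names (the customer table CustomerAudit, its foreign key FK_CustomerAudit_Orders, and the Orders table are hypothetical); the real constraint names would need to be read from sys.foreign_keys first.

-- Find customer constraints that reference one of our tables, so we know what to drop and re-create:
SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id) AS customer_table
FROM sys.foreign_keys AS fk
WHERE OBJECT_NAME(fk.referenced_object_id) = 'Orders';

-- Drop the customer FK before our table is rebuilt...
ALTER TABLE dbo.CustomerAudit DROP CONSTRAINT FK_CustomerAudit_Orders;

-- ... the table rebuild generated by Schema Compare runs here ...

-- ... then put the customer FK back afterwards.
ALTER TABLE dbo.CustomerAudit WITH CHECK
    ADD CONSTRAINT FK_CustomerAudit_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID);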

Method for updating tables that users are looking at?

I'm looking for a method or solution to allow a table to be updated while others are running SELECT queries against it.
We have an MS SQL database storing tables which are linked through ODBC to an Access database front-end.
We're trying to have a query run an update on one of these linked tables, but it is often interrupted by users running SELECT statements on the table to look at data through forms inside Access.
Is there a way to maybe create a copy of this database table for the users to look at so that the table can still be updated?
I was thinking maybe a transaction but can you perform transactions for select statements? Do they work that way?
The error we get from inside Access when we try to run the update while a user has the table open is:
Any help is much appreciated,
Cheers
As a general rule, this should not be occurring. Those reports should not take locks nor prevent the SQL system from allowing updates or inserts.
For a quick fix, you can (and should) link the reports to some SQL Server views as their source, and use this for the view:
SELECT * FROM tblHotels WITH (NOLOCK)
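For example, the view itself might look something like this (the view name is just a placeholder; tblHotels comes from the line above), and the Access report is then re-linked to the view instead of the table:

-- View that the Access report links to; NOLOCK means report reads won't block updates.
CREATE VIEW dbo.vw_HotelsReport
AS
    SELECT * FROM dbo.tblHotels WITH (NOLOCK);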
In fact, in MOST cases this locking occurs due to combo boxes being driven by a larger table from SQL Server. If the query does not complete (and Access has the nasty ability to STOP the flow of data), then you get a SQL Server table lock.
You can also see the above "holding" of a lock when you launch a form with a LARGE dataset. If Access does not finish pulling the table/query from SQL Server, again a holding lock on the table can remain.
However, as a general rule I have NOT seen this occur for reports.
However, it is not at all clear how the reports are being used and how their data sources are set up.
But, as noted, the quick fix is to create some views for the reports, and use the no-lock hint as per above. That will prevent the tables from holding locks.
Another HUGE idea? For the reports, if they often use some date range or other criteria, MAKE 100% sure that SQL Server has an index on that filter or criteria. If you don't, then SQL Server will scan/lock the whole table. This advice ALSO applies VERY much to, say, a form in which you filter: put indexing (SQL Server side) on those commonly used columns.
And in fact, regarding the notes about the combo box above? We found that JUST adding an index to the sort column used in the combo box made most if not all locking issues go away.
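As a rough sketch of that indexing advice (the table and column names here are invented, not from the question):

-- Index the column(s) the reports filter on, e.g. a date range:
CREATE NONCLUSTERED INDEX IX_tblBookings_BookingDate
    ON dbo.tblBookings (BookingDate);

-- Index the sort/display column that drives a large combo box:
CREATE NONCLUSTERED INDEX IX_tblHotels_HotelName
    ON dbo.tblHotels (HotelName);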
Another fix that often works, and requires ZERO changes to the MS Access client-side software?
You can change this on the server:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
The above will also, in most cases, fix the locking issue.
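Note that SET TRANSACTION ISOLATION LEVEL applies to the connection it is run on. If you want a single change made on the server itself, one option (an addition here, not something the answer above mentions) is to switch the database to read-committed snapshot so that readers stop blocking writers; the database name is a placeholder:

-- Database-level alternative; assumes you can briefly kick out other sessions while switching.
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;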

SQL Server : list all columns used in queries

Is there a way to detect which columns and which tables are used in a SQL Server database?
Just against SQL Server 2012 would be fine.
We can assume there are no '*' for column usage in the legacy site.
Details:
I'm working on updating the table structure of a legacy system to work on a newer database (2005 to 2012)
There are a lot of bloated tables, with columns that are never used, and even tables that are never used. Identifying all of them by manually going through the code would be a pain.
(My assumption is that we can run SQL Server profiler while running a complete test pass on the app, but I don't know a convenient way to extract the columns)
Thanks.
You can list dependencies for a table in Management Studio, which will show you which stored procedures, UDFs, etc. depend on the table in question. You can't do that for a single field, and it would only show the internal dependencies anyway. SQL Profiler would theoretically show you all fields that get requested by your app, but it still would not tell you much, as the app may not do anything with the values it retrieves. If you are going to change the DB, it only really makes sense to put in the effort if you are also going to change the app, and then you should really get some input from users on what features are still useful and what is broken before you get too involved in a back-end refresh. IMHO.
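If the legacy SQL lives in stored procedures or views rather than only in the app, SQL Server 2012 can at least list the columns those modules touch via sys.dm_sql_referenced_entities. This is just a sketch; the procedure name is a placeholder and you would repeat it (or cross apply over sys.objects) for every module:

-- Columns referenced by one module; rows with a non-null minor name are column references.
SELECT re.referenced_schema_name,
       re.referenced_entity_name AS table_name,
       re.referenced_minor_name  AS column_name
FROM sys.dm_sql_referenced_entities('dbo.usp_SomeLegacyProc', 'OBJECT') AS re
WHERE re.referenced_minor_name IS NOT NULL;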

SQL Server DDL changes (column names, types)

I need to audit DDL changes made to a database. Those changes need to be replicated in many other databases at a later time. I found here that one can enable DDL triggers to keep track of DDL activities, and that works great for create table and drop table operations, because the trigger gets the T-SQL that was executed, and I can happily store it somewhere and simply execute it on the other servers later.
The problem I'm having is with alter operations: when a column name is changed from Management Studio, the event that is produced doesn't contain any information about columns! It just says the table was locked... What's more, if many columns are changed at once (say, column foo => oof, and also, column bar => rab) the event is fired only once!
My poor man's solution would be to have a table to store the structure of the table that's going to be altered, before and after the alter operation. That way, I could compare both structures and figure out what happened to which column(s).
But before I do that, I was wondering if it is possible to do it using some other feature from SQL Server that I have overlooked, or maybe there's a better way. How would you go about this?
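In case it helps to see it concretely, the "poor man's solution" needs very little code; a minimal sketch (the audit table name is made up) that snapshots a table's structure before and after the ALTER would be:

-- Run once: a place to keep structure snapshots.
CREATE TABLE dbo.TableStructureAudit
(
    CapturedAt datetime2     NOT NULL DEFAULT SYSUTCDATETIME(),
    TableName  sysname       NOT NULL,
    ColumnName sysname       NOT NULL,
    DataType   nvarchar(128) NOT NULL,
    OrdinalPos int           NOT NULL
);

-- Run before and after the ALTER, then diff the two snapshots to see what changed.
INSERT INTO dbo.TableStructureAudit (TableName, ColumnName, DataType, OrdinalPos)
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, ORDINAL_POSITION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'YourTable';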
There is a product meant for doing just that (I wrote it).
It monitors scripts that contain DDL changes, recording who wrote them and when, together with their effect on performance, and it gives you the ability to easily copy them as one deployment script. For what you ask, the free version is sufficient.
http://www.seracode.com/
There is no special feature in SQL Server for exactly this need. You can use triggers, but they require a lot of T-SQL coding to work properly. A fast solution would be some third-party tools, but they're not free. Please take a look at this answer regarding the third-party tools: https://stackoverflow.com/a/18850705/2808398
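As a starting point for the trigger route, a minimal database-level DDL trigger that just stores the raw EVENTDATA() XML looks like this (the log table name is invented). The capture itself is the easy part; the T-SQL effort the answer refers to is in parsing and replaying the events, and as the question notes, some SSMS operations produce events with little column detail:

CREATE TABLE dbo.DdlEventLog
(
    LoggedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    EventData xml       NOT NULL
);
GO
-- Fires for every database-scoped DDL statement and logs the event XML as-is.
CREATE TRIGGER trgAuditDdl
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    INSERT INTO dbo.DdlEventLog (EventData)
    VALUES (EVENTDATA());
END;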

Best practices for writing SQL scripts for deployment

I was wondering what are the best practices in order to write SQL scripts to set up databases for production and/or development, for instance:
Should I include the CREATE DATABASE statement?
Should I create users for the database in the same script?
Is it correct to disable FK checks before executing the body of the script?
May I include the whole script in a transaction?
Is it better to generate one script per database than one script for all of them?
Thanks!
The problem with your question is that it is hard to answer, as it depends on the way the scripts are used and what you are trying to achieve. You also don't say which DB server you are using, as there are tools provided that can make some tasks easier.
Taking your points in order, here are some suggestions, which will probably be very different from everyone else's :)
Should I include the CREATE DATABASE statement?
What alternative are you thinking of using? If your question is whether you should put the CREATE DATABASE statement in the same script as the table creation, it depends. When developing a DB I use a separate create-DB script, as I have a script to drop all objects, and so I don't need to create the database again.
Should I create users for the database in the same script?
I wouldn't, simply because the users may well change but your schema has not. Might as well manage those changes in a smaller script.
Is correct to disable FK check before executing the body of the script?
If you are importing the data in an attempt to recover the database, then you may well have to, if you are using auto-increment IDs and want to keep the same values. Also, you may end up importing the tables "out of order" and not want the checks performed.
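In SQL Server terms, "disabling the FK check" for an import like that is usually done per table with NOCHECK and re-enabled (and re-validated) afterwards; the table name below is just an example:

-- Before the import: stop the engine from validating FKs on this table.
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT ALL;

-- ... bulk insert / out-of-order import runs here ...

-- After the import: re-enable and re-validate, so the constraints stay trusted.
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL;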
May I include the whole script in a transaction?
Yes, you can, but again it depends on the type of script you are running. If you are importing data after rebuilding a db then the whole import should work or fail. However, your transaction file is going to be huge during the import.
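If you do wrap a deployment script in one transaction on SQL Server, a pattern like the following (only a sketch) at least guarantees all-or-nothing behaviour:

SET XACT_ABORT ON;  -- any run-time error rolls back the whole transaction
BEGIN TRANSACTION;

-- ... schema changes / data import statements go here ...

COMMIT TRANSACTION;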
Is it better to generate one script per database than one script for all of them?
Again, for maintenance purposes it's probably better to keep them separate.
This probably depends what kind of database and how it is used and deployed. I am developing a n-tier standard application that is deployed at many different customer sites.
I do not add a CREATE DATABASE statement in the script. Creating the database is part of the installation script, which allows the user to choose the server, database name and collation.
I have no knowledge about the users at my customers' sites, so I don't add create-user statements; besides, the only user that needs access to the database is the user executing the middle-tier application.
I do not disable FK checks. I need them to protect the consistency of the database, even if it is I who wrote the body scripts. I use FK to capture my errors.
I do not include the entire script in one transaction. I require the users to take a backup of the db before they run any db upgrade scripts. When creating a new database there is nothing to protect, so running in a transaction is unnecessary. For upgrades there are sometimes extensive changes to the db. A couple of years ago we switched from varchar to nvarchar in about 250 tables. Not something you would like to do in one transaction.
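For context, that kind of change is just a plain column alteration repeated across many tables, e.g. (table, column and size invented):

-- Repeated for every affected column across the ~250 tables in the upgrade script.
ALTER TABLE dbo.Customers ALTER COLUMN CustomerName nvarchar(100) NOT NULL;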
I would recommend you to generate one script per database and version control the scripts separately.
Direct answers; please ask if you need me to expand on any point.
* Should I include the CREATE DATABASE statement?
Normally I would include it since you are creating and owning the database.
* Should I create users for the database in the same script?
This is also a good idea, especially if your application uses specific users.
* Is it correct to disable FK checks before executing the body of the script?
If the script includes data population, then it helps to disable it so that the insert order is not too important; otherwise you can end up with complex scripts that insert the row (without the FK link), create the FK record, and then update the FK column.
* May I include the whole script in a transaction?
This is normally not a good idea, especially if data population is included, as the transaction can become unwieldy. Since you are creating the database, just drop it and start again if something goes awry.
* Is it better to generate one script per database than one script for all of them?
One per database is my recommendation so that they are isolated and easier to troubleshoot if the need arises.
For development purposes it's a good idea to create one script per database object (one script for each table, stored procedure, etc). If you check them into your source control system that way then developers can check out individual objects and you can easily keep track of versions and know what changed and when.
When you deploy you may want to combine the changes for each release into one single script. Tools like Red Gate SQL compare or Visual Studio Team System will help you do that.
Should I include the CREATE DATABASE statement?
Should I create users for the database in the same script?
That depends on your DBMS and your customer.
In an Oracle environment you will probably never be allowed to do such a thing (mainly because in the Oracle world a "database" is something completely different than e.g. in the PostgreSQL or MySQL world).
Sometimes the customer will have a DBA that won't let you create databases (or schemas or users - depending on the DBMS in use). So you will need to supply that information to the DBA in order for him/her to prepare the environment for your script.
May I include the whole script in a transaction?
That totally depends on the DBMS that you are using.
Some DBMS don't support transactional DDL and will implicitly commit any open transaction when you execute a DDL statement, so you need to consider the order of your installation script.
For populating the tables with data I would definitely try to do that in a single transaction, but again this depends on your DBMS.
Some DBMS are faster if you commit only once or very seldom (Oracle and PostgreSQL fall into this category) but will slow down if you commit more often.
Other DBMS handle smaller but more frequent transactions better and will slow down if the transactions get too big (SQL Server and MySQL tend to lean in that direction).
The best practices will differ considerably depending on whether it is a first-time set-up or a new version being pushed. For the first-time set-up, yes, you need CREATE DATABASE and CREATE TABLE scripts. For a new version, you need to script only the changes from the previous version, so no CREATE DATABASE and no CREATE TABLE unless it is a new table. Now you need ALTER TABLE statements because you don't want to lose the existing data. I usually write stored procs, functions and views with a drop and create statement, as dropping those objects doesn't generally affect the underlying data.
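A typical drop-and-create wrapper for a stored procedure in such an upgrade script looks something like this (the procedure and table names are placeholders; on SQL Server 2016+ CREATE OR ALTER does the same job more simply):

IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetCustomer;
GO
CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerID int
AS
BEGIN
    SELECT CustomerID, CustomerName
    FROM dbo.Customers
    WHERE CustomerID = @CustomerID;
END;
GO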
I find it best to create all database changes with scripts that are stored in source control under the version. So if a client is new, you run the create version 1.0 scripts, then apply all the other versions in order. If a client is just upgrading from version 1.2 to version 1.3, then you run just the scripts in version 1.3 source control repository. This would also include scripts to populate or add records to lookup tables.
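One cheap way to make that version ordering enforceable is a small version table that every upgrade script checks and then updates; this is only a sketch and the table name and version values are invented:

CREATE TABLE dbo.SchemaVersion
(
    VersionNumber varchar(20) NOT NULL,
    AppliedAt     datetime2   NOT NULL DEFAULT SYSUTCDATETIME()
);

-- At the top of the 1.3 upgrade script: stop this batch if the database is not at the expected baseline.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = '1.2')
    THROW 50000, 'Database is not at version 1.2; aborting upgrade.', 1;

-- At the bottom of the script, once everything has succeeded:
INSERT INTO dbo.SchemaVersion (VersionNumber) VALUES ('1.3');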
For transactions you may want to break them up into several chunks so as not to leave a prod database locked in one transaction.
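On SQL Server, "chunking" a large data change is typically a loop over small batches so each one commits quickly and its locks are released; a rough sketch (table, column and values invented):

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- Each pass touches at most 5000 rows and commits on its own.
    UPDATE TOP (5000) dbo.Orders
    SET    Status = 'ARCHIVED'
    WHERE  Status = 'CLOSED';

    SET @rows = @@ROWCOUNT;  -- loop ends when nothing is left to update
END;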
We also write reversal scripts to return to the old version if need be. This makes life easier if you have a part of a change that causes unanticipated problems on prod (usually performance issues).

Purging SQL Tables from large DB?

The site I am working on as a student will be redesigned and released in the near future, and I have been assigned the task of manually searching through every table in the DB the site uses to find tables we can consider for deletion. I'm doing the search through every HTML file's source code in Dreamweaver, but I was hoping there is an automated way to check my work. Does anyone have any suggestions as to how this is done in the business world?
If you search through the code, you may find SQL that is never used, because the users never choose those options in the application.
Instead, I would suggest that you turn on auditing on the database and log what SQL is actually used. For example in Oracle you would do it like this. Other major database servers have similar capabilities.
From the log data you can identify not only what tables are being used, but their frequency of use. If there are any tables in the schema that do not show up during a week of auditing, or show up only rarely, then you could investigate this in the code using text search tools.
Once you have candidate tables to remove from the database, and approval from your manager, don't just drop the tables: recreate them as empty tables, or put one dummy record in each with mostly null (or zero or blank) values in the fields, except for name and descriptive fields where you can put something like "DELETED" or "Report error DELE to support center". That way, the application won't fail with a hard error, and you have a chance of finding out what users are doing when they end up in these unused tables.
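If the database here happens to be SQL Server rather than Oracle, a rough analogue of that auditing idea is the index-usage DMV, which shows when each table was last read or written. Its counters reset on a service restart, so let it run across a representative period; tables that never show up at all haven't been touched since the restart:

SELECT OBJECT_NAME(us.object_id) AS table_name,
       MAX(us.last_user_seek)    AS last_seek,
       MAX(us.last_user_scan)    AS last_scan,
       MAX(us.last_user_update)  AS last_update
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
GROUP BY us.object_id
ORDER BY table_name;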
Reverse engineer the DB (Visio, Toad, etc...), document the structure and ask designers of the new site what they need -- then refactor.
I would start by combing through the HTML source for keywords:
SELECT
INSERT
UPDATE
DELETE
...using grep/etc. None of these are HTML entities, and you can't reliably use table names because you could be dealing with views (assuming any exist in the system). Then you have to pore over the statements themselves to determine what is being used.
If [hopefully] functions and/or stored procedures were used in the system, most DBs have a reference feature to check for dependencies.
This would be a good time to create a Design Document on a screen by screen basis, listing the attributes on screen & where the value(s) come from in the database at the table.column level.
Compile your list of tables used, and compare to what's actually in the database.
If the table names are specified in the HTML source (and if that's the only place they are ever specified!), you can do a Search in Files for the name of each table in the DB. If there are a lot of tables, consider using a tool like grep and creating a script that runs grep against the source code base (HTML files plus any others that can reference the table by name) for each table name.
Having said that, I would still follow Damir's advice and take a list of deletion candidates to the data designers for validation.
I'm guessing you don't have any tests in place around the data access or the UI, so there's no way to verify what is and isn't used. Provided that the data access is consistent, scripting will be your best bet. Have it search out the tables/views/stored procedures that are being called and dump those to a file to analyze further. That will at least give you a list of everything that is actually called from some place. As for whether those pages are actually used anywhere, that's another story.
Once you have the list of the database elements that are being called, compare that with a list of the user-defined elements that are in the database. That will give you the ones that could potentially be deleted.
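If the back end is SQL Server, the "list of user-defined elements in the database" side of that comparison can come straight from the catalog views, for example:

-- All user-created tables, views, procedures and functions in the current database.
SELECT s.name AS schema_name, o.name AS object_name, o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.is_ms_shipped = 0
  AND o.type IN ('U', 'V', 'P', 'FN', 'IF', 'TF')
ORDER BY o.type_desc, o.name;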
All that being said, if the site is being redesigned then a fresh database schema may actually be a better approach. It's usually less intensive to start fresh and import the old data than it is to find dead tables and fields.