SQL constraints

Why do I have to drop all constraints (keys in general) before I can drop a table in SQL Server? I don't understand why this is necessary: if I have permission to drop the table and know how to do it, why not drop all its constraints for me?
Is there some technical or database-design reason for this?

It's a safeguard for referential integrity: you (or someone else) might otherwise mistakenly drop a table that holds supporting information.
Just today, my office found an issue with records missing from a report because account records had been purged without checking for data integrity. Those records now have to be restored...
The idea is that, as much of a pain as it is, the process makes absolutely sure the operation is occurring as intended.
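To make the safeguard concrete: SQL Server refuses the DROP while other tables still reference the one being dropped, so you have to remove the incoming foreign keys explicitly first. A minimal sketch, where dbo.MyTable is a placeholder name:

```sql
-- Hypothetical sketch: find and drop every foreign key that references
-- dbo.MyTable, then drop the table itself.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(fk.parent_object_id))
    + N'.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
    + N' DROP CONSTRAINT ' + QUOTENAME(fk.name) + N';' + CHAR(10)
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.MyTable');

EXEC sys.sp_executesql @sql;  -- drop the incoming constraints first
DROP TABLE dbo.MyTable;       -- now the drop succeeds
```

The point of the manual step is exactly that this script is destructive: the server makes you spell it out rather than cascading it silently.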

Related

SQL Server DDL changes (column names, types)

I need to audit DDL changes made to a database. Those changes need to be replicated in many other databases at a later time. I found here that one can enable DDL triggers to keep track of DDL activities, and that works great for create table and drop table operations, because the trigger gets the T-SQL that was executed, and I can happily store it somewhere and simply execute it on the other servers later.
The problem I'm having is with alter operations: when a column name is changed from Management Studio, the event that is produced doesn't contain any information about columns! It just says the table was locked... What's more, if many columns are changed at once (say, column foo => oof, and also, column bar => rab) the event is fired only once!
My poor man's solution would be to have a table to store the structure of the table that's going to be altered, before and after the alter operation. That way, I could compare both structures and figure out what happened to which column(s).
But before I do that, I was wondering if it is possible to do it using some other feature from SQL Server that I have overlooked, or maybe there's a better way. How would you go about this?
There is a product meant for doing just that (I wrote it).
It monitors scripts that contain DDL changes, recording who ran them and when, together with their effect on performance, and it lets you easily copy them out as a single deployment script. For what you asked, the free version is sufficient.
http://www.seracode.com/
There is no special feature in SQL Server for this. You can use triggers, but they require a lot of T-SQL coding to work properly. A faster solution would be a third-party tool, but those aren't free. Please take a look at this answer regarding third-party tools: https://stackoverflow.com/a/18850705/2808398
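For reference, a minimal database-level DDL trigger of the kind discussed above, logging the raw EVENTDATA() XML to an audit table (table and trigger names are illustrative). Note that, as the question observes, renames done via sp_rename or Management Studio may not surface column-level detail in the event:

```sql
-- Illustrative audit table and DDL trigger; all names are placeholders.
CREATE TABLE dbo.DdlAuditLog (
    LogId     INT IDENTITY(1,1) PRIMARY KEY,
    EventTime DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    LoginName SYSNAME   NOT NULL DEFAULT ORIGINAL_LOGIN(),
    EventXml  XML       NOT NULL
);
GO
CREATE TRIGGER trgAuditDdl
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    SET NOCOUNT ON;
    -- EVENTDATA() carries the event type and the T-SQL command text
    INSERT INTO dbo.DdlAuditLog (EventXml) VALUES (EVENTDATA());
END;
```

Replaying the logged command text on the other servers later is then a matter of reading EventXml back out of the audit table.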

SQL Relationship Error

I have a database and, for the sake of simplicity, let's say it has two tables. One is Employee and the other is EmployeeType. In the Database Diagram view I am trying to build a relationship between the two tables, but it gives me the following error:
'EmployeeType (HumanResources)' table saved successfully
'Employee (HumanResources)' table
- Unable to create relationship 'FK_Employee_EmployeeType'.
The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_Employee_EmployeeType". The conflict occurred in database "TheDB", table "HumanResources.EmployeeType", column 'iID'.
I don't know what is wrong with it. Please help...
Ouch, so many downvotes. At least he bothered to post the error message. As Daniel A. White said above, you likely already have the foreign key you're trying to create. Your tag says you're using SQL Server 2008, so I'm going to assume you're using Microsoft SQL Server Management Studio. If that's the case, one mistake I used to make (and sometimes still do!) is this: I go to create a new relationship by right-clicking any of the columns in the Design view of the table I want to hold the foreign key, and hit Relationships. In the Foreign Key Relationships menu that appears, I hit "Add" at the bottom and then expand the "Tables and Columns Specification" section. Then I hit the "..." next to "Tables and Columns Specification," which lets you choose which columns you want to relate.
Now, the problem I sometimes run into here is that if you don't finalize the relationship right there, you're setting yourself up for situations like this. The reason is that when you hit "Add" a few steps back, you already created a foreign key on the table. Even if you cancel out and close everything, the table retains that foreign key. If you experimented with this, you may have already created the relationship accidentally; you just didn't know it at the time.
I realized how often I made this mistake when I first started: looking back at my very first project a year after I began working with SQL Server 2008, the Foreign Key Relationships are funny, because every table has a ton of foreign key entries pointing nowhere, where I was obviously just experimenting as I learned.
Another thing, just to cover some more bases: you mentioned you were doing this in the Database Diagram view. I don't think there's anything wrong with that, but I've found that doing things via the GUI can be kind of flaky. What I mean is, sometimes I want to make a really small, minor change, but the GUI tells me I can't make that kind of change after the table has been created. Yet if I do exactly the same thing via a typed-out query, it runs just fine.
The ultimate test of whether you have the desired functionality is simply to try inserting a record that would violate the foreign key constraint. If it goes through, the relationship isn't there or isn't set up properly. If it fails, you know you've already got what you want.
I think you got some downvotes because this isn't a very complicated problem and should usually be solvable on your own, especially with Google. Since it wasn't, but you bothered to post anyway, I'm assuming you're somewhat new to this, so I just wanted to throw out my advice as a fellow newbie. Don't let the downvotes get you down; next time just ask a more concrete question than "what's wrong?" For instance: "This error seems to be telling me the key already exists. Can anyone tell me how to check which relationships already exist? I don't remember creating this relationship; how could this have happened?"
A simple answer: you have data in your tables which would break this relationship, so you cannot create the relationship.
In this case, check the Employee table and look for rows where the id for EmployeeType (EmployeeTypeId?) is either NULL or illegal (not to be found in the EmployeeType table). Fix these rows and the relationship will save OK.
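A sketch of the check described above, assuming the linking column is named EmployeeTypeId (the actual column name in your schema may differ):

```sql
-- Find Employee rows whose EmployeeTypeId is NULL or has no matching
-- row in EmployeeType; per the answer above, these are the rows to fix
-- before the foreign key will save.
SELECT e.*
FROM HumanResources.Employee AS e
LEFT JOIN HumanResources.EmployeeType AS et
       ON et.iID = e.EmployeeTypeId
WHERE e.EmployeeTypeId IS NULL
   OR et.iID IS NULL;
```

Once this query returns no rows, re-running the ALTER TABLE that creates FK_Employee_EmployeeType should succeed.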

Is there downside to creating and dropping too many tables on SQL Server

I would like to know if there is an inherent flaw with the following way of using a database...
I want to create a reporting system with a web front end, whereby I query a database for the relevant data, and send the results of the query to a new data table using "SELECT INTO". Then the program would make a query from that table to show a "page" of the report. This has the advantage that if there is a lot of data, this can be presented a little at a time to the user as pages. The same data table can be accessed over and over while the user requests different pages of the report. When the web session ends, the tables can be dropped.
I am prepared to program around issues such as tracking the tables and ensuring they are dropped when needed.
I have a vague concern that over a long period of time the database itself might develop some form of maintenance problem from having created and dropped so many tables. Even day by day, let's say perhaps 1000 such tables are created and dropped.
Does anyone see any cause for concern?
Thanks for any suggestions/concerns.
Before you start implementing your solution consider using SSAS or simply SQL Server with a good model and properly indexed tables. SQL Server, IIS and the OS all perform caching operations that will be hard to beat.
The cause for concern is that you're trying to write code that will outperform SQL Server and IIS... This is a classic example of premature optimization. Thousands and thousands of programmer hours have been spent on making SQL Server and IIS as fast and efficient as possible, and it's not likely that your strategy will get better performance.
First of all: +1 to @Paul Sasik's answer.
Now, to answer your question (if you still want to go with your approach).
Possible cause for concern if you use VARBINARY(MAX) columns (from the MSDN):
"If you drop a table that contains a VARBINARY(MAX) column with the FILESTREAM attribute, any data stored in the file system will not be removed."
If you do decide to go with your approach, I would use global temporary tables. They should get DROPped automatically when there are no more connections using them, but you can still DROP them explicitly.
In your query you can check whether the table exists and create it if it doesn't (any longer). Note that global temporary tables live in tempdb, so that's the database to check:
IF OBJECT_ID('tempdb..##temp') IS NULL
-- create the temp table and perform your query
This way, you keep the logic that performs your queries and the logic that manages the temporary tables together, which should make it more maintainable. Plus, temporary tables are built to be created and dropped, so it's safe to assume SQL Server would not be impacted in any way by creating and dropping a lot of them.
1000 per day should not be a concern if you're talking about small tables.
I don't know SQL Server, but in Oracle you have the concept of a temporary table (small article and another). Data inserted into this type of table is visible only in the current session; when the session ends, the data "disappears". In this case you don't need to drop anything. Every user inserts into the same table, and their data is not visible to others. Advantage: less maintenance.
You may want to check whether SQL Server has something similar.
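SQL Server does have an equivalent: a local temporary table (#name) is private to the session that created it and is dropped automatically when that session ends, while a global one (##name) is visible to all sessions. A minimal sketch, with illustrative names:

```sql
-- Session-private temp table: other sessions cannot see it,
-- and it is dropped automatically when this session ends.
CREATE TABLE #ReportPage (
    RowNum INT           NOT NULL,
    Data   NVARCHAR(200) NULL
);

INSERT INTO #ReportPage (RowNum, Data)
VALUES (1, N'first row of the report');

-- Serve one "page" of the report from the temp table
SELECT RowNum, Data FROM #ReportPage;
```

Note that because a local temp table dies with the connection, this only works for paging if the web application holds the session open; otherwise the global (##) variant discussed above is the closer fit.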

Manual Cascaded Deletion in SQL Server 2005

I am writing an SSIS package in which, in a SQL task, I have to delete a record from a table. This record is linked to some tables, and those related tables may in turn be related to other tables. So when I attempt to delete a record, I should first remove all references to it in the other tables.
I know that setting Cascaded delete is the best option to achieve this. However, it’s a legacy database where this change is not allowed. Moreover, it’s a transactional database where any accidental deletes from the application should be avoided.
Is there any way that SQL Server offers to frame such cascaded delete queries? Or writing the list of deletes manually is the only option?
The way that SQL Server offers to frame cascaded deletes is to use ON DELETE CASCADE which you have said you can't use.
It's possible to query the metadata to get a list of affected records in other tables, but it would be complicated, since you would need to remove the constraints (and therefore the metadata references) before the delete.
You would need to, in a single transaction:
Query the metadata to get a list of affected tables. This would need to be recursive so you can get tables affected by the first tier, then those affected by those affected by the first tier, and so on.
Drop the constraint. This will also need to be recursive for the same reasons as listed above.
Delete the record(s) in all affected tables
Re-create the constraints
Someone else may have a more elegant solution but I think this is probably it.
It could be easier to do in .NET with SQL Management Objects as well, if that's an option.
I should clarify too that I'm not endorsing this as the potential for issues is very very high.
I think your safest course of action is to manually write out the deletes.
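As a starting point for writing those deletes, the metadata query from step 1 might look like this sketch, which lists every table and column referencing the target (dbo.TargetTable is a placeholder; you would repeat the query against each result to recurse through the tiers):

```sql
-- List the foreign keys that reference dbo.TargetTable, with the
-- referencing table and column for each.
SELECT
    fk.name                                              AS constraint_name,
    OBJECT_SCHEMA_NAME(fk.parent_object_id)              AS referencing_schema,
    OBJECT_NAME(fk.parent_object_id)                     AS referencing_table,
    COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS referencing_column
FROM sys.foreign_keys        AS fk
JOIN sys.foreign_key_columns AS fkc
      ON fkc.constraint_object_id = fk.object_id
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.TargetTable');
```

From that list you can hand-write the DELETE statements in child-first order, which avoids dropping any constraints at all.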

Quickest way to roll back SQL data (was: Best way to develop a data-mangling stored procedure)

Edit: OK I asked the wrong question here.
I'm going to be coding a stored proc that affects a lot of data, so I need to know the quickest, easiest way to roll back the data to the original state after I run a test.
Old question:
I have a development database holding live data. This needs to be obfuscated for privacy, particularly company names and contact details.
To throw a spanner in the works, the company NAME is the primary key. (... yes, I know. Legacy code. Hooray.)
Now, I need to obfuscate the company name (say, change each to "Company 001" etc.) while preserving referential integrity with the dozens of tables linked by this value. During my testing I'm going to mangle a lot of data and then need to roll it back to the original state, probably many times before I get the procedure right.
So the process will be:
Mangle company data
test within the application to ensure linked data displays correctly
roll back data for bugfixes
repeat
My initial thought is to simply back up and restore after each test. But this seems time consuming. Is there a better way?
If you are doing data tests of some kind, please consider using a test database or a playground database for this...
If that is not possible... the code below will roll back all the data changes:
BEGIN TRANSACTION
--do my tests
ROLLBACK
EDIT
You could also add some code into your application that will perform the tests and then restore a backup after your test is complete.
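The transaction pattern above, spelled out for the mangling scenario (the procedure name is hypothetical; substitute whatever actually does the obfuscation):

```sql
BEGIN TRANSACTION;

EXEC dbo.ObfuscateCompanyData;   -- hypothetical mangling procedure

-- Inspect the results and run the application tests here, on this
-- same connection; other sessions won't see the uncommitted changes.

ROLLBACK TRANSACTION;            -- everything is back as it was
```

One caveat: the uncommitted changes are only visible on the connection holding the transaction, so testing "within the application" this way only works if the application can share that connection; otherwise backup/restore remains the simpler option.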
Use Test database with dummy data if that is available. Blow it away and repopulate with more random dummy data. If you already have fixtures for this then you are most of the way there. If you don't this is a good way to get started writing them.
Other than that backup and restore is probably your best bet.
IMHO mangling data inside your database is going to do little more than muck up your database. If prying eyes have gained access to your database I fear you have bigger problems on your hands at worst and at best they'll figure out how to undo the mangling. They did, hypothetically after all, just gain access to your database. Reversing some obfuscation will be the easy part.
Options I've discovered so far:
Microsoft SQL Server Database Publishing Wizard 1.1 for MS SQL 2005
http://www.microsoft.com/downloads/details.aspx?FamilyId=56E5B1C5-BF17-42E0-A410-371A838E570A&displaylang=en
(one hyperlink limit)
If the database has referential integrity enforced via foreign key constraints that aren't set up to cascade on update, then the commit should fail.
You could change the constraints to cascade on update, so that when you update the parent record, the child records are updated automatically.
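A sketch of such a constraint, assuming a child table Contact that references Company by name (table, column, and constraint names are all illustrative):

```sql
-- Re-create the foreign key so that renaming a company propagates
-- automatically to the rows that reference it.
ALTER TABLE dbo.Contact
    DROP CONSTRAINT FK_Contact_Company;

ALTER TABLE dbo.Contact
    ADD CONSTRAINT FK_Contact_Company
    FOREIGN KEY (CompanyName)
    REFERENCES dbo.Company (CompanyName)
    ON UPDATE CASCADE;
```

With ON UPDATE CASCADE in place on every referencing table, the obfuscation procedure can simply UPDATE the primary key values and let the server fix up the children.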