Manual Cascaded Deletion in SQL Server 2005

I am writing an SSIS package where in a SQL task, I have to delete a record from a table. This record is linked to some tables and these related tables may be related to some other tables. So when I attempt to delete a record, I should remove all the references of it in other tables first.
I know that setting Cascaded delete is the best option to achieve this. However, it’s a legacy database where this change is not allowed. Moreover, it’s a transactional database where any accidental deletes from the application should be avoided.
Is there any way that SQL Server offers to frame such cascaded delete queries? Or writing the list of deletes manually is the only option?

The way that SQL Server offers to frame cascaded deletes is ON DELETE CASCADE, which you have said you can't use.
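For reference, that option is declared on the foreign key itself; a minimal illustration with placeholder table and column names:

-- Hypothetical child/parent tables; deleting a parent row removes its children.
ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT FK_ChildTable_ParentTable
    FOREIGN KEY (ParentId) REFERENCES dbo.ParentTable (ParentId)
    ON DELETE CASCADE;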
It's possible to query the metadata to get a list of affected records in other tables, but it would be complicated, since you would want to remove the constraints (and therefore the metadata references) before the delete.
You would need to, in a single transaction:
Query the metadata to get a list of affected tables. This would need to be recursive, so you can get the tables affected by the first tier, then those affected by that tier, and so on (a sketch of such a query follows this list).
Drop or disable the constraints. This also needs to be recursive, for the same reasons as above.
Delete the record(s) in all affected tables.
Re-create or re-enable the constraints.
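For the first step, a rough sketch of that metadata query, assuming SQL Server 2005's sys.foreign_keys catalog view and a placeholder target table dbo.MyTable:

-- Find every table that directly or indirectly references dbo.MyTable.
-- Circular FK chains between tables would need extra guarding (e.g. tracking
-- the path); the WHERE clause below only skips simple self-references.
WITH referencing_tables AS
(
    SELECT fk.parent_object_id AS referencing_object_id, 1 AS tier
    FROM sys.foreign_keys AS fk
    WHERE fk.referenced_object_id = OBJECT_ID('dbo.MyTable')

    UNION ALL

    SELECT fk.parent_object_id, rt.tier + 1
    FROM sys.foreign_keys AS fk
    JOIN referencing_tables AS rt
        ON fk.referenced_object_id = rt.referencing_object_id
    WHERE fk.parent_object_id <> fk.referenced_object_id
)
SELECT DISTINCT OBJECT_NAME(referencing_object_id) AS table_name, tier
FROM referencing_tables;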
Someone else may have a more elegant solution but I think this is probably it.
It could be easier to do in .NET with SQL Server Management Objects (SMO) as well, if that's an option.
I should clarify, too, that I'm not endorsing this, as the potential for issues is very high.
I think your safest course of action is to manually write out the deletes.

Related

How to design an immutable, append-only database?

For a project I need to implement a database which is immutable and only allows new entries. Editing or deleting entries should be impossible in any case.
I was thinking about a database which allows editing and deleting only for admins (so only me). However, I'm unsure if that is 100% safe or if it's possible to illegally get admin rights and forge the data. So the best solution would be to have a database which does not offer editing or deleting in the first place.
Suggestions appreciated! Thanks
PostgreSQL has supported Row Security Policies since 9.5; they allow you to define SELECT, INSERT, DELETE and UPDATE policies depending on the user and/or on some field values in the table. You might find what you're looking for there.
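A minimal sketch of that approach (audit_log and app_user are placeholder names; superusers and roles with BYPASSRLS still bypass the policies):

-- Append-only table: with row level security enabled and no UPDATE/DELETE
-- policy defined, those commands are denied by default.
CREATE TABLE audit_log (
    id         bigserial PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    text NOT NULL
);

ALTER TABLE audit_log ENABLE ROW LEVEL SECURITY;
ALTER TABLE audit_log FORCE ROW LEVEL SECURITY;  -- apply the policies to the table owner too

CREATE POLICY audit_log_select ON audit_log FOR SELECT USING (true);
CREATE POLICY audit_log_insert ON audit_log FOR INSERT WITH CHECK (true);

GRANT SELECT, INSERT ON audit_log TO app_user;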
The simplest way is to GRANT separate INSERT/UPDATE/DELETE rights to users, but that may be insufficient for some business rules. However, many DBMSs (SQL Server, for example) support INSTEAD OF triggers, which can quietly swallow any DELETE/UPDATE and process INSERTs according to custom criteria implemented in the trigger code.
You can also define an updatable view with INSTEAD OF triggers to expose insert-only data.
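To illustrate the trigger idea, a sketch for SQL Server (dbo.EventLog and its columns are placeholder names):

-- Append-only table: UPDATE and DELETE are quietly swallowed by the trigger.
CREATE TABLE dbo.EventLog (
    Id       int IDENTITY(1,1) PRIMARY KEY,
    LoggedAt datetime NOT NULL DEFAULT GETDATE(),
    Payload  nvarchar(max) NOT NULL
);
GO

CREATE TRIGGER dbo.trg_EventLog_NoChange ON dbo.EventLog
INSTEAD OF UPDATE, DELETE
AS
BEGIN
    -- Do nothing; a RAISERROR here would make the attempt fail loudly instead.
    RETURN;
END;
GO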

Having foreign keys between two different databases using linked servers?

I have two databases that I have connected using linked servers. I have DB1, and DB2, to which I only have read access. I'm using DB1 for my application and have linked DB2 so I can combine queries. Is it possible to have foreign keys in DB1 that are linked to DB2?
No, it is not possible to create foreign keys between objects in different databases (even if they are on the same server). The official documentation is pretty clear about that:
FOREIGN KEY constraints can reference only tables within the same database on the same server. Cross-database referential integrity must be implemented through triggers. For more information, see CREATE TRIGGER (Transact-SQL).
It even points you to the possible workaround, i.e. trying to implement some kind of referential integrity checks using triggers. You can add AFTER INSERT/UPDATE triggers on both sides to validate the data changes, and AFTER DELETE triggers on the primary table to check whether there are child records. If the validation fails, you raise an error. You can also use INSTEAD OF triggers.
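A sketch of such a validation trigger (LinkedSrv, DB2, dbo.Parent, dbo.Child and ParentId are placeholder names):

-- On DB1: reject child rows whose parent does not exist on the linked server.
CREATE TRIGGER dbo.trg_Child_CheckParent ON dbo.Child
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (
        SELECT 1
        FROM inserted AS i
        WHERE NOT EXISTS (
            SELECT 1
            FROM [LinkedSrv].[DB2].[dbo].[Parent] AS p
            WHERE p.ParentId = i.ParentId
        )
    )
    BEGIN
        RAISERROR('ParentId does not exist in DB2.dbo.Parent.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;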
But the solution with triggers will not guarantee referential integrity anyway. You can lose connectivity between the databases. You can restore one of the databases from an older backup. All kinds of things can go wrong. You would do better to reconsider your database design. Is it possible to combine these two databases into one? Is it possible to maintain copies of both tables in each of the databases and replicate between them?

Full SQL Statement History

I am facing a problem with a particular table in my database. Rows are being deleted for no apparent reason (I have some procedures and triggers that modify the information inside the table, but they have already been tested).
So I need to see which DML statements are executed against the table.
I have already tried some methods, like using this query:
select SQL_FULLTEXT, FIRST_LOAD_TIME, ROWS_PROCESSED, PARSING_SCHEMA_NAME from v$sql;
filtering by the name of my table, and I have tried the SQL log.
Neither method shows me the complete history of SQL executed (for example, I can't see the statements executed by the procedures).
Can anyone give me some advice on where I can see ALL the DML executed against the database?
You're using a few terms that aren't defined within the context of Oracle Database, both 'sentence' and 'register.'
However.
If you want to see WHO is touching your data in a bad place, causing it to be deleted or changed, then you have 2 options.
Immediately, check your REDO logs. We have a package, dbms_logmnr, that will allow you to see what activity has been logged. Assuming that your tables weren't created with the NOLOGGING clause, those UPDATEs and DELETEs should be recorded.
Tim has a nice article on this feature here.
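A rough sketch of the LogMiner workflow (the redo log file name and table name are placeholders; in practice you would add all relevant log files, or use the dictionary/continuous-mining options described in the documentation):

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u01/app/oracle/redo01.log',
                          options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT scn, timestamp, username, operation, sql_redo
FROM   v$logmnr_contents
WHERE  table_name = 'MY_TABLE'            -- placeholder table name
AND    operation IN ('DELETE', 'UPDATE');

EXEC DBMS_LOGMNR.END_LOGMNR;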
The better solution going forward is AUDITING. You'll want to enable auditing in the database to record WHO is doing WHAT to your tables/data. This is included as part of the Enterprise Edition of the database. There is a performance hit: the more you decide to record, the more resources it will require. But it will probably be worth paying that price. And of course you'll have to manage the space required to maintain those logs.
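With traditional auditing, for example (scott.my_table is a placeholder, and the AUDIT_TRAIL initialization parameter must be set, e.g. to DB,EXTENDED, for the statements themselves to be captured):

AUDIT DELETE, UPDATE ON scott.my_table BY ACCESS;

-- Later, review the trail (SQL_TEXT is populated with AUDIT_TRAIL = DB,EXTENDED):
SELECT username, timestamp, action_name, obj_name, sql_text
FROM   dba_audit_trail
WHERE  obj_name = 'MY_TABLE'
ORDER BY timestamp;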
Now, as to SQL Developer and its HISTORY feature: it ONLY records what you are executing in a SQL Worksheet. It won't see what others are doing. It can't help you here - unless this is a one-man database and you're only making changes with SQL Developer. Even then, it wouldn't be reliable, as it has a limit and only records changes done via the Worksheet.

SQL Server DDL changes (column names, types)

I need to audit DDL changes made to a database. Those changes need to be replicated in many other databases at a later time. I found here that one can enable DDL triggers to keep track of DDL activities, and that works great for create table and drop table operations, because the trigger gets the T-SQL that was executed, and I can happily store it somewhere and simply execute it on the other servers later.
The problem I'm having is with alter operations: when a column name is changed from Management Studio, the event that is produced doesn't contain any information about columns! It just says the table was locked... What's more, if many columns are changed at once (say, column foo => oof, and also, column bar => rab) the event is fired only once!
My poor man's solution would be to have a table to store the structure of the table that's going to be altered, before and after the alter operation. That way, I could compare both structures and figure out what happened to which column(s).
But before I do that, I was wondering if it is possible to do it using some other feature from SQL Server that I have overlooked, or maybe there's a better way. How would you go about this?
There is a product meant for doing just that (I wrote it).
It monitors scripts that contain DDL changes, recording who wrote them and when, together with their effect on performance, and it gives you the ability to easily copy them as one deployment script. For what you asked, the free version is sufficient.
http://www.seracode.com/
There is no special feature in SQL Server for exactly this need. You can use triggers, but they require a lot of T-SQL coding to work properly. A faster solution would be a third-party tool, but those aren't free. Please take a look at this answer regarding third-party tools: https://stackoverflow.com/a/18850705/2808398
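If you do go the trigger route, a minimal sketch of a database-scoped DDL trigger that stores the raw event XML for later comparison (dbo.DdlLog and the trigger name are placeholders):

CREATE TABLE dbo.DdlLog (
    Id        int IDENTITY(1,1) PRIMARY KEY,
    EventTime datetime NOT NULL DEFAULT GETDATE(),
    EventData xml NOT NULL
);
GO

CREATE TRIGGER trg_LogDdl ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    -- EVENTDATA() carries the executed statement under
    -- /EVENT_INSTANCE/TSQLCommand/CommandText (which, as the question notes,
    -- is not very informative for column renames done through the designer).
    INSERT INTO dbo.DdlLog (EventData) VALUES (EVENTDATA());
END;
GO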

Keeping track of changes made by users in a multi-user SQL database

I'm working on the design of a relational database. It has several tables, and there are multiple users at the application level. I need to know which changes were made to a given record of a given table, by which user, at what time, and what actually changed. There is a table for saving the users' information, and this table is also included in this behavior.
How should I do this in the SQL database design so I can let users see which one of them made these changes?
What you want is Wiki-like versioning. Basically, for every table whose versions you want to keep, you'll want to create at least a copy of that table with the fields you mentioned added (user id, when the change was made). That's probably all there is to it, as long as you only need to track changes. Then, upon an edit, you just back up the current row into that copied table and put the new one in the actual table. This way you can (hopefully) add the versioning without having to touch existing presentational code.
It gets a little more tricky if you need to record additional actions like the creation of new rows and deletion.
If you need a code example, just have a look under the hood of some Wiki like https://mediawiki.org/
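As a sketch of that approach in SQL Server syntax (dbo.Customer, its columns and the variables are placeholder names; the real values would come from the application):

-- History copy of a hypothetical dbo.Customer table.
CREATE TABLE dbo.Customer_History (
    HistoryId  int IDENTITY(1,1) PRIMARY KEY,
    CustomerId int           NOT NULL,
    Name       nvarchar(100) NOT NULL,
    Email      nvarchar(255) NULL,
    ChangedBy  int           NOT NULL,   -- application user id
    ChangedAt  datetime      NOT NULL DEFAULT GETDATE()
);

-- Just before applying an UPDATE to dbo.Customer, back up the current row.
DECLARE @CustomerId int, @CurrentUserId int;
SET @CustomerId = 42;      -- placeholder values supplied by the application
SET @CurrentUserId = 7;

INSERT INTO dbo.Customer_History (CustomerId, Name, Email, ChangedBy)
SELECT CustomerId, Name, Email, @CurrentUserId
FROM dbo.Customer
WHERE CustomerId = @CustomerId;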
For starters, you can look at SQL Server's version tracking mechanisms (row versioning or change tracking). After that you can look at the SQL Server audit features. I think SQL Server Audit would be the best fit for your needs.
On the other hand, if you want to do ad-hoc versioning, then you MUST NOT resort to triggers. Imagine: you would have to create triggers on all tables for inserts, updates and deletes. That IS bad practice.
I think ad-hoc versioning should be avoided (it degrades performance and is difficult to support), but if it cannot be avoided, I would certainly use CONTEXT_INFO to track the current user, and then I would try to build something that reads the schema of the table, picks up the changes via the SQL Server change tracking mechanisms, and stores them in a (tablename, changeduser, changedtime, column, prevValue, newValue) style. I would not replicate each and every table for the changes.
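A sketch of the CONTEXT_INFO part (1234 is a placeholder application user id that the application would set when it opens the connection):

-- Application side: stamp the session with the current application user id.
DECLARE @ctx varbinary(128);
SET @ctx = CAST(1234 AS varbinary(4));
SET CONTEXT_INFO @ctx;

-- Auditing side (e.g. inside a trigger or the change-logging code): read it back.
SELECT CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int) AS CurrentAppUserId;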