Is there a way to prevent DROP TABLE in SQL Server somehow, simply by using SSMS and its features?
Don't give users permissions to drop tables.
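For example, a minimal sketch ("app_user" is a hypothetical database principal):
-- DROP TABLE requires ALTER on the table's schema (or CONTROL on the table),
-- so denying ALTER at the schema level blocks drops while data access still works.
DENY ALTER ON SCHEMA::dbo TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_user;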
You might think a DDL trigger can prevent this. It does, in a way: it lets the drop happen, then it rolls it back. Which is not quite preventing it, but I suppose it might be good enough.
Check this - there are basically two methods.
The first is based on creating a view on the table with the SCHEMABINDING option. When SCHEMABINDING is used, the table cannot be modified in a way that would affect the view definition, nor can it be dropped unless the view is dropped first.
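For example, a minimal sketch (table, column, and view names are made up):
-- While this schema-bound view exists, DROP TABLE dbo.ImportantTable will fail.
CREATE VIEW dbo.v_LockImportantTable
WITH SCHEMABINDING
AS
SELECT Id, Name        -- schema-bound views must list columns explicitly (no SELECT *)
FROM dbo.ImportantTable;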
The second method uses the DDL triggers introduced in SQL Server 2005. Defining a trigger for DROP_TABLE with a ROLLBACK in its body prevents tables from being dropped.
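Something like this (a sketch of the DDL trigger approach):
CREATE TRIGGER trg_BlockDropTable
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    RAISERROR ('DROP TABLE is not allowed in this database.', 16, 1);
    ROLLBACK;   -- the drop starts, then is rolled back, as noted above
END;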
Is it possible to automate the creation of triggers in Oracle SQL, so that if a DROP TABLE command is run, not all of your triggers have to be recreated? I didn't find anything online to solve this problem.
Thanks in advance for your answers
Put simply, no. When a table is dropped, everything associated with it goes away, including all indexes, triggers, etc. If the table is then recreated, all the triggers must be recreated. This is hard-wired into the database, and there's no DDL statement for DROP TABLE XYZ123 EXCEPT FOR ALL THE TRIGGERS WHICH YOU CAN JUST LEAVE FLOATING AROUND IN SPACE UNTIL AND IF THE TABLE IS RECREATED;
As someone else mentioned, you might want to consider using TRUNCATE TABLE, which blows the data away but leaves everything else intact. Another option is to use a global temp table - see this article at Oracle-Base
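If the data really is transient, a rough Oracle sketch of a global temporary table (names made up) - the table definition and any triggers on it stay in place, only the rows disappear per session or transaction:
CREATE GLOBAL TEMPORARY TABLE staging_data (
    id    NUMBER,
    value VARCHAR2(100)
) ON COMMIT DELETE ROWS;   -- rows vanish at commit; the table and its triggers remain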
I have an application, written by another team in our company, that inserts data into one table.
Let's say they write data into table Log1 with fields:
Id (auto-generated primary key);
KeyId;
Value1;
Value2;
Value3.
Now I also need an additional record in another table (Log2) that has only part of their data:
Id (it will be my own auto-generated Id);
KeyId;
Value1.
I see 2 ways to do that:
Create a trigger that, when records are added to Log1, automatically creates a record in Log2 with the required data;
Implement an SP that accepts all required data for the Log1 table and creates records in both tables, then ask the application's authors to use the SP instead of a direct INSERT query.
What do you think is the best way in this case and why?
Thank you very much for your help.
P.S. I'm using MS SQL 2005
Go with option 1.
It means the tables will stay synchronised properly even if the "correct" stored procedure interface isn't used, and it will be easier and more efficient to insert multiple rows. (How would you do that with a stored procedure in SQL Server 2005 - call it multiple times? Convert all the data to XML format first?)
If you use a trigger, be aware that since both Log1 and Log2 appear to use identity columns, you can't use SELECT @@IDENTITY to return the PK of Log1 - you will need to use SCOPE_IDENTITY().
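A rough sketch of such a trigger (column names taken from the question; adjust to the real schema):
CREATE TRIGGER trg_Log1_CopyToLog2
ON dbo.Log1
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- read from the inserted pseudo-table so multi-row inserts are handled correctly
    INSERT INTO dbo.Log2 (KeyId, Value1)
    SELECT KeyId, Value1
    FROM inserted;
END;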
On the other hand, if you use a SPROC, what you can do is revoke INSERT privileges on your table from (just about) everyone, and instead grant EXEC on your SPROC. This way access to your table should be fairly well guarded.
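A sketch of that approach (the procedure name, role name, and parameter types are guesses):
CREATE PROCEDURE dbo.AddLogEntry
    @KeyId  int,
    @Value1 nvarchar(100),
    @Value2 nvarchar(100),
    @Value3 nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Log1 (KeyId, Value1, Value2, Value3)
    VALUES (@KeyId, @Value1, @Value2, @Value3);
    INSERT INTO dbo.Log2 (KeyId, Value1)
    VALUES (@KeyId, @Value1);
END;
GO
REVOKE INSERT ON dbo.Log1 FROM app_role;
GRANT EXECUTE ON dbo.AddLogEntry TO app_role;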
The only way to really guarantee your data integrity is with a trigger. There is always a chance that someone will execute an operation (bulk operation, sql insert statement, etc.) that will bypass your SP.
Go with option 2.
Triggers should be avoided whenever possible.
One not so obvious reason: have you ever used SQL Server replication facilities? Triggers are not very straightforward to replicate (i.e. it is not as easy as a couple of clicks, as it is for tables, for instance). But I'm going off topic ... bottom line, triggers are evil ... avoid them when you can.
EDIT
More reasons: triggers are not as easy to see as other objects in the DBMS. On the application side they are invisible, and if not well documented they tend to be forgotten. If there are changes to the schema ... oh well, it's just easier to maintain things with stored procedures.
If we want to change the name of MyColumnName to MyAlteredColumnName...
...and we have a SQL Server 2008 table that looks like:
MyTable
MyColumnName
and a view that references the underlying column:
CREATE VIEW MyDependentView WITH SCHEMABINDING
AS
SELECT ..., MyTable.MyColumnName
FROM dbo.MyTable
We end up following this procedure:
Dropping the View
Altering MyTable.MyColumnName to MyTable.MyAlteredColumnName
Recreating the View with a reference to MyAlteredColumnName
We do this with migrator dot net.
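In raw T-SQL (assuming the table lives in dbo, and abbreviating the view's other columns), the sequence is roughly:
DROP VIEW MyDependentView;
GO
EXEC sp_rename 'MyTable.MyColumnName', 'MyAlteredColumnName', 'COLUMN';
GO
CREATE VIEW MyDependentView WITH SCHEMABINDING
AS
SELECT MyTable.MyAlteredColumnName   -- plus the other columns from the original definition
FROM dbo.MyTable;
GO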
Is there a better way to do this? Is there T-SQL that will alter a view column name? Or any support in SQL Server 2008 for automagically tying the columns together?
Without the use of a third-party tool, this is one of the only ways to do it. You can obviously also use ALTER VIEW instead of a DROP and CREATE.
It should be noted that Red-Gate makes a tool called SQL Refactor which will automate this sort of change (no I do not work for them). I'm sure there are other similar database refactoring tools out there.
Use sp_refreshview:
EXEC sp_refreshview @viewName
If you want to refresh all your views, you'll have to iterate over a loop of them, which means dynamic SQL.
And if you layered them (a view is dependent on another view - bad), you'll have to refresh the parent first...
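One way to write that loop (a sketch only - it skips schema-bound views, which sp_refreshview won't refresh, and it makes no attempt to order layered views):
DECLARE @view nvarchar(776);
DECLARE view_cursor CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.views
    WHERE OBJECTPROPERTY(object_id, 'IsSchemaBound') = 0;
OPEN view_cursor;
FETCH NEXT FROM view_cursor INTO @view;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_refreshview @view;
    FETCH NEXT FROM view_cursor INTO @view;
END
CLOSE view_cursor;
DEALLOCATE view_cursor;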
If it's a SELECT * view, you can call sp_refreshview, as OMG_Ponies suggested. It will recompile the view and update the column metadata appropriately. This is one area where judicious use of SELECT * could have benefits, if used appropriately within a coherent scheme.
Otherwise, you must redefine the view. Any explicit references to the old column name will now raise an error.
Ah, one more alternative:
EXEC sp_rename 'MyTable.MyColumnName', 'MyAlteredColumnName', 'COLUMN'
ALTER TABLE MyTable ADD MyColumnName AS MyAlteredColumnName
EXEC sp_rename 'MyView.MyColumnName', 'MyAlteredColumnName', 'COLUMN'
It's a hack, and it's dangerous, since the stored view definition will now be out of sync with the view metadata. And you have littered the db with superfluous computed columns.
But it will work (until you forget what you did, or someone else has to maintain the system, and things start to break mysteriously).
I use a third-party tool for this; it hasn't failed me yet. It's ApexSQL Refactor - here's the how-to tutorial:
How to rename a column without breaking your SQL database
I'm pretty sure there is no way, but I'm putting this out there for those whose expertise goes beyond mine.
What I am looking to do is to somehow alter SELECT statements before they are executed, at the database level. For a seriously pared-down example, I'd like to do something like the following... when someone executes the following SQL
SELECT * FROM users.MESSAGES
I'd like to catch it, before it executes, and alter the statement to something like
SELECT * FROM users.MESSAGES WHERE RECIPIENT = ORIGINAL_LOGIN()
allowing me to enforce user limitations on the data in a fashion similar to ORACLE's VPDs, without needing to resort to creating views on top of all my tables that might need this.
Look into using a VIEW.
Sadly, this is not possible.
Even the Microsoft SQL Server row-level security features (e.g. in the security catalogs) are implemented using views.
So, if you really need the feature, you're going to have to set up views with SUSER_NAME() or some similar individual- or role-identifier in the WHERE clauses.
Sorry!
Use views (or inline table-valued functions), generate the views automatically and remove rights from the tables.
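A sketch of that pattern, reusing the MESSAGES example from the question ("app_users" is a hypothetical role):
CREATE VIEW users.MyMessages
AS
SELECT *
FROM users.MESSAGES
WHERE RECIPIENT = ORIGINAL_LOGIN();   -- each caller sees only their own rows
GO
REVOKE SELECT ON users.MESSAGES FROM app_users;
GRANT SELECT ON users.MyMessages TO app_users;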
There used to be a roundabout, unofficial way in SQL 2000. You could create a trigger on the master..sysprocesses table for INSERT and do this kind of manipulation. Thankfully, this is not possible, at least AFAIK, in SQL 2005, as master..sysprocesses is a fake table.
For the benefit of some of us still using SQL 2000, here is how to do this in SQL 2000:
In the console, right-click on the server name.
In the properties, go to the Server Settings tab, then check the "Allow modifications to be made directly to the system catalogs" checkbox.
Select the sysprocesses table in sysobjects and change its xtype from S (system) to U (user).
Now go to the master DB, Tables, right-click on sysprocesses, All Tasks, Manage Triggers.
Then you can write your trigger.
After you are done, turn everything back to its original state.
Even with all of this, I still doubt whether you can change the SELECT statement.
Raj
Disclaimer: Try this at your own risk. Be warned that you are making changes to system objects which could lead to undesirable results.
First, grant no direct table access to the users so they can't see the data from an ad hoc query. Make them use a stored proc to access the table and one of the parameters of the proc is the user login. Then write the code so that it only selects records for that login.
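Roughly (a sketch, reusing the MESSAGES example from earlier in the thread; in practice you may prefer to read ORIGINAL_LOGIN() inside the procedure rather than trust a parameter the caller supplies):
CREATE PROCEDURE users.GetMyMessages
    @Login sysname
AS
BEGIN
    SET NOCOUNT ON;
    SELECT *
    FROM users.MESSAGES
    WHERE RECIPIENT = @Login;   -- only rows for the supplied login
END;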
Is it possible to create more than one table at a time using a single CREATE TABLE statement?
For MySQL, you can use multi-query to execute multiple SQL statements in a single call. You'd issue two CREATE TABLE statements separated by a semicolon.
But each CREATE TABLE statement individually can create only one table. The syntax supported by MySQL does not allow multiple tables to be created simultaneously.
@bsdfish suggests using transactions, but DDL statements like CREATE TABLE cause implicit transaction commits. There's no way to execute multiple CREATE TABLE statements in a single transaction in MySQL.
I'm also curious why you would need to create two tables simultaneously. The only idea I could come up with is if the two tables have cyclical dependencies, i.e. they reference each other with foreign keys. The solution to that is to create the first table without that foreign key, then create the second table, then add the foreign key to the first table with ALTER TABLE ADD CONSTRAINT. Dropping either table requires a similar process in reverse.
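A sketch of that workaround with two hypothetical MySQL tables that reference each other:
CREATE TABLE a (
    id   INT PRIMARY KEY,
    b_id INT NULL            -- foreign key added later, once b exists
) ENGINE=InnoDB;

CREATE TABLE b (
    id   INT PRIMARY KEY,
    a_id INT NULL,
    CONSTRAINT fk_b_a FOREIGN KEY (a_id) REFERENCES a (id)
) ENGINE=InnoDB;

ALTER TABLE a
    ADD CONSTRAINT fk_a_b FOREIGN KEY (b_id) REFERENCES b (id);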
Not with MS SQL Server. Not sure about mysql.
Can you give more info on why you'd want to do this? Perhaps there's an alternative approach.
I don't know, but I don't think you can do that. Why do you want to do this?
Not in standard SQL using just the 'CREATE TABLE' statement. However, you can write multiple statements inside a CREATE SCHEMA statement, and some of those statements can be CREATE TABLE statements. Next question - does your DBMS support CREATE SCHEMA? And does it have any untoward side-effects?
Judging from the MySQL manual pages, it does support CREATE SCHEMA as a synonym for CREATE DATABASE. That would be an example of one of the 'untoward side-effects' I was referring to.
(Did you know that standard SQL does not provide a 'CREATE DATABASE' statement?)
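For a DBMS that does support schema elements (SQL Server, for example - not MySQL, per the caveat above), a sketch with made-up names looks like:
-- Both tables are created, in schema sales, by the single CREATE SCHEMA statement.
CREATE SCHEMA sales
    CREATE TABLE customers (
        id   INT PRIMARY KEY,
        name VARCHAR(100)
    )
    CREATE TABLE orders (
        id          INT PRIMARY KEY,
        customer_id INT NOT NULL
    );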
I don't think it's possible to create more than one table with a 'CREATE TABLE' command. Everything really depends on what you want to do. If you want the creation to be atomic, transactions are probably the way to go. If you create all your tables inside a transaction, it will act as a single create statement from the perspective of anything going on outside the transaction.