SQL Server modify SELECT - sql

I'm pretty sure there is no way, but I'm putting this out there for those with expertise beyond my knowledge.
What I am looking to do is to somehow alter SELECT statements before they are executed, at the database level. For a seriously pared-down example, I'd like to do something like the following... when someone executes the following SQL
SELECT * FROM users.MESSAGES
i'd like to catch it, before it executes, and alter the statement to something like
SELECT * FROM users.MESSAGES WHERE RECIPIENT = ORIGINAL_LOGIN()
allowing me to enforce user limitations on the data in a fashion similar to Oracle's VPDs (Virtual Private Databases), without needing to resort to creating views on top of all my tables that might need this.

Look into using a VIEW.

Sadly, this is not possible.
Even the Microsoft SQL Server row-level security features (e.g. in the security catalogs) are implemented using views.
So, if you really need the feature, you're going to have to set up views with SUSER_NAME() or some similar individual- or role-identifier in the WHERE clauses.
Sorry!
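A minimal sketch of that view-based workaround, reusing the users.MESSAGES example from the question (the view and role names here are hypothetical, and the choice between ORIGINAL_LOGIN() and SUSER_SNAME() depends on how your logins map to rows):
CREATE VIEW users.MyMessages
AS
SELECT *
FROM users.MESSAGES
WHERE RECIPIENT = ORIGINAL_LOGIN();  -- or SUSER_SNAME()
GO
-- Point users at the view and take away direct access to the base table
DENY SELECT ON users.MESSAGES TO message_readers;    -- message_readers is a hypothetical role
GRANT SELECT ON users.MyMessages TO message_readers;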

Use views (or inline table-valued functions), generate the views automatically and remove rights from the tables.
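For example, a rough sketch of the inline table-valued function variant, again using the hypothetical users.MESSAGES table from the question above:
CREATE FUNCTION users.fn_MyMessages()
RETURNS TABLE
AS
RETURN
(
    SELECT *
    FROM users.MESSAGES
    WHERE RECIPIENT = ORIGINAL_LOGIN()
);
GO
-- Callers query the function instead of the table:
SELECT * FROM users.fn_MyMessages();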

There used to be a roundabout, unethical way in SQL 2000: you could create a trigger on the master..sysprocesses table for INSERT and do this kind of manipulation. Thankfully, this is not possible, at least AFAIK, in SQL 2005, as master..sysprocesses is a fake table.
For the benefit of some of us still using SQL 2000, here is how to do this in SQL 2000:
1. In the console, right-click on the server name.
2. In the properties, go to the Server Settings tab.
3. Check the "Allow modifications to be made directly to the system catalogs" checkbox.
4. Select the sysprocesses table in sysobjects and change its xtype from S (system) to U (user).
5. Go to the master DB, Tables, right-click on sysprocesses, then All Tasks - Manage Triggers.
6. Then you can write your trigger.
7. After you are done, change everything back to its original state.
Even with all of this, I still doubt whether you can change the SELECT statement.
Raj
Disclaimer: Try this at your own risk. Be warned that you are making changes to system objects which could lead to undesirable results.

First, grant no direct table access to the users so they can't see the data from an ad hoc query. Make them use a stored proc to access the table, with the user login as one of the proc's parameters. Then write the code so that it only selects records for that login.
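Something along these lines, with hypothetical procedure and role names (you could also derive the login inside the proc with ORIGINAL_LOGIN() rather than trusting a parameter):
CREATE PROCEDURE dbo.usp_GetMyMessages
    @Login SYSNAME
AS
BEGIN
    SET NOCOUNT ON;
    SELECT *
    FROM users.MESSAGES
    WHERE RECIPIENT = @Login;
END
GO
-- Users get EXECUTE on the proc but no SELECT on the table
DENY SELECT ON users.MESSAGES TO app_users;
GRANT EXECUTE ON dbo.usp_GetMyMessages TO app_users;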

Related

Creating view with nested 'no-lock' on SQL Server

Here is the scenario: I have a database with about 500K new records every day. The database is almost never updated (only INSERT and DELETE statements).
Many users would like to run queries against the database with tools such as Power BI, but I haven't given anyone access, to prevent deadlocking (I only allow a specific IT-managed resource to access the data).
I would like to open up data access, but I must prevent anyone from blocking the insertion of new records.
Could I create a view with nested NOLOCK hints inside it, assuming no dirty reads can occur since no updates are performed?
Would that be an acceptable design? I know it's not a perfect solution and it's not meant to be.
It's a compromise to allow users with no SQL skills to perform ad-hoc queries and lookups.
Anything I might be missing?
I think that you can use WITH (NOLOCK) after the table name in the query, such as:
SELECT * FROM [table Name] WITH (NoLock)
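If you go the view route, here is a sketch of what the question describes - a view with the hints baked in, so ad-hoc users inherit them automatically (the table, view, and role names are hypothetical):
CREATE VIEW dbo.vw_DailyLoads
AS
SELECT d.LoadId, d.LoadDate, s.SourceName
FROM dbo.DailyLoads AS d WITH (NOLOCK)
JOIN dbo.Sources AS s WITH (NOLOCK)
    ON s.SourceId = d.SourceId;
GO
GRANT SELECT ON dbo.vw_DailyLoads TO reporting_users;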

How do you save a CREATE VIEW statement?

EDIT: This question was based on the incorrect premise that SQL VIEWS were cleared from a database when the user that created them disconnects from the server. Leaving this question in existence in case others have that assumption.
I'm trying to use views in my database, but I'm running up against an inability to save the code as a SQL Server object for repeated use.
I tried saving CREATE VIEW statements as procedures and user-defined functions, but as many have answered on Stack Overflow, CREATE PROCEDURE and CREATE FUNCTION are incompatible with CREATE VIEW because CREATE VIEW must be the only statement in its batch.
Obviously I don't want to retype my CREATE VIEW statements every time, and I'd prefer not to have to load them from text files. I must be missing something here.
You don't really "save" CREATE/ALTER statements. The create or alter statement changes the structure of the database. You can use SSMS to generate the statement again later by right clicking on the view, and choosing Script as->Create. This inspects the structure of the database and generates the statement.
The problem with this approach is that your database now consists of both a structure definition (DDL) and its contents, the data. If you dropped/created the database to clear its data, you'd also lose the structure. So you always need a database hanging around for the structure, and you must back it up to ensure you never lose the DDL.
Personally I would use Database Projects as part of Visual Studio and SQL Server Data Tools. This allows you to keep each View, Table, etc. as separate files, and then update the database using schema compare. The main benefit being you can separate the definition of the database from the database itself, and also source control or backup the DDL files.
If you really want to, you could create a view in a proc like this:
CREATE PROCEDURE uspCreateView AS
EXEC('CREATE VIEW... ')
Though you'll have to escape single quotes in your view code with ''.
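For instance, a hypothetical view created this way, just to illustrate the quote doubling:
CREATE PROCEDURE dbo.uspCreateActiveUsersView
AS
EXEC('CREATE VIEW dbo.vw_ActiveUsers AS
      SELECT UserId, UserName
      FROM dbo.Users
      WHERE Status = ''Active''');   -- single quotes doubled inside the string
GO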
However, I have to agree with the other comments that this seems like a strange thing to do.
Some other thoughts:
You can use sp_helptext to get the code of an existing view:
sp_helptext '<your view name here>'
Also, INFORMATION_SCHEMA.VIEWS includes a VIEW_DEFINITION column with the same code:
SELECT * FROM INFORMATION_SCHEMA.VIEWS

Preventing DROP table in SQL Server

Is there a way to prevent DROP TABLE in SQL Server somehow, simply by using SSMS and its features?
Don't give users permissions to drop tables.
You might think a DDL trigger can prevent this. It does, in a way: it lets the drop happen, then it rolls it back. Which is not quite preventing it, but I suppose it might be good enough.
Check this; there are basically two methods.
The first one is based on creating a view on the table with the SCHEMABINDING option. When the SCHEMABINDING option is used, the table cannot be modified in a way that would affect the view definition, and it cannot be dropped unless the view is dropped first.
The second method is using the new DDL triggers in SQL Server 2005. Defining a trigger for DROP_TABLE with a rollback in the body will not allow dropping tables.
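A minimal sketch of the DDL trigger method (the trigger name and message are arbitrary):
CREATE TRIGGER trg_PreventTableDrop
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    RAISERROR('DROP TABLE is not allowed in this database.', 16, 1);
    ROLLBACK;   -- the drop starts, then gets rolled back
END
GO
-- When you legitimately need to drop a table:
-- DISABLE TRIGGER trg_PreventTableDrop ON DATABASE;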

Trigger or SP: what should I use in my case?

I have an application, written by another team in our company, that inserts data into one table.
Let's say they write data into table Log1 with fields:
Id (auto-generated primary key);
KeyId;
Value1;
Value2;
Value3.
Now I need an additional record in another table (Log2) that contains only part of their data:
Id (it will be my own auto-generated Id);
KeyId;
Value1.
I see 2 ways to do that:
Create a trigger that, when records are added to Log1, will automatically create a record in Log2 with the required data;
Implement an SP that accepts all the required data for the Log1 table and creates records in both tables, then ask the application's authors to use the SP instead of a direct INSERT query.
What do you think is the best way in this case and why?
Thank you very much for your help.
P.S. I'm using MS SQL 2005
Go with option 1.
It means that the tables will be synchronised properly even if the "correct" stored procedure interface isn't used and it will be easier and more efficient to insert multiple rows (How would you do this with a stored procedure in SQL Server 2005? - Call it multiple times? Convert all the data to XML format first?)
If you use a trigger, be aware that since it seems both Log1 and Log2 use identity columns, you can't use SELECT @@IDENTITY to return the PK of Log1 - you will need to use SCOPE_IDENTITY().
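A rough sketch of such a trigger, using the column names from the question and assuming both tables live in dbo:
CREATE TRIGGER dbo.trg_Log1_CopyToLog2
ON dbo.Log1
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- "inserted" holds every new Log1 row, so multi-row inserts are handled too
    INSERT INTO dbo.Log2 (KeyId, Value1)
    SELECT KeyId, Value1
    FROM inserted;
END
GO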
On the other hand, if you use a SPROC, what you can do is revoke INSERT privileges to your table from (just about) everyone, and instead grant EXEC on your SPROC. This way access to your table should be fairly well guarded.
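Something like this, with hypothetical role and procedure names:
REVOKE INSERT ON dbo.Log1 FROM app_role;
GRANT EXECUTE ON dbo.usp_InsertLog TO app_role;   -- usp_InsertLog would be the SP from option 2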
The only way to really guarantee your data integrity is with a trigger. There is always a chance that someone will execute an operation (a bulk operation, an ad hoc SQL INSERT statement, etc.) that bypasses your SP.
Go with option 2.
Triggers should be avoided whenever possible.
One not-so-obvious reason: have you ever used SQL Server replication facilities? Triggers won't be very straightforward to replicate (i.e., it is not as easy as a couple of clicks, as it is for tables, for instance). But I'm going off topic... bottom line, triggers are evil... avoid them when you can.
EDIT
More reasons: triggers are not as easy to see as other objects in the DBMS. On the application side, they are invisible, and if not well documented, they tend to be forgotten. If there are changes to the schema... oh well, it's just easier to maintain stuff with stored procedures.

MySQL to SQL Server migration questions

I did a successful migration from MySQL to SQL Server using the migration tool.
Unfortunately, for some reason it labels the tables database.DBO.tablename instead of just database.tablename.
I have never used SQL Server, so perhaps this is just the way it names its tables.
When I do:
SELECT TOP 1000 [rid]
,[filename]
,[qcname]
,[compound]
,[response]
,[isid]
,[isidresp]
,[finalconc]
,[rowid]
FROM [test].[calibration]
it does not work
But, when I do:
SELECT TOP 1000 [rid]
,[filename]
,[qcname]
,[compound]
,[response]
,[isid]
,[isidresp]
,[finalconc]
,[rowid]
FROM [test].[dbo].[calibration]
it works.
Does anyone know why it prefixes with DBO?
dbo is the standard database owner for anything you create (tables, stored procedures, etc.), hence the migration tool automatically prefixing everything with it.
When you access something in SQL Server, such as a table called calibration, the following are functionally equivalent:
calibration
dbo.calibration
database_name.dbo.calibration
server_name.database_name.dbo.calibration
MySQL doesn't, as far as I remember (we migrated a solution from MySQL to SQL Server about 12 months ago using custom scripts executed by NAnt), support database owners when referencing objects, hence you're probably not familiar with four-part (server_name.database_name.owner_name.object_name) references.
Basically, if you want to specify the database you're accessing, you also need to specify the "owner" of the object. I.e., the following are functionally identical:
USE [master]
GO
SELECT * FROM [mydatabase].[dbo].[calibration]
USE [mydatabase]
GO
SELECT * FROM [calibration]
SQL Server uses an owner name when it references tables. In this case, dbo is the owner.
MySQL doesn't use owner for table names, which is why you didn't see those names before.
SQL Server has something called schemas; in this case the default schema is dbo, but it could be anything you want. Schemas are used to logically group objects. So you can create an Employee schema and have all the Employee tables, views, procs, and functions in there; this then also enables you to give certain users access only to certain schemas.
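A quick sketch of that idea, with hypothetical schema, table, and role names:
CREATE SCHEMA Employee;
GO
CREATE TABLE Employee.Timesheet (TimesheetId INT PRIMARY KEY, HoursWorked DECIMAL(5,2));
GO
-- Give a role access to everything in that schema only
GRANT SELECT ON SCHEMA::Employee TO hr_readers;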
Tell me which migration tool you used, and let me know the versions of the source and target databases.
Regards
Eugene
You do have an issue here with the default schema: if it's set to 'dbo' for the user you logged in as, you don't need to specify it. See http://msdn.microsoft.com/en-us/library/ms176060.aspx
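For example (the user name here is hypothetical), you can set the default schema so unqualified names resolve without the dbo prefix:
ALTER USER my_app_user WITH DEFAULT_SCHEMA = dbo;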