Does a stored procedure not confer permissions on its internal operations? - SQL

SQL Server ....
I have long been under the assumption that granting stored procedure EXECUTE to a principal means that the SP can do whatever it needs to do and optionally return a result.
I am currently developing on a 2012 database. I created an SP and granted EXECUTE to a SQL login.
The user got error messages.
I also had to grant rights on a table and a function that I use inside the SP.
My world view also held that views and functions did NOT transfer rights in this way.
Has something changed? Have I just operated under a false pretense all this time?
I have googled for an answer, but I can't seem to find an article that discusses this topic.
Any thoughts?
Thanks
Greg

Database ownership chaining is the concept that addresses these types of issues.
From SQL Server 2005 forward, the notion of the owner of an object became the notion of the schema that holds the object.
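A minimal sketch of the broken-chain case, with hypothetical schema, owner, and login names. When the procedure's schema and the table's schema have different owners, the ownership chain breaks and EXECUTE on the procedure is no longer enough by itself:

CREATE SCHEMA proc_schema AUTHORIZATION owner_a;
GO
CREATE SCHEMA data_schema AUTHORIZATION owner_b;  -- different owner: the chain breaks
GO
CREATE TABLE data_schema.SomeTable (Id int);
GO
CREATE PROCEDURE proc_schema.GetData
AS
    SELECT Id FROM data_schema.SomeTable;
GO
GRANT EXECUTE ON proc_schema.GetData TO app_login;
-- Because the owners differ, the caller also needs a direct grant:
GRANT SELECT ON data_schema.SomeTable TO app_login;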
In my particular case, the stored procedure is in one schema, the function in another, and the table in yet another schema. Hence the need for these multiple grants.
I have never really used schemas before this assignment. That's just how they do things here, and that's OK. Hence my surprise at this behaviour.
Greg

Related

Run xp_create_subdir without admin privileges

The Point: I want to be able to create a directory on the filesystem through a non-sysadmin SQL user.
I'm creating a web front-end for a deployment script which creates new databases from a specified template database.
Essentially I'm backing up said template database and then restoring this as a brand new database with a different name.
Our DB server has our client databases stored in sub-folders within our database store. If I were to use the default settings it would look something like:
D:\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\[ClientRef]\[ClientRef].mdf
D:\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\[ClientRef]\[ClientRef].ldf
I only have SQL access to the database server (via a programming language, hosted on a separate box) so I can't execute anything other than SQL.
My database user is extremely limited; however, I would like to somehow grant this user access to execute master.dbo.xp_create_subdir only. Is this possible at all?
I'm loath to give our local DB user sysadmin rights; it has a limited user for a reason.
DB Server is Microsoft SQL Server 2008 R2.
Cheers, any help will be appreciated.
One possible solution is to write your own sproc that internally uses master.dbo.xp_create_subdir.
Create the sproc while logged in as an account that's a member of the sysadmin role and use "WITH EXECUTE AS SELF". Then grant the other account permission to execute this sproc. The database where you create this wrapper sproc must be marked as "trustworthy", or you'll still get the error: User must be a member of 'sysadmin' server role.
E.g.
CREATE PROCEDURE [dbo].[sprocAssureDirectory] @directoryFullPath varchar(4000)
WITH EXECUTE AS SELF
AS
BEGIN
    EXEC master.dbo.xp_create_subdir @directoryFullPath;
END
Just make sure you add any needed assertions/checks to your sproc that make sense for your application (e.g. the path can only be of a pattern that you expect).
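For completeness, a hedged sketch of the one-time setup and a call, assuming the wrapper lives in a database named Deploy and the limited login maps to a user named LimitedUser (both names hypothetical, as is the NewClientRef directory):

ALTER DATABASE Deploy SET TRUSTWORTHY ON;  -- required, as noted above
GRANT EXECUTE ON [dbo].[sprocAssureDirectory] TO LimitedUser;

-- The limited user can now create directories through the wrapper only:
EXEC [dbo].[sprocAssureDirectory]
    @directoryFullPath = 'D:\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\NewClientRef';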
Belated Update: Added the critical mention of marking the catalog as trustworthy.
You could explicitly give the user access to that stored proc. It's going to be something like:
GRANT EXECUTE ON OBJECT::master.dbo.xp_create_subdir
TO <SQL USER>;
It sounds like that user is limited for a reason, though, and getting the extra permissions to run something like that can get a little pushback from whoever is managing the DB. So be careful when dealing with getting the elevated privileges.

authentication when creating table synonym in remote server

I just came across the concept of a SYNONYM in a database. By reading this: http://msdn.microsoft.com/en-us/library/ms187552.aspx
and this: What is the use of SYNONYM in SQL Server 2008?, I figured out the purpose of a synonym.
However, I still don't understand one little step in the real process of creating a synonym for a remote table. I have searched the web, but the instructions generally focus on SQL syntax (for example, this one: http://www.oninit.com/manual/informix/english/docs/dbdk/is40/sqls/02cr_prc8.html), and I find that none of the guides mention the authentication part of creating a synonym for a remote table. I guess a database can't just let anyone make a synonym and then get access to its tables?
So I'm curious: how does the remote table's database know whether the synonym accessing its table is legal?
The answer to your question is going to depend a lot on what database platform you are using to contain the synonym; in your question, you referenced documentation from at least two (SQL Server and Informix). I don't know much about Informix, but I'm going to assume that its security model is different from SQL Server's.
For SQL Server, the remote server must be set up as a linked server first (assuming that you are using a remote object). See http://technet.microsoft.com/en-us/library/ms188279.aspx for details on how to do that.
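For illustration, a minimal linked-server sketch (the server name, data source, and remote login are hypothetical; the linked article covers the full options). Note that it is the linked-server login mapping, not the synonym, that carries the authentication:

EXEC sp_addlinkedserver
    @server = N'RemoteSrv',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'remotehost.example.com';
-- Map local logins to a remote credential:
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'RemoteSrv',
    @useself = N'FALSE',
    @locallogin = NULL,
    @rmtuser = N'remote_login',
    @rmtpassword = N'********';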
From CREATE SYNONYM:
You do not need permission on the base object to successfully compile the CREATE SYNONYM statement, because all permission checking on the base object is deferred until run time.
That is, there are no security issues around synonyms, because the permission checks take place when the synonym is used, and the permission checks are based on the real object, not the synonym.
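To make that concrete, a hedged sketch (the linked server, database, and table names are hypothetical):

CREATE SYNONYM dbo.RemoteCustomers
    FOR [RemoteSrv].[RemoteDb].[dbo].[Customers];

-- No permission check happens above; it happens here, at use time,
-- against the base object on the linked server:
SELECT TOP (10) * FROM dbo.RemoteCustomers;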

SQL Server stored procedure permissions checking

My ASP.NET MVC web site uses stored-procedure-based data access to the SQL Server database. Almost every procedure should check permissions to see whether the current user can perform the operation. I've solved it by passing an additional UserId parameter to every procedure and checking whether the user has a special permission code in a special table.
This causes a lot of copy-pasted script. I wonder, is there another way? Or maybe you have advice to improve my solution...
Just a thought, but what if you encapsulate the logic to look up the permission for the given user and item in a user-defined function? Then invoke the function from the necessary stored procedures and check the return value. You will still wind up with some copying and pasting, but in theory it should result in a cleaner approach.
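A rough sketch of that idea; the table and permission-code names are hypothetical:

CREATE FUNCTION dbo.fnHasPermission (@UserId int, @PermissionCode varchar(20))
RETURNS bit
AS
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.UserPermissions
               WHERE UserId = @UserId AND PermissionCode = @PermissionCode)
        RETURN 1;
    RETURN 0;
END
GO
-- Then, at the top of each stored procedure, the check shrinks to:
IF dbo.fnHasPermission(@UserId, 'ORDER_EDIT') = 0
BEGIN
    RAISERROR('Permission denied.', 16, 1);
    RETURN;
END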

What good are SQL Server schemas?

I'm no beginner to using SQL databases, and in particular SQL Server. However, I've been primarily a SQL 2000 guy and I've always been confused by schemas in 2005+. Yes, I know the basic definition of a schema, but what are they really used for in a typical SQL Server deployment?
I've always just used the default schema. Why would I want to create specialized schemas? Why would I assign any of the built-in schemas?
EDIT: To clarify, I guess I'm looking for the benefits of schemas. If you're only going to use them as a security scheme, it seems like database roles already filled that... er... um... role. And using them as a namespace specifier seems to be something you could have done with ownership (dbo versus user, etc.).
I guess what I'm getting at is: what do schemas do that you couldn't do with owners and roles? What are their specific benefits?
Schemas logically group tables, procedures, and views together. All employee-related objects go in the employee schema, etc.
You can also give permissions to just one schema, so that users can only see the schema they have access to and nothing else.
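For example, a single grant at the schema level covers every object inside it (the schema and role names here are illustrative):

GRANT SELECT, EXECUTE ON SCHEMA::employee TO hr_reader;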
Just like namespaces in C# code.
They can also provide a kind of naming collision protection for plugin data. For example, the new Change Data Capture feature in SQL Server 2008 puts the tables it uses in a separate cdc schema. This way, they don't have to worry about a naming conflict between a CDC table and a real table used in the database, and for that matter can deliberately shadow the names of the real tables.
I know it's an old thread, but I just looked into schemas myself and think the following could be another good candidate for schema usage:
In a data warehouse, with data coming from different sources, you can use a different schema for each source, and then e.g. control access based on the schemas. It also avoids possible naming collisions between the various sources, as another poster replied above.
If you keep your schemas discrete, then you can scale an application by deploying a given schema to a new DB server. (This assumes you have an application or system which is big enough to have distinct functionality.)
An example: consider a system that performs logging. All logging tables and SPs are in the [logging] schema. Logging is a good example because it is rare (if ever) that other functionality in the system would overlap with (that is, join to) objects in the logging schema.
A hint for using this technique: have a different connection string for each schema in your application / system. Then you deploy the schema elements to a new server and change your connection string when you need to scale.
At an Oracle shop I worked at for many years, schemas were used to encapsulate procedures (and packages) that applied to different front-end applications. A different 'API' schema for each application often made sense, as the use cases, users, and system requirements were quite different. For example, one 'API' schema was for a development/configuration application only to be used by developers. Another 'API' schema was for accessing the client data via views and procedures (searches). Another 'API' schema encapsulated code that was used for synchronizing development/configuration and client data with an application that had its own database. Some of these 'API' schemas, under the covers, would still share common procedures and functions with each other (via other 'COMMON' schemas) where it made sense.
I will say that not having schemas is probably not the end of the world, though they can be very helpful. Really, it is the lack of packages in SQL Server that creates problems in my mind... but that is a different topic.
I tend to agree with Brent on this one... see this discussion here: http://www.brentozar.com/archive/2010/05/why-use-schemas/
In short... schemas aren't terribly useful except for very specific use cases. They make things messy. Do not use them if you can help it, and try to obey the K(eep) I(t) S(imple) S(tupid) rule.
I don't see the benefit in aliasing out users tied to schemas. Here is why...
Most people connect their user accounts to databases via roles initially. As soon as you assign a user to either sysadmin or the database role db_owner, in any form, that account is either aliased to the "dbo" user account or has full permissions on the database. Once that occurs, no matter how you assign yourself to a schema beyond your default schema (which has the same name as your user account), those dbo rights are assigned to the objects you create under your user and schema. It's kind of pointless... it's just a namespace, and it confuses true ownership of those objects. It's poor design if you ask me... whoever designed it.
What they should have done is create "groups", throw out schemas and roles, and just allow you to tier groups of groups in any combination you like, then at each tier tell the system whether permissions are inherited, denied, or overwritten with custom ones. This would have been so much more intuitive and would have allowed DBAs to better control who the real owners of those objects are. Right now it's implied in most cases that the dbo default SQL Server user has those rights... not the user.
I think schemas are like a lot of new features (whether to SQL Server or any other software tool). You need to carefully evaluate whether the benefit of adding it to your development kit offsets the loss of simplicity in design and implementation.
It looks to me like schemas are roughly equivalent to optional namespaces. If you're in a situation where object names are colliding and the granularity of permissions is not fine enough, here's a tool. (I'd be inclined to say there might be design issues that should be dealt with at a more fundamental level first.)
The problem can be that, if it's there, some developers will start casually using it for short-term benefit; and once it's in there it can become kudzu.
In SQL Server 2000, objects created were linked to the particular user who created them. If a user, say Sam, creates an object, say Employees, that table would appear as Sam.Employees. What happens if Sam leaves the company or moves to some other business area? As soon as you delete the user Sam, what would happen to the Sam.Employees table? You would probably have to change the ownership first, from Sam.Employees to dbo.Employees. Schemas provide a solution to this problem. Sam can create all his objects within a schema, such as Emp_Schema. Now, if he creates an object Employees within Emp_Schema, the object would be referred to as Emp_Schema.Employees. Even if the user account Sam needs to be deleted, the schema would not be affected.
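A short sketch of that fix, using the names from the example above:

CREATE SCHEMA Emp_Schema AUTHORIZATION dbo;
GO
CREATE TABLE Emp_Schema.Employees (EmployeeId int PRIMARY KEY, Name nvarchar(100));
-- The user Sam can later be dropped without orphaning Emp_Schema.Employees.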
Development - each of our devs gets their own schema as a sandbox to play in.
Here is a good implementation example of using schemas with SQL Server. We had several MS Access applications. We wanted to convert those to an ASP.NET app portal. Every MS Access application is written as an app for that portal. Every MS Access application has its own database tables. Some of those are related; we put those in the common dbo schema of SQL Server. The rest get their own schemas. That way, if we want to know what tables belong to an app on the ASP.NET app portal, they can easily be navigated, visualised, and maintained.

Creating stored procedure on the fly. What are the risks/problems?

I am thinking about creating stored procedures on the fly.
i.e. running CREATE PROCEDURE... when the (web) application is running.
What are the risks or problems that it can cause?
I know that the database account needs to have the extra privileges.
It does NOT happen every day. Only from time to time.
I am using SQL Server, and I am interested in MySQL and Postgres as well.
Update1:
Thanks to the comments, I am considering creating a new version of the stored procedure and switching over, instead of ALTERing the SP. Example: sp1 -> sp2 -> sp3
Update2:
The reason:
My schema changes because of custom fields (an unknown number and type of columns).
I tried dynamic SQL and sp_executesql first. Of course it works. Dynamic SQL works great for one, two, three simple updates and inserts.
But it got too ugly, took a lot of work, and does not mix well with stored procedures: there are problems with SQL parameterization, because it is used inside a stored procedure and the number and type of params is not known at compile time (long story).
At least the basic scenario for this solution is not that complicated.
The logic of the SP does NOT change. For each custom field I have to add a new parameter to the SP and add a column to the update, insert, etc.
I also considered making stored procedure parameters dynamic like sp_executesql, which accepts any number and type of params, but could not find a way.
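For reference, the kind of per-custom-field dynamic update described above might look like this (the table, column, and values are hypothetical):

DECLARE @customColumn sysname = N'Color';  -- one of the unknown custom fields
DECLARE @sql nvarchar(max) =
    N'UPDATE dbo.Items SET ' + QUOTENAME(@customColumn) +
    N' = @value WHERE ItemId = @id;';
EXEC sp_executesql @sql,
    N'@value nvarchar(100), @id int',
    @value = N'Red', @id = 42;

The pain the question describes comes from repeating this column-list and parameter-list plumbing for every custom field.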
For a transactional system it's probably quite expensive. If you have a large batch job and want to use a code generator for some reason (quite a common architecture in ETL tools, notably Oracle Warehouse Builder and Wherescape Red), it's not unreasonable to do this.
You mentioned that you would be adding to and/or changing the calling profile of the stored procedure when you do this alteration. How are you lock-stepping the new calling profile with the application that makes the call to it? What's your rollback plan if you ever have to revert a change?
In the past, what I've done is just append an incrementing numeric suffix to the stored procedure name with the new calling profile. Then you can modify the old version of the SP to call the new one with a default value for the parameter, and release your software calling the new version.
If something breaks in your new version and you have to rollback, calls to the old stored proc will still work without error, and just populate the custom fields with your default values.
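A hedged sketch of that versioning pattern (the procedure, table, and column names are hypothetical):

-- The new version adds a parameter for the new custom field:
CREATE PROCEDURE dbo.UpdateClient_2
    @ClientId int,
    @Region nvarchar(50) = N'Unknown'  -- new custom field, with a default
AS
BEGIN
    UPDATE dbo.Clients SET Region = @Region WHERE ClientId = @ClientId;
END
GO
-- The old version becomes a thin shim, so existing callers keep working
-- and a rollback just means pointing the application back at _1:
ALTER PROCEDURE dbo.UpdateClient_1
    @ClientId int
AS
BEGIN
    EXEC dbo.UpdateClient_2 @ClientId = @ClientId;  -- @Region falls back to its default
END
GO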
Firstly, the answer to this question really depends on what exactly this stored procedure is intended to do. If it's just reading data or creating a result set for reporting, and you don't mind if it's a little inconsistent, then you're probably fine. If it's doing anything remotely interesting with your data, then it's a very risky thing to be doing. You should think about whether it's possible (and what would happen) for two users (or the same user twice) to run multiple versions of the same stored procedure at the same time. Smells like a train wreck to me. One option is to only allow this procedure alteration to take place when no other users are logged into the system, or to forcibly boot them off the database if they are. Another option is to create your new stored procedure with a slightly different name and swap them over when you deem it safe to do so.
Another issue is that one of the major benefits of stored procedures is that the execution plan is cached, meaning it will execute faster. If you are creating them on the fly you lose that advantage.
If you really need to do this then you should randomise the name of the procedure to avoid clashing with other users. Remember always that other users may be doing their own thing at the same time - most database systems won't give transactional isolation for stored procedures (Postgres is the only one I know of that does).
It would be extremely rare that this would be a desirable thing to do - could you elaborate at all on what made you choose this approach?
I would not do that personally.
As you mentioned, you will need extra privileges to be able to create/alter database objects. That can create a serious security risk, as nothing would stop your application from creating a malicious stored procedure if someone discovered a security hole in it.
If your schema changes, change the stored procedures with the schema.
You will not be able to alter the procedure if one or more users are running the procedure, or another procedure that references your procedure. You will block until the dependent procedures and the procedure you want to compile (and, I think, the procedures you invoke from your procedure, but I am not certain) are all no longer in use. This may be a long time on a busy production system, and if you are unlucky, you may time out waiting for all the dependencies to not be in use (5 minutes on Oracle).
You can also get into very ugly situations (I have). Take, for example, stored procedures B and C, both of which call A, the procedure that you are trying to compile. When no one is running B, the system locks B. Now any user trying to run B will stall. The system then tries to lock C, but C is generating a very lengthy report that will not be done for another 10 minutes. You will time out waiting for the lock, and some of your users will have an unresponsive system for 5 minutes. My experience is with Oracle; I would make sure your target DBMS does not behave in the same fashion, or fails faster, or has a better lock-acquisition strategy.
I guess I am cautioning that what looks like may work on a development server may fail dramatically on a busy production system.
I'm not sure that the locking discussed by Tony BanBrahim is true in SQL Server 2005.
I have some long-running SPs (a 3-hour batch process of about 30 sub-processes), and I have been able to alter an SP while it is still running. (I don't believe the changes take effect until the next run, but it doesn't cause any blocking or any error.) Now, the outer long-running SP calls SPs both dynamically with EXEC and statically, but I've changed both the root and nested SPs while they were running, without error messages or blocks.
WRT your original question, I would think that your tactic is fine if used in a controlled way.
I don't know for sure, but it sounds like one or both of:
an architectural problem
existing code locking the schema tables from the application
I'd take a look to see what code is locking the schema tables and rewrite that code. Do you have a 3rd-party something-or-other that is locking those tables?