SQL Server Update Permissions

I'm currently working with SQL Server 2008 R2, and I have only READ access to a few tables that house production data.
I'm finding that in many cases it would be extremely nice if I could run something like the following and get back the total count of records that would be affected:
USE DB
GO
BEGIN TRANSACTION
UPDATE Person
SET pType = 'retailer'
WHERE pTrackId = 20
AND pWebId LIKE 'rtlr%';
ROLLBACK TRANSACTION
However, seeing as I don't have the UPDATE permission, I cannot successfully run this script without getting:
Msg 229, Level 14, State 5, Line 5
The UPDATE permission was denied on the object 'Person', database 'DB', schema 'dbo'.
My questions:
Is there any way that my account in SQL Server can be configured so that if I want to run an UPDATE script, it would automatically be wrapped in a transaction with a rollback (so no data is actually affected)?
I know I could make a copy of that data and run my script against a local SSMS instance, but I'm wondering if there is a permission-based way of accomplishing this.

I don't think there is a way to bypass SQL Server permissions, and I don't think it's a good idea to develop against a production database anyway. It would be much better to have a development version of the database to work with.
If the number of affected rows is all you need, then you can run a SELECT instead of the UPDATE.
For example:
select count(*)
from Person
where pTrackId = 20
AND pWebId LIKE 'rtlr%';

If you are only after the number of rows that would be affected by this update, that is the same number of rows that currently match the WHERE clause.
So you can just run a SELECT statement as such:
SELECT COUNT(pType)
FROM Person WHERE pTrackId = 20
AND pWebId LIKE 'rtlr%';
And you'd get the number of rows the update would potentially affect.

1. First, log in to SQL Server as an administrator.
2. Go to Logins -> your login -> check the roles.
3. If you have write access, you can accomplish the task above.
4. If not, make sure write access is granted to your login.

If it's strictly necessary to try the update, you could write a stored procedure, accepting dynamic SQL as a string (Your UPDATE query) and wrapping the dynamic SQL in a transaction context which is then rolled back. Your account could then be granted access to that stored procedure.
Personally, I think that's a terrible idea, and incredibly unsafe - some queries break out of such transaction contexts (e.g. ALTER TABLE). You may be able to block those somehow, but it would still be a security/auditing problem.
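For completeness, a minimal sketch of what such a procedure might look like (the procedure and parameter names are hypothetical, and the caller would still need UPDATE permission on the underlying table unless the procedure is created with EXECUTE AS OWNER or signed):
CREATE PROCEDURE dbo.usp_TryUpdateAndRollback
    @sql NVARCHAR(MAX)   -- the UPDATE statement to dry-run (hypothetical parameter)
AS
BEGIN
    DECLARE @rows INT;
    -- Capture @@ROWCOUNT immediately after the caller's statement runs.
    DECLARE @wrapped NVARCHAR(MAX) = @sql + N'; SET @rows = @@ROWCOUNT;';

    BEGIN TRANSACTION;
        EXEC sp_executesql @wrapped, N'@rows INT OUTPUT', @rows = @rows OUTPUT;
    ROLLBACK TRANSACTION;   -- nothing is persisted

    SELECT @rows AS rows_that_would_be_affected;
END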
I recommend writing a query to count the relevant rows:
SELECT COUNT(*)
FROM --tables
WHERE --your where clause
-- any other clauses here e.g. GROUP BY, HAVING ...

Related

How to get a mutual exclusion on select queries in SQL Server

I know maybe I'm asking something stupid. In my application, users can create a sort of agenda, but only a specific number of agendas is allowed per day. So users perform this pseudo-code:
select count(*) as created
from Agendas
where agendaDay = 'dd/mm/yyyy'
if created < allowedAgendas {
insert into Agendas ...
}
All this obviously MUST be executed in mutual exclusion. Only one user at a time can read the number of created agendas and, possibly, insert a new one if allowed.
How can I do this?
I tried to open a transaction with the default Read Committed isolation level, but this doesn't help, because during the transaction other users can still get the number of created agendas at the same time with a SELECT query and so try to insert a new one even when it wouldn't be allowed.
I don't think changing the isolation level could help.
How can I do this?
For testing I'm using SQL Server 2008 while in our production server SQL Server 2012 is run.
It sounds like you have an architecture problem there, but you may be able to achieve this requirement with:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
If you're reading and inserting within the same transaction, I don't see where the problem will be; but if you're expecting interactive input on the basis of the count, then you should probably ensure you do this within a single session, or implement some kind of queuing functionality.
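A sketch of the read-then-insert done in a single serializable transaction (table and column names come from the question's pseudo-code; the day and the allowed count are placeholder values):
DECLARE @agendaDay DATE = '2024-01-15',   -- placeholder
        @allowedAgendas INT = 5;          -- placeholder

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

    DECLARE @created INT;

    -- Under SERIALIZABLE this COUNT takes a range lock, so concurrent
    -- sessions cannot slip in a competing insert before this commits.
    SELECT @created = COUNT(*)
    FROM Agendas
    WHERE agendaDay = @agendaDay;

    IF @created < @allowedAgendas
        INSERT INTO Agendas (agendaDay)   -- other columns omitted
        VALUES (@agendaDay);

COMMIT TRANSACTION;
Be aware that two sessions running this at the same time can deadlock (one is chosen as the victim and has to retry); adding an UPDLOCK hint to the SELECT is a common way to reduce that.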

How to test your query first before running it in SQL Server

I made a silly mistake at work once on one of our in-house test databases. I was updating a record I had just added because I made a typo, but it resulted in many records being updated, because in the WHERE clause I used the foreign key instead of the unique id for the particular record I had just added.
One of our senior developers told me to do a SELECT to test which rows it will affect before actually editing them. Besides this, is there a way you can execute your query and see the results, but not have it committed to the db until I tell it to do so? Next time I might not be so lucky. It's a good job only senior developers can do live updates!
It seems to me that you just need to get into the habit of opening a transaction:
BEGIN TRANSACTION;
UPDATE [TABLENAME]
SET [Col1] = 'something', [Col2] = '..'
OUTPUT DELETED.*, INSERTED.* -- So you can see what your update did
WHERE ....;
ROLLBACK;
Then you just run it again after seeing the results, changing ROLLBACK to COMMIT, and you are done!
If you are using Microsoft SQL Server Management Studio, you can go to Tools > Options... > Query Execution > ANSI > SET IMPLICIT_TRANSACTIONS and SSMS will open the transaction automatically for you. Just don't forget to commit when you must, and keep in mind that you may be blocking other connections until you commit, roll back, or close the connection.
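The same behaviour can also be switched on per session directly in T-SQL rather than through the SSMS options; a rough sketch (table and column names are placeholders):
SET IMPLICIT_TRANSACTIONS ON;

UPDATE dbo.SomeTable
SET SomeColumn = 'new value'
WHERE SomeId = 1;          -- a transaction is opened automatically here

-- Inspect the results, then decide:
ROLLBACK;                  -- or COMMIT;

SET IMPLICIT_TRANSACTIONS OFF;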
First, assume you will make a mistake when updating a db, so never do it unless you know how to recover; if you don't, don't run the code until you do.
The most important idea is that it is a dev database, so expect it to get messed up - make sure you have a quick way to reload it.
Doing a SELECT first is always a good idea to see which rows are affected.
However, for a quicker way back to a good state of the database (which I would do anyway for a simple update etc.):
Use transactions.
Do a BEGIN TRANSACTION, then do all the updates etc., and then SELECT to check the data.
The database will not be affected, as far as others can see, until you do a final COMMIT, which you only do when you are sure all is correct, or a ROLLBACK to get back to the state at the beginning.
If you must test in a production database and you have the requisite permissions, then write your queries to create and use temporary tables that are similar in name to the production tables and whose schema, other than index names, is identical. Index names are unique across a database, at least on Informix.
Then run your queries and look at the data.
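A sketch of that approach, reusing the Person table from the first question (the temp table name is arbitrary; SELECT ... INTO copies the column definitions but not indexes or constraints):
-- Copy the rows of interest into a temp table.
SELECT *
INTO #Person_test
FROM Person
WHERE pTrackId = 20;

-- Experiment freely against the copy.
UPDATE #Person_test
SET pType = 'retailer'
WHERE pWebId LIKE 'rtlr%';

SELECT * FROM #Person_test;   -- inspect the result

DROP TABLE #Person_test;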
Other than that, IMHO you need a development database, and perhaps even a development server with a development instance. That's paranoid advice, but you'd have to be very careful, even if you were allowed -- MS SQLSERVER lingo here -- a second instance on the same server.
I can reload our test database at will, and that's why we have a test system. Our production system contains citizens' tax payments and other information that cannot be harmed, "or else".
For our production data changes, we always ensure that we use a BEGIN TRAN and a ROLLBACK TRAN, and that all statements have an OUTPUT clause. This way we can run the script first (usually in a copy of the PRODUCTION db) and see what is affected before changing the ROLLBACK TRAN to a COMMIT TRAN.
Have you considered EXPLAIN?
If there is a mistake in the command, it will report it as with usual commands.
But if there are no mistakes it will not run the command, it will just explain it.
Example of a "passed" test:
testdb=# explain select * from sometable ;
QUERY PLAN
------------------------------------------------------------
Seq Scan on sometable (cost=0.00..12.60 rows=260 width=278)
(1 row)
Example of a "failed" test:
testdb=# explain select * from sometaaable ;
ERROR: relation "sometaaable" does not exist
LINE 1: explain select * from sometaaable ;
It also works with INSERT, UPDATE and DELETE (i.e. the "dangerous" ones).
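The output above is from PostgreSQL; on SQL Server (which the question is about), the closest analogue I'm aware of is SHOWPLAN, which compiles the statement and returns the estimated plan without executing it. A rough sketch (object names are placeholders):
SET SHOWPLAN_ALL ON;
GO

-- Parsed and planned, but no rows are actually changed.
UPDATE dbo.SomeTable
SET SomeColumn = 'x'
WHERE SomeId = 1;
GO

SET SHOWPLAN_ALL OFF;
GO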

Permission problem on dynamic query running in a stored procedure

I have this stored procedure doing a SELECT query with a couple of inner joins (one of the tables is in another db). Now, I had to write this query as dynamic SQL because first I had to find which db the SELECT query should run against. Anyway, none of the tables have permissions granted on them; I'm just granting permission on the stored procedure to the database role "personel" (which includes everyone).
But now, when someone with the personel role runs this stored proc, they get the error "The SELECT permission was denied on the object 'tbl_table', database 'Db', schema 'dbo'." There is no difference in the schema, and there are other procs using the same table that run normally.
Can using a dynamic query (exec ('Use DB; select ...')) be the reason for this? Like, because it is dynamic, should I also grant permissions on the tables?
Thanks
The short answer is yes.
When you compile a stored procedure, permissions of the user/login creating the stored procedure are checked. When someone else executes it, their ability to read those tables is no longer relevant (in most cases), but rather just their ability to execute the SP.
When executing the dynamic code, however, the permissions regarding the tables have to be checked there and then. This means that the executing user's permissions are being checked.
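One common workaround, offered here only as a sketch with hypothetical names, is to create the procedure WITH EXECUTE AS OWNER, so the dynamic statement is checked against the owner's permissions rather than the caller's. Note that cross-database access, as in the question, adds further complications (module signing or TRUSTWORTHY) that this does not cover:
CREATE PROCEDURE dbo.usp_GetData
    @tableName SYSNAME                -- hypothetical parameter
WITH EXECUTE AS OWNER                 -- dynamic SQL now runs as the procedure's owner
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT * FROM ' + QUOTENAME(@tableName) + N';';
    EXEC sp_executesql @sql;
END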
Yes, this can be the reason. Read this to get an explanation and a possible solution.

Multiple select statements in a single ODBCdataAdapter

I am trying to use an ODBC data adapter in C# to run a query which needs to select some data into a temporary table as a preliminary step. However, this initial SELECT statement is causing the query to terminate, so data gets put into the temp table but I can't run the second query to get it out. I have determined that the problem is the presence of two SELECT statements in a single data adapter query. That is to say, the following code only runs the first SELECT:
select 1
select whatever from wherever
When I run my query directly through SQL Server Management Studio it works fine. Has anyone encountered this sort of issue before? I have tried the exact same query previously on similar databases using the same C# code (only the connection string is different) and had no problems.
Before you ask, the temp table is helpful because otherwise I would be running a whole lot of inner select statements which would bog down the database.
Assuming you're executing a Command whose CommandType is CommandText, you need a ; to separate the statements.
select 1;
select whatever from wherever;
You might also want to consider using a stored procedure if possible. You should also use the SQL client objects instead of the ODBC client; that way you can take advantage of additional methods that aren't available otherwise, and you're supposed to get better performance as well.
If you need to support multiple databases, you can just use the DataAdapter class and use a factory to create the concrete types. This gives you the benefits of using the native drivers without being tied to a specific backend. ORMs that support multiple back ends typically do this; the Enterprise Library Data Access Application Block, while not an ORM, does this as well.
Unfortunately I do not have write access to the DB as my organization has been contracted just to extract information to a data warehouse. The program is one generalized for use on multiple systems which is why we went with ODBC. I suppose it would not be terrible to rewrite it using SQL Management Objects.
An ODBC connection expects a single SELECT statement and its results to be retrieved from SQL Server.
If such functionality is required, a hack can serve the purpose:
use the statement
SET NOCOUNT ON
at the top of your batch.
When SET NOCOUNT is ON, the count (indicating the number of rows affected by a Transact-SQL statement) is not returned.
When SET NOCOUNT is OFF, the count is returned. It is used with any SELECT, INSERT, UPDATE, DELETE statement.
The setting of SET NOCOUNT is set at execute or run time and not at parse time.
SET NOCOUNT ON mainly improves stored procedure (SP) performance.
Syntax:
SET NOCOUNT { ON | OFF }
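Putting that together with the temp-table scenario from the question, the batch might look roughly like this (table and column names are placeholders):
SET NOCOUNT ON;               -- suppress the "N rows affected" messages

-- Preliminary step: build the temp table (produces no result set).
SELECT SomeKey, SomeValue
INTO #staging
FROM dbo.SourceTable
WHERE SomeFilter = 1;

-- The only result set the data adapter needs to see.
SELECT SomeKey, SomeValue
FROM #staging;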

How can a SQL Server TSQL script tell what security permissions it has?

I have a TSQL script that is used to set up a database as part of my product's installation. It takes a number of steps which all together take five minutes or so. Sometimes this script fails on the last step because the user running the script does not have sufficient rights to the database. In this case I would like the script to fail straight away. To do this I want the script to test, up front, what rights it has. Can anyone point me at a general-purpose way of testing whether the script is running with a particular security permission?
Edit: In the particular case I am looking at it is trying to do a backup, but I have had other things go wrong and was hoping for a general purpose solution.
select * from fn_my_permissions(NULL, 'SERVER')
This gives you a list of permissions the current session has on the server
select * from fn_my_permissions(NULL, 'DATABASE')
This gives you a list of permissions for the current session on the current database.
See here for more information.
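Since the edit mentions that the failing step is a backup, a sketch of checking for that specific permission up front (the error text is arbitrary):
IF NOT EXISTS (
    SELECT 1
    FROM fn_my_permissions(NULL, 'DATABASE')
    WHERE permission_name = 'BACKUP DATABASE'
)
BEGIN
    RAISERROR('This script requires BACKUP DATABASE permission; aborting.', 16, 1);
    RETURN;
END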
I assume it is failing on an update or insert after a long series of selects.
Just try a simple update or insert inside a transaction. Hard-code the row id, or whatever to make it simple and fast.
Don't commit the transaction--instead roll it back.
If you don't have rights to do the insert or update, this should fail. If you DO, it will roll back and not cause a permanent change.
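A sketch of that test (table, column, and key value are placeholders):
BEGIN TRANSACTION;

    -- A no-op style update against a single hard-coded row;
    -- it fails immediately if UPDATE permission is missing.
    UPDATE dbo.SomeTable
    SET SomeColumn = SomeColumn
    WHERE SomeId = 1;

ROLLBACK TRANSACTION;   -- nothing is persisted either way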
Try the last INSERT/UPDATE up front with a WHERE condition that can never match, for example:
UPDATE YourTable
SET SomeColumn = SomeColumn
WHERE 1 = 2;

IF (@@ERROR <> 0)
    RAISERROR('no permissions', 16, 1);
This would not cause any harm, but would raise a flag up front about the lack of rights.