Creating a (temp?) table that is accessible only in the current connection - sql

I want to enforce row-level security for any client connecting to a particular SQL Server database (just one database). I don't want to require any particular setup on the client side, because that would mean a client could configure itself to have access to anything - which of course would be bad. (For context, the client is a WinApp that connects using either Windows Auth or SQL Auth.) I basically want this to be transparent to any client; the client should not even know it is happening.
The enforcement of the row-level security will be performed in views inside the database, layered above the tables. In essence, no one will have the ability to perform DML against the tables directly; instead, all operations will be performed against the views that sit on top of the tables. These views will have INSTEAD OF triggers running under a particular EXECUTE AS, to ensure the DML operations can be executed correctly.
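To make the layering concrete, here is roughly the shape I have in mind (a sketch only - all object names are placeholders, and the permission-mapping table is hypothetical):

    -- Base tables live in a schema the client has no rights on.
    CREATE SCHEMA guarded;
    GO
    CREATE TABLE guarded.UserAccess   -- which login may see which owner's rows
    (
        LoginName sysname NOT NULL,
        OwnerId   int     NOT NULL
    );
    CREATE TABLE guarded.Orders
    (
        OrderId int IDENTITY PRIMARY KEY,
        OwnerId int NOT NULL,
        Amount  money NOT NULL
    );
    GO
    -- Clients are granted permissions on the view only, never on guarded.Orders.
    CREATE VIEW dbo.OrdersView
    AS
        SELECT o.OrderId, o.OwnerId, o.Amount
        FROM guarded.Orders AS o
        WHERE o.OwnerId IN (SELECT ua.OwnerId
                            FROM guarded.UserAccess AS ua
                            WHERE ua.LoginName = ORIGINAL_LOGIN());
    GO
    -- DML is intercepted and re-executed under elevated rights.
    CREATE TRIGGER dbo.OrdersView_Insert
    ON dbo.OrdersView
    WITH EXECUTE AS OWNER      -- or a dedicated proxy user
    INSTEAD OF INSERT
    AS
    BEGIN
        INSERT INTO guarded.Orders (OwnerId, Amount)
        SELECT i.OwnerId, i.Amount
        FROM inserted AS i
        WHERE i.OwnerId IN (SELECT ua.OwnerId   -- same row-level check on writes
                            FROM guarded.UserAccess AS ua
                            WHERE ua.LoginName = ORIGINAL_LOGIN());
    END;
    GO

(ORIGINAL_LOGIN() is unaffected by EXECUTE AS, so the trigger can still identify the real caller.)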
So basically, I want to remove the potential of the client to circumvent this security model by baking it into the database itself.
I also want to separate the permissions granted to the user from the effective permissions applied to the current connection. Think of it this way: if you are connected to the DB, you have a Security Context associated with your connection - maintained in the DB, mind you - and this Security Context contains the information about which items you have access to. Upon establishing the connection, the Security Context is created and populated based on the permissions assigned to you; when the connection is closed, the entire Security Context is removed. Of course, the Security Context should only be visible within its own connection; connections should not even be able to see that a Security Context exists for other connections.
(EDIT: one of the scenarios explicitly targeted, which explains why the Security Context is separated from the 'defined permissions', is as follows: if you establish a connection to the DB, you get a SecContext; if, while your connection is still open, the admin assigns new permissions to you, those new permissions will not be applied to the currently open connection. You must close the connection and establish a new one to have the changes reflected in your SecContext.)
I know how I can enforce that the views will only return information that the user has access to. That's the easy part... This question is about creation and deletion of the Security Context I am talking about.
The Security Context should be created on establishing the connection.
It should also reside in an artifact that is accessible only to the current connection.
It must be queryable during the lifespan of the connection by the connecting user.
Finally, the Security Context must be dropped/removed/deleted when the connection is closed.
Would anyone have an idea about how this can be achieved?
Thanks

A SQL Server temp table (one with a name starting with #) is only visible to the current connection and is dropped automatically when the connection closes. You only have to deal with creating and populating it when a new connection is established.
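A minimal sketch of that approach, assuming a hypothetical dbo.DefinedPermissions table holding the granted permissions. Run it as a plain batch right after the connection opens; note that a #temp table created inside a stored procedure would be dropped as soon as the procedure returned, so this must execute at the outer batch level:

    -- Snapshot the caller's permissions into a connection-private table.
    CREATE TABLE #SecurityContext
    (
        ObjectName sysname NOT NULL,
        CanRead    bit NOT NULL,
        CanWrite   bit NOT NULL
    );

    INSERT INTO #SecurityContext (ObjectName, CanRead, CanWrite)
    SELECT p.ObjectName, p.CanRead, p.CanWrite
    FROM dbo.DefinedPermissions AS p        -- hypothetical permissions store
    WHERE p.LoginName = ORIGINAL_LOGIN();

    -- Later statements (e.g. the instead-of triggers) consult the snapshot:
    -- SELECT CanWrite FROM #SecurityContext WHERE ObjectName = N'OrdersView';

Because the snapshot is taken once, permissions granted mid-connection are not reflected until the user reconnects, which matches the behaviour described in the question's EDIT.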
However, it sounds like you are re-implementing a lot of what the DBMS already does for you. I would recommend reading up more on SQL Server's built-in security mechanisms, in particular login/user/schema separation.

I don't really understand your application in the abstract, so I don't entirely understand where you're coming from here, but here's a question for you: it sounds like you're giving your users a direct connection to your database. Do you need to do that?
Perhaps I'm missing the point, but couldn't all of this row-level security be done away with entirely if you built an API into your application, rather than giving your users a direct database connection? If you set things up that way, your API could be the gatekeeper that prevents users from making changes to rows they should not have access to. Perhaps that would be simpler than working directly at the database level?
I hope that's helpful.

Related

Webserver with database: one connection per user

Our aim is to implement the principle of least privilege with a defense-in-depth approach. In this particular case, it means that a query sent by an unprivileged user should not have admin rights on the database side. An RDBMS such as PostgreSQL provides very powerful, expressive and well-tested access control mechanisms: RBAC, row-level security, parameterized views, etc. These controls are usually ignored entirely in web applications, which follow the paradigm "1 application == 1 user", where that single user effectively has the admin role. Thick clients, on the other hand, often use several different users on the database side (either one per end user or one per specific role) and thus benefit from the database's access control.
Access control in the DB is an addition to access control in the web application. AC in the web app can be more precise, but will probably suffer from some bugs; AC in the DB is coarser but better enforced, limiting the damage in case of an application bug.
So in our case, we want to create a DB user for every application user. The connection to the database then belongs to that specific user, and the database can enforce that a simple user cannot execute admin operations. An intermediate possibility would be to drop some privileges before executing a query, but our preferred way is to connect to the database as the currently logged-in user. The login/password is sent by the user when they authenticate, and we just pass it through to the DBMS. Scalability is not (yet) an issue for our application; we can sacrifice some of it for this type of security.
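To make the intent concrete, this is roughly what we have in mind on the PostgreSQL side (a sketch with invented names; CREATE POLICY requires PostgreSQL 9.5 or later):

    -- One database role per application user; the app connects as that role.
    CREATE ROLE alice LOGIN PASSWORD 'supplied-at-signup';

    CREATE TABLE documents (
        id    serial PRIMARY KEY,
        owner name   NOT NULL DEFAULT current_user,
        body  text
    );

    -- Row-level security: each role sees only its own rows.
    ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
    CREATE POLICY documents_owner ON documents
        USING (owner = current_user);

    GRANT SELECT, INSERT, UPDATE, DELETE ON documents TO alice;
    GRANT USAGE, SELECT ON SEQUENCE documents_id_seq TO alice;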
Would you have any hints to help us achieve this?

SQL Access for web apps

Background:
Our team is building an in-house Intranet web application. We are using a standard three-layer approach: presentation layer (MVC web app), business layer and data access layer.
A SQL database is used for persistence.
The web app / IIS handles user authentication (Windows authentication). Logging is done in the business and data access layers.
Question: service account vs. user-specific SQL accounts
Use a service / app account:
The dev team is proposing to set up a service account (created for the application only). This service account needs read & write access to the DB.
vs.
Pass the user's credentials on to SQL:
IT ops is saying that using a service account (created specifically for the app) for DB access is not considered best practice. Instead, set up Kerberos delegation from the web server to the SQL Server so that the Windows credentials of the end users can be passed through, and create a database role that grants the appropriate data access levels to end users.
What is the best practice for setting up accounts in SQL where all requests to the DB come through the front-end client (i.e. via the business layer and then the data layer)?
The best practice here is to let the person/team responsible for the database make the decision. It sounds like the dev team wants to forward (or impersonate) some credentials to the DB, which I know some small teams like doing, but yes, that can leave things a bit too open: the app can do whatever it likes to the database, which is not much of a separation.
Personally, if I understand what you're saying above, I do more of what the IT team is thinking about (I use Postgres). In other words my app deploys over SSH using a given account (let's say it's the AppName account). That means I need to have my SSH keys lined up for secure deployment (using a PEM or known_keys or whatever).
In the home directory for AppName I have a file called .pgpass which has pretty strict permissions on it (0600). This means that my AppName account will use local security to get in rather than a username/password.
I do this because otherwise I'd need to store that information in a file somewhere, and those kinds of things get treated poorly (pushed to GitHub, for instance).
Ultimately, think 5 years from now and what your project and team will look like. Be optimistic - maybe it will be a smashing success! What will maintenance look like? What kinds of mistakes will your team make? Flexibility now is nice, but make sure that whoever will get in trouble if your database has a security problem is the one who gets to make the decision.
The best practice is to have individual accounts. This allows you to use database facilities for identifying who is accessing the database.
This is particularly important if the data is being modified. You can log who is modifying what data -- generally a hard-core requirement in any system where users have this ability.
You may find that, for some reason, you do not want to use your database's built-in authentication mechanisms. In that case, you are probably going to build a layer on top of the database, replicating much of the built-in functionality. There are situations where this might be necessary. In general, this would be a dangerous approach (the database security mechanisms probably undergo much more testing than bespoke code).
Finally, if you are building an in-house application with just a handful of users who have read-only access to the database, it might be simpler to have only a single login account. Normally, you would still like to know who is doing what, but for simplicity, you might forego that functionality. However, knowing who is doing what is usually very useful knowledge for maintaining and enhancing the application.
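For SQL Server with Windows authentication, individual accounts need not mean per-user maintenance on the database side; an Active Directory group can be mapped once (the group, database and role names below are examples):

    -- Map an AD group to a login and database user once;
    -- membership is then managed in AD, not in SQL Server.
    CREATE LOGIN [CORP\AppUsers] FROM WINDOWS;
    GO
    USE AppDb;
    CREATE USER [CORP\AppUsers] FOR LOGIN [CORP\AppUsers];

    -- A role holding exactly the access end users need.
    CREATE ROLE app_readwrite;
    GRANT SELECT, INSERT, UPDATE ON SCHEMA::dbo TO app_readwrite;
    EXEC sp_addrolemember 'app_readwrite', 'CORP\AppUsers';

    -- Auditing still sees the individual: SUSER_SNAME() returns the
    -- actual Windows user, not the group.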

What would happen if a SQL Server instance became offline/inaccessible when you have an Entity Data Model pointing to one of the instance's databases?

I am currently writing an application in VS2010 which will access many different SQL Server databases spread across a couple of servers on our network. This is a dynamic environment, however, and the servers might be subject to decommissioning. I have a couple of entity data models which point to custom information-gathering databases on those servers, and they will become useless to me when the servers are decommissioned. The problem is that I am worried that if one of these servers is decommissioned, my application will fail because the entity data models won't be able to point to the databases anymore. I cannot go and change the application's source code every two weeks to meet new server needs; that would waste development time.
Are my suspicions right that my application would fail to work if the data models point to databases which no longer exist? Is there a workaround that lets me "ignore" a connection to a non-existent database?
You will get an exception when you try to do the first thing which connects to the DB.
The exception will note that the underlying provider failed on open, and will have a SqlException as the InnerException giving details of that.
Probably the best thing for you to do is to manually create and open the connection and pass it to the context in the constructor, using the constructor overload that accepts an existing connection.

Speed up strategy for SQL Server when EXECUTE AS is used for security

I've just started looking at a system that implements security a little differently from the norm.
They create a new SQL user for each user of the system (of which there are about 32K now). Every query is sent over a connection that initially uses the SA account (let's not get bogged down in that); then, once we know who the user is, EXECUTE AS USER is issued for each query.
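For readers unfamiliar with the pattern, each round-trip looks roughly like this (user and table names are hypothetical):

    EXECUTE AS USER = N'jdoe';   -- impersonate the database user
    SELECT OrderId, Amount       -- the actual (dynamic) query runs here
    FROM dbo.Orders;
    REVERT;                      -- drop back to the privileged connection context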
Now that there are so many users, creating new users and switching between them causes a noticeable performance hit, and the company is looking at improving the situation.
A few points:
- The SQL code is dynamic SQL (not stored procedures).
- The original idea was to relieve the company's developers from having to worry about permissions when writing SQL, and to let another layer handle it.
How does one improve query execution time and avoid the EXECUTE AS USER step while still getting the same security scrutiny?
Does SQL Server support session variables to store a user account?
It's difficult to know whether this is helpful without a bit more detail on your application's security model and how it controls and/or pools database connections. Is it a fat client, n-tier or something else?
SQL connections can support a single "session variable" using CONTEXT_INFO - a user-configurable binary(128) field which persists for the duration of the connection. This may meet your requirements, but you need to beware that if you use it to store security information it will be accessible to the end user - you should therefore probably encrypt or salt and hash any security information in the CONTEXT_INFO to prevent users tampering with their permissions; this may have performance implications.
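A minimal illustration (the tag format is invented; since the value is settable by whoever holds the connection, treat it as a hint rather than a credential unless you hash or encrypt it as suggested above):

    -- Store a value for the lifetime of the connection.
    DECLARE @ctx varbinary(128);
    SET @ctx = CONVERT(varbinary(128), N'user=jdoe');
    SET CONTEXT_INFO @ctx;

    -- Any later batch on the same connection can read it back
    -- (the value is zero-padded to 128 bytes, so trim trailing zeros).
    SELECT CONVERT(nvarchar(64), CONTEXT_INFO()) AS session_user_tag;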
It may not be applicable depending on your application architecture, but have you considered switching to Windows authorisation and organising permissions through Active Directory users and groups?

SQL Server how to control access to data between clients on the same database

We have a system with 2 clients (a number which will increase). These two clients connect to the same server/database, yet neither should be able to see the other's sensitive information. There is, however, some shared non-sensitive information.
There is also an administrative department which does work on behalf of both clients. They are allowed to see all sensitive data.
We currently handle this by holding a ClientID against the tables in question and, with a mixture of views and queries, checking against the ClientID to control access for each client.
I want to move to a consistent handling of this in our system, e.g. all views or all queries; however, I just wondered whether there is an easier/better pattern than views for handling this situation.
We're using SQL Server 2005, though an upgrade to 2008 is possible.
Cheers
The most logical way is to have (indexed) views filtered by what each user can see.
Add read/write permissions for each client on their views; admins access the tables directly.
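A sketch of that first option, with invented table names and a mapping of database users to clients (note that a view referencing USER_NAME() cannot be indexed; an indexed view would need a literal ClientID per client instead):

    CREATE TABLE dbo.Invoices
    (
        InvoiceID int PRIMARY KEY,
        ClientID  int NOT NULL,
        Amount    money NOT NULL
    );
    CREATE TABLE dbo.ClientUsers     -- hypothetical user-to-client mapping
    (
        UserName sysname NOT NULL PRIMARY KEY,
        ClientID int     NOT NULL
    );
    GO
    -- Each client user sees only its own rows; admins query dbo.Invoices directly.
    CREATE VIEW dbo.InvoicesForClient
    AS
        SELECT i.InvoiceID, i.ClientID, i.Amount
        FROM dbo.Invoices AS i
        JOIN dbo.ClientUsers AS cu
          ON cu.ClientID = i.ClientID
        WHERE cu.UserName = USER_NAME();
    GO
    CREATE ROLE ClientRole;
    GRANT SELECT ON dbo.InvoicesForClient TO ClientRole;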
But it looks to me like each client is a logically separate entity from the others.
If that's the case, you might consider having one DB per client and one DB for shared data:
admins can access everything, while each client can only access its own DB and read from the common DB.
A third option is to look into schemas and separate your clients there.