Background:
Our team is building an in-house intranet web application. We are using a standard three-layer approach: a presentation layer (MVC web app), a business layer, and a data access layer.
A SQL Server database is used for persistence.
The web app / IIS handles user authentication (Windows Authentication). Logging is done in the business and data access layers.
Question: service account vs. user-specific SQL accounts
Use a service / app account:
The dev team is proposing to set up a service account (created for this application only). This service account needs read and write access to the database.
Vs.
Pass end-user credentials on to SQL Server:
IT ops says that using a service account (created specifically for the app) for database access is not considered best practice. Instead, they propose setting up Kerberos delegation from the web server to the SQL server so that the Windows credentials of the end users can be passed through, and creating a database role that grants the appropriate data access levels to those end users.
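For illustration, the IT ops proposal would look roughly like this in T-SQL (the domain, group, and role names here are placeholders, not anything from our actual environment):

-- Map a Windows group of end users into the database
-- (assumes Kerberos delegation from the web server is already configured)
CREATE LOGIN [DOMAIN\AppUsers] FROM WINDOWS;
CREATE USER [DOMAIN\AppUsers] FOR LOGIN [DOMAIN\AppUsers];
-- Encapsulate the data access levels in a database role
CREATE ROLE AppEndUsers;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO AppEndUsers;
EXEC sp_addrolemember 'AppEndUsers', 'DOMAIN\AppUsers';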
What is the best practice for setting up accounts in SQL Server when all requests to the database come through the front-end client (i.e., via the business layer and then the data layer)?
The best practice here is to let the person/team responsible for the database make the decision. It sounds like the dev team wants to forward (or impersonate) some credentials to the DB, which I know some small teams like doing, but that can leave things a bit too open. The app can do whatever it likes to the database, which is not much of a separation if you're into that kind of thing.
Personally, if I understand what you're saying above, I do more of what the IT team is thinking about (I use Postgres). In other words, my app deploys over SSH using a given account (let's say it's the AppName account). That means I need to have my SSH keys lined up for secure deployment (using a PEM file or authorized_keys or whatever).
In the home directory for AppName I have a file called .pgpass, which has pretty restrictive permissions on it (0600). This means that my AppName account gets in using that locally protected file rather than a username/password stored in the application.
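For reference, each line of .pgpass follows the standard Postgres format (the values below are made-up placeholders), and libpq will ignore the file unless its permissions are locked down:

# hostname:port:database:username:password
localhost:5432:appname_db:AppName:not-a-real-password

chmod 0600 ~/.pgpass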
I do this because otherwise I'd need to store that information in a file somewhere, and those things get treated poorly (pushed to GitHub, for instance).
Ultimately, think 5 years from now and what your project and team will look like. Be optimistic: maybe it will be a smashing success! What will maintenance look like? What kinds of mistakes will your team make? Flexibility now is nice, but make sure that whoever will get in trouble if your database has a security problem is the one who gets to make the decision.
The best practice is to have individual accounts. This allows you to use database facilities for identifying who is accessing the database.
This is particularly important if the data is being modified. You can log who is modifying what data -- generally a hard-core requirement in any system where users have this ability.
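For example, in SQL Server terms, a minimal sketch of that kind of logging when each user connects as themselves (the table and trigger names are made up for illustration):

-- With individual accounts, SUSER_SNAME() reports the actual end user
CREATE TABLE OrderAudit (
    OrderId   int,
    ChangedBy sysname   NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER trg_Orders_Audit ON Orders AFTER UPDATE, DELETE AS
    INSERT INTO OrderAudit (OrderId)
    SELECT OrderId FROM deleted;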
You may find that, for some reason, you do not want to use your database's built-in authentication mechanisms. In that case, you are probably going to build a layer on top of the database, replicating much of the built-in functionality. There are situations where this might be necessary. In general, this would be a dangerous approach (the database security mechanisms probably undergo much more testing than bespoke code).
Finally, if you are building an in-house application with just a handful of users who have read-only access to the database, it might be simpler to have only a single login account. Normally, you would still like to know who is doing what, but for simplicity, you might forego that functionality. However, knowing who is doing what is usually very useful knowledge for maintaining and enhancing the application.
Related
I have an ASP.NET web application that is connected to a database installed at several client sites in production.
Some of those clients manage critical information (in other schemas, not accessible to the web app, such as people's money), so getting access to execute scripts directly in the database to fix things in my web app, when it's needed, requires time and approval; sometimes it takes weeks.
As some of my clients operate in a volatile reality, my web app has to handle a lot of changes in short periods of time. That means script executions in the database to alter data or schema, and that means wasted time!
Long story short, my question is: is it good practice to implement a page, only for administrator users, that executes a raw query directly against the database?
Assume a scenario where the security issue is managed properly.
Something like SQL Pad, where you cannot see the entire database system, just the query and the result, since there is only one target database.
No. It's a terrible idea. The security issue is probably not manageable: a web page available on the public internet that grants schema modification rights to the logged-in user is a horrible security risk. Even if you can't get to another schema, you can easily bring the server to its knees by writing simple SQL that consumes all the CPU, memory, or disk space.
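For instance, something as short as this (the table name is made up) forces an astronomically large amount of work:

-- A three-way cross join: on a million-row table this is 10^18 rows to scan
SELECT COUNT(*) FROM BigTable a CROSS JOIN BigTable b CROSS JOIN BigTable c;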
It's also terrible because you lose all track of which changes were installed in which environment.
If the IT department won't approve your scripts when run from Management Studio, they certainly won't let you loose on your own via a web interface.
I've always solved this problem via automated deployment scripts: execute the schema changes etc. as part of installing the new version of the web application. That way, you can do things like back up the database before running your changes, keep track of versioning, and control access.
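As a rough sketch of the pattern (the version number, table, and column names are illustrative only):

-- Idempotent, versioned migration step run by the deployment script
IF NOT EXISTS (SELECT 1 FROM SchemaVersion WHERE Version = 42)
BEGIN
    BEGIN TRANSACTION;
    ALTER TABLE Customer ADD Region nvarchar(50) NULL;
    INSERT INTO SchemaVersion (Version, AppliedAt) VALUES (42, SYSUTCDATETIME());
    COMMIT TRANSACTION;
END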
I'm trying to get some advice on how to approach a security architecture on Azure.
Background:
We are looking at building a multi-tenant app on Azure that needs to be extremely secure (personally sensitive data). The app will be accessed by standard browsers and mobile devices.
Security access types:
We have three types of users / access types...
1 - Plain old username/password over HTTPS is fine, accessing both general, non-private SQL data and hosted files.
2 - Username/password over HTTPS, but with authentication of users via certificates installed on their machines/devices. This level of user will need access to sensitive data, which should be encrypted at rest both in the database and in any uploaded files.
3 - Same as (2), but with the addition of some two-factor authentication (we have used YubiKey for other things; we might look at a phone OTP offering as well).
Most users will only have access to their own tenant databases, however we have "account manager" type users that need access to selected tenant data, therefore we expect that they will need either a copy of one certificate per tenant they serve, or we will have to use some kind of master certificate.
Database type:
From a multi-tenant point of view it seems Azure Federated SQL is a good way to go because (a) we simply write one app with a "TenantID" key in each table and, after login, set a global filter that handles the isolation for us, and (b) we understand that Azure Federated SQL actually maintains separate SQL database instances per tenant in the background. (Ref: http://msmvps.com/blogs/nunogodinho/archive/2012/08/11/tips-amp-tricks-to-build-multi-tenant-databases-with-sql-databases.aspx)
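In other words, every query the app issues would carry the same guard, something like this (illustrative names only):

SELECT OrderId, Total FROM Orders WHERE TenantID = @TenantID;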
Can anyone point to any links, or give advice, on the approach needed to set up and manage file shares, encryption of SQL and file data at rest, authentication of users, etc.? (Automated management on new user signup preferred.)
I can't really help on the certificates, but you will indeed need some "master certificate". If you are planning on using Azure Websites, you currently can't use your own certificates.
Concerning the database setup: SaaS applications are built on trust, so you NEVER (EVER) want one user's data shown to, or editable by, other users.
Therefore I strongly suggest that you don't rely on a TenantID column in each table. That would still leave open the possibility of an attack by a malicious user or an error by some developer.
The only ways to get around these risks are:
extensive testing
physically different tables to store each tenant's data
Personally I believe that even with very extensive, automated testing you can't get 100% coverage against malicious users. I guess I am not alone.
The only way out, IMHO, is physically separate tables. Let's look at the options:
different server: valid, but pretty expensive in Azure
different database: valid, less management overhead, but the same objection as the previous option: expensive if you have a lot of tenants
different schemas: the solution. Think about it (there is a sketch after this list)...
you only have to manage users and their default schemas
you can back up schemas using PowerShell
you can move schemas to other databases with some work
you can still dig into SQL Federation if you need to
the major drawback is that you will need to support database upgrades for each tenant.
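A minimal sketch of the schema-per-tenant idea (the tenant and login names are placeholders; assumes one login per tenant):

-- Each tenant gets a schema and a user whose default schema points at it
CREATE SCHEMA Tenant42;
GO
CREATE USER Tenant42User FOR LOGIN Tenant42Login WITH DEFAULT_SCHEMA = Tenant42;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Tenant42 TO Tenant42User;
-- Unqualified table names in the app now resolve to the caller's own schema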
Have you read any of the articles about multi-tenancy on azure.com? http://msdn.microsoft.com/en-us/library/windowsazure/hh689716.aspx
I am only starting to learn about SQL Azure. I have spoken to some potential clients, and they say they have not chosen Azure due to the private nature of their customers' information.
Reading about Azure, I see it has firewalls to prevent unauthorised access.
I was just wondering how else I could market Azure so that clients who potentially want to use it would not be concerned about privacy issues.
Also, as I understand it, Azure supports hybrid solutions where you can store data either locally or remotely?
Thanks
SQL Azure is a public service and the data is stored somewhere in the cloud provider's facility. With all the security measures, including firewalls and sentry dogs, the data is still under zero customer control.
So the provider could make a backup and store it for a very long time, while you might want the data destroyed ASAP and would be unable to have that done.
Also here's what technically could happen (not that I'm saying it is likely):
the provider might dispose of undestroyed hard disks
a bug could cause authorization to fail and let an unauthenticated user in (because, you see, you don't control what software updates the provider applies)
the provider employee might be bribed and copy the data
So if the user really wants privacy (or the laws say the data he deals with must be processed according to certain requirements), or he wants actual control over how the data is handled, then a public storage service like SQL Azure is technically inapplicable for him. If you try to market Azure as providing the same level of control and security as a local facility would, you are deceiving the customer.
Sad but true and you can't lie to the compiler. There's no such thing as control over your data in a public storage service. Risks of negative outcomes are perceived as rather low, but they exist and they are real.
Yes, the Azure Service Bus has connecting private and public clouds as a feature. Keeping sensitive data locally may be what your clients want/need in order to push parts of their infrastructure to the cloud, although it will certainly take some effort to keep that separation clear, and I'm not just talking technically.
That said, marketing Azure to a client that's not ready for the cloud may very well lose you the entire deal, so make sure you're not pushing anything they aren't ready to cope with to start with.
A good starting point is the Windows Azure Trust Center to learn about Windows Azure privacy and security.
There's also a 7-part Windows Azure security best practice series on the ISV Developer Community Blog. Part 1 has links to the remaining entries, at the end of the post.
Microsoft's data centers are run by Global Foundation Services, which has its own set of security and compliance standards. There you'll find a data center tour video.
I searched online a bit and couldn't find anything that really hit the spot or covered the bases on how to go about setting up users/roles for a database.
Basically, there would be a user account used to access the database from the application (a web application in this case). It would need access for the regular database operations (SELECT, INSERT, UPDATE, DELETE) and for executing stored procedures (with EXEC rights, to run stored procedures from within other stored procedures/UDFs).
Then, we would also have a user that would be the main admin (this is simple enough).
I currently have a development environment where, in my opinion, we don't manage security too well (the application uses a user with the db_owner role). Even though it is an intranet application, we still have security in mind and would like to see some of the ways developers set up users/roles for this type of environment.
EDIT: Web application and SQL Server reside on separate machines.
EDIT: Forgot to mention that an ORM is used that would need direct read/write access.
Question:
What are the "best practices" on setting up the user for application access? What roles would apply and what are some of the catches?
First, I tend to encapsulate permissions in database roles rather than attach them to single user principals. The big win here is that roles are part of your database, so you can completely script security and then tell the deployment types to "add a user and add him to this role", and they aren't fighting SQL permission boogeymen. Furthermore, this keeps things clean enough that you can avoid developing in db_owner mode and feel a lot better about yourself -- as well as practice like you play and generally avoid any issues.
Insofar as applying permissions for that role, I tend to cast the net wider these days, especially if one is using ORMs and handling security through the application. In T-SQL terms, it looks like this:
GRANT SELECT, UPDATE, INSERT, DELETE, EXECUTE on SCHEMA::DBO to [My DB Role]
This might seem a bit scary at first, but it really isn't -- that role can't do anything other than manipulate data. No access to extended procs or system procs or granting user access, etc. The other big advantage is that changing the schema--like adding a table or a procedure--requires no further security work so long as you remain within that schema.
Another thing to take into consideration for SQL 2005+ is to use database schemas to secure groups of objects. Now, the big trick here is that many ORMs and migration tools don't like them, but if you render the default schema [dbo] to the app, you can use alternative schemas for special secured stuff. E.g., create an ADMIN schema for special, brutal database cleanup procedures that should be run manually by admins. Or even a separate schema for a special, highly secured part of the application that needs more granular DB permissions.
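A quick sketch of that idea (all names are invented for illustration):

-- Admin-only schema; the app role from above gets no rights here
CREATE SCHEMA ADMIN AUTHORIZATION dbo;
GO
CREATE PROCEDURE ADMIN.PurgeStaleSessions AS
    DELETE FROM dbo.Sessions WHERE LastSeen < DATEADD(day, -30, GETUTCDATE());
GO
GRANT EXECUTE ON SCHEMA::ADMIN TO DbAdmins;  -- a separate, restricted role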
Insofar as wiring in users where you have separate boxes, even without a domain you can use Windows authentication (in SQL Server terms, integrated authentication). Just make a user with the same credentials (username/password combo) on both boxes. Set up the app pool to run as that user on the web box, set up a SQL Server login backed by that principal on the SQL box, and profit. That said, using the database roles can pretty much divorce you from this decision, as the deployment types should be able to handle creating SQL users and modifying connection strings as required.
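The SQL-box half of that looks something like this in T-SQL (the machine and account names are placeholders; assumes a local Windows account 'AppUser' exists on both boxes with identical passwords):

CREATE LOGIN [SQLBOX\AppUser] FROM WINDOWS;
CREATE USER AppUser FOR LOGIN [SQLBOX\AppUser];
EXEC sp_addrolemember 'My DB Role', 'AppUser';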
For a long time the SQL Server guidelines for application access to the database were to isolate access to data into stored procedures, group the procedures into a schema, and grant EXECUTE on the schema to the principal used by the application. Ownership chaining would guarantee data access to the procedure callers. The access can be reviewed by inspecting the stored procedures. This is a simple model: easy to understand, design, deploy, and manage. Use of stored procedures can leverage code signing, the most granular and powerful access control method, and the only one that is tamper-evident (the signature is lost if the procedure is altered).
The problem is that every bit of technology coming out of the Visual Studio designers flies in the face of this recommendation. Developers are presented with models that are just hard to use exclusively with stored procedures. Developers love to design their class models first and generate the table structure from the logical model. The procedure-based guidelines require the procedures to exist first, before the first line of the application is written, and this is actually problematic in development due to the iterative way of modern development. This is not unsolvable, as long as the team leadership is aware of the issue and addresses it (i.e., have the procedures ready, even as mocks, when the dev cycle starts).
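For completeness, a sketch of the code-signing part mentioned above (assumes a database master key already exists; all object names are placeholders):

-- Sign a procedure; the signature carries the rights and dies on ALTER
CREATE CERTIFICATE AppSigningCert WITH SUBJECT = 'App procedure signing';
CREATE USER AppSigningCertUser FROM CERTIFICATE AppSigningCert;
GRANT SELECT ON dbo.Orders TO AppSigningCertUser;
ADD SIGNATURE TO dbo.GetOrders BY CERTIFICATE AppSigningCert;
-- Any ALTER PROCEDURE on dbo.GetOrders silently drops the signature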
Create a user 'webuser' that the web application uses.
Only grant stored proc execute permissions to this user. Do not allow direct table read/write. If you need to read something from a table, write a proc. If you need to write data, write another proc.
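A minimal sketch (the password here is a placeholder, not a suggestion):

CREATE LOGIN webuser WITH PASSWORD = 'placeholder-change-me';
CREATE USER webuser FOR LOGIN webuser;
-- EXECUTE rights only; no direct table permissions are granted
GRANT EXECUTE ON SCHEMA::dbo TO webuser;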
This way everything is kept nice and simple. One app user, with only the relevant permissions. If security is compromised, then all the intruder can do is run the procs.
We have several sites for several different clients, each with several different databases.
Some of the databases are at client location, some are on our site.
I have been tasked with creating a few SharePoint sites that will display information from the databases.
Is it okay to call stored procedures from my SharePoint sites? Since the database is not for the SharePoint site, I feel like that site should not have direct access to the DB and should get the data through web services. Certainly, this would be the case if the data were exposed to another company, but since we are responsible for all of it, is that okay?
In my opinion you save yourself a lot of trouble by just going directly to the database, since you control both ends. Direct access to the DB will also perform better than putting web services between the two systems.
If the other system weren't yours, I'd definitely hope that it had a web services (or RESTful web services) interface. My reasoning here is that in most software, web services are really meant for integrations, and thus changes to them are kept to a minimum. Database schema changes are fairly typical during the lifetime of a software product, and thus it's not generally easy to evolve the schema if other people build integrations directly against the DB.
Querying the database directly is not a supported scenario, thus you shouldn't ever need to do that.
Best practice is to use the existing Web Services, or implement your own custom Web Service.