I have the following scenario.
At my company we use Oracle 11g. Authentication on the frontend uses database users, so every user of the frontend has his own user account in the database.
This implies that they have the ability to connect directly to the database if they know the IP address, port, etc. Of course, this is not considered a security concern because of our strict management of roles and privileges. It also implies that when a new user is added, our DBA has to create the user and assign the proper roles and privileges.
Until now, our frontend has been accessed only by our internal users. However, we are planning to add the capability for our external users to log in to our frontend.
Our estimate is about 750,000 external users, with annual increments of 50,000. These users are expected to access our system three or four times per year.
The question we have is how to grant access to these users. The options we see are:
Using our already implemented authentication system, where every user has his own database user account.
Building an authentication system for external users only, like most CMSs on the market, with tables acting as an ACL (Access Control List) holding the users, passwords and roles for our 750,000 external users.
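For the second option, I am picturing something roughly like the sketch below (table and column names are placeholders, not our real schema):

CREATE TABLE app_users (
    user_id       NUMBER         PRIMARY KEY,
    username      VARCHAR2(100)  NOT NULL UNIQUE,
    password_hash VARCHAR2(256)  NOT NULL,      -- salted hash only, never the plain password
    created_at    DATE           DEFAULT SYSDATE
);

CREATE TABLE app_roles (
    role_id   NUMBER        PRIMARY KEY,
    role_name VARCHAR2(50)  NOT NULL UNIQUE
);

CREATE TABLE app_user_roles (
    user_id NUMBER NOT NULL REFERENCES app_users,
    role_id NUMBER NOT NULL REFERENCES app_roles,
    PRIMARY KEY (user_id, role_id)
);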
My main concern is having 750,000+ database user accounts that will be unused most of the time and could eventually become a mess mixed in with our internal users.
Has anyone had a similar experience with this number of users, and how did you deal with it?
Best regards.
Off the top of my head..
Make sure the outward-facing boxes are few in number.
For the boxes that can connect to the database, make them do purely authentication or get/put of the data. Don't run the web server on the database machines or on the same LAN segment.
If possible, encrypt communications from the client to the database so that if any of your intermediate hops get rooted they'll only see junk.
Use a firewall to ensure that only the bare minimum can get through.
For validating authentication, don't let their 'real' password get off the web server. Keep it hashed, San Diego!
With a surge of applications that can be used to pull information, my SQL Server is constantly getting tapped, and there are a couple of users who keep running refreshes. Is there a way to reject queries based on a specific client_app_name and nt_username?
Alternatively, is there a way to add the combination of user and application to security so that access to SQL Server is declined? I.e. approve the user's access if client_app_name is Excel, but decline it if the app name is 'Mashup Engine'.
What you really need is resource governance. With it you can restrict the resources a user can consume. That way the users can refresh as much as they like, but they won't be able to hog the server's resources; their queries will instead slow down as they exhaust the resources they are allowed. Other users will still be able to run queries at full speed.
The assignment of sessions to workload groups (and through them to resource pools) is based on a classifier function run at login time, and this function can consider user name, application name, workstation name, client IP, etc.
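As a rough sketch (the pool, group and user names below are made up, and the classifier has to live in master), it could look something like this:

-- Pool and workload group that cap resources for the refresh-happy sessions
CREATE RESOURCE POOL LimitedPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
GO
CREATE WORKLOAD GROUP LimitedGroup
    USING LimitedPool;
GO

-- Classifier function, run at login time, picks the workload group for the session
CREATE FUNCTION dbo.rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF APP_NAME() = N'Mashup Engine' AND SUSER_SNAME() = N'DOMAIN\refresh_happy_user'
        SET @grp = N'LimitedGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;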
I'm unsure if this is a programming question or a database question
I'm making a Web API. I use bearer token authentication and it's working really well: I can log in, get resources and log out. I can add users, and users can add users, which I'll explain next.
I have 3 main roles: CompanyAdmin (users who created the account in the first place), CompanyUser (the users company admins make) and StandardUser (users the company users make).
Everyone has a row in the users table for logging in. Now, when a user logs in, I don't want them loading up Fiddler and seeing another user's data by manipulating the URL. I want to make sure companies can't see other companies' data, and users can't see other users' data, whether from the same company or another.
I'm using Web API (MVC), a SQL Azure database, and the website is hosted by my hosting provider. All accounts that can log into the site and access the API have a row in the users table, off which only a few tables hang, mostly for claims, roles, profile, company data, etc.
Any help or pointers in the right direction would be appreciated, thanks.
Getting access to the data by simply passing the request payload through means that there is no security in the broker between the client and the database. There are ways to secure access that exist and are actively used; one of them, for example, is simple basic authentication, where you set the context by passing credentials ((dis)advantages are described on the same page above). Another is token-based authentication (there is a more detailed walkthrough).
So please avoid simply passing the request straight through the Web API.
You may also check that old but good book on creating a multitenant solution and partitioning your database for customers/tenants on Azure. Some of the topics in it are no longer available (e.g. Federations), but the examples of how to partition your SQL Database and some of the code samples should still work.
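On the data access side, the simplest safeguard is to never trust ids coming from the URL and to scope every query to the authenticated caller. A minimal sketch (the Users/Orders tables and columns are assumptions, not your schema):

-- @UserId comes from the validated bearer token, never from the URL
CREATE PROCEDURE dbo.GetOrdersForUser
    @UserId int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT o.*
    FROM dbo.Orders AS o
    JOIN dbo.Users  AS u
      ON u.CompanyId = o.CompanyId
    WHERE u.UserId = @UserId;   -- every row is scoped to the caller's own company
END;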
I have a postgres database and a web app.
The web app has users, and they do some stuff in the web app.
I am new to Postgres, but what we used to do in SQL Server was Active Directory: create logins, create roles, etc.
Now, in this database, is it possible to create a login for each user? Or is a table of users (username, password) better? Or worse?
I was thinking about having one login user in the database that can only run specific procedures and see views, and just authenticating users against the table.
Which is the better solution?
I think it's better to use the database's built-in role management, as it keeps it DRY. Despite myths, you can still use connection pooling. It makes it much easier to audit, and to use row-level security later if you need it.
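For example, row-level security (supported in newer Postgres releases) is just a policy on the table; the table and column names here are placeholders:

-- Each database role only sees the rows it owns
ALTER TABLE account ENABLE ROW LEVEL SECURITY;

CREATE POLICY account_owner ON account
    USING (owner_role = current_user);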
Postgres can authenticate users using built-in roles, or using Kerberos, GSSAPI, SSPI or LDAP.
You can sync LDAP users/groups/roles with Postgres:
https://github.com/larskanis/pg-ldap-sync
You can delegate authentication to the web server. Log in as the web server role, and then change to the user's role, to act as that user, using SET ROLE X. This lets you use connection pooling.
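A minimal sketch of that pattern (role names are placeholders):

-- One login role for the pooled connections, one NOLOGIN role per end user
CREATE ROLE webapp LOGIN PASSWORD 'change_me';
CREATE ROLE alice  NOLOGIN;
GRANT alice TO webapp;     -- lets webapp switch into alice

-- On a pooled connection, per request:
SET ROLE alice;            -- queries now run with alice's privileges
-- ... do the user's work ...
RESET ROLE;                -- hand the connection back to the pool in a clean state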
Use one or more roles with specific permissions that access the login tables, along with the rest of the app's tables. You obviously don't want that role to be the database/table owner, to mitigate damage from an attack.
Be sure to use the pgcrypto module to salt and hash user passwords.
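Something along these lines, assuming an app_user table with username and password_hash columns:

CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Store: crypt() with a freshly generated bcrypt salt
INSERT INTO app_user (username, password_hash)
VALUES ('alice', crypt('s3cret', gen_salt('bf', 10)));

-- Verify: re-hash the supplied password using the stored hash as the salt
SELECT count(*) = 1 AS password_ok
FROM app_user
WHERE username = 'alice'
  AND password_hash = crypt('s3cret', password_hash);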
I have read about LDAP on Wikipedia and I kind of understand what it is. However, what I did not get is why so many organizations use an LDAP authentication server instead of a simple table with a user ID and hashed password.
An LDAP server surely brings more complexity to the infrastructure. What gains justify this added complexity?
LDAP is complex, but it brings a lot more to the table than just centralized authentication. For example, many email clients can be hooked in to do LDAP searches to find other users - i.e. look up an employee by name, find their email address and phone number right from your email client.
Also, it is extensible - you can define your own types of objects and store them in the directory, so it can be used to store even data that the original implementers did not have in mind when designing it.
For example, OpenSolaris (and therefore I presume Solaris) machines can grab significant amounts of their own configuration over LDAP.
While setting up LDAP is not for the faint of heart and it makes little sense for the home user / small smattering of machines, the aggregate savings over thousands or tens of thousands of computers can make it worth it if administered properly.
Using a simple table seems like a good start until you need to use that same username and password in other locations. When your other systems (email, code, server login, bug tracking/ticket systems, etc.) start getting into the mix and you need to maintain all of them, the table approach becomes unmanageable fast, because you would have to write an adapter for each of them to connect to your table for auth. Using LDAP, which is a standard and used by many projects, will make it easier for you to maintain.
A table with a name and a hash does not define an authentication scheme; it just defines storage for the credentials. Authentication involves a protocol for the user to prove their identity, like Kerberos or HTTP Digest. Organizations that deploy LDAP don't deploy it for auth per se; they use Kerberos for that. LDAP is used for things like managing the user organizational structure (OUs) or asset inventory. Once you have deployed Kerberos for authentication and authorization, it makes sense to use LDAP as your organization structure store, since most Kerberos implementations will create an LDAP directory anyway, e.g. NT domain controllers.
At an application level..
In a Windows domain environment it can make sense to use LDAP as a means to use existing Active Directory information instead of duplicating all of your authentication.
I am working on a system architecture for a fund/pension manager. We are providing two ASP.NET MVC web applications: one to allow members of the pension fund to log in and check their balances, manage their investments, etc., and another to allow employers to make contributions to the fund on the employees' (members') behalf. There are also internal applications delivered via the intranet.
We have been considering using Active Directory for storing and authenticating/authorising not just the internal users (who are already using AD for logging into the domain and resource authorisation) but also the member and employer user accounts. The member and employer user accounts would be located in a different hierarchy (maybe even a different AD instance?) from the internal users.
However I am wondering if this is the best use-case for AD... given AD is such an 'internal' resource, should it be used to hold auth details for 'external' users (the alternative being a USERS table in a database)?
The benefits are: AD is designed and optimised for holding this sort of data, ASP.NET apps integrate with AD authorisation easily, there possibly are existing tools for working with the data (password resets, etc).
What are the risks?
I would recommend against a hybrid of internal and external users. Speaking from experience, it opens up a lot of security headaches. It might be better to create separate authentication systems: one that uses AD directly against the internal domain and another that uses an ADAM directory designed simply to hold external users (i.e. internal users should be authenticated using NTLM with the AD to ensure a Kerberos-encrypted login, while forms authentication would be usable for the ADAM instance).
AD is very easy to integrate, though, and if direct integration is undesirable because of the networking lumps, you can always attempt an LDAP:// bind to achieve the same authentication results.
I think your biggest risk is that AD would not scale to the number of users you might have from an Internet app. I would use the Membership provider, unless you are trying to achieve SSO between internal and external accounts.