In HANA, we have one database user which is shared across 100 users (may not be best practice). We frequently run into the situation where this user gets deactivated due to connection attempts with wrong credentials.
maximum_invalid_connect_attempts is set to 6. Is it possible to find out the last application users or OS users who tried to connect with wrong credentials?
We are also thinking of increasing maximum_invalid_connect_attempts and the number of users. But before that, is there a way to find the application users/OS users who are trying to connect with wrong credentials?
Best Regards
"We have one db user which is shared across 100 users (may not be best practice)"
There is no ambiguity here: this is not just "not best practice", it is plain wrong to share one user account across multiple end users. By doing that, you abandon all account-related security, and the problem you are facing is a direct consequence of that.
To find out which OS user tried to log on to the system (successfully or not), database auditing needs to be configured. The audit action you need is VALIDATE USER, which is available as of HANA 2.
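A minimal sketch of what that could look like, assuming the audit trail goes to the default internal database table (the policy name is made up; double-check the statements against the documentation for your exact revision):

```sql
-- 1. Switch auditing on globally; by default the trail is written to an
--    internal database table.
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('auditing configuration', 'global_auditing_state') = 'true'
  WITH RECONFIGURE;

-- 2. Audit only failed validation/logon attempts.
CREATE AUDIT POLICY "FAILED_LOGON_ATTEMPTS"
  AUDITING UNSUCCESSFUL VALIDATE USER
  LEVEL CRITICAL;

ALTER AUDIT POLICY "FAILED_LOGON_ATTEMPTS" ENABLE;

-- 3. Review the trail; the AUDIT_LOG view carries the client host/IP and the
--    application user name the client set (exact column names vary slightly
--    by revision, so adjust as needed).
SELECT *
  FROM "PUBLIC"."AUDIT_LOG"
 WHERE AUDIT_POLICY_NAME = 'FAILED_LOGON_ATTEMPTS';
```

Keep in mind that the application user name is only as reliable as what the client application sets for its session, but combined with the client host and IP it is usually enough to narrow down who keeps locking the account.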
You can of course also just disable maximum_invalid_connect_attempts entirely, since you don't seem to worry about DB access security anyway.
Related
I have a unique use case. I want to create a front-end system to manage employee pay. I will have a profile for each employee, with their hourly rate stored for viewing and updates in the future.
With user permissions, we can block certain people from seeing pay in the frontend.
My challenge is that I want to keep developers from opening up the database and viewing pay.
An initial thought was to hash the pay against my password. I'm sure some reverse engineering could be used to get the pay back out, but it wouldn't be as easy.
Open to thoughts on how this might be possible.
This is by no means a comprehensive answer, but I wanted at least to point out a couple of things:
In this case, you need to control security at the server level. Trying to control security at the browser level, using JavaScript (or any similar framework like React), is fighting a losing battle. It will always be insecure, since anyone (given the necessary time and resources) will eventually find out how to break it, and will see (and maybe even modify) the whole database.
Also, if you need an environment with security, you'll need to separate developers from the Production environment. They can play in the Development environment, and maybe in the Quality Assurance environment, but by no means in the Production environment. Not even read-only access. A separate team controls Production (access, passwords, firewalls, etc.) and deploys to it -- using instructions provided by the developers.
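To make the "server level" point concrete, here is a rough sketch of how the database itself can withhold the pay column from an application account. All names are invented for illustration, and the syntax shown is MySQL-flavored; the idea carries over to other servers.

```sql
-- Hypothetical table: payroll.employees(id, name, department, hourly_rate)

-- Account used by the general profile screens: a column-level grant that
-- leaves out hourly_rate entirely.
CREATE USER 'profile_app'@'%' IDENTIFIED BY 'change-me';
GRANT SELECT (id, name, department) ON payroll.employees TO 'profile_app'@'%';

-- A separate, tightly held account is the only one allowed to read or
-- change pay.
CREATE USER 'payroll_app'@'%' IDENTIFIED BY 'change-me';
GRANT SELECT, UPDATE ON payroll.employees TO 'payroll_app'@'%';
```

Of course, this only restricts accounts that are set up this way; anyone with administrative access to the production server can still read the table, which is exactly why the separation between developers and Production described above is the real control.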
There are many SQL Server instances hosted on different servers.
All of the servers use SQL Server Authentication, so the same login is shared by many people in the organization.
How can we trace who deleted records in a particular table?
Do we need additional coding, such as triggers, or is there a built-in feature of SQL Server that provides those details?
Please help me.
Thank You.
If the deletion has already occurred and you had nothing in place to track or log it, then the chances are going to be very low; they are not zero, but not far above it.
If you use the transaction log to identify the exact deletion and the session that performed it (which we already know used the shared login), and you have successful-login security auditing enabled, you would in theory be able to trace it back to the IP address that made the deletion.
However, that is a pretty slim chance. I would suspect that the login comes from the actual application software, and you would have needed that to be running directly on the user's machine, i.e. not a 3-tier / web-based server of any flavor, but a good old thick-client app making direct connections.
That gets you an IP address and a time, but not who was logged in on that machine at that time; if the machine is shared in any form, then you also have to get login records from the machine, and so on.
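If you do want to attempt the forensic route described above, a rough sketch against the undocumented (and unsupported) fn_dblog function looks like this; the table name is a placeholder, and the relevant log records are only there until the log is truncated or overwritten:

```sql
-- Step 1: find delete operations still present in the active transaction log
-- for the table in question.
SELECT [Current LSN], [Transaction ID], AllocUnitName
FROM   fn_dblog(NULL, NULL)
WHERE  Operation = 'LOP_DELETE_ROWS'
  AND  AllocUnitName LIKE 'dbo.YourTable%';     -- placeholder table name

-- Step 2: for those transaction IDs, the BEGIN record carries the start time
-- and the SID of the login that ran the transaction. With a shared login this
-- only confirms the account you already know, but it gives you a timestamp to
-- correlate with the successful-login audit events in the error log.
SELECT [Transaction ID], [Begin Time], [Transaction SID],
       SUSER_SNAME([Transaction SID]) AS login_name
FROM   fn_dblog(NULL, NULL)
WHERE  Operation = 'LOP_BEGIN_XACT'
  AND  [Transaction ID] = N'0000:000004d2';     -- substitute an ID from step 1
```

Going forward, something purpose-built (a DELETE trigger that records HOST_NAME() and APP_NAME() alongside the removed rows, or a SQL Server Audit specification) is far more reliable than digging through the log after the fact.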
I am currently working on a file system application in C# that requires users to log in to a Perforce server.
During our analysis, we figured that having unique P4 login accounts per user is not really beneficial and would require us to purchase more licenses.
Considering that these users are contractual and will only use the system for a predefined amount of time, it's hard to justify purchasing licenses for each new contractual user.
With that said, are there any disadvantages to having a group of users share one common login account to a Perforce server? For example, we'd have X groups who share X logins.
From a client-spec point of view, will Perforce be able to detect that even though someone synced to head, the newly logged-in user (who is on another machine) also needs to sync to head? Or are all files flagged as synced to head since someone else synced already?
Thanks
The client specs are per machine, and so will work in the scenario you give.
However, Perforce licenses are strictly per person, and so you will be breaking the license deal and using the software illegally. I really would not advocate that.
In addition to the 'real' people you need licenses for, you can ask for a couple of free 'robot' accounts to support things like automatic build services, admin etc.
Perforce have had arrangements in the past for licensing of temporary users such as interns, and so what I would recommend is you contact them and ask what they can do for you in your situation.
Greg has an excellent answer and you should follow his directions first. But I would like to make a point on the technical side of sharing clients on multiple machines. This is generally a bad idea. Perforce keeps track of the contents of each client by client name only. So if you sync a client on one machine, and then try to sync the same client on another machine, then the other machine will only get the "recently" changed files and none of the changes that were synced on the first machine.
The result of this is that you have to do a lot of force syncing, or keep track of the changelists you sync to and do some flushing and then syncing.
I want to build my site so that the forum pages use a forum MySQL user that has privileges on mydb.forum_table and mydb_forum_table2,
the profile page uses a profile user that has access to mydb.users and mydb.profiefields,
and so on for the photo gallery, blog, chat, and the rest.
Is this the right way to do it? I'm thinking of the principle of least privilege, but I wonder why I haven't seen other big, well-known CMSs do it.
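To make the question concrete, the intended setup would look roughly like this in MySQL (a sketch only, using the table names from above):

```sql
-- One MySQL account per module, each restricted to that module's tables.
CREATE USER 'forum_user'@'localhost'   IDENTIFIED BY 'change-me';
CREATE USER 'profile_user'@'localhost' IDENTIFIED BY 'change-me';

GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.forum_table  TO 'forum_user'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.forum_table2 TO 'forum_user'@'localhost';

GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.users        TO 'profile_user'@'localhost';
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.profiefields TO 'profile_user'@'localhost';
```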
One of the critical resources for a database is connections. Generally databases are configured with a maximum number of connections, and each time a process needs to make a query, it needs a connection to do so. Database connections are expensive objects to create: they take time and memory, and most importantly, connections are established for a specific user. The generally accepted "best practice" for web applications is for the application, when it needs a database connection, to check a pool for an available connection. If there's a free connection in the pool, the web app will pull that connection, use it as necessary, and then return it to the pool for reuse. If there are no free connections, the app will create a new one, use it, and then place it in the pool for reuse.
If you're dealing with an application that uses multiple database users (for privilege management) and you need to use connection pooling, your application will need to establish many pools (one for each user), which will usually result in your application acquiring at least one connection for each database user it is using. This is inefficient, error prone, and needlessly complex.
If you're truly intent on limiting your application's access to data, then you should probably investigate how much support your database has for views. If views are well supported, then you can create a view (or views) customized to the needs of any given portion of your application.
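A sketch of what that could look like (view, column and account names are invented for illustration):

```sql
-- The view exposes only what one part of the application needs; the single
-- application account is granted access to the view rather than to the
-- underlying tables.
CREATE VIEW mydb.v_forum_posts AS
SELECT post_id, thread_id, author_id, body, created_at
FROM   mydb.forum_table;

GRANT SELECT ON mydb.v_forum_posts TO 'app_user'@'localhost';
```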
My recommendation would be to stick to a single database user, and then use the time you just freed up to do more debugging of your application. You'll get better results, and will aggravate fewer DBAs.
If I understand correctly, the question is about implementing module access control based on the permissions on the tables that are used by the module.
I think it would be complicated to maintain (the link between modules and tables), and slow to have to check the permissions on each table accessed by the module.
I am building a web application that will essentially allow authenticated users access to mass amounts of data, but I don't want users to only have read-only access. If there are records missing fields but a user has found information to fill these fields or correct already populated data, I would like the user to be able to do so.
However, I'm worried about mean-spirited folks coming in and simply clearing out records out of sheer boredom and am wondering what the best way to prevent this from happening would be.
My first thought is to have users submit edits, and have a page devoted to batch approvals of these edits after myself or trusted individuals skim over the page. Of course, this would be time consuming (especially as the database grows larger), and I'm curious to know of any better ways to give users editing privileges.
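For what it's worth, the submit-then-approve idea could be backed by something like the following table (a MySQL-flavored sketch; every name here is made up):

```sql
-- Proposed edits live separately from the real data and are applied only
-- after a trusted reviewer approves them.
CREATE TABLE pending_edits (
    id            BIGINT AUTO_INCREMENT PRIMARY KEY,
    record_id     BIGINT       NOT NULL,           -- row the edit targets
    field_name    VARCHAR(64)  NOT NULL,           -- column being corrected
    old_value     TEXT,                            -- value at submission time
    new_value     TEXT,                            -- proposed replacement
    submitted_by  BIGINT       NOT NULL,           -- user who proposed it
    submitted_at  DATETIME     NOT NULL DEFAULT CURRENT_TIMESTAMP,
    status        ENUM('pending','approved','rejected') NOT NULL DEFAULT 'pending',
    reviewed_by   BIGINT       NULL,
    reviewed_at   DATETIME     NULL
);
```

Keeping the old value alongside the proposed one also gives you a cheap rollback path if something slips through review.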
As you are in Rails, there are a number of plugins that provide auditing and versioning of records:
http://github.com/andersondias/acts_as_auditable
http://github.com/laserlemon/vestal_versions
These should let you build something that allows edits but still support reversions in the worst case scenario.
Support rollbacks, as wikis do, to undo malicious edits.