With a surge of applications that can be used to pull information, my SQL Server is constantly getting tapped, and there are a couple of users who keep running refreshes. Is there a way to reject a query based on a specific client_app_name and nt_username?
Alternatively, is there a way to add the combination of user and app to security so as to decline access to SQL? I.e. approve the user's access if client_app_name is Excel, but decline it if the app name is 'Mashup Engine'.
What you really need is resource governance. With it you can restrict the resources a user can consume: the users can refresh as much as they like, but they won't be able to monopolise the server. Once they exhaust their allowance, their queries simply slow down, while other users still run at full speed.
The assignment of sessions to workload groups (which map onto resource pools) is based on a classifier function run at login time, and this function can consider the user name, application name, workstation name, client IP, etc.
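A minimal T-SQL sketch of that classifier approach (the pool/group names, the 20% CPU cap, and the DOMAIN\heavyuser login are placeholders to adapt, not recommendations):

USE master;
GO
-- A capped pool plus a workload group for the refresh-happy sessions
CREATE RESOURCE POOL PoolRefreshHeavy WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP GroupRefreshHeavy USING PoolRefreshHeavy;
GO
-- The classifier runs once per login and returns a workload group name
CREATE FUNCTION dbo.fnRgClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() = N'Mashup Engine'            -- client_app_name
       AND SUSER_SNAME() = N'DOMAIN\heavyuser'  -- nt_username (placeholder)
        RETURN N'GroupRefreshHeavy';
    RETURN N'default';                          -- everyone else is unaffected
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnRgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;

Note that Resource Governor requires Enterprise edition (or Developer/Evaluation).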
What is the best way to restrict the scope of a connected app to a set of objects? My current solution is to use the "Manage user data via APIs" scope, but that still grants more access than required.
A solution I see frequently is to create a user with a restricted profile and connect with that user, but then you lose the context of actions made by individual users in the connected app, so this solution doesn't work.
Tricky; you typically don't. (Consider posting on https://salesforce.stackexchange.com/; there might be a clever way I didn't think of.)
You can flip the connected app from "all users can self-authorise" to "admin-approved users are pre-authorised" and then allow only certain profiles / permission sets to use the app. But the bulk of it is "just" enabling the connection via API and cutting it down to, say, Chatter only or OpenID identifiers. And that's already an improvement over the SOAP APIs, where you don't have scopes and the app can completely impersonate the user and do everything they can do in the UI.
Profiles/permission sets/sharing rules are "the" way, even in not immediately obvious situations like Lightning Connect, Salesforce to Salesforce, or Named Credentials access to another org.
If you can't restrict the visibility with profiles, and access to all tables the user can see is not acceptable...
you could create a series of Apex classes exposing certain queries, updates, etc. and grant profile access to these classes - but without full API access. You could even let them pass arbitrary SOQL (evil) but use with sharing, WITH SECURITY_ENFORCED, stripInaccessible + custom restrictions on tables before returning results
you could look into https://developer.salesforce.com/docs/atlas.en-us.238.0.apexref.meta/apexref/apex_class_Auth_ConnectedAppPlugin.htm although I suspect it's run only on connect, not on every request. So at best you could deny access if the user has the right to see some sensitive data; not great
if there are a few objects you need to block updates on when they come via the app - Quiddity might be the way to go. Throw an error in a trigger if the action started from a REST context?
give the Transaction Security trailhead a go. If it looks promising (there's a way to check "application" and "queried entities" according to this) - it might be a solution. You'll likely have to cough up $ though; last time I checked, the cool bits of event monitoring & transaction security were hidden behind an extra paid add-on (standalone or bundled with Platform Encryption and Field Audit Trail into the Salesforce Shield solution)
2 logins? A dedicated user for querying stuff, but inserts/updates running as your end user?
I am using Splunk (7.3.3) and I am having tremendous difficulties trying to create a dashboard that can show (or 'report') the following information:
unsuccessful admin logins
unsuccessful admin logins after duty hours (WINDOWS, ALL HOURS RIGHT NOW)
admin logins from OCONUS IPs
admin logon with account locked
attempts to logon with expired password
unsuccessful attempts to bypass login or logins not enforcing PKI, multifactor, and or modified authentication enforcement
system time outs
system memory spikes
system network traffic spikes
system errors
I feel like the majority of these would be common things that people want to track for their applications, and I was wondering if anybody would be willing to share queries they have used in Splunk 7.3.3.
For simpler stuff, such as Windows logons (event codes), I am having success using the following query:
index=windows EventCode=4624 | stats count BY TargetUserName
I also pipe in some AND NOTs to prune out noisy logs I am not interested in, but I took them out for the sake of a readable query.
For Windows admin logins, I am creating a report that runs the query
index=* source="*WinEventLog:Security" EventCode=4720 OR (EventCode=4732 Administrators)
and adding the report to the dashboard.
Some of those questions have example answers in the Splunk Security Essentials app. Others may be answered by the Splunk Essentials for Infrastructure Troubleshooting and Monitoring app or another app. See apps.splunk.com for these and other apps that may help.
Looking for OCONUS logins is a matter of using the iplocation command to map an IP address to a country and filtering out all the "United States" results, thus leaving only OCONUS logins (mostly).
Unsuccessful attempts to bypass login may or may not be reported. This depends on the specific application or device. Likewise for unenforced PKI, MFA, etc. Or, you may need to correlate logs from separate sources, such as AD for login and another source for MFA.
I need some clarification on the testing process, specifically when multiple users (100 users) log in to a web application through JMeter.
I can log in with a single valid user, but if there are 100 users of which 1 is valid and 99 are invalid, the 99 users cannot log in.
The problem is that creating 100 valid users is a difficult process.
Now, is testing login as mentioned above the same as testing with 100 valid users?
If not, is there any better process to test login with multi-users?
There is only one obvious requirement: each JMeter thread (virtual user) should use different credentials; in other words, a JMeter user must represent a real user using a real browser as closely as possible, otherwise your load testing will not make sense.
So ideally you should have 100 different sets of credentials, so that each virtual user can use its own username/password combination and have its own session. This particularly matters when your test scenario assumes some business process, i.e. one user starts a workflow, another one approves it, a third one finishes it, etc.
If each load test iteration assumes a "clean" system, you could consider automating the user creation process via a setUp Thread Group, where you can create the prerequisites (users, content, whatever). Ask around; it might be the case that you can create a user with a single REST API or database call, or that you can import users from LDAP or via a shell command.
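For example, if the application happens to keep its accounts in a database table, a JDBC Request sampler in the setUp Thread Group could seed the accounts before the test. This is only a sketch; the app_users table, its columns, and the pre-computed hash are assumptions about your system, not JMeter specifics:

-- Hypothetical schema: adjust the table/column names to your application.
-- The hash would be pre-computed once for a known test password.
INSERT INTO app_users (username, password_hash) VALUES ('loadtest_user_001', '<precomputed_hash>');
INSERT INTO app_users (username, password_hash) VALUES ('loadtest_user_002', '<precomputed_hash>');
-- ... and so on up to loadtest_user_100 (or generate the rows in a loop)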
As a last resort you can use a single set of credentials for multiple JMeter virtual users; however, in this case you may run into issues with your application, so try to avoid CRUD operations so that your test represents just browsing.
How can I tell if a user is logged in using straight SQL based on their email address?
We have a system that is highly coupled with ExpressionEngine and cannot use the Magento API in many of the EE templates.
Edit to show current login code:
// Initialise the frontend session namespace first
Mage::getSingleton('core/session', array('name' => 'frontend'));
// Then log the customer in by email/password and flag the session accordingly
$session = Mage::getSingleton('customer/session');
$session->login($ParticipantInfo['PreferedEmailAddress'], 'default_password');
$session->setCustomerAsLoggedIn($session->getCustomer());
TL;DR: As far as I know, even if session data is stored in the DB, there is no definite way of telling via plain SQL alone.
The question would also be: which user? Customer, admin, or API user? Assuming you store session data within the file system, I can think of some options:
API
For API users, have a look at the api_session table; you can join it with the api_user table, which stores the email address. However, the information in these two tables will not suffice, as only the session id and logdate are saved for a given user id, and you have no way of telling whether a session is still active.
Querying for this data would probably be something along the lines of:
SELECT *
FROM api_user
INNER JOIN api_session ON api_user.user_id = api_session.user_id
WHERE api_user.email = "<known_email>"
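If an approximation is good enough for you, you can at least filter on the session's age; the one-hour window here is an arbitrary assumption on my part, not something Magento guarantees:

SELECT api_user.email, api_session.logdate
FROM api_user
INNER JOIN api_session ON api_user.user_id = api_session.user_id
WHERE api_user.email = "<known_email>"
  AND api_session.logdate > NOW() - INTERVAL 1 HOUR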
Admin & Customer
Admin users are stored in admin_user; however, as with api_user, no session-management information is stored alongside.
Customers are stored in the customer_* tables. You can look them up in the log_visitor_online table:
SELECT *
FROM log_visitor_online
INNER JOIN customer_entity ON customer_entity.entity_id = log_visitor_online.customer_id
WHERE customer_entity.email = "<known_email>"
Again, there is no way to tell from this whether the session is still valid. EDIT: Tim showed how to do it correctly in his answer.
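In that spirit, a recency check is about the closest plain-SQL approximation; the 15-minute window below mirrors Magento's default "Online Minutes Interval" setting, and the exact columns may vary by version, so treat this as a sketch:

SELECT customer_entity.email, log_visitor_online.last_visit_at
FROM log_visitor_online
INNER JOIN customer_entity ON customer_entity.entity_id = log_visitor_online.customer_id
WHERE customer_entity.email = "<known_email>"
  AND log_visitor_online.last_visit_at > NOW() - INTERVAL 15 MINUTE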
The bad news, in general
Nothing is stored that directly says whether a user is currently logged in; only the creation date of the session is recorded. With out-of-the-box functionality you will not be able to tell accurately via SQL whether a user is currently logged in. Anything else is guesswork at best: Magento checks the user's session validity against the stored session data in the db/filesystem, so without the user's session data you can determine nothing with 100% accuracy.
The good news if you can write a module
With a little bit of work you can hook into the session management of Magento. There's a cheat sheet for the events the core ships with. You can also create your own custom events, which you may listen to and execute code upon.
The idea here would be to write a module which stores extra information on the customer (or admin or API user, respectively), or in an extra module table. You can hook into the login process to set a timestamp when the api_user/customer/admin logs in, and refresh that timestamp upon each request. If a user's timestamp hasn't been refreshed for, let's say, X seconds, you assume the user is not logged in any more. You delete the user's timestamp upon the logout event.
However, this is also not 100% accurate, and it heavily depends on what a user does in your system.
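On the database side, the module's bookkeeping could look roughly like this (the table name, the example customer id, and the 300-second staleness window are made up for illustration):

-- Custom module table tracking a last-seen heartbeat per customer
CREATE TABLE custom_login_heartbeat (
    customer_id INT UNSIGNED NOT NULL PRIMARY KEY,
    last_seen_at DATETIME NOT NULL
);

-- Your observer refreshes the row on every authenticated request
INSERT INTO custom_login_heartbeat (customer_id, last_seen_at)
VALUES (42, NOW())
ON DUPLICATE KEY UPDATE last_seen_at = NOW();

-- "Currently logged in" then becomes a staleness check
SELECT customer_id
FROM custom_login_heartbeat
WHERE last_seen_at > NOW() - INTERVAL 300 SECOND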
Anyway, I hope I could provide some insight.
Best regards,
flo
I have the following scenario.
At my company we use Oracle 11g. Authentication on the frontend is done with database users, so every user of the frontend has his own user account in the database system.
This implies that they have the ability to connect directly to the database if they know the IP address, port, etc. Of course, this is not considered a security concern because of our strict management of roles and privileges. It also implies that when a new user is added, our DBA has to create the user and assign the proper roles and privileges.
Until now, our frontend has been accessed only by our internal users. However, we are planning to add the capability for our external users to log in to our frontend.
Our estimate is about 750,000 external users, with annual increments of 50,000. These users are expected to access our system three or four times per year.
The question we have is how to grant access to these users:
By using our already-implemented authentication system, where every user has his own database user account.
By generating an authentication system for external users only, like most CMSs on the market, with tables acting as an ACL (access control list) holding the users, passwords, and roles of our 750,000 external users.
My main concern is having 750,000+ database user accounts that will be unused most of the time and could eventually make a mess with our internal users.
Has someone had a similar experience with this number of users, and how did you deal with it?
Best regards.
Off the top of my head..
Make sure that your outward-facing boxes are few in number.
For the boxes that can connect to the database, make them purely authentication or get/put for the data. Don't run the web server on the database boxes or on the same LAN segment.
If possible, encrypt communications from the client to the database so that if any of your intermediate hops get rooted, they'll only see junk.
Use a firewall to ensure that only the bare minimum can get through.
For validating authentication, don't let their 'real' password get off the web server. Keep it hashed, San Diego!
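If you go with the second option from the question (application-level accounts instead of 750,000 database accounts), the ACL tables can stay very small. A sketch, with all names as placeholders and the salting/hashing done on the web tier as suggested above:

-- Illustrative only; Oracle 11g has no identity columns, hence the sequence.
CREATE TABLE external_users (
    user_id       NUMBER        PRIMARY KEY,
    email         VARCHAR2(320) NOT NULL UNIQUE,
    password_hash RAW(64)       NOT NULL,  -- salted hash, never the plaintext
    password_salt RAW(32)       NOT NULL,
    role_id       NUMBER        NOT NULL
);
CREATE SEQUENCE external_users_seq;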