If I enable SSH authentication for an Azure DevOps organization, will that explicitly enforce (only allow) SSH key pairs for Git authentication requests? Or will existing OAuth tokens still work as intended after enabling SSH keys? In my scenario, SSH authentication is currently disabled and third-party application access via OAuth is enabled going into this question. Thanks for your time. (Options found under: Azure DevOps > Organization Settings (bottom left) > Policies (left middle))
I am assuming enabling both allows for either to be used, but want to confirm before potentially blocking developers from being able to push code.
I would like all developer workstations to use SSH key authentication instead of OAuth, and don't want to put a wrench in their system in the meantime. Also I much prefer using private key authentication simply because of the inherent security benefits of using asymmetric cryptography.
It's safe to enable. We have it enabled in our organization, and almost none of us actually have added SSH keys. We're pretty much exclusively OAuth.
By default, your organization allows access for all authentication methods (OAuth, SSH authentication, PATs). You can limit access, but you must specifically restrict access for each method. When you deny access to an authentication method, no application can use that method to access your organization. Any app that previously had access gets an authentication error and loses access to your organization.
So enabling SSH authentication will not affect OAuth authentication.
Here is the official document you can refer to.
So, I'm doing research to find out whether Keycloak is a good alternative to implement in the environment I'm working in.
I'm using LDAP to manage users at my workplace. I was wondering whether there is a way to use Keycloak as the auth service for all upcoming systems and some of the existing ones. We are currently managing this with an IdP that we need to improve or replace, and there are also some systems that use their own login (this will eventually change).
The main problem I've run into is that Keycloak synchronizes against LDAP, and I don't want user data to be stored in Keycloak, except maybe login data. User data is planned to be kept only in LDAP's database in case any user data needs to be updated.
So is there a way to use Keycloak only as an auth service, fetching user credentials from LDAP on every auth request?
P.S.: maybe I am mistaken about the meaning of what an auth service is and what an IdP is.
Actually, it is not necessary for LDAP users to be synced to Keycloak.
Keycloak supports both options:
importing and optionally syncing users from LDAP to Keycloak,
or
always getting the user info from LDAP directly.
But Keycloak will always create some basic federated user in its database (e.g. for keeping up a session when using OpenID Connect - but you should not really need to care about that).
As far as I know (though I've not used this myself) you could also use Keycloak to maintain the LDAP users' data and write changes back to LDAP (see "Edit Mode" in the Keycloak documentation).
Check the Keycloak documentation regarding LDAP to get more information: https://www.keycloak.org/docs/6.0/server_admin/#_ldap
Besides the user-data topic, Keycloak provides a lot of different protocols (like SAML and OpenID Connect) for authenticating your services. So you could use different/multiple authentication protocols depending on your applications, with just one LDAP backend.
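To make the "no import" setup above concrete, here is a minimal sketch using the Keycloak Java admin client. The server URL, realm name, admin credentials, and LDAP connection details are placeholders, and the config map is not exhaustive (a real setup also needs attribute mappings, object classes, etc.); the two settings that matter for this question are editMode and importEnabled.

```java
// Minimal sketch: register an LDAP user federation provider that never imports
// users into Keycloak's database. All URLs, credentials, and DNs are placeholders.
import org.keycloak.admin.client.Keycloak;
import org.keycloak.common.util.MultivaluedHashMap;
import org.keycloak.representations.idm.ComponentRepresentation;

public class LdapFederationSetup {
    public static void main(String[] args) {
        Keycloak kc = Keycloak.getInstance(
                "https://keycloak.example.com/auth",  // placeholder server URL (Keycloak 6.x path)
                "master", "admin", "admin-password",  // admin realm + placeholder credentials
                "admin-cli");

        String realmId = kc.realm("myrealm").toRepresentation().getId();

        ComponentRepresentation ldap = new ComponentRepresentation();
        ldap.setName("corporate-ldap");
        ldap.setProviderId("ldap");
        ldap.setProviderType("org.keycloak.storage.UserStorageProvider");
        ldap.setParentId(realmId);

        MultivaluedHashMap<String, String> cfg = new MultivaluedHashMap<>();
        cfg.putSingle("connectionUrl", "ldaps://ldap.example.com:636");   // placeholder
        cfg.putSingle("usersDn", "ou=people,dc=example,dc=com");
        cfg.putSingle("bindDn", "cn=keycloak,dc=example,dc=com");
        cfg.putSingle("bindCredential", "bind-password");
        cfg.putSingle("editMode", "READ_ONLY");      // never write back to LDAP
        cfg.putSingle("importEnabled", "false");     // don't copy users into Keycloak's DB
        ldap.setConfig(cfg);

        kc.realm("myrealm").components().add(ldap);
    }
}
```

With importEnabled set to false, Keycloak only keeps the lightweight federated-user stub mentioned above and goes to LDAP for the credentials on every login.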
So I want to have single sign-on across all the products using an auth server, and not only for employees. Can Keycloak be used for that, the way Auth0 is?
There are also some advantages to Keycloak:
Keycloak is also available with support if you buy JBoss EAP (see http://www.keycloak.org/support.html). This might be cheaper than the enterprise version of Auth0. If you want to use a custom DB, you need the enterprise version of Auth0 anyway.
Keycloak has features which are not available in Auth0:
Fine-grained permissions, role-based access control (RBAC), and attribute-based access control (ABAC) configurable via the web admin console or custom code, or you can write your own Java and JavaScript policies. This can also be implemented in Auth0 via user rules (custom JavaScript) or the Authorization plugin (no code, fewer possibilities). In Keycloak you can do more without code (there are more types of security policies available out of the box, e.g. based on role, groups, current time, or the origin of the request) and there is good support for custom-developed access control modules. Some more detailed research would be interesting here to compare them.
Keycloak also offers a policy enforcer component, which you can connect to from your backend to verify whether the access token is sufficient to access a given resource. It works best with Java web servers, or you can just deploy an extra Java server with the Keycloak adapter which acts as a gatekeeper and decides which requests go through and which are blocked. All this happens based on the rules you configure via the Keycloak web interface. I am not sure such a policy enforcer is included in Auth0. On top of that, Keycloak can tell your client application which permissions it needs when it wants to access a given resource, so you do not need to code this in your client. The workflow can be:
1. The client application wants to access resource R.
2. The client application asks the Keycloak policy enforcer which permission it needs to access resource R.
3. The Keycloak policy enforcer tells the client application which permission P it needs.
4. The client application requests an access token with permission P from Keycloak.
5. The client makes a request to the resource server with the access token containing permission P attached.
6. The policy enforcer guarding the resource server can ask Keycloak whether permission P is enough to access resource R.
7. When Keycloak approves, the resource can be accessed.
Thus, more can be centralized and configured in Keycloak. With this workflow, your client and resource server can outsource more security logic and code to Keycloak. In Auth0 you probably need to implement steps 2, 3, and 6 on your own.
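As a concrete illustration of step 4 (hedged: the realm, client id, and resource/scope names below are placeholders, not anything from this answer), this is roughly how a client can exchange its access token for an RPT using Keycloak's UMA grant at the token endpoint:

```java
// Sketch of step 4: exchange an existing access token for an RPT carrying a
// permission on resource R via Keycloak's UMA grant. All names are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RptRequest {
    public static void main(String[] args) throws Exception {
        String tokenEndpoint =
            "https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/token";
        String accessToken = args[0];   // token obtained from a normal login

        String form = "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket"
                    + "&audience=my-resource-server"   // client id of the resource server
                    + "&permission=resource-r#read";   // resource R, scope "read"

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tokenEndpoint))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

        // On success the JSON body contains an "access_token" (the RPT) whose
        // authorization claim lists the granted permissions.
        System.out.println(response.body());
    }
}
```

The decoded RPT then carries the granted permissions, which is what the policy enforcer checks in steps 6 and 7.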
Both Auth0 and Keycloak should be able to achieve your goal - assuming you want only social (Facebook, Google, etc.) and/or username & password authentication?
Auth0 is the less risky option; Keycloak is good for non-commercial use and for cases where you can afford production outages without a global 24x7 support team. Here are a few other reasons why I'd recommend Auth0: the documentation is world class, they have quickstart samples so you can get up and running in minutes, and there is easy access to more advanced options - passwordless authentication, MFA, anomaly detection, several nines of reliability, rate limiting, an extensive management API, extensions for everything (e.g. exporting logs to a log aggregator), and so on. Anyhow, good luck with your project; obviously what suits best may simply come down to your own project requirements.
I should add that if you are doing mobile, Auth0 has put a lot of effort into adding the specialised security flows needed to target mobile (native/hybrid) apps, for instance PKCE usage with the /authorize endpoint. Please bear that in mind, as I am not certain how Keycloak handles this - a lot of IDMs still get this wrong today.
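For reference, PKCE itself (RFC 7636) is small enough to sketch. This illustrative helper is not tied to Auth0 or Keycloak; it just shows the verifier/challenge pair a native app would use with an /authorize endpoint:

```java
// Illustrative PKCE helper (RFC 7636): generate a code_verifier and derive the
// code_challenge that a native/mobile app sends to the /authorize endpoint.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class Pkce {
    public static String newCodeVerifier() {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static String codeChallenge(String verifier) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(verifier.getBytes(StandardCharsets.US_ASCII));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        String verifier = newCodeVerifier();        // kept on the device, sent later to the token endpoint
        String challenge = codeChallenge(verifier); // sent with /authorize as code_challenge=...&code_challenge_method=S256
        System.out.println(verifier + "\n" + challenge);
    }
}
```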
I have integrated Milton WebDAV with Hadoop HDFS and am able to read/write files to the HDFS cluster.
I have also added the authorization part using Linux file permissions, so only authorized users can access the HDFS server; however, I am stuck at the authentication part.
It seems Hadoop does not provide any built-in authentication, and users are identified only through the unix 'whoami' command, meaning I cannot set a password for a specific user.
ref: http://hadoop.apache.org/common/docs/r1.0.3/hdfs_permissions_guide.html
So even if I create a new user and set permissions for it, there is no way to tell whether the user is authenticated or not. Two users with the same username but different passwords have access to all the resources intended for that username.
I am wondering if there is any way to enable user authentication in HDFS (either built into a newer Hadoop release or using a third-party tool like Kerberos, etc.).
Edit:
Ok, I have checked and it seems that Kerberos may be an option, but I just want to know if there is any other alternative available for authentication.
Right now Kerberos is the only supported "real" authentication protocol. The "simple" protocol completely trusts the client's whoami information.
To set up Kerberos authentication I suggest this guide: https://ccp.cloudera.com/display/CDH4DOC/Configuring+Hadoop+Security+in+CDH4
msktutil is a nice tool for creating Kerberos keytabs on Linux: https://fuhm.net/software/msktutil/
When creating service principals, make sure you have correct DNS settings, i.e. if you have a server named "host1.yourdomain.com", and that resolves to IP 1.2.3.4, then that IP should in turn resolve back to host1.yourdomain.com.
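A quick, hedged way to sanity-check that forward/reverse DNS requirement from code (the hostname below is just the example host from this answer):

```java
// Quick sanity check for the forward/reverse DNS requirement described above.
// "host1.yourdomain.com" is the example host from the answer - substitute your own.
import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        String host = "host1.yourdomain.com";

        InetAddress addr = InetAddress.getByName(host);                     // forward lookup
        String ip = addr.getHostAddress();
        String reverse = InetAddress.getByName(ip).getCanonicalHostName(); // reverse lookup

        System.out.println(host + " -> " + ip + " -> " + reverse);
        if (!host.equalsIgnoreCase(reverse)) {
            System.out.println("WARNING: reverse DNS does not match - Kerberos will likely fail");
        }
    }
}
```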
Also note that Kerberos Negotiate authentication headers might be larger than Jetty's built-in header size limit; in that case you need to modify org.apache.hadoop.http.HttpServer and add ret.setHeaderBufferSize(16*1024); in createDefaultChannelConnector(). I had to.
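For orientation, the patched method could look roughly like the sketch below. This assumes the Hadoop 1.x HttpServer built on Jetty 6's SelectChannelConnector; the existing configuration in the method varies between versions, and the only line that matters here is the setHeaderBufferSize call.

```java
// Sketch of the change described above (assumption: Hadoop 1.x HttpServer on Jetty 6).
import org.mortbay.jetty.Connector;
import org.mortbay.jetty.nio.SelectChannelConnector;

public class HttpServerPatch {
    public static Connector createDefaultChannelConnector() {
        SelectChannelConnector ret = new SelectChannelConnector();
        // ... keep the existing connector configuration from the Hadoop source as-is ...
        ret.setHeaderBufferSize(16 * 1024);  // room for large Kerberos Negotiate headers
        return ret;
    }
}
```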
I'm after some theoretical help on the best approach for allowing a SaaS product to authenticate users against a tenant's internal Active Directory (or other LDAP) server.
The application is hosted, but a requirement exists that tenants can delegate authentication to their existing user management provider such as AD or OpenLDAP etc. Tools such as Microsoft Online's hosted exchange support corporate AD sync.
Assuming the client doesn't want to forward port 389 to their domain controller, what is the best approach for this?
After doing some research and talking to a few system admins who would be managing this, we've settled on two options, which should satisfy most people. I'll describe them here for those who were also interested in the outcome.
Authentication service installed in the organisation's DMZ
If users wish to utilise authentication with an on-premises Active Directory server, they will be required to install an agent in their DMZ and open port 443 to it. Our service will be configured to hit this agent to perform authentication.
This service will sit in the DMZ and receive authentication requests from the SaaS application. The service will attempt to bind to Active Directory with these credentials and return a status indicating success or failure (a minimal sketch of this check follows below).
In this instance the application's forms-based authentication will not change, and the user will not be aware of the authentication happening behind the scenes.
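A minimal sketch of that bind check, assuming plain JNDI and a placeholder domain controller address (a production agent would also want timeouts, LDAPS, and logging):

```java
// Minimal sketch of the credential check the DMZ agent performs: try to bind to
// Active Directory with the user's credentials and report success or failure.
// The DC host and UPN format are placeholders for this example.
import java.util.Hashtable;
import javax.naming.AuthenticationException;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class AdBindCheck {
    public static boolean authenticate(String userPrincipalName, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://dc1.corp.example.com:389"); // placeholder DC
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userPrincipalName);   // e.g. jane@corp.example.com
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            new InitialDirContext(env).close();   // bind succeeded
            return true;
        } catch (AuthenticationException e) {
            return false;                         // wrong username/password
        } catch (Exception e) {
            throw new RuntimeException("Directory unreachable", e);
        }
    }
}
```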
OpenID
Similar to the first approach, a service will be installed in the client's DMZ, and port 443 will be opened to it. This will be an OpenID provider.
The SaaS application will be an OpenID consumer (it already is, for Facebook, Twitter, Google etc. login).
When a user wishes to log in, the OpenID provider will be presented, asking them to enter their user name and password. This login screen would be served from the client's DMZ. The user would never enter their username or password into the SaaS application.
In this instance, the existing forms-based authentication is replaced with the OpenID authentication from the service in the client's DMZ.
A third option that we're investigating is Active Directory Federation Services (ADFS), but this is proprietary to Active Directory. The other two solutions support any LDAP-based authentication across the internet.
Perhaps this might help…
This vendor, Stormpath, offers a service providing user authentication and user account management, with hookups to your customers' on-premises directories.
What about an LDAPS connection to the customer's user directory? They can firewall this off so that only your servers have access if they're concerned about it being public. Since it's SSL, it's secure end to end. All you need from them is the certificate from their issuing CA (if it's not a public one). I struggled to get this working for an internal web project in the DMZ, and there's a real lack of guides online, so I wrote one up when I got it working:
http://pcloadletter.co.uk/2011/06/27/active-directory-authentication-using-ldaps/
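For illustration, a bare-bones LDAPS bind with a customer-issued CA might look like this in Java/JNDI; the host, bind DN, and truststore path are placeholders:

```java
// Sketch of the LDAPS variant: the differences from a plain LDAP bind are the
// ldaps:// URL (port 636) and a truststore containing the customer's issuing CA.
// Host names, DNs, and file paths are placeholders.
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class LdapsBind {
    public static void main(String[] args) throws Exception {
        // Trust the customer's internal CA (skip if their cert chains to a public CA).
        System.setProperty("javax.net.ssl.trustStore", "/etc/myapp/customer-ca.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldaps://ldap.customer.example.com:636");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=service,dc=customer,dc=example,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, "service-password");

        new InitialDirContext(env).close();
        System.out.println("LDAPS bind succeeded");
    }
}
```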
Your best bet is to implement SAML authentication for your SaaS application and then sign up with identity providers like Okta or OneLogin. Once that's done, you can also connect it with ADFS to provide single sign-on for your web application through Active Directory.
I'm just doing this research myself and this is what I've come across so far; I'll have more updates once the implementation is done. Hope this gives you enough keywords to do another Google search.
My understanding is that there are three possible solutions:
1. Installing something on the domain controller to capture all user changes (additions, deletions, password changes) and send updates to the remote server. Unfortunately there's no way for the website to know the initial user passwords - only new ones once they are changed.
2. Provide access for the web server to connect to your domain controller via LDAP/WIF/ADFS. This would probably mean opening incoming ports in the company's firewall to allow a specific IP.
3. Otherwise, bypass usernames/passwords and use email-based authentication instead. Users would just have to authenticate via email once every 3-6 months for each device.
I have to begin implementing this for an upcoming project and I'm seriously leaning towards option #3 for simplicity.
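If it helps, option #3 can be sketched as an HMAC-signed, time-limited login token that gets emailed to the user as a link; the secret, the 15-minute window, and the token format below are illustrative choices, not a standard:

```java
// Sketch of option #3 (email-based login): mint an HMAC-signed, time-limited token,
// email it to the user as a link, and verify it when the link is clicked.
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class EmailLoginToken {
    private static final byte[] SECRET =
            "replace-with-a-long-random-secret".getBytes(StandardCharsets.UTF_8);

    public static String issue(String email) throws Exception {
        long expires = System.currentTimeMillis() + 15 * 60 * 1000; // 15-minute window
        String payload = email + "|" + expires;
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes(StandardCharsets.UTF_8)) + "." + sign(payload);
    }

    public static String verify(String token) throws Exception {
        String[] parts = token.split("\\.");
        String payload = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        String[] fields = payload.split("\\|");
        boolean fresh = Long.parseLong(fields[1]) > System.currentTimeMillis();
        boolean valid = sign(payload).equals(parts[1]);
        return (fresh && valid) ? fields[0] : null;   // email address if the link is still good
    }

    private static String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }
}
```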
Several sites, including this one, are using OpenID to authenticate their users. And of course, OpenID is a good solution to manage user accounts, simply by linking them to their OpenID account.
But are there similar solutions that could be used for desktop applications? I know there's CardSpace, where you create a custom ID card to contain your identity and optionally protect it with a PIN code. But are there more alternatives for authentication on a desktop system, or on systems within a local intranet environment?
And yes, I could write my own system where I keep a list of usernames and (hashed) passwords and build my own login system, but I just hate reinventing the wheel, especially when I need to keep it secure.
I would recommend that you look into the option of building an STS (using WIF, a.k.a. Geneva) and using (active) WS-Federation in your Windows app. Or, if you can wait that long, just use Geneva Server when it is released.
We have a solution that works more or less like this:
The desktop tool prompts the user for an ID/password.
The desktop tool sends the ID/password over an encrypted (SSL) channel to the server.
The server initiates an HTTP request to a known URL of a login form and submits the username and password as if they were form fields.
If the HTTP server responds appropriately, the server accepts the client as authenticated.
The target of that HTTP request should be tied to whatever single sign-on system that you use for the web application environment. In our case it happens not to be OpenID but it could be.
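A hedged sketch of that server-side form check (the login URL, field names, and the "responds appropriately" test are placeholders that depend on your single sign-on system):

```java
// Sketch: POST the credentials to the known login-form URL as form fields and treat
// a successful response as "authenticated". URL, field names, and the success check
// are placeholders - adjust them to your SSO system.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class FormLoginCheck {
    public static boolean authenticate(String username, String password) throws Exception {
        String form = "username=" + URLEncoder.encode(username, StandardCharsets.UTF_8)
                    + "&password=" + URLEncoder.encode(password, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://sso.example.com/login"))   // placeholder login form
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newBuilder()
                .followRedirects(HttpClient.Redirect.NEVER)         // a redirect often signals success
                .build()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // "Responds appropriately" is site-specific; here we accept a redirect that
        // sets a session cookie as the success signal.
        return response.statusCode() == 302
                && response.headers().firstValue("Set-Cookie").isPresent();
    }
}
```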