Single Sign-On - Best practice - authentication

I need to build a scalable single sign-on mechanism for multiple sites. Scenario:
Central web application to register/manage account (Server in Europe)
Several web applications that need to authenticate against my user database (Servers in US/Europe/Pacific region)
I am using MySQL as database backend. The options I came up with are either replicating the user database across all servers (data security?) or allowing the servers to directly connect to my MySQL instance by explicitly allowing connections from their IPs in my.cnf (high load? single point of failure?).
What would be the best way to provide a scalable and low-latency single sign-on for all web applications? In terms of data security would it be a good idea to replicate the user database across all web applications?
Note: All web applications provide an API which users can use to embed widgets into their own websites. These widgets work through a token auth mechanism which will again need to authenticate against my user database.

I would not integrate the authentication at the database level, i.e. by replicating the DB or allowing access from the other servers; this might become hard to maintain. I would prefer a loosely coupled approach: expose a simple service on your central server that lets the other app servers run authentication requests.
You should look into the following issues (probably more):
How to avoid cleartext transmission of passwords between servers
You probably can't throttle the service by source IP if a network of application servers authenticates all of its users from the same address, so you might want to restrict access to known clients to keep rogue machines from mass-probing for valid accounts.
How to centrally enforce things such as session expiration
How to handle / avoid service downtime
Techniques that might be helpful:
Cryptographic CRAM (to avoid password transmission; see the sketch after this list)
Certificates (to prove the clients' identity)
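A minimal sketch of such a CRAM-style exchange (Python purely for illustration; the key-derivation parameters and all names are placeholders):

```python
# CRAM-style challenge-response: the password (or a key derived from it)
# never crosses the wire; only a random challenge and an HMAC response do.
import hashlib
import hmac
import secrets

def derive_key(password: str, salt: bytes) -> bytes:
    # Both sides hold this derived key, provisioned out of band.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Central server: issue a fresh one-time challenge per attempt.
challenge = secrets.token_bytes(32)

# Client (app server): prove knowledge of the key without sending it.
def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Central server: verify the response in constant time.
def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(key, challenge), response)
```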
Alternatively, you might want to have the clients use the central service to obtain a token that is then promoted to and verified by the target server. There's architectures that work similarly (e. g. Kerberos ticket servers) which may serve as inspiration.
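As a rough sketch of that ticket-style flow, assuming a per-application-server secret shared with the central service (hand-rolled here only to show the moving parts; prefer an established standard such as OAuth2/JWT in production):

```python
# The central server issues a short-lived, audience-bound token; the target
# app server verifies it offline with its own key. All names are placeholders.
import base64
import hashlib
import hmac
import json
import time

APP_SERVER_KEYS = {"app-us-1": b"per-server-secret"}  # provisioned out of band

def issue_token(user_id: str, audience: str, ttl: int = 300) -> str:
    # Central service: sign the claims with the target server's key.
    claims = {"sub": user_id, "aud": audience, "exp": int(time.time()) + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(APP_SERVER_KEYS[audience], body, hashlib.sha256).digest()
    return (body + b"." + base64.urlsafe_b64encode(sig)).decode()

def verify_token(token: str, audience: str) -> dict:
    # Target app server: check signature and expiry without calling home.
    body, sig = token.encode().split(b".", 1)
    expected = hmac.new(APP_SERVER_KEYS[audience], body, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig), expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["aud"] != audience or claims["exp"] < time.time():
        raise PermissionError("token expired or wrong audience")
    return claims
```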

You should go for OAuth2 or SAML.

Related

How can I handle local authentication when the SSO website is down?

We are developing an SSO application using IdentityServer4 as the authentication (not authorization) infrastructure for other (client) websites in our company. One of our main concerns is a failure of the SSO website. In this situation, what should we consider in order to minimize the impact on the clients?
For example, we want to create a local login page in each application and have each application authenticate users through an OTP mechanism. Is this enough, or are there better solutions?
For security reasons, you should not try to add a local login; it will just make things more complicated and probably less secure.
Your tokens have a certain lifetime (one hour by default), so if your SSO goes down for a short while, your clients can continue to operate (unless you query your SSO on every request).
If you want to make it more reliable, you need to start looking at load balancers and running multiple instances of IdentityServer. That works as long as you take care to use the same signing keys on all instances.
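To illustrate why this lets clients ride out a short outage: token validation can happen entirely locally once the signing keys are known. A minimal sketch (assuming Python's PyJWT library purely for illustration; an IdentityServer4 client would normally use the .NET middleware, and the URL and audience below are placeholders):

```python
# Once the signing keys have been fetched, verifying signature and expiry
# needs no call to the SSO server, so a short outage does not log anyone out.
import jwt
from jwt import PyJWKClient

jwks_client = PyJWKClient(
    "https://sso.example.com/.well-known/openid-configuration/jwks"
)

def validate(token: str) -> dict:
    # Looks up (and, depending on the library version, caches) the key that
    # signed this token, then validates the token locally.
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="api1",  # hypothetical API resource name
    )
```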

How to protect secrets properly?

I am using the HERE API in both frontend and backend. If I try to put my app_id and app_code into the frontend code, it will be available to anyone seeing my site.
I can try to create a domain whitelist and put my domain in this. But still, if I set the HTTP header "Referer" to my domain, I am able to access the API from any IP.
So, what do I do?
The Difference Between WHO and WHAT is Accessing the API Server
Before I dive into your problem I would like to first clear a misconception about WHO and WHAT is accessing an API server.
To better understand the differences between the WHO and the WHAT accessing an API server, the original article uses a picture contrasting the intended communication channel with the channels actually possible; replace the mobile app in it with a web app and the analogy still holds.
The Intended Communication Channel represents the web app being used as you expected: by a legit user without any malicious intentions, communicating with the API server from the browser, not using Postman or any other tool to perform a man-in-the-middle (MitM) attack.
The actual channel may represent several different scenarios: a legit user with malicious intentions using cURL or a tool like Postman to perform the requests, or a hacker using a MitM attack tool like mitmproxy to understand how the web app and the API server communicate in order to replay requests or even automate attacks against the API server. Many other scenarios are possible, but we will not enumerate each one here.
I hope that by now you may already have a clue why the WHO and the WHAT are not the same, but if not it will become clear in a moment.
The WHO is the user of the web app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAuth2 flows.
OAuth
Generally, OAuth provides to clients a "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.
OpenID Connect
OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.
While user authentication may let the API server know WHO is using the API, it cannot guarantee that the requests originated from WHAT you expect: the browser where your web app should be running, with a real user.
Now we need a way to identify WHAT is calling the API server, and here things become trickier than most developers may think. The WHAT is the thing making the request to the API server. Is it really a genuine instance of the web app, or is it a bot, an automated script, or an attacker manually poking around the API server with a tool like Postman?
To your surprise, you may end up discovering that it is one of your legit users manually manipulating requests, or an automated script trying to game the service provided by the web app.
Well, to identify the WHAT, developers tend to resort to an API key that is usually sent in the headers by the web app. Some developers go the extra mile and compute the key at run-time inside obfuscated JavaScript, making it a runtime secret, but it can still be recovered with deobfuscation tools, or by inspecting the traffic between the web app and the API server with the browser's F12 developer tools or MitM tooling.
The above write-up was extracted from an article I wrote, entitled WHY DOES YOUR MOBILE APP NEED AN API KEY?. While written in the context of a mobile app, the overall idea is still valid for a web app. If you wish, you can read the article in full here; it is the first in a series of articles about API keys.
Your Problem
I can try to create a domain whitelist and put my domain in this. But still, if I set the HTTP header "Referer" to my domain, I am able to access the API from any IP.
So this seems to be related to using the HERE admin interface, and I cannot help you there...
So, what do I do?
I am using HERE API in both frontend and backend.
The frontend MUST always delegate access to third-party APIs to a backend under the control of the owner of the frontend; this way you don't expose the credentials for those third-party services in your frontend.
The difference is that it is now under your direct control how you protect against abuse of your HERE API access: you no longer expose the HERE app_id and app_code to the public, all access goes through your backend, where the secrets are hidden from public prying eyes, and where you can easily monitor and throttle usage before your HERE API bill skyrockets.
If I try to put my app_id and app_code into the frontend code, it will be available to anyone seeing my site.
So to recap: the only credentials you SHOULD expose in your frontend are the ones for accessing your own backend (the usual api-key and Authorization tokens, or whatever you want to call them), not the app_id or app_code for the HERE API. This approach leaves you with only one access point to protect instead of several.
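A minimal sketch of that delegation, assuming a Python/Flask backend (the endpoint path, the HERE URL, and the environment-variable names are illustrative only):

```python
# The browser calls /api/geocode on *your* backend; only the backend holds
# the HERE credentials and talks to the HERE API.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
HERE_APP_ID = os.environ["HERE_APP_ID"]      # never shipped to the browser
HERE_APP_CODE = os.environ["HERE_APP_CODE"]

@app.route("/api/geocode")
def geocode():
    # Authenticate/throttle the *frontend* here (session, api-key, token)
    # before spending money on the third-party call.
    query = request.args.get("q", "")
    resp = requests.get(
        "https://geocoder.api.here.com/6.2/geocode.json",  # illustrative URL
        params={"searchtext": query,
                "app_id": HERE_APP_ID, "app_code": HERE_APP_CODE},
        timeout=5,
    )
    return jsonify(resp.json()), resp.status_code
```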
Defending an API Server
As I already said, but want to reinforce: a web app should only communicate with an API server that is under your control, and any access to third-party API services must be done by that same API server. This way you limit the attack surface to one place, where you can employ as many layers of defence as what you are protecting is worth.
For an API serving a web app, you can employ several layers of defence, starting with reCAPTCHA V3, followed by a Web Application Firewall (WAF) and finally, if you can afford it, a User Behavior Analytics (UBA) solution.
Google reCAPTCHA V3:
reCAPTCHA is a free service that protects your website from spam and abuse. reCAPTCHA uses an advanced risk analysis engine and adaptive challenges to keep automated software from engaging in abusive activities on your site. It does this while letting your valid users pass through with ease.
...helps you detect abusive traffic on your website without any user friction. It returns a score based on the interactions with your website and provides you more flexibility to take appropriate actions.
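The server-side half of reCAPTCHA is a single call to Google's siteverify endpoint. A minimal sketch (assuming the Python requests library; the secret and the score threshold are placeholders):

```python
# reCAPTCHA v3 returns a score from 0.0 (likely a bot) to 1.0 (likely human);
# pick a threshold appropriate for the endpoint being protected.
import requests

def verify_recaptcha(client_token: str, secret: str) -> bool:
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret, "response": client_token},
        timeout=5,
    ).json()
    return resp.get("success", False) and resp.get("score", 0.0) >= 0.5
```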
WAF - Web Application Firewall:
A web application firewall (or WAF) filters, monitors, and blocks HTTP traffic to and from a web application. A WAF is differentiated from a regular firewall in that a WAF is able to filter the content of specific web applications while regular firewalls serve as a safety gate between servers. By inspecting HTTP traffic, it can prevent attacks stemming from web application security flaws, such as SQL injection, cross-site scripting (XSS), file inclusion, and security misconfigurations.
UBA - User Behavior Analytics:
User behavior analytics (UBA) as defined by Gartner is a cybersecurity process about the detection of insider threats, targeted attacks, and financial fraud. UBA solutions look at patterns of human behavior, and then apply algorithms and statistical analysis to detect meaningful anomalies from those patterns—anomalies that indicate potential threats. Instead of tracking devices or security events, UBA tracks a system's users. Big data platforms like Apache Hadoop are increasing UBA functionality by allowing them to analyze petabytes worth of data to detect insider threats and advanced persistent threats.
All these solutions work on a negative identification model; in other words, they do their best to tell the bad from the good by identifying what is bad, not what is good. They are therefore prone to false positives, despite the advanced technology some of them use, like machine learning and artificial intelligence.
So you may find yourself, more often than not, having to relax how you block access to the API server in order not to affect good users. It also means these solutions require constant monitoring to confirm that false positives are not blocking your legit users while the unauthorized ones are properly kept at bay.
Summary
Anything that runs on the client side and needs a secret to access an API can be abused in different ways, and you must delegate access to all third-party APIs to a backend under your control, so that you reduce the attack surface and at the same time keep the secrets away from public prying eyes.
In the end, the solution to protect your API server must be chosen in accordance with the value of what you are trying to protect and the legal requirements for that type of data, such as the GDPR in Europe.
So using API keys may sound like locking the door of your home and leaving the key under the mat, but not using them is like leaving your car parked with the doors closed and the key in the ignition.
Going the Extra Mile
OWASP Web Top 10 Risks
The OWASP Top 10 is a powerful awareness document for web application security. It represents a broad consensus about the most critical security risks to web applications. Project members include a variety of security experts from around the world who have shared their expertise to produce this list.

Microservices - how to solve security and user authentication?

There is a lot of discussion about microservice architecture. What I am missing, or maybe have not yet understood, is how to solve the issue of security and user authentication.
For example: I develop a microservice which provides a Rest Service interface to a workflow engine. The engine is based on JEE and runs on application servers like GlassFish or Wildfly.
One of the core concepts of the workflow engine is that each call is user-centric. This means that, depending on the role and access level of the current user, the workflow engine produces individual results (e.g. a user-centric task list, or the processing of an open task which depends on the user's role in the process).
Thus, in my eyes, such a service is not accessible from everywhere. For example, if someone plans to implement a modern Ajax-based JavaScript application which should use the workflow microservice, there are two problems:
1) to avoid cross-origin problems with JavaScript/Ajax, the JavaScript web application needs to be deployed under the same domain as the microservice
2) if the microservice forces user authentication (which is the case in my scenario), the application needs to provide a transparent authentication mechanism.
The situation becomes more complex if the client needs to access more than one user-centric microservice that forces user authentication.
I always end up with an architecture where all services and the client application run on the same application server under the same domain.
How can these problems be solved? What is the best practice for such an architecture?
Short answer: check OAuth, and manage caches of credentials in each microservice that needs to access other microservices. By "manage" I mean: be careful with security. Especially, mind who can access those credentials, and let the network topology be your friend: create a DMZ layer and other internal layers reflecting the dependency graph of your microservices.
Long answer: keep reading. Your question is a good one because there is no simple silver bullet for what you need, although your problem is quite recurrent.
As with everything related to microservices that I have seen so far, nothing here is really new. Whenever you need a distributed system doing things on behalf of a certain user, you need distributed credentials to enable such a solution. This has been true since mainframe times; there is no way around it.
Auto SSH is, in a sense, such a thing. Perhaps it sounds like a glorified way to describe something simple, but in the end it enables processes on one machine to use services on another machine.
In the Grid world, the Globus Toolkit, for instance, bases its distributed security on the following:
X.509 certificates;
MyProxy, which manages a repository of credentials and helps you define a chain of certificate authorities up to the root one, which should be trusted by default;
an extension of OpenSSH, the de facto standard SSH implementation for Linux distributions.
OAuth is perhaps what you need. It is a way to provide authorization with extra restrictions. For instance, imagine that a certain user has read and write permission on a certain service. When you issue an OAuth authorization, you do not necessarily give full user powers to the third party; you may give read access only.
CORS, mentioned in another answer, is useful when the end client (typically a web browser) needs single sign-on across web sites. But it seems your problem is closer to a cluster in which many microservices are managed by you. Nevertheless, you can take advantage of solutions developed in the Grid field to ensure security in a cluster distributed across sites (for high-availability reasons, for instance).
Complete security is unattainable, so all of this is of no use if credentials are valid forever or if you do not take enough care to keep them secret wherever they are received. For that purpose, I would recommend partitioning your network into layers, each with a different degree of secrecy and exposure to the outside world.
If you do not want the burden of the infrastructure required for OAuth, you can either use HTTP Basic authentication or create your own tokens.
When using HTTP Basic authentication, the client sends the credentials on each request, which eliminates the need to keep session state on the server side for authorization purposes.
If you want to create your own mechanism, change your login requests so that a token is returned as the response to a successful login. Subsequent requests carrying that token act like HTTP Basic authentication, with the advantage that this takes place at the application level (in contrast with the framework or app-server level of HTTP Basic authentication).
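A sketch of that roll-your-own token mechanism (assuming the Python itsdangerous library; the credential check is a hypothetical stand-in for your user store, and in production a vetted standard such as JWT is usually the better choice):

```python
# Login returns a signed, timestamped token; subsequent requests present it
# and are authorized at the application level.
from itsdangerous import BadSignature, SignatureExpired, TimestampSigner

signer = TimestampSigner("secret-loaded-from-config")  # placeholder secret

def check_credentials(username: str, password: str) -> bool:
    # Hypothetical stand-in: replace with your real user-store lookup.
    return False

def login(username: str, password: str) -> str:
    if not check_credentials(username, password):
        raise PermissionError("bad credentials")
    return signer.sign(username.encode()).decode()  # token for later requests

def authorize(token: str, max_age: int = 3600) -> str:
    try:
        return signer.unsign(token, max_age=max_age).decode()  # -> username
    except (SignatureExpired, BadSignature):
        raise PermissionError("invalid or expired token")
```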
Your question is about two independent issues.
Making your service accessible from another origin is easily solved by implementing CORS. For non-browser clients, cross-origin is not an issue at all.
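As a minimal sketch of the CORS part (assuming a Python/Flask service; the allowed origin is a placeholder):

```python
# Tell browsers that the web app served from app.example.com may call this
# service cross-origin, including with an Authorization header.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    response.headers["Access-Control-Allow-Origin"] = "https://app.example.com"
    response.headers["Access-Control-Allow-Headers"] = "Authorization, Content-Type"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
    return response
```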
The second problem, service authentication, is typically solved using token-based authentication.
Any caller of one of your microservices would get an access token from the authorization server or STS for that specific service.
Your client authenticates with the authorization server or STS either through an established session (cookies) or by sending a valid token along with the request.

What is the best suited authentication technique for this scenario?

Please suggest the best authentication approach to implement in the scenario described below:
The requirement is that I have to deploy a WCF web service in multiple countries across the world.
NOTE: All the machines on which the service is deployed are in the same domain.
1. The clients that access this service should be in the same domain; otherwise authentication should fail.
Currently I am using Message Security mode with "Windows" credentials.
I am curious why you would want the domain to be the same if the service needs to be deployed in different countries around the world. Unless you are talking about hosting the service on an internal network that is not publicly exposed, enforcing the same domain name might be difficult. Different countries have different domain standards: America has a much richer set of domain roots to choose from, while other countries often have a country-specific root, possibly with a regional sub-root.
I would not couple your service to the domain that hosts it, nor would I recommend using the domain as a factor in authentication. If your service needs to be publicly exposed on the internet in each of these countries, I would recommend using something other than Windows security. A claims-based security mechanism might work best. Internally, inside the service implementation, claims can be checked and, if necessary, the Windows identity can be authenticated separately from WCF authentication. Claims also allow you to use more than just a username/password or certificate to fully authenticate and authorize a client request. You can request that the caller's domain, country, region, and other evidence be included in the claim, allowing you to verify that calls are being made from the appropriate location and by the appropriate clients with much more flexibility than Windows authentication offers (and if you publicly expose your service, Windows authentication will likely not be available anyway).
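A language-agnostic sketch of such a claims check (Python here only for illustration; in WCF you would inspect the ClaimSet of the incoming message, and every claim name below is hypothetical):

```python
# Authorize a request on several pieces of evidence from the claim set,
# not just a username/password or certificate.
ALLOWED_COUNTRIES = {"US", "DE", "JP"}  # placeholder policy

def authorize_request(claims: dict) -> bool:
    return (
        claims.get("country") in ALLOWED_COUNTRIES   # where the call comes from
        and claims.get("role") == "service-client"   # who the caller is
    )
```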
Since you are running on an intranet and assuming that your Windows application will connect directly to the service, I would go with Transport Security using Windows authentication.
For some guidance consult patterns & practices Improving Web Services Security Guide.
I still question whether or not you need authorization. If you go with Windows authentication without any authorization, it will simplify your service but will allow any domain user to access it, whether or not they are using the Windows application. Granted, they would have to have knowledge of the endpoint and the message structure, but it would still be possible.
If Windows authentication is really all that is required, I would still raise the authorization issue and document it (and get sign-off if applicable). On the one hand this covers you, but it also makes people explicitly aware of the decision and the possible risks.

Using OpenID as a login for my website - redundant providers

How do I support redundancy on my OpenID login website?
For instance, I have users that demand 100% uptime (yeah, right, but let's get as close as we can).
Some of them use less available providers (i.e. phpMyID on their own website, or an ID on a startup which has frequent downtime). Now I can shuttle them to a more reliable provider, but I also want to have some redundancy.
One solution I hope I can deploy, but don't understand OpenID well enough to try, is:
Set up several phpMyID instances on different hosting services with the same credentials (hash/key/etc.) but different domains (ideally I'd have round-robin DNS and the same name, but I also want to account for the case where they have different domains).
Will this work? In other words, if I have the exact same phpMyID files, including the credentials, on different servers, can I use example.com/id and example2.com/id and expect them to look the same on my end, so I don't have to link multiple OpenID accounts to each user in my system?
While I use the example of phpMyID, the question is more general: are the credentials what's important, or is the domain/IP/??? also linked in such a way as to prevent this?
Is there, or can there be, a standard that would allow one to move one's OpenID from one provider to another without having to delink and relink it on each website that OpenID was used on?
An OpenID is a URL. example.com/id and example2.com/id are two different OpenIDs, regardless of which provider hosts them or what credentials the users share with those providers. The reliability of an OpenID really comes down to the reliability of hosting that URL. Yes, you can define fallback providers in your XRDS document, but you still have to be able to discover that document from the OpenID URL in the first place.
So the reliability techniques for OpenID are, for the most part, the same as for any other web resource with a fixed URL. And, as a relying party, there isn't a lot you can do about that. As you said, it's up to your users to choose an OpenID provider that suits their own requirements. You might suggest to your users that they ask their OpenID provider if a service level agreement is available.
The one thing you can do is allow your users to associate their account in your application with multiple OpenIDs.
I recently posted an answer to the question "How do I use more than one OpenID?" which may address your concerns.
A professional provider should have a redundant service, commonly achieved by putting a load balancer in front of two separate servers; users, though, should host an XRDS document on a custom domain (or use an i-name) with references to different providers, for fallback purposes.
You could also host identical data on different servers, but round-robin DNS won't help you, because round robin just acts as a simple load balancer; it doesn't check whether a host is accessible.