I think there is no solution for the problem we're facing, but I would like to ask here for confirmation.
We have a REST API that is consumed from:
administration website (AngularJS)
customer website (AngularJS)
other REST API clients (our own API applications)
Because requests from the administration/customer websites are made by AngularJS on the client side, users can discover the resource URLs (e.g. via Firebug) and consume all of these resources from their own applications. That is something we don't want, and we cannot restrict it by IP address because the requests come from the client. We would like to offer one group of resources only to the customer/administration websites, another group only to our own REST API clients, and some resources to both, but given that the requests are made by AngularJS in the browser (and the resource URLs are therefore visible), it seems this cannot be done(?).
What could be the best practice for this issue?
Note: Your REST resources should always be secured on the server. You should never depend on client-side JavaScript code for security. If your security is breached because someone knows the URI of a REST API, something is very wrong.
Angular can be made modular in such a way that customers only see the customer modules while administrators see both the customer and administrator modules. This way you can let administrators work in a WYSIWYG environment without making them switch back and forth between the two websites.
// Customers.js
// Module only containing customer code
angular.module('myCustomerProducts', ['myMainApp']);
// Administrators.js
// Module only containing administrator code
angular.module('myEditCustomerProducts', ['myCustomerProducts', 'myMainApp']);
Since your website is already split into an administrator site and a customer site, you can simply include and deploy only the JavaScript code intended for each target site. If that is not the case, you'll need some server-side templating (e.g. ASP.NET, PHP, Jade, ...) to build index.html dynamically based on the user's credentials, roughly as in the sketch below.
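As a rough illustration of that kind of server-side assembly (only a sketch, assuming a Node/Express front end and a hypothetical req.user.isAdmin flag set earlier by your own authentication layer):

// index-server.js (sketch): serve index.html with only the scripts this user may load
const express = require('express');
const app = express();

app.get('/', function (req, res) {
  // req.user is assumed to be populated by your authentication middleware
  const scripts = ['main.js', 'customers.js'];
  if (req.user && req.user.isAdmin) {
    scripts.push('administrators.js'); // admins additionally get the admin module
  }
  const tags = scripts.map(function (s) {
    return '<script src="/js/' + s + '"></script>';
  }).join('');
  res.send('<!DOCTYPE html><html><body ng-app="myMainApp">' + tags + '</body></html>');
});

app.listen(3000);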
Depending on the hosting platform, you can also deny access to everyone not in the administrator group when requesting anything from the administrator website (area).
But again: server-side security is what really matters. You can't secure an insecure server with JavaScript on the client side.
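To make that concrete, here is a minimal sketch of what the server-side check could look like, assuming an Express API and a placeholder rolesFromToken helper; the route paths, header, and token handling are illustrative, not a complete security implementation:

// api-security.js (sketch): every resource is authorized on the server, so knowing its URL is not enough
const express = require('express');
const app = express();

// Placeholder: verify the caller's token (session lookup, JWT verification, ...) and return their roles
function rolesFromToken(token) {
  if (!token) return [];
  // ... real validation goes here ...
  return token === 'Bearer admin-demo' ? ['admin', 'customer'] : ['customer'];
}

function requireRole(role) {
  return function (req, res, next) {
    const roles = rolesFromToken(req.get('Authorization'));
    if (roles.indexOf(role) === -1) {
      return res.status(403).json({ error: 'forbidden' });
    }
    next();
  };
}

// Only the administration website's users can reach this, regardless of who knows the URL
app.get('/api/admin/reports', requireRole('admin'), function (req, res) {
  res.json({ reports: [] });
});

// The customer website and your own API clients share this one
app.get('/api/products', requireRole('customer'), function (req, res) {
  res.json({ products: [] });
});

app.listen(3000);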
Related
We are developing ASP.NET Core applications that will need to send email through Gmail, among other providers (using the clients' accounts). These applications will not be hosted by us but by many of our clients.
The application may be hosted on their domain or on no domain at all (IP address only).
I am struggling to decide which authorization flow to use, since I cannot use the JavaScript client or any redirect-URI approach because the origin/domain is unknown. I also cannot use a localhost origin, since there is no way for me to start a local HTTP server from a browser. Programmatic extraction of the authorization code is deprecated. The only thing I am left with is manual copy/paste, but apart from being very user-unfriendly, Google's documentation states that it might be deprecated in the future as well.
Am I missing something?
Please point me in the right direction as to how I should proceed.
I'm researching different approaches to building a web app that integrates Active Directory login into Business Catalyst. I want to implement single sign-on with Active Directory in an intranet environment. Specifically, users should be able to use their Active Directory credentials to log in to Business Catalyst.
Workflow:
User provides username, password, and domain to the form.
The form sends a request for authentication (I'm thinking via SOAP or HTTP).
Gets a response based on the status of the AD user account (if it is disabled, notify the user; otherwise continue).
Create a user in Business Catalyst if one does not exist yet and log in with that user. (Optionally: use a pre-existing account that has a matching username or some other matching criterion.)
Optional:
Detect whether the user is already logged in with their AD account and auto-login with those credentials.
Option 1:
Communication with AD server via Liquid:
I reviewed the docs and saw the social media and security zone documentation, but neither has a login API call. I know that Liquid has access to server-side data, but I'm not sure whether there is a server-side call for handling authentication.
Option 2:
Build middleware that handles the Active Directory authentication and communicates the results to the client side over HTTP:
If I can't do it through Liquid, then I'm thinking I'd have to create a stand-alone service that is exposed externally (thinking Node.js) and that sits between AD and the client-side code, communicating over HTTP.
Something similar to this example
https://github.com/adobebc/web-apps-sdk/tree/master/samples/bc-external-service
Additional Notes:
My Active Directory server is located on a machine in my intranet, so the Azure offerings don't apply.
I know it is possible because there are products that can do this and more. I'm just not sure about all the details.
https://www.bitium.com/adobe-business-catalyst-active-directory-ad-integration
https://www.onelogin.com/connector/businesscatalyst-single-sign-on
Could you point me in the right direction to do this?
Option 1 or Option 2 or something else? Am I totally off here?
In terms of Option 1:
You cannot write an API with Liquid markup; that is not what it is for. Liquid renders BC data on the front end. It is not a server-side language; it is basically a templating language.
Your only option is to go through the full API, with middleware handling the login and the interconnections.
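As a very rough sketch of what such middleware could look like in Node.js (Option 2), assuming the express and ldapjs packages; the LDAP URL, bind format, and the Business Catalyst provisioning step are placeholders you would replace with your own details:

// ad-auth-service.js (sketch): stand-alone service the BC front end calls over HTTP
const express = require('express');
const ldap = require('ldapjs');
const app = express();
app.use(express.json());

app.post('/login', function (req, res) {
  const username = req.body.username;
  const password = req.body.password;
  const domain = req.body.domain;
  const client = ldap.createClient({ url: 'ldap://ad.example.local' }); // placeholder AD host
  // A simple bind is enough to check the credentials against Active Directory
  client.bind(domain + '\\' + username, password, function (err) {
    client.unbind();
    if (err) {
      // Covers wrong passwords as well as disabled/locked accounts
      return res.status(401).json({ ok: false, reason: 'authentication failed' });
    }
    // Here you would call the Business Catalyst API to look up or create the
    // matching user and establish a session, then return what the client needs.
    res.json({ ok: true, user: username });
  });
});

app.listen(8443);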
We made a CMS that allows users to connect to Google Analytics via a connector. I'm in the process of porting this connector to OAuth2 and am wondering what kind of application I need to register.
The issue is that the CMS is installed by our clients at arbitrary URLs, so we don't know the complete set of redirect URLs that we would need to register a web server application. Google's OAuth won't let me redirect to an arbitrary URL that I pass in during the authorization request, will it?
Would registering an installed application and then using the urn:ietf:wg:oauth:2.0:oob special redirect URI be best? It seems like this allows the user to copy/paste their authorization code from the browser back into our application.
Thanks in advance!
Indeed, registering an installed application will allow users to copy and paste the authorization code without registering redirect URLs. This is appropriate if the clients are end users of your application, and not, say, configuring it as a plugin that will then provide web services to the clients' own users (where those users would be prompted via the OAuth2 consent dialog). In the latter case you probably want to ask your clients to register their own website as a web application with Google and provide a configuration tool in your CMS for setting each client's redirect URLs.
Why the distinction? Because in the first case the consent action is about your relationship with your clients, while in the latter case it expresses trust between your clients and their users. For instance, you don't want your CMS application to be disabled for abuse because one of your clients has misbehaved, as that would affect all your clients. However, if you intermediate the consent, you make it difficult for Google to see that distinction.
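For completeness, here is a rough sketch of what the installed-application path looks like once the user has pasted the authorization code back into the CMS, assuming Node 18+ (built-in fetch) and Google's standard token endpoint; the client ID and secret are placeholders, and, as the question above notes, the oob flow may itself be deprecated in the future:

// exchange-code.js (sketch): trade the pasted authorization code for tokens
async function exchangeAuthCode(pastedCode) {
  const body = new URLSearchParams({
    code: pastedCode,
    client_id: 'YOUR_CLIENT_ID.apps.googleusercontent.com', // placeholder
    client_secret: 'YOUR_CLIENT_SECRET',                    // placeholder
    redirect_uri: 'urn:ietf:wg:oauth:2.0:oob',
    grant_type: 'authorization_code'
  });
  const resp = await fetch('https://oauth2.googleapis.com/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: body
  });
  return resp.json(); // { access_token, refresh_token, expires_in, ... } on success
}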
Hello and thanks for looking.
Background
I am designing an application that will host certain pieces of information/data for third-party websites via an API, and this data must be accessed via authenticated requests.
Is OAuth the way to go, or is there something better out there? I will not know the domains of the third-party sites up front, so I cannot rely on host headers (which can be spoofed anyway).
Requests to the API will most likely originate in jQuery or regular JavaScript on the client side.
Question
What is the best way to ensure that third-party websites requesting data from my API are who they say they are, and are allowed to access the information they are requesting?
Many thanks!
Matt
OAuth, particularly OAuth 2 (which isn't yet finalized), will likely work well for you. But since the web requests are coming from the browser rather than the web server hosting these web sites, each individual browser will have to be authorized rather than each domain.
So let's step back and ask this question:
Is the data your API will be exposing unique per individual user or unique per website domain? In other words, are you, as the API owner, going to contractually authorize domains to access your data, or will individual users have data accessible via your API, where those users need to authorize these other domains to access their own data on your service?
If you're authorizing domains (and not users), then the browser simply cannot be the initiator of these authorized requests to your API. The web server on those domains would have to issue its secret key to the client, at which point it has lost control of it and anyone, not just the domains you intended to authorize, can make these authorized calls. This is the "you can't trust the client" principle in security.
If you're authorizing users, then each user who visits one of these 3rd party sites will have to go through a one-time setup where the web site redirects their browser to your service to log in and say "yes, [3rd party site] can access my data", after which they're redirected back to that site. After that, any time they visit that site, the site can download a secret key that's unique to that user and can be used from javascript on the client to make these authorized API calls.
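What the per-user call might then look like from the browser, as a minimal illustration in jQuery (the endpoint, and the assumption that the site handed the page a bearer token after the consent step, are illustrative):

// widget.js (sketch): call the API from the third-party page with the user's token
// userToken is assumed to have been delivered to the page after the OAuth consent step.
function loadUserData(userToken) {
  $.ajax({
    url: 'https://api.example.com/v1/me/items', // illustrative endpoint
    headers: { 'Authorization': 'Bearer ' + userToken },
    dataType: 'json'
  }).done(function (data) {
    console.log('items for this user:', data);
  }).fail(function (xhr) {
    console.log('request rejected:', xhr.status); // e.g. 401 if the token is missing or expired
  });
}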
I have a portal application that loads external content (widgets) via an iframe. Users log in to CAS via the portal itself. There are a few portal APIs, though, that need to be called from that external content. What information do I have to pass from the portal to the widgets so that the widgets can make these calls without being rejected by CAS?
UPDATE
The more I investigate, the more I think my question boils down to how CAS actually does what it's supposed to do. In other words, how can I go from one site where I've authenticated to another and tell it that I've already authenticated? What's the mechanism behind that, and how can I employ it in a web context?
The portal scenario you describe is exactly what CAS' proxy ticketing was designed for. We use it with an iframe-based web portal system and it works fine.
The CAS proxy ticketing mechanism allows a client (your portal) to dish out service tickets to other clients (the widgets loaded in your portal's iframes). This saves your users a trip through the CAS server for each widget that their browser loads. Proxying is also useful if you're trying to use CAS for web service authentication (i.e. when one web service needs to connect to another CAS-protected web service).
Note though that for your purpose proxy ticketing isn't actually necessary. Your portal-iframe setup should work without it. But without proxy ticketing, each widget will have to go through the CAS server as it loads. At the very least this would slow down load times.
A while back I wrote a guide to setting up CAS proxy ticketing for RubyCAS-Client. The instructions are specific to the Ruby client, but they should give you a good overview of how CAS proxying works. Admittedly the implementation is a bit complicated, mostly due to the "Proxy Granting Ticket" negotiation process:
http://rubycas-client.rubyforge.org/files/README_txt.html
(scroll down to the "How to act as a CAS proxy" section, about two-thirds of the way down)
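To give a flavour of the exchange itself, here is a rough JavaScript sketch based on the CAS 2.0 protocol endpoints (/proxy and /proxyValidate); the server URLs are illustrative, and this glosses over the PGT callback negotiation described in the guide above:

// cas-proxy.js (sketch): the portal fetches a proxy ticket it can hand to a widget
// Assumes the portal already holds a Proxy Granting Ticket (PGT) from its own CAS login.
const CAS_BASE = 'https://cas.example.edu/cas';            // illustrative CAS server
const WIDGET_SERVICE = 'https://widgets.example.edu/api';  // the widget's service URL

async function proxyTicketFor(pgt, targetService) {
  // CAS 2.0 /proxy returns XML containing <cas:proxyTicket>PT-...</cas:proxyTicket>
  const resp = await fetch(CAS_BASE + '/proxy?pgt=' + encodeURIComponent(pgt) +
    '&targetService=' + encodeURIComponent(targetService));
  const xml = await resp.text();
  const match = xml.match(/<cas:proxyTicket>(.+?)<\/cas:proxyTicket>/);
  return match ? match[1] : null;
}

// The portal passes the ticket to the widget (e.g. in the iframe URL); the widget
// then calls CAS_BASE + '/proxyValidate?service=' + WIDGET_SERVICE + '&ticket=' + pt
// to confirm the user without bouncing the browser back through CAS.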
It looks like I may be asking CAS to do more than it's capable of doing. I've been thinking of it as an SSO engine where a given session can be passed around so that authentication only happens once. Instead, it seems that CAS is primarily geared toward being a centralized authentication service (yes, I see the irony that this is what the acronym actually stands for). Because authentication requests are handed off to a central server, a single cookie can be read by that server. Stateless connections like APIs, then, cannot be validated this way.
It looks like CAS' proxy tickets may offer some hope, but I'm not ready to venture down that path just yet.