I have created an API that should only be accessed by certain client applications.
The users of these applications do not (necessarily) have to log in to use the client application. I will hand out API keys, but these will be visible on the client app, so they can also be used by other applications (?).
Is there any way to make sure the requests are coming from a specific client application, for example because they are hosted on a certain domain? I guess origin headers can easily be spoofed.
API keys are typically used to authenticate applications to a server. You don't say what type of client you are using (native, web app, JavaScript?). You're correct that the API key may be read by another application on the client if that application has the same permissions (runs in the same security context) as your client.
You could use client certificates to have the application identify itself. But this may be a pretty heavy-handed solution depending on the security threat you're trying to mitigate. And even here, an application in the same security context has access to the private key.
All other info in an HTTP request is easy to falsify.
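For illustration, here is a minimal sketch of the client-certificate idea, assuming a .NET client (the question doesn't say what the client is); the certificate file, password, and endpoint are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;

// Attach a client certificate so the server can verify which application is calling.
var handler = new HttpClientHandler();
handler.ClientCertificates.Add(
    new X509Certificate2("client-app.pfx", "pfx-password")); // placeholder certificate file and password

using var client = new HttpClient(handler);
var response = await client.GetAsync("https://api.example.com/data"); // placeholder endpoint
Console.WriteLine(response.StatusCode);
```

As noted above, any code running in the same security context can read the .pfx and impersonate the app, so this raises the bar rather than eliminating the risk.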
I am building a system with an ASP.NET Core web app (incidentally, in Blazor), which I'll call "Site", and some domain web services (which might someday be used by other sites), one of which I'll call "CustomerService".
Following various guides and articles on how to set up authentication with OpenID Connect and Azure Active Directory for this system, I see the following possible approaches to authentication and authorization, especially with regard to AJAX requests:
Site-only auth, passthrough: Service trusts the site; site authenticates user.
Service-only auth, passthrough: Service authenticates user; site passes through all AJAX requests.
Service-only auth, CORS: Service provides site client data via CORS, with authentication; site doesn't handle AJAX requests at all.
Service and site auth, passthrough: Service and site both authenticate user; site passes through some AJAX requests.
These all seem to have significant practical problems. Is there a fifth approach, or a variation I should be considering?
Here's my elaboration of these approaches.
(1) Service trusts the site; site authenticates user:
(1a) Set up Site.Server to use OpenID Connect for users to authenticate, implement all necessary authorization on Site.Server, pass through web API calls to CustomerService, and set up CustomerService to trust requests that come from Site.Server. This looks like a bad idea because then any user can spoof Site.Server and have full access to operations that should be secured on CustomerService. Also, CustomerService would not be able to enforce authorization; we'd be trusting Site.Server to get it right, which seems suboptimal.
(1b) Same as (1a), but Site.Server would know a secret API key that would be passed to CustomerService, either in headers or in the API call's querystring or body. This doesn't seem so great because the API key would never change, so once discovered it could be spoofed by any user indefinitely. Still, this could work, as the API key could stay secret, and we could use our secret server for both sides to retrieve it. But CustomerService still would not be able to enforce authorization; we'd be trusting Site.Server to get it right, which seems suboptimal.
(1c) Same as (1b), but we come up with a mechanism for rotating the API key occasionally. This is somewhat better, but a key could still be discovered and spoofed by any user during its validity window. Still, this could work, as the API key could stay secret, and we could use our secret server for both sides to retrieve it. But CustomerService still would not be able to enforce authorization; we'd be trusting Site.Server to get it right, which seems suboptimal.
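If it helps to picture (1b)/(1c), here is a minimal sketch of the CustomerService side (the header name, config key, and endpoint are invented for illustration); Site.Server would attach the same key to its outgoing calls, e.g. via HttpClient.DefaultRequestHeaders.

```csharp
// CustomerService side: a minimal ASP.NET Core sketch of approach (1b).
// "X-Site-Api-Key" and the "SiteApiKey" config entry are made-up names.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    var expected = app.Configuration["SiteApiKey"]; // shared secret, loaded from configuration / a secrets store
    var supplied = context.Request.Headers["X-Site-Api-Key"].ToString();

    if (string.IsNullOrEmpty(expected) || !string.Equals(supplied, expected, StringComparison.Ordinal))
    {
        context.Response.StatusCode = StatusCodes.Status401Unauthorized; // reject calls that don't carry the key
        return;
    }
    await next();
});

app.MapGet("/customers", () => new[] { "Contoso", "Fabrikam" }); // placeholder endpoint
app.Run();
```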
(2) Service authenticates user; site passes through all AJAX requests: Avoid any authentication on Site.Server and instead enforce authentication/authorization on CustomerService only, through OpenID Connect + Azure AD. Site.Server would have to pass requests, including headers, through to CustomerService. This has the benefit of putting the security in the right place, but it seems unworkable, as the user has no way to authenticate on CustomerService since the user isn't using CustomerService directly; their AJAX requests still go to Site.Server.
(3) Service provides site client data via CORS, with authentication; site doesn't handle AJAX requests at all: Avoid any authentication on Site.Server and instead use CORS to allow the user's browser to connect directly to CustomerService, requiring authentication only through OpenID Connect + Azure AD. This has the benefit of putting the security in the right place, but how can a user authenticate on an AJAX request without having done so in a human-browsable way first? My AJAX request can't redirect to microsoftonline and prompt the user, can it? Plus CORS seems like a bad idea in general--we want to move away from cross-site anything; to the user, it should appear that Site.Server is serving up both AJAX calls and HTML page requests, right?
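For completeness, the CORS piece of approach (3) on CustomerService would be small; here's a sketch with a made-up Site origin (the hard part, as noted above, is authenticating the browser's calls, not the CORS configuration itself).

```csharp
var builder = WebApplication.CreateBuilder(args);

// Allow the browser, while on the Site origin, to call CustomerService directly.
builder.Services.AddCors(options =>
    options.AddPolicy("SiteClient", policy => policy
        .WithOrigins("https://site.example.com") // made-up Site origin
        .AllowAnyHeader()
        .AllowAnyMethod()));

var app = builder.Build();
app.UseCors("SiteClient");
app.MapGet("/customers", () => Results.Ok(new[] { "Contoso", "Fabrikam" })); // placeholder endpoint
app.Run();
```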
(4) Service and site both authenticate user; site passes through some AJAX requests: Put authentication on both Site.Server and CustomerService, with the same app ID, making them appear as one and the same site as far as Azure AD knows. Site.Server could do its own authentication and restrict certain service calls from getting to CustomerService, or it could pass through requests, including headers, to CustomerService, which could then deny or grant access as well. This is what we decided to do, but I question it now: if I add a second service, it too has to use the same app ID to keep this approach working.
None of these approaches seem to hit the mark. Am I missing another approach that I should be considering? Or is there a variation I am missing?
Here are my thoughts on what an option 5 could look like:
WEB UI
Code runs in the browser and interacts with the Authorization Server to authenticate the user. A library such as OIDC Client does the security work for you.
Provides best usability and simplest code. The UI uses access tokens to call cross domain APIs. Renewing tokens is tricky though, and browser security requires some due diligence.
WEB BACK END
Is static content only, deployed around the world close to end users - perhaps via Azure CDN. Must execute zero code. Provides best performance.
WEB UI SAME AS MOBILE UI
Your Web UI in effect operates in an identical manner to a mobile UI and is quite a bit simpler, with less need for cookies + double hops.
ENTRY POINT API
The browser UI interacts with an entry point API tailored to UI consumers. This API validates tokens by downloading Azure AD token signing keys. It also has first say in authorizing requests.
The entry point API orchestrates calls to Core APIs and Azure APIs such as Graph. The Web UI uses a single token scoped to the entry point API and you can strictly control the UI's privileges. Meanwhile the API can use Azure AD's 'on behalf of' feature to get tokens for downstream APIs so that the UI does not need to deal with this.
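The answer doesn't prescribe a stack, but if the entry point API were ASP.NET Core, a minimal sketch of the token validation piece might look like this (tenant ID and audience are placeholders); the JWT bearer handler downloads the Azure AD signing keys from the authority's metadata endpoint.

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Signing keys are fetched (and cached) from this authority's metadata endpoint.
        options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0"; // placeholder tenant
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidAudience = "api://entry-point-api" // placeholder app ID URI for the entry point API
        };
    });
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();

app.MapGet("/customers", () => Results.Ok()).RequireAuthorization(); // placeholder endpoint
app.Run();
```

For the downstream calls, MSAL's on-behalf-of flow (AcquireTokenOnBehalfOf) lets this API exchange the incoming token for tokens scoped to the Core APIs, which is the 'on behalf of' feature mentioned above.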
DOMAIN APIs
These typically run in a locked down private cloud and are not called directly by the outside world. This allows you closer control over which types of caller can invoke which high privilege operations.
BLOG POSTS OF MINE
My blog's index page has further info on these patterns and the goals behind them. Maybe have a browse of the SPA Goals and API Platform Architecture posts.
There are some working code samples on this page. In my case the hosting uses AWS instead of Azure, though concepts are the same.
I have a JAX-RS application deployed under Tomcat, and a mobile app.
I would like to know how to make the web service usable only by the mobile application; in other words, allow access only for a given application.
When a request comes in across the internet, there's not really a safe way to be sure what application sent the request. Applications can identify themselves however they want. You could, if you want, attempt to hide credentials in your application somehow, and have it log in with those credentials. But if anyone discovers those credentials, they can write a program that uses them to pretend to be your application. The problem is that you cannot count on any control over the client system. The client system can always be altered to pretend to be something it is not.
From my perspective, you can add a username/encrypted password to the invoke request, and then compare them with the ones saved on the server side.
If you really want to, you could implement some form of cryptography. For example, the JAX-RS service can send a 401 Unauthorized and provide a nonce for the client to sign with its private key and send back to the server in the Authorization header. Otherwise, stick to HTTP authentication. If you are communicating via HTTPS, you should be fine with basic auth.
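The server in the question is JAX-RS, but to sketch the challenge-response idea in one place (shown in C# for consistency with the other examples in this thread; the equivalent Java APIs live in java.security): the client signs the server-issued nonce with its private key, and the server verifies the signature against the registered public key.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Client side: sign the nonce received in the 401 challenge with the app's private key.
using var clientKey = RSA.Create(2048); // in practice a key pair provisioned with the app
byte[] nonce = Encoding.UTF8.GetBytes("server-issued-nonce"); // placeholder nonce value
byte[] signature = clientKey.SignData(nonce, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

// Server side: verify the signature against the public key registered for that app.
using var registeredPublicKey = RSA.Create();
registeredPublicKey.ImportRSAPublicKey(clientKey.ExportRSAPublicKey(), out _);
bool valid = registeredPublicKey.VerifyData(nonce, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
Console.WriteLine(valid ? "signature accepted" : "signature rejected");
```

The earlier caveat still applies: a private key embedded in a mobile app can be extracted by a determined user, so this identifies the key holder, not the genuine application.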
I'm new to web services and am creating HTTP services using .NET Web API 2.
The consumers of the services will be other applications, but in the future I foresee web applications (browsers, mobile apps) using them. The services simply serve data to the consumers (no create/update/delete).
All applications, including the API, are located on our enterprise intranet. Nothing outward facing.
I was told to use Integrated Windows Authentication for the services. Can an API key also be used on the same services to authenticate the application that is making the calls?
I'm not even sure doing this makes sense. Can the consuming application (i.e. an executable running on a server) send account info? My thought is that Windows Authentication isn't necessary and token authentication will suffice. Others have told me to use both. I'm not sure that's possible and haven't found anything showing me it is.
An API key is a parameter passed to the service interface, so it can be passed with any type of auth on the backend.
But usually, an API key is used to determine whether a user is allowed to use a specific API. For example, if only a subset of users that have Windows accounts are allowed to use the API, then that might make sense: even if they could authenticate with their Windows account, they could still be deemed unauthorized because they did not pass a valid API key.
That said, you could also do the same thing with some kind of policy, for example, checking whether the user has the correct role to call the API method. An API key makes more sense when you are giving people access to an API over the internet.
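As a concrete sketch for Web API 2, the API key check could sit in an authorization filter alongside Integrated Windows Authentication (which IIS handles before the request reaches the filter). The header name and the ApiKeyStore lookup are invented for illustration.

```csharp
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Hypothetical key store; in practice this would check a database or configuration.
public static class ApiKeyStore
{
    public static bool IsValid(string key) => key == "expected-key-from-config"; // placeholder
}

public class RequireApiKeyAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        // The Windows identity established by IIS is available via actionContext.RequestContext.Principal;
        // this filter adds a second, application-level check on top of it.
        if (!actionContext.Request.Headers.TryGetValues("X-Api-Key", out var suppliedValues)
            || !ApiKeyStore.IsValid(suppliedValues.FirstOrDefault()))
        {
            actionContext.Response =
                actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
        }
    }
}
```

A controller would then be decorated with both [Authorize] (for the Windows identity) and [RequireApiKey] (for the application identity).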
I have a set of .NET applications running in a public web environment which connect to a centralized component made up of web pages and web services.
Is there any way to implement a security feature to make the centralized web pages sure of the caller application's identity? Making a POST and supplying a querystring parameter stating the caller application is a naive solution; someone can manually change it.
Any ideas? Tks in advance.
Assign secret keys to each client-server pair and use them to sign messages passed between client and server (using HMAC for example).
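A minimal sketch of the HMAC idea (the shared secret and message are placeholders): both sides hold the per-client secret, the client sends the signature alongside the message, and the server recomputes and compares it.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static string Sign(string message, byte[] secret)
{
    using var hmac = new HMACSHA256(secret);
    return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(message)));
}

byte[] sharedSecret = Convert.FromBase64String("c2hhcmVkLXNlY3JldC1leGFtcGxl"); // placeholder per-client secret
string body = "{\"origin\":\"AppA\",\"payload\":42}";                           // the message being protected

// Client: send the body plus its signature (e.g. in an X-Signature header).
string signature = Sign(body, sharedSecret);

// Server: look up the same secret for that client, recompute, and compare in constant time.
bool authentic = CryptographicOperations.FixedTimeEquals(
    Convert.FromBase64String(signature),
    Convert.FromBase64String(Sign(body, sharedSecret)));
Console.WriteLine(authentic ? "message accepted" : "message rejected");
```

To stop a captured request from simply being replayed, you'd typically include a timestamp or nonce in the signed message as well.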
TLS/SSL/HTTPS. You just need to enable client authentication. SSL is usually only used in the scenario where the server needs to be authenticated, but the server end can be configured to authenticate the client also. Digital certs need to be installed on both ends. This then uses all the appropriate crypto to do the job, i.e. public-key authentication and establishment of a secure channel, using Diffie-Hellman, RSA, AES/3DES, whatever you configure.
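This answer is stack-agnostic; as one concrete example, here's roughly how the server side might require a client certificate in ASP.NET Core/Kestrel (IIS and other servers have equivalent settings).

```csharp
using Microsoft.AspNetCore.Server.Kestrel.Https;

var builder = WebApplication.CreateBuilder(args);

// Require callers to present a certificate during the TLS handshake.
builder.WebHost.ConfigureKestrel(kestrel =>
    kestrel.ConfigureHttpsDefaults(https =>
        https.ClientCertificateMode = ClientCertificateMode.RequireCertificate));

var app = builder.Build();

// The negotiated certificate is then available per request for your own checks
// (issuer, thumbprint, subject, etc.).
app.MapGet("/", async (HttpContext context) =>
{
    var cert = await context.Connection.GetClientCertificateAsync();
    await context.Response.WriteAsync(cert?.Subject ?? "no client certificate");
});

app.Run();
```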
Take a look at this post. Good place to start.
Another option: perhaps have a look at OpenID?
The current situation:
Servers A, B, and C are trusted and controlled by you. A visitor comes to site A and views a page that sends data to site C, and the data contains something like "origin=A". We're concerned that the user will change that to "origin=B".
A simple fix:
You control all three servers, so let them communicate to verify incoming data. For example, A will change "origin=A" to "origin=A&token=12345", where the token value is random. The user tries to tamper with it and sends "origin=B&token=12345" to server C. C makes a trusted connection to B, saying "Did you send someone to me with token 12345?" B says "Nope" and C knows to reject the request.
This can be arbitrarily elaborate, depending on your needs and whether you're using HTTPS. Maybe tokens expire after a certain time period. Maybe they're tied to an IP address. The point is that server C verifies any information that comes from the end user with servers A and B.
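A rough sketch of the check server C might make (hostnames and the verification endpoint are invented; the real call would go over a trusted internal channel):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class TokenVerification
{
    // Server C, on receiving "origin=A&token=12345" from the browser, asks the claimed
    // origin server whether it actually issued that token before trusting the request.
    public static async Task<bool> OriginVouchesForTokenAsync(string origin, string token)
    {
        using var http = new HttpClient { BaseAddress = new Uri($"https://{origin}.internal/") }; // invented internal hostname
        var response = await http.GetAsync($"verify-token?token={Uri.EscapeDataString(token)}");  // invented endpoint on A/B
        return response.IsSuccessStatusCode; // the origin answers success only for tokens it issued
    }
}
```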
Are you asking about single-sign-on? (i.e. someone authenticated on AppA should also be able to use AppB and AppC without re-authenticating)
You can do this by configuring the machineKey for your apps so they can share ASP.NET authentication tokens.
The company I work for currently uses shared forms authentication cookies across the enterprise by using the same machine keys on each web server. However, this is not ideal if you wish to SSO across different domains, and it's not very neat for Windows apps that need to come into the web farm to use the web service methods...
So, where we have to do this, we are using SAML.
But to clean this all up and make it more unified and more secure, we are beginning to implement Geneva.
If you communicate with the web services and web pages using HTTP POST, you avoid putting the info in a query string.
Send the data over HTTPS so that it cannot be tampered with.
You then need to make sure that the call is coming from your public web environment. One way of doing this is to use Windows authentication, based on the identity of the application pool.
EDIT 1
Take a look at this link: http://www.codeproject.com/KB/WCF/WCFBasicHttpBinding.aspx
It shows how to set up Windows authentication for the WCF basicHttpBinding.
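On the calling side, a sketch of what "POST over HTTPS and let the request carry the application pool identity" can look like with HttpClient (the URL and payload are placeholders; a WCF client would instead configure Windows credentials on its binding):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;

// Send the caller's process identity (e.g. the app pool account) with the request
// via integrated Windows authentication, and keep the data in the POST body.
var handler = new HttpClientHandler { UseDefaultCredentials = true };
using var client = new HttpClient(handler);

var response = await client.PostAsJsonAsync(
    "https://central.internal/service/endpoint",   // placeholder central service URL
    new { CallerApplication = "PublicSiteA" });     // placeholder payload
response.EnsureSuccessStatusCode();
Console.WriteLine("call accepted");
```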
Maybe look at the HTTP REFERER field. Under certain conditions this may be treated as reliable. In particular, a site mimicking A that sends users on to C will not show A in the HTTP REFERER.
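For what it's worth, a sketch of such a check on server C (the allowed origin is a placeholder); keep in mind the header is supplied by the client, so treat it as a weak signal at best.

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    // Referer is client-controlled: browsers normally set it honestly, but any
    // non-browser client can send whatever it likes.
    var referer = context.Request.Headers["Referer"].ToString();
    if (!referer.StartsWith("https://a.example.com/", StringComparison.OrdinalIgnoreCase)) // placeholder for site A's origin
    {
        context.Response.StatusCode = StatusCodes.Status403Forbidden;
        return;
    }
    await next();
});

app.MapGet("/", () => "ok"); // placeholder endpoint
app.Run();
```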
We are developing an app that consists of a web server that hosts a web service (amongst other things) and a client that will be communicating with that web service. Both the client app and the server are expected to be used within a corporate firewall. This application will be packaged up and deployed to organizations across the world—so it needs to be flexible enough to work in multiple types of environments.
My question revolves around web service authentication and what is appropriate for real world scenarios. I know some companies have proxy servers that require a separate authentication. How often is this a requirement across organizations? When does the proxy server force the user to authenticate (can you access internal sites without authenticating.. is the authentication for only external sites)?
The reason I ask these questions is that I'm not sure what kind of capability we should build into our client application for authentication to the web service. By default, we are taking the current user credentials and passing them up to the server. Do you think this is sufficient? In a case where a company requires some form of alternate authentication for internal access, this will not work. My question revolves around this last case: how often does it happen? Why would a company force alternate credentials for internal access?
Thanks!
Why not make it configurable? Further, use WCF and you have the ability to configure just about anything you might need, in most cases without changing your code.
If Internet Explorer can reach a site through the proxy server without prompting the user, your call to the web service should "just work". If the user is prompted by IE, you'll need to put together a way to fill in the proxy server authentication information.
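If you do have to supply proxy authentication information yourself, a sketch with HttpClient (proxy address, credentials, and service URL are placeholders):

```csharp
using System;
using System.Net;
using System.Net.Http;

var handler = new HttpClientHandler
{
    UseProxy = true,
    Proxy = new WebProxy("http://proxy.corp.example:8080")                    // placeholder proxy address
    {
        Credentials = new NetworkCredential("proxyUser", "proxyPassword")     // or CredentialCache.DefaultCredentials
    }
};

using var client = new HttpClient(handler);
var response = await client.GetAsync("https://internal-api.example.com/service"); // placeholder service URL
Console.WriteLine(response.StatusCode);
```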
I've run into quite a few problems getting web services rock solid, but never had a proxy server authentication issue.