Can I create a 'proxy' for an ArcGIS service?

I am in a situation where we have a Feature Service that is not publicly available. This means it cannot be accessed directly from JavaScript (nor would that be desirable, since it contains some sensitive information).
Therefore, what I was thinking is that instead of our Web App connecting directly to the ArcGIS services, it would connect to our 'proxy / middleware / man in the middle' system, which would in turn authenticate against and query ArcGIS. This would also let us restrict access to the sensitive data and enrich the responses with data of ours that lives elsewhere.
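To make the idea concrete, here is roughly what I have in mind (a minimal ASP.NET Core sketch; the URLs, credentials, and field names are placeholders, not our real service):

```csharp
// Rough sketch only: the URLs, credentials, and field names are placeholders.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient();
var app = builder.Build();

app.MapGet("/map/parcels/{parcelId}", async (IHttpClientFactory httpFactory, string parcelId) =>
{
    var http = httpFactory.CreateClient();

    // 1. Authenticate server-side, so the browser never sees ArcGIS credentials.
    var tokenResponse = await http.PostAsync(
        "https://gis.example.org/arcgis/tokens/generateToken",   // placeholder
        new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["username"] = app.Configuration["ArcGis:User"]!,
            ["password"] = app.Configuration["ArcGis:Password"]!,
            ["f"] = "json"
        }));
    var token = System.Text.Json.JsonDocument
        .Parse(await tokenResponse.Content.ReadAsStringAsync())
        .RootElement.GetProperty("token").GetString();

    // 2. Query the private Feature Service, requesting only non-sensitive fields.
    var url = "https://gis.example.org/arcgis/rest/services/Parcels/FeatureServer/0/query" +
              $"?where=PARCEL_ID='{parcelId}'&outFields=PARCEL_ID,ZONING&f=json&token={token}";
    var features = await http.GetStringAsync(url);

    // 3. This is also the place to filter the response or enrich it with our own data.
    return Results.Content(features, "application/json");
});

app.Run();
```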
I have a gut feeling that this is completely wrong, and that the way to do it is to create some publicly available space and do the enriching in ArcGIS itself. From a business-case perspective, that is the least desirable route.
Thanks!

Related

Sharing user login between Blazor Server and ASP.NET Core API

I am building a service-oriented system for personal use (plus a few friends may have limited access as well). The aim is to have a dashboard for controlling my apps running on various machines such as Raspberry Pis (and potentially expanded to a VPS or two in the future).
The architecture itself is pretty simple. For authentication I want to use AWS Cognito. Services would communicate with the Web API (and potentially with each other) using gRPC within a VPN, and the dashboard would be served by Blazor Server (I may move to the Blazor WASM hosted model if I find a need for it). Each of the processes may or may not be on the same machine as any other (depending on the purpose). The Blazor server may or may not run within the VPN (I might want to move it to separate web hosting later).
I created a simple diagram to visualize it:
The problem comes with authentication. I want to have the Blazor server side and the API as separate processes (for now they're going to run on the same machine, but I may want to move them elsewhere eventually). Ideally, authentication should be handled by the API so that it is client-agnostic, and the API can use it to verify whether the logged-in user can perform an action - which by itself is simple.
However, I want the Blazor server to use and validate that token as well, in order to determine what to display to the user. I want to do this with the fewest calls possible - ideally avoiding querying the API for every 'should I display this or not?' decision.
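To make it concrete, the kind of shared setup I'm imagining is roughly the sketch below, where both processes point at the same Cognito issuer and validate tokens locally (the region and user pool id are placeholders):

```csharp
// Rough sketch of the shared idea: both the API project and the Blazor Server
// project validate the same Cognito-issued JWT locally, instead of Blazor asking
// the API "can this user see X?" on every render. Region and pool id are placeholders.
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // The Cognito user pool is the OIDC issuer; signing keys are fetched from
        // its JWKS endpoint, so validation needs no call to my own API.
        options.Authority = "https://cognito-idp.eu-west-1.amazonaws.com/eu-west-1_EXAMPLE";
        options.TokenValidationParameters.ValidateAudience = false; // audience handling differs per Cognito token type
    });
builder.Services.AddAuthorization();

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
// ... MapControllers() in the API project, MapBlazorHub()/MapFallbackToPage() in the Blazor project.
app.Run();
```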
I could easily do it by sacrificing the possibility of moving the API elsewhere and just merging the Blazor Server and API gateway into one project. For my current purposes that would be enough, but it's not an ideal solution, so first I want to look into how I could achieve my original vision.
How could I achieve this (with a minimal number of Blazor Server to API queries)?
I have googled for a solution a lot, but so far I have only found examples that either use Blazor Server and the API as one project, or make client-side calls to the API directly.
Thank you good folks in advance.

Web API + Client Architecture

We're building:
A bunch of services exposed through a web API.
A mobile app and a browser app.
Is it common practice for the apps to talk to their own 'conduit' servers that in turn talk to the API services? We're going to be setting up a reverse proxy - is it enough to hit our APIs directly (instead of setting up a conduit)? This is definitely a general architecture question.
I'm not sure what you mean by a "conduit", but a lot depends on how complete and hardened your APIs are. Do they already handle things like authentication, abuse detection/control, SSL, versioning, and so on?
There are companies that specialize in providing this API "middleware" (Apigee, Amazon API Gateway, Azure API Management, and many others). Your reverse proxy is a start and is probably good enough to get going with (at least it lets you do things like terminate your SSL and lock your API servers down behind a firewall). If you make your API services stateless, you will probably be able to add new layers at a later date without too much pain and complexity.
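As an illustration only - assuming a .NET stack and the Yarp.ReverseProxy package; substitute nginx, HAProxy, or a managed gateway as appropriate - a bare-bones reverse proxy in front of stateless, firewalled API servers can be as small as:

```csharp
// Bare-bones reverse proxy sketch using the Yarp.ReverseProxy package.
// TLS termination, authentication, and rate limiting would be layered on top.
using Yarp.ReverseProxy.Configuration;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddReverseProxy().LoadFromMemory(
    new[]
    {
        new RouteConfig
        {
            RouteId = "api",
            ClusterId = "backend",
            Match = new RouteMatch { Path = "/api/{**catch-all}" }
        }
    },
    new[]
    {
        new ClusterConfig
        {
            ClusterId = "backend",
            Destinations = new Dictionary<string, DestinationConfig>
            {
                // Internal, firewalled API servers (placeholder addresses).
                ["api-1"] = new DestinationConfig { Address = "http://10.0.0.11:5000/" },
                ["api-2"] = new DestinationConfig { Address = "http://10.0.0.12:5000/" }
            }
        }
    });

var app = builder.Build();
app.MapReverseProxy();
app.Run();
```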

How to serve a mobile app

What would be a good way to provide a non-trivial backend for mobile apps, regarding both the protocol used for communication and the actual hosting?
Most backend platforms (such as parse.com) provide a basic API for trivial CRUD data operations, but if the server logic needs to be more complex than that, what would be a good strategy (preferably .NET/C#, secondarily Java, but not JavaScript or any custom scripting approaches)? SOAP web services (for example WCF)?
Regarding hosting, I have looked at Azure and AppHarbor, but can't decide between the two. AppHarbor seems like the only place to co-locate the web server and a MongoDB instance in Northern Europe, as Azure (apparently) only provides MongoDB in a US region. Any suggestions?

Fine-grained authorization for web applications

I have a C# .NET application which serves both the company's internal users and external customers. I need to do fine-grained authorization, i.e. who can access which resource. So I need something like resource-based or attribute-based rather than role-based authorization.
What comes to my mind is to either:
Implement my own authorization mechanism and SQL tables for my .NET application
Use/implement a standard mechanism, such as software that implements XACML (for instance Axiomatics)
The problem with the first method is that it is neither centralized nor standard, so other systems cannot use it for authorization.
The problem with the second approach is that it is potentially slower (due to the extra calls needed for each resource). Also, I am not sure how widely a standard such as XACML is supported by applications on the market, which matters for making future integrations easier.
So, in general, what are good practices for fine-grained authorization in web applications that are supposed to serve both internal users and external customers?
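For reference, the first option in .NET terms would look roughly like the sketch below (using ASP.NET Core's resource-based authorization abstractions; the Document type and the ownership rule are made-up examples):

```csharp
// Sketch of home-grown resource-based authorization (option 1) using ASP.NET Core's
// authorization abstractions. The Document type and the ownership rule are examples.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

public record Document(string OwnerId, bool IsPublic);

public class SameOwnerRequirement : IAuthorizationRequirement { }

public class DocumentAuthorizationHandler : AuthorizationHandler<SameOwnerRequirement, Document>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        SameOwnerRequirement requirement,
        Document resource)
    {
        // The decision looks at the resource itself, not just the caller's roles.
        var userId = context.User.FindFirst("sub")?.Value;
        if (resource.IsPublic || resource.OwnerId == userId)
        {
            context.Succeed(requirement);
        }
        return Task.CompletedTask;
    }
}

// Usage in a controller or service:
//   var result = await _authorizationService.AuthorizeAsync(User, document, new SameOwnerRequirement());
//   if (!result.Succeeded) return Forbid();
```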
I would definitely go for externalized authorization. It doesn't mean it will be slower. It means you have cleanly separated access control from the business logic.
Overview
XACML is a good way to go. The OASIS technical committee behind it is very active, and companies such as Boeing, EMC, the Veterans Administration, Oracle, and Axiomatics are all active members.
The XACML architecture guarantees you can get the performance you want. Since the enforcement point (PEP) and the decision engine (PDP) are loosely coupled, you can choose how they communicate, what protocol they use, whether to batch multiple decisions, and so on. This means you can pick the integration that fits your performance needs.
There is also a standard PDP interface defined in the SAML profile of XACML. That gives you 'future-proof' access control, where you are not locked into any particular vendor's solution.
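To make the PEP/PDP split concrete, the call from the enforcement point to a remote decision point boils down to something like the following sketch (the endpoint and the short attribute ids are placeholders; a real deployment would use the exact wire format of whichever XACML profile you pick):

```csharp
// Illustrative PEP-side call to a remote PDP. The endpoint and short attribute ids
// are placeholders (real XACML attribute ids are URNs); the request/response shape
// only loosely follows the XACML JSON style: subject, action, resource in, Permit/Deny out.
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class PdpClient
{
    private readonly HttpClient _http = new() { BaseAddress = new Uri("https://pdp.example.com/") };

    public async Task<bool> IsPermittedAsync(string userId, string action, string resourceId)
    {
        var request = new
        {
            Request = new
            {
                AccessSubject = new { Attribute = new[] { new { AttributeId = "subject-id",  Value = userId } } },
                Action        = new { Attribute = new[] { new { AttributeId = "action-id",   Value = action } } },
                Resource      = new { Attribute = new[] { new { AttributeId = "resource-id", Value = resourceId } } }
            }
        };

        var content = new StringContent(JsonSerializer.Serialize(request), Encoding.UTF8, "application/json");
        var response = await _http.PostAsync("authorize", content);

        using var body = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return body.RootElement.GetProperty("Response")[0].GetProperty("Decision").GetString() == "Permit";
    }
}
```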
Access control for webapps
You can simply drop in a PEP for .NET web apps by using HTTP filters in ISAPI and ASP.NET. Axiomatics has an off-the-shelf one for that.
Current implementations
If you check Axiomatics's customers page, you'll see they have PayPal, Bell Helicopter, and more. So XACML is indeed a reality, and it can tackle very large deployments (hundreds of millions of users).
Also, DATEV eG, a leading financial services provider, is using Axiomatics's .NET PDP implementation for its services/apps. Since the .NET PDP is embedded in that case, performance is optimal.
Otherwise, you can always choose from off-the-shelf PEPs for .NET that integrate with any PDP - for instance via a SOAP-based XACML authorization service.
High levels of performance with XACML
Last July at the Gartner "Catalyst" conference, Axiomatics announced the release of their latest product, Axiomatics Reverse Query, which helps you tackle the 'billion record challenge'. It targets access control for data sources as well as RIAs. It is a pure XACML solution, so it remains interoperable with other implementations.
As a matter of fact, Kuppinger Cole will host a webinar on the topic very soon: http://www.kuppingercole.com/events/n10058
Check out the Axiomatics ARQ press release too here: http://www.axiomatics.com/latest-news/216-axiomatics-releases-new-reverse-query-authorization-product-a-breakthrough-innovation-for-authorization-services.html
Definitely look for a drop-in authorization module for your ASP.NET application. I'm not just saying that because I implement drop-in auth systems at BiTKOO, but because I have had to work with home-grown auth implementations in the past. Building your own authorization system for a single application really is not a good use of your time or resources unless you intend to make a career out of implementing security systems.
Externalizing the authorization decision from your app is a good idea from an architectural standpoint. Externalizing the authz decision gives you an enormous amount of flexibility to change your access criteria on the fly without having to shut down your web service or reconfigure the web server itself. Decoupling the web front-end from the authz engine allows you to scale each independently according to the load and traffic patterns of your application, and allows you to share the authz engine across multiple apps.
Yes, adding a network call to your web app will add some overhead to your web response compared to having no authorization at all or using a local database on the web server. That shouldn't be a reason not to consider external authorization. Any serious authorization product you consider will provide some sort of caching capability to minimize the number of network calls required per web request or even per user session across multiple web requests.
In BiTKOO's Keystone system, for example, the user attributes can be cached on the web server per user-session, so there's really only one back-end network request involved on the first page request as part of establishing a user login. Subsequent page requests (within the lifetime of the cached credentials, usually 5 minutes or so) can be handled by the web server without needing to hit the authz service again. This scales well in cloud web farms, and is built on XACML standards.
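As an illustration of that pattern (this is not Keystone's actual API; the PDP interface and the 5-minute TTL are placeholders), the caching wrapper is essentially:

```csharp
// Sketch of per-user decision caching in front of an external authorization service.
// IPdpClient and the 5-minute TTL are illustrative, not any specific product's API.
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public interface IPdpClient
{
    Task<bool> IsPermittedAsync(string userId, string action, string resourceId);
}

public class CachingAuthorizer
{
    private readonly IPdpClient _pdp;
    private readonly IMemoryCache _cache;

    public CachingAuthorizer(IPdpClient pdp, IMemoryCache cache)
    {
        _pdp = pdp;
        _cache = cache;
    }

    public Task<bool> IsPermittedAsync(string userId, string action, string resourceId) =>
        // Only the first check per (user, action, resource) within the TTL hits the
        // network; subsequent page requests are answered from the local cache.
        _cache.GetOrCreateAsync($"authz:{userId}:{action}:{resourceId}", entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _pdp.IsPermittedAsync(userId, action, resourceId);
        });
}
```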
I need to do fine-grained authorization like who accesses what resource. So I need something like resource-based or attribute-based rather than a role-based authorization.
Check out https://zanzibar.academy/. Zanzibar is a system built at Google to solve fine-grained authorization at scale.
Use/implement a standard mechanism, like a software that has implemented XACML (for instance Axiomatics). The problem with the second approach is that it is potentially slower (due to extra calls needed for each resource).
Auth0 is working on a solution called FGA (https://fga.dev) that will be optimized for low latency. It's built upon the Zanzibar paper.
Disclaimer: I am employed at Auth0.

Will this WCF setup work?

I'm rather new to the WCF/IIS/MS web stack corner of the world so I'm hoping for some help evaluating my design.
What I need is a system that presents a number of resources as URIs. Each resource is a WCF web service providing a number of read and write operations. I need to provide username/password security for different resources.
The way I'm hoping to make this work is to have IIS handle security using the normal mechanisms it uses for everything else, and then use URL rewriting to remap everything to a single web service that provides the correct resource based on the rewritten query string.
Will this work?
Am I missing something?
Is there a better way to do this?
If you happen to know of a really good tutorial for the bits and pieces (like which file the security settings go in), I would appreciate links.
For now there will be only a handful (2 to 20) of users, so static config files would be preferred for that, as long as it won't cause problems later.
As I said, I hardly know jack in this domain, so I don't really know what I don't know.
A few links I have found (I don't even know yet if I'm looking in the right direction):
Fundamentals of WCF Security (assumes a bit more familiarity than I have)
Improving Web Security: Scenarios and Implementation Guidance for WCF (really long, book length)
Yes, this sounds sane. For authentication you want to use the ASP.NET membership module; it provides a generic security API which can use integrated (Windows user) authentication, web form login, even LiveID, or some custom authentication. In my experience MSDN has proven a good resource; here's a hands-on article.
For the web HTTP binding, WCF provides URI rewriting out of the box using the WebGet attribute.
For SOAP the endpoint URL is the same, so I assume you want a RESTful endpoint. If so, you need Basic auth over HTTPS, not WS-Security.
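For example, a minimal sketch (the contract and URI template are made up) of how the WebGet attribute maps a URI onto a single service operation:

```csharp
// Minimal WCF webHttpBinding sketch: the UriTemplate maps
// GET /resources/{id} onto a single service operation.
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IResourceService
{
    [OperationContract]
    [WebGet(UriTemplate = "resources/{id}", ResponseFormat = WebMessageFormat.Json)]
    ResourceDto GetResource(string id);
}

public class ResourceDto
{
    public string Id { get; set; } = "";
    public string Name { get; set; } = "";
}

public class ResourceService : IResourceService
{
    public ResourceDto GetResource(string id) =>
        new ResourceDto { Id = id, Name = "example" };   // look up the real resource here
}

// Hosted with a WebServiceHost (or webHttpBinding in config),
// with IIS handling authentication in front of it.
```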