How to serve a mobile app - WCF

What would be a good way to provide a non-trivial backend for mobile apps, regarding both the protocol used for communication, and the actual hosting?
Most backend platforms (such as parse.com) provide some basic API for performing trivial CRUD data operations, but if the server logic needs to be more complex than that, what would be a good strategy (preferably .NET/C#, secondarily Java, but not JavaScript or any custom scripting approaches)? SOAP web services (for example WCF)?
Regarding hosting, I have looked at Azure and AppHarbor, but can't decide between the two. AppHarbor seems like the only place to co-locate the web server and a MongoDB instance in Northern Europe, as Azure (apparently) only provides MongoDB in a US region. Any suggestions?
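For reference, here is a minimal sketch of the WCF option the question mentions; the contract and type names are hypothetical and the method body is a placeholder:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical data contract, serialized over SOAP.
[DataContract]
public class OrderStatus
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public string State { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderStatus GetOrderStatus(int orderId);
}

// Hosted in IIS or Azure; mobile clients call it via a generated SOAP proxy.
public class OrderService : IOrderService
{
    public OrderStatus GetOrderStatus(int orderId)
    {
        // Arbitrary server-side logic can run here - not just CRUD.
        return new OrderStatus { OrderId = orderId, State = "Shipped" };
    }
}
```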

Related

What is the difference between an API and Microservice?

I created my REST API with Django, but I don't understand how to convert an API into microservices; I don't understand the real difference between the two.
I see an API as being like a microservice, but I don't know how to convert an entire API into microservices. Do I need to create micro web servers?
Please help - I can't understand microservices, and I need to understand them.
A microservice exposes its interface - what it can do - by means of an API. The API is the list of all the endpoints the microservice responds to when it receives a command/query. The microservice contains the API plus the internal, hidden things it uses to respond to clients' requests.
An API is all that clients see when they look at the microservice, although the microservice is bigger than that. A microservice hides its internal structure, its technology stack, its database type (SQL, NoSQL - it could be anything); a microservice could move from SQL to NoSQL, or from Python to PHP, but keep its API unchanged.
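To make that concrete, here is a minimal sketch (ASP.NET Web API, with hypothetical names): the controller is the public API, and the store behind it can be swapped without clients noticing:

```csharp
using System.Web.Http;

// The API: the only part clients ever see.
public class ProductsController : ApiController
{
    private readonly IProductStore store; // injected via whatever DI you use

    public ProductsController(IProductStore store) { this.store = store; }

    public IHttpActionResult Get(string id) => Ok(store.Find(id));
}

// The hidden internals: swap a SQL-backed store for a NoSQL one (or move the
// whole service to another stack) and the endpoint above stays the same.
public interface IProductStore
{
    Product Find(string id);
}

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
}
```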
API - It's a way of exposing functionality over the web. Imagine you have developed some functionality in .NET, but now you are developing software in a different language. Would you develop the same functionality again? No - you would just expose it via a web service. Web services are not tied to any one operating system or programming language. For example, an application developed in Java can communicate with one developed in C#, Android, etc., and vice versa.
Microservice - Microservices are used to break a complex piece of software into small, individually deployable, testable, loosely coupled sub-modules. Microservices are designed to cope with failures and breakdowns of large applications. Since multiple discrete services communicate together, it may happen that a particular service fails, but the overall larger application remains unaffected by the failure of a single module.
API vs Microservice - Now we have broken our complex software into loosely coupled sub-modules. These sub-modules communicate with each other via an API. Therefore, microservices and APIs solve different problems but work together!
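For example, one sub-module calling another over its API might look like this sketch (the endpoint URL is made up):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// An order service asking a separately deployed inventory service for data.
// The caller only knows the HTTP contract - nothing about the other side's
// language, database, or deployment.
public class InventoryClient
{
    private static readonly HttpClient Http = new HttpClient();

    public Task<string> GetStockLevelAsync(string sku)
    {
        return Http.GetStringAsync($"https://inventory.example.com/api/stock/{sku}");
    }
}
```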
More Details:
The Difference between Web Services and Micro Services
RESTful API vs Microservice
A microservice is an autonomous RESTful service. That means there is just one service on each server. In Spring Boot, when you bootstrap your RESTful service, it gets an instance of Tomcat (its embedded Tomcat) and runs your service on it. So, if you have more than one service on a server, it is not a microservice, because those services are not autonomous.
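The same idea in this document's primary stack, as a sketch: a .NET service can self-host its own HTTP listener with OWIN, which mirrors Spring Boot's embedded Tomcat (one process, one autonomous service):

```csharp
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes(); // controllers are picked up by attribute routes
        app.UseWebApi(config);
    }
}

public class Program
{
    public static void Main()
    {
        // The service carries its own web server, like an embedded Tomcat.
        using (WebApp.Start<Startup>("http://localhost:9000"))
        {
            Console.WriteLine("Service listening on http://localhost:9000");
            Console.ReadLine();
        }
    }
}
```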

Web API + Client Architecture

We're building:
A bunch of services exposed through a web API.
A mobile app and a browser app.
Is it common practice for the apps to talk to their own conduit servers, which end up talking to the API services? We're going to be setting up a reverse proxy - is it enough to hit our APIs directly (instead of setting up a conduit)? This is definitely a general architecture question.
I'm not sure what you mean by a "conduit", but a lot depends on how complete and hardened your APIs are. Do they already handle things like authentication, abuse detection/control, SSL, versioning, etc.?
There are companies that specialize in providing this "middleware" layer for APIs (Apigee, Amazon API Gateway, Azure API Management, and many others). Your reverse proxy is a start, and is probably good enough to get going with (at the least, you can terminate your SSL there and lock down your API servers behind a firewall). If you make your API services stateless, you will probably be able to add new layers at a later date without too much pain and complexity.
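As a sketch of what "stateless" means in practice here: each request carries everything needed to authenticate it, so no session affinity is required and gateway layers can be added later. The header name and key check below are placeholders:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// A Web API message handler that authenticates every request from its own
// headers; no server-side session, so any proxy/gateway can sit in front.
public class ApiKeyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (!request.Headers.TryGetValues("X-Api-Key", out var keys) ||
            !IsValidKey(string.Concat(keys)))
        {
            return request.CreateResponse(HttpStatusCode.Unauthorized);
        }
        return await base.SendAsync(request, cancellationToken);
    }

    // Placeholder check; a real one would validate against a key store.
    private static bool IsValidKey(string key) => key == "demo-key";
}

// Registered once at startup:
// config.MessageHandlers.Add(new ApiKeyHandler());
```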

Azure WebApi vs Azure Mobile Services

I've done a lot of programming of ASP.NET MVC web applications. Now I want to write cross-platform mobile applications with Cordova for the frontend and Azure for the backend.
I am in doubt whether to use Azure Mobile Services (AMS) or WebAPI, because I want the power and freedom of WebAPI but the convenience of AMS, and I don't yet understand concepts such as authentication, push notifications, etc.
My main goal is to focus on the application logic, frontend and backend, with a significant share of that logic in the backend. About this I have some big doubts:
1st: I see good mechanisms in both AMS and WebAPI for external authentication, but not for managing your own authentication. What is the best way to manage your own authentication? Is Azure Active Directory the solution?
2nd: My intention is to create well-defined API methods that return exactly the data needed (JSON), rather than a queryable REST API (OData). Which is best for this, WebAPI or AMS?
3rd: I have experience with SQL Server, its relationships, and Entity Framework, but I wouldn't mind learning and using NoSQL technologies - which is better? (However, I'm not comfortable with not being able to use many-to-many relationships in NoSQL.)
Thank you very much.
There is no general answer to that, so take these as advice.
First, keep in mind that AMS and WebApi are not that far apart. An AMS project IS a WebApi project, with some helpers included to make it comfortable to work with the related services (push notifications, table entities), but you lose a bit of control over your application.
The choice depends on your needs.
Azure Active Directory IS one solution, but there are plenty more, and you can use your preferred framework. AMS has a nicely integrated login for the best-known social networks, and for Azure Active Directory as well, and it is very easy to set up.
I'd suggest AMS. It will be easier to set up and maintain.
AMS is essentially a cut-down WebApi. To make all those services easier for you, it takes some control away; for example, you cannot:
Customize startup of your application
Use a dependency injection framework
Run background tasks
And other stuff like that.
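For contrast, here is a sketch of the startup control a plain WebApi (OWIN) project gives you; the DI wiring shown in the comment is hypothetical:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();

        // Plug in the IoC container of your choice, e.g. Unity's adapter:
        // config.DependencyResolver = new UnityDependencyResolver(container);

        // Kick off a background task at boot - something AMS won't let you do.
        Task.Run(() => PollQueueForever(CancellationToken.None));

        app.UseWebApi(config);
    }

    private static async Task PollQueueForever(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            // e.g. drain a queue, refresh a cache...
            await Task.Delay(TimeSpan.FromMinutes(1), ct);
        }
    }
}
```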
Hope it helps!

Multi-client architecture advice for RavenDB

When catering for multiple .NET client applications - say, web and desktop - and then throwing in an Android app (Java), placing the business logic behind some WCF REST API services can make it easier and quicker to build the applications, as there is no business logic to re-implement client-side for each technology.
(I know there will come a point where the UI has to change to cater for new business logic, but the idea is that the core of the system sits behind an API, not in the client application.)
With RavenDB serving as the storage mechanism, what is the general architectural advice for using RavenDB behind SOA services? Is it just your standard IDocumentStore/IDocumentSession behind the WCF instance, and go from there?
Yes, you can just use it like that.
Note that RavenDB comes with clients for both .NET and Java.
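A minimal sketch of that setup, assuming the RavenDB 3.x .NET client (the type and database names are hypothetical): one DocumentStore per process, one short-lived session per operation:

```csharp
using System;
using System.ServiceModel;
using Raven.Client;
using Raven.Client.Document;

public class Order
{
    public string Id { get; set; }
    public decimal Total { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    Order GetOrder(string id);
}

public class OrderService : IOrderService
{
    // One DocumentStore per application; it is expensive to create.
    private static readonly Lazy<IDocumentStore> Store =
        new Lazy<IDocumentStore>(() => new DocumentStore
        {
            Url = "http://localhost:8080", // assumed local RavenDB server
            DefaultDatabase = "Shop"       // hypothetical database name
        }.Initialize());

    public Order GetOrder(string id)
    {
        // One session per call: cheap, with unit-of-work semantics.
        using (IDocumentSession session = Store.Value.OpenSession())
        {
            return session.Load<Order>(id);
        }
    }
}
```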

Fine-grained authorization for web applications

I have a C# .NET application which serves both the company's internal users and external customers. I need to do fine-grained authorization - who can access which resource - so I need something resource-based or attribute-based rather than role-based authorization.
What comes to my mind is to either:
Implement my own authorization mechanism and SQL tables for my .NET application
Use/implement a standard mechanism, i.e. software that implements XACML (for instance, Axiomatics)
The problem with the first method is that it is neither centralized nor standards-based, so other systems cannot use it for authorization.
The problem with the second approach is that it is potentially slower (due to the extra calls needed for each resource). Also, I am not sure how widely a standard authorization language like XACML is supported by applications on the market, which matters for making future integrations easier.
So, in general what are the good practices for fine-grained authorization for web applications that are supposed to serve both internal users and external customers?
I would definitely go for externalized authorization. It doesn't mean it will be slower. It means you have cleanly separated access control from the business logic.
Overview
XACML is a good way to go. The technical committee (TC) behind it is very active, and companies such as Boeing, EMC, the Veterans Administration, Oracle, and Axiomatics are all active members.
The XACML architecture guarantees you can get the performance you want. Since the enforcement point (PEP) and the decision engine (PDP) are loosely coupled, you can choose how they communicate, what protocol they use, whether to batch multiple decisions, etc. This means you are free to pick the integration that fits your performance needs.
There is also a standard PDP interface, defined in the SAML profile for XACML. That gives you 'future-proof' access control, where you are not locked into any particular vendor's solution.
Access control for webapps
You can simply drop a PEP into .NET web apps by using HTTP filters in ISAPI and ASP.NET. Axiomatics has one off-the-shelf for that.
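A sketch of what such a drop-in PEP could look like as an ASP.NET IHttpModule; PdpClient stands in for whatever PDP integration you use and is stubbed out here:

```csharp
using System.Web;

// The PEP: intercepts every request and asks an external PDP for a decision
// before the application code runs.
public class XacmlPepModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.AuthorizeRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            bool permit = PdpClient.Evaluate(
                subject: ctx.User?.Identity?.Name ?? "anonymous",
                resource: ctx.Request.Path,
                action: ctx.Request.HttpMethod);

            if (!permit)
            {
                ctx.Response.StatusCode = 403;
                ctx.ApplicationInstance.CompleteRequest(); // stop the pipeline
            }
        };
    }

    public void Dispose() { }
}

// Hypothetical stub; a real client would call the PDP (e.g. over SOAP/XACML).
public static class PdpClient
{
    public static bool Evaluate(string subject, string resource, string action)
    {
        return false; // deny-by-default placeholder
    }
}
```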
Current implementations
If you check Axiomatics's customers page, you'll see they have PayPal, Bell Helicopter, and more. So XACML is indeed a reality, and it can tackle very large deployments (hundreds of millions of users).
Also, Datev eG, a leading financial services provider, is using Axiomatics's .NET PDP implementation for its services/apps. Since the .NET PDP is embedded in that case, performance is optimal.
Otherwise, you can always choose from off-the-shelf PEPs for .NET that integrate with any PDP - for instance via a SOAP-based XACML authorization service.
High levels of performance with XACML
Last July, at the Gartner Catalyst conference, Axiomatics announced the release of their latest product, Axiomatics Reverse Query, which helps you tackle the 'billion record challenge'. It targets access control for data sources as well as RIAs. It uses a pure XACML solution, so it remains interoperable with other products.
As a matter of fact, Kuppinger Cole will host a webinar on the topic very soon: http://www.kuppingercole.com/events/n10058
Check out the Axiomatics ARQ press release too here: http://www.axiomatics.com/latest-news/216-axiomatics-releases-new-reverse-query-authorization-product-a-breakthrough-innovation-for-authorization-services.html
Definitely look for a drop-in authorization module for your ASP.NET application. I'm not just saying that because I implement drop-in auth systems at BiTKOO, but because I have had to work with home-grown auth implementations in the past. Building your own authorization system for a single application really is not a good use of your time or resources unless you intend to make a career out of implementing security systems.
Externalizing the authorization decision from your app is a good idea from an architectural standpoint. Externalizing the authz decision gives you an enormous amount of flexibility to change your access criteria on the fly without having to shut down your web service or reconfigure the web server itself. Decoupling the web front-end from the authz engine allows you to scale each independently according to the load and traffic patterns of your application, and allows you to share the authz engine across multiple apps.
Yes, adding a network call to your web app will add some overhead to your web response compared to having no authorization at all or using a local database on the web server. That shouldn't be a reason not to consider external authorization. Any serious authorization product you consider will provide some sort of caching capability to minimize the number of network calls required per web request or even per user session across multiple web requests.
In BiTKOO's Keystone system, for example, the user attributes can be cached on the web server per user-session, so there's really only one back-end network request involved on the first page request as part of establishing a user login. Subsequent page requests (within the lifetime of the cached credentials, usually 5 minutes or so) can be handled by the web server without needing to hit the authz service again. This scales well in cloud web farms, and is built on XACML standards.
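The caching pattern described there can be sketched with a plain MemoryCache; the attribute fetch below is a placeholder for the real back-end call:

```csharp
using System;
using System.Runtime.Caching;

// Cache user attributes so only the first request in a session hits the
// authorization service; later requests are answered from local memory.
public static class UserAttributeCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static UserAttributes Get(string userId)
    {
        if (Cache.Get(userId) is UserAttributes cached)
            return cached;

        var fresh = FetchFromAuthzService(userId);
        Cache.Set(userId, fresh,
            DateTimeOffset.UtcNow.AddMinutes(5)); // ~5-minute lifetime, as above
        return fresh;
    }

    // Placeholder: the real call would go to the PDP / attribute store.
    private static UserAttributes FetchFromAuthzService(string userId) =>
        new UserAttributes { UserId = userId, Roles = new[] { "customer" } };
}

public class UserAttributes
{
    public string UserId { get; set; }
    public string[] Roles { get; set; }
}
```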
I need to do fine-grained authorization like who accesses what resource. So I need something like resource-based or attribute-based rather than a role-based authorization.
Check out this: https://zanzibar.academy/. Zanzibar is a project made at Google to solve fine-grained authorization at scale.
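For flavor: Zanzibar models permissions as relation tuples of the form object#relation@user. A toy sketch of that data model (nothing like Google's actual implementation):

```csharp
using System;
using System.Collections.Generic;

// Toy Zanzibar-style tuple store: "object#relation@user".
public class TupleStore
{
    private readonly HashSet<string> tuples = new HashSet<string>();

    public void Write(string obj, string relation, string user) =>
        tuples.Add($"{obj}#{relation}@{user}");

    public bool Check(string obj, string relation, string user) =>
        tuples.Contains($"{obj}#{relation}@{user}");
}

public static class Demo
{
    public static void Main()
    {
        var store = new TupleStore();
        store.Write("doc:readme", "viewer", "user:anne");
        Console.WriteLine(store.Check("doc:readme", "viewer", "user:anne")); // True
    }
}
```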
Use/implement a standard mechanism, like a software that has implemented XACML (for instance Axiomatics). The problem with the second approach is that it is potentially slower (due to extra calls needed for each resource).
Auth0 is working on a solution called FGA (https://fga.dev) that will be optimized for low latency. It's built upon the Zanzibar paper.
Disclaimer: I am employed at Auth0.