Proper way to build an Azure Web Site to have multiple services - WCF

I need some guidance here on the Azure paradigm. I am building a multi-user data model. I want the core data model available as one set of services (e.g. a contract) on one URL; another set of services for managing the cache, statistics, start, stop, etc. on another URL (e.g. another contract for operator-type functions); and a third site for developer extensions (e.g. non-trivial, user-defined transactions). All three of these services need to share the common data model.
I get the part where an Azure 'Role' is basically a VM. After that, I'm stuck. If I add three WCF Web Service Roles, I get three separate instances that can't communicate. What is the recommended way to construct a solution/Azure Role to handle the case described above?
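To make the intended setup concrete, here is a minimal sketch (all type and endpoint names are hypothetical) of a single WCF Web Role exposing three separate service contracts at their own .svc endpoints while sharing one in-process data model:

```csharp
// Hypothetical sketch: one WCF Web Role hosting three contracts, each exposed at
// its own .svc endpoint (e.g. Core.svc, Operator.svc, Extensions.svc), all sharing
// a single in-process data model.
using System.ServiceModel;

// The shared data model, held once per role instance (placeholder implementation).
public class DataModel
{
    public string GetCustomer(int id) { return "customer-" + id; }
    public void FlushCache() { /* clear caches, reset statistics, ... */ }
}

public static class SharedModel
{
    public static readonly DataModel Instance = new DataModel();
}

[ServiceContract]
public interface ICoreDataService          // core data contract
{
    [OperationContract]
    string GetCustomer(int id);
}

[ServiceContract]
public interface IOperatorService          // operator-type functions
{
    [OperationContract]
    void FlushCache();
}

[ServiceContract]
public interface IExtensionService         // developer extensions
{
    [OperationContract]
    string RunTransaction(string script);
}

// Core.svc
public class CoreDataService : ICoreDataService
{
    public string GetCustomer(int id) { return SharedModel.Instance.GetCustomer(id); }
}

// Operator.svc
public class OperatorService : IOperatorService
{
    public void FlushCache() { SharedModel.Instance.FlushCache(); }
}

// Extensions.svc
public class ExtensionService : IExtensionService
{
    public string RunTransaction(string script) { return "executed: " + script; }
}
```

Because all three contracts live in the same role, they share the same in-memory model within each role instance; scaling out to multiple instances would mean moving that shared state to external storage (tables, SQL, or a cache).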

Related

How do you name API paths of the same microservice but w/ different consumers

Context
Let's imagine a simple microservices architecture (e.g. 2-3 microservices). The microservices are domain-based, an API gateway is in place, and everything is as it should be. At the same time, the microservices' APIs are consumed by public mobile applications, an admin UI, and other services for S2S communication; hence, we have three possible API consumers. Depending on the consumer, the response DTOs are different, but the business process might be the same (e.g. the response for the GET /users endpoint has different DTOs for a consumer application and the admin UI, but technically the data is taken from the same DB).
Question
How do you segment APIs in that case? Do you use namespaces like external, internal, etc.?
Also, feel free to share your experience of how you segment APIs.
Thanks in advance!
From my point of view, the APIs should be different depending on the type of consumer that is going to use them.
For example, in your use case, the API intended to provide simple user information shouldn't be the same one used by an administrator. You should define two different APIs in this case, with different paths like internal/users and external/users as you said, and internally these two endpoints can share the same logic.
This separation is useful not only for returning different DTOs from each endpoint, but also for defining different security (authentication/authorization) mechanisms for each API, because those requirements will likely differ between an admin API and a general user one.
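As an illustration of this split, here is a rough sketch assuming an ASP.NET Core stack (the question doesn't specify one; all names are placeholders). Both controllers reuse the same underlying service but expose different paths, DTOs, and authorization policies:

```csharp
// Two controllers share the same underlying service but return different DTOs
// and carry different authorization requirements.
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public record PublicUserDto(string Id, string DisplayName);
public record AdminUserDto(string Id, string DisplayName, string Email, bool IsLocked);

public interface IUserService
{
    AdminUserDto Get(string id); // single source of truth for user data
}

[ApiController]
[Route("external/users")]
public class ExternalUsersController : ControllerBase
{
    private readonly IUserService _users;
    public ExternalUsersController(IUserService users) => _users = users;

    [HttpGet("{id}")]
    public PublicUserDto Get(string id)
    {
        var u = _users.Get(id);
        return new PublicUserDto(u.Id, u.DisplayName); // trimmed view for public consumers
    }
}

[ApiController]
[Route("internal/users")]
[Authorize(Policy = "AdminOnly")] // stricter auth for the internal surface
public class InternalUsersController : ControllerBase
{
    private readonly IUserService _users;
    public InternalUsersController(IUserService users) => _users = users;

    [HttpGet("{id}")]
    public AdminUserDto Get(string id) => _users.Get(id); // full view for admins
}
```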
It depends a bit on the philosophy you want to adopt.
The one suggested by @JArgente is good, in that you'd get good separation, and the role of each is (or at least should be) very clear.
The other approach is layering, which (for the OO programmers out there) is a bit like developing overloads for a method. It assumes that the data required by the derived APIs is provided by the base API. So:
Develop a base API that provides all the data this API family needs to provide. This API might be the one that internal users use (e.g. Admin User), and it could require authentication.
Develop a public-facing API that consumes the base API.
Each API has a separate API spec; depending on how you do this, you can leverage inheritance at the spec level.
Each API also has an actual endpoint which triggers some sort of processing - e.g. logic within the API Gateway itself, or logic handled within a downstream component like a microservice.
The public-facing one can be anonymous, as long as something (e.g. the API Gateway) can make an authenticated call against the base API, using some kind of 'service account'.
The advantage here is that you still get good separation between the different APIs and their consumers, but you also get the advantages of inheritance, so that code duplication is reduced (testing effort isn't so diffuse, etc.).
This approach also allows you to run the endpoints on the same API Gateway, or deployed on separate ones (internal vs external).
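A minimal sketch of the layered approach, assuming an ASP.NET Core public API in front of the base API (all URLs, client names, and DTOs are placeholders): the public endpoint accepts anonymous calls, calls the base (admin) API with a pre-configured service-account client, and trims the response down to the public shape.

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record BaseUser(string Id, string DisplayName, string Email);
public record PublicUser(string Id, string DisplayName);

[ApiController]
[Route("public/users")]
public class PublicUsersController : ControllerBase
{
    private readonly HttpClient _baseApi;

    public PublicUsersController(IHttpClientFactory factory)
    {
        // "base-api" is assumed to be a named client configured elsewhere with the base
        // API's URL and a service-account bearer token (e.g. via AddHttpClient in Program.cs).
        _baseApi = factory.CreateClient("base-api");
    }

    // Anonymous public endpoint layered over the (authenticated) base API.
    [HttpGet("{id}")]
    public async Task<PublicUser?> Get(string id)
    {
        var user = await _baseApi.GetFromJsonAsync<BaseUser>($"internal/users/{id}");
        return user is null ? null : new PublicUser(user.Id, user.DisplayName);
    }
}
```

The point of routing the public call through the base API rather than straight at the data store is that validation, authorization, and shaping logic live in one place, and the public API only ever narrows what the base API already returns.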

Reuse microservices across different project

We developed a monolithic API to be used as a SaaS.
In the company we also develop custom builds for some customers.
Some of our customers are asking for some features that are already implemented in the monolithic application.
We are thinking about splitting our API into microservices, but our major questions are the following:
Can microservices be reused across different projects?
If we do split, do we create a microservice that everybody uses, or do we create an instance per custom build?
E.g.:
project A uses "User" and "Project", so we deploy 2 microservices
project B uses "User", "Project" and "Store", so we deploy 3 microservices
total number of microservices deployed: 5
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
Or do we stick with one instance per microservice that everybody uses, and we specify the project source?
We are using C# with GraphQL.
We also thought about creating a NuGet package for each component, so each package would contain:
Exposed GraphQL queries / mutations
Its own database
Its own logic
E.g.:
- Api A installs the "User" and "Project" packages
- 3 databases are instantiated: "Api.A", "Api.A.User", "Api.A.Project"
- Api B installs the "User", "Project" and "Store" packages
- 4 databases are instantiated: "Api.B", "Api.B.User", "Api.B.Project" and "Api.B.Store"
But does it make sense to do that ?
In my mind it could be very similar to Hangfire https://www.hangfire.io/
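As a rough illustration of the package idea (all names are hypothetical, and the GraphQL wiring depends on the library you use, e.g. HotChocolate or GraphQL.NET), a "User" package could own its DbContext and expose a single registration extension that each host API calls with its own connection string:

```csharp
// --- inside the hypothetical "Users" NuGet package ---
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public class User { public int Id { get; set; } public string Name { get; set; } = ""; }

public class UsersDbContext : DbContext
{
    public UsersDbContext(DbContextOptions<UsersDbContext> options) : base(options) { }
    public DbSet<User> Users => Set<User>();
}

public class UserQueries
{
    private readonly UsersDbContext _db;
    public UserQueries(UsersDbContext db) { _db = db; }

    // Registered as a GraphQL query type by the host's schema configuration
    // (exact wiring depends on the GraphQL library in use).
    public IQueryable<User> GetUsers() => _db.Users;
}

public static class UsersModule
{
    public static IServiceCollection AddUsersModule(this IServiceCollection services, string connectionString)
    {
        // "Api.A.User"-style database: each host API passes its own connection string.
        services.AddDbContext<UsersDbContext>(o => o.UseSqlServer(connectionString));
        services.AddScoped<UserQueries>();
        return services;
    }
}

// --- inside host "Api A" (Program.cs) ---
// builder.Services.AddUsersModule(builder.Configuration.GetConnectionString("Api.A.User"));
// builder.Services.AddProjectsModule(...); // another package, same pattern
```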
Note that we are currently using AWS Serverless to host our applications.
An important point is that we are a small team of 2-4.
We are very open-minded, so any suggestion is good to hear.
Thank you !
First of all, I would like to say that there is no single right way here; I am providing my point of view based on the way we have already done things, hoping it will guide you toward a solution best suited to your requirements.
So, to understand your dilemma: you have a base vanilla product, which is an API SaaS, and there are customized deployments for some customers as well. But as you are having to build a custom deployment for each customer, you are noticing a common pattern, wherein a lot of the functionality is repeated across the SaaS for each customer.
Now, assuming I have the requirement correct, I would say microservices will provide definite benefits in your case in terms of scaling and customer-specific customization, which will be managed by independent teams.
But a lot of this depends on how your business logic is structured and how big and vast your customization is. Some of the questions that should drive your solution are:
Can you store customer-specific data in a central data store, or does it have to live at each customer's end? How are your databases going to be structured, and how many of them will there be?
How big are the customizations? Are they cosmetic, or do they affect workflows?
How much cross-communication do you expect across the various services like User, Store, and Project, and is there any communication across A.User - B.User or A.Project - B.Store, etc.?
Now moving to some of the important things you might want to consider post answering the above questions.
Consideration 1. If the data stores can be allowed to live in a single central place, you can go ahead with a single cluster where all your microservices are deployed. But looking at the data provided, I assume you have multiple databases per customer, and I would recommend keeping them separate and not introducing any coupling between them. Thus you may end up with one microservice instance per customer, which talks only to that customer's database.
Consideration 2. The customization, as far as the norm goes, should be separated from the service itself, and every service should take configuration as an input, which then drives the service's behavior. Again, depending on how big your customization is, there can be a limit to this configuration, and in those cases I would recommend creating a new service with the customizations built in.
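A small sketch of what Consideration 2 could look like in C# (the option names are made up): the service stays generic, and customer-specific behavior is driven by injected configuration.

```csharp
using Microsoft.Extensions.Options;

// Per-customer settings, bound from configuration at startup.
public class CustomerOptions
{
    public bool RequireApprovalWorkflow { get; set; }    // workflow toggle
    public int MaxProjectsPerUser { get; set; } = 10;     // limit customization
    public string Theme { get; set; } = "default";        // cosmetic customization
}

public class ProjectService
{
    private readonly CustomerOptions _options;
    public ProjectService(IOptions<CustomerOptions> options) => _options = options.Value;

    // Behavior depends on configuration, not on a customer-specific code branch.
    public bool CanCreateProject(int currentCount) => currentCount < _options.MaxProjectsPerUser;
}

// In the host (one deployment per customer, or per-tenant configuration sections):
// builder.Services.Configure<CustomerOptions>(builder.Configuration.GetSection("Customer"));
// builder.Services.AddScoped<ProjectService>();
```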
Consideration 3. This is a major factor for deciding the number of microservices you may have, but the boundary of each service should be defined by the domain, for example a User service, a Store service, and a Project service. These are the vanilla services that interact with each other to produce a valid business scenario, and each customer is served by specialized instances of these services.
OK, now that this is done, let's go over your primary questions...
Can microservices be reused across different projects?
-- Yes, you can, but again it depends on how you have designed the business workflow and configuration injection.
If we do split, do we create a microservice that everybody uses or do we create an instance per custom build?
-- Yes, an instance per custom build would be the ideal scenario, enabling separation of concerns across projects, as we do not want to mix data boundaries and client-specific sensitive configurations. That said, there might be cases where a single shared microservice is what is demanded, but that should be done with caution.
If we create an instance of each microservice per custom build, won't it be too hard to manage the communication between all the microservices within the same custom build?
-- Communication across microservices is an important factor and is more often than not unavoidable. So, considering you will require some form of cross-microservice communication, you can look at an enterprise service bus or direct API communication, based on your requirements. But in my opinion this is a well-understood problem.
Or do we stick with one instance per microservice that everybody uses and we specify the project source?
-- I would not recommend this, as the approach stated in your question of a module for database injection doesn't sound like a good idea to me. It will lead to a highly coupled system design, and it might also mean that if one service fails, all your customer sites go down. You surely wouldn't want that!

Different backend endpoints in APIs depending on Products in Azure API Management

I'm an absolute newbie in Azure API Management and I have a doubt regarding how to manage Products and APIs.
Let's imagine this scenario:
I create 3 different Products: one representing my Development environment (DEV), the second one representing my Preproduction environment (PRE), and the last one representing my Production environment (PRO).
I create several APIs which I want to publish in my DEV environment and later promote to the others. So I need every API in every different Product to point to a different backend service, as my backend services are different in every environment.
For example:
I have 3 different versions of my backend service: ServiceDEV, ServicePRE and ServicePRO. As I develop my API, I use as backend service the one named ServiceDEV, and so my API is assigned to the Product DEV. Later I want to keep this DEV version of my API, but I also want to "deploy" that API in the Product PRE to make it act as a façade for ServicePRE, and the same would happen when promoting it to PRO.
The problem with this approach is that I need to clone the APIs and change their settings to make them point to the correct backend endpoint every time I want to promote one of them from one environment to another, thus losing all the versioning for that API, as the cloning operation just clones the current version of the API.
I don't know if policies would meet my needs in this subject.
I hope you get what I'm trying to mean...
How can I manage this situation?
Am I focusing this subject in a wrong way?
Any idea about how to overcome this?
Thank you!
If you follow this approach then you could indeed use policies to manage different backends for different products. You could create the APIs without specifying a backend service URL at all, and later use the set-backend-service policy at the product level to direct calls to the proper endpoint.
One limiting factor of this approach is that whatever changes you make to an API in the dev environment (think changing the signature of an operation, or a policy) will be immediately visible in the other environments as well, since it is a single API in all of them. If this is an issue, then consider having duplicate (triplicate) APIs - one per environment - and later move their configuration across via Azure API calls.
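For illustration, a product-scope policy for the DEV product might look like the following (the backend URL is a placeholder; the PRE and PRO products would carry the same policy with their own base-url):

```xml
<!-- Example product-scope policy for the DEV product. -->
<policies>
    <inbound>
        <base />
        <set-backend-service base-url="https://service-dev.example.com/api" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```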

Application Insights strategies for web api serving multiple clients

We have a back end API, running ASP.Net Core, with two front ends: A SPA web site (Vuejs) and a progressive web page (for mobile users). The front ends are basically only client code and all services are on different domains. We don't use cookies as authentication uses bearer tokens.
We've been playing with Application Insights for monitoring, but as the documentation is not very descriptive for our situations, I would like to get some more inputs for what is the best strategy and possibilities for:
Tracking users and metrics without cookies, from e.g. a button click in the applications through the server call, the Entity Framework/SQL query (I see that this is currently not supported; see "How to enable dependency tracking with Application Insights in an ASP.NET Core project"), data processing, and presentation of the result on the client.
Separating calls from mobile and standard web in an easy manner in Application Insights queries. Any way to show this in the standard charts that show up initially would be beneficial.
Making sure that our strategy will also fit in situations where other external clients will access the API, and we should be able to identify these easily, and see how much load they are creating for the system.
Doing all of the above with the least amount of code.
This might be worthy of several independent questions if you want specifics on any of them. (And generally your last bullet is always implied, isn't it? :))
What have you tried so far? Most of the "best way for you" kinds of things are going to be opinions, though.
For general answers:
re: tracking users...
If you're already doing user info/auth for other purposes, you'd just set the various context.user.* fields with the info you have on the incoming request's telemetry context. All other telemetry that occurs using that same telemetry context would then inherit whatever user info you already have.
re: separating calls from mobile and standard...
If you're already doing this as different services/domains, and you are already using the same instrumentation key for both places, then the domain/host info of page views or requests is already there; you can filter/group on this in the portal or write custom queries in the analytics portal to analyze it that way. If you know which site it is regardless of the host, you could add that as a custom property in the telemetry context; you could also do that to avoid dealing with host info.
re: external callers via an api
Similarly, if you're already exposing an API and using auth, you should (ideally) already know who the inbound callers are, and you can set that info in custom properties as well.
In general, custom properties (string:string key-value pairs) and custom metrics (string:double key-value pairs) are your friends. You can set them on contexts so all the events generated in that context inherit the same properties, or you can explicitly set them on an individual TrackEvent (or any of the other Track* calls) to send specific properties/metrics with any single event.
You can also use telemetry initializers to augment or filter any telemetry that's being generated automatically (like requests or dependencies on the server side, or page views and AJAX dependencies on the client side).
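As a sketch of those last two points (the header name, claim name, and registration details are assumptions to adapt to your setup), a server-side telemetry initializer can stamp every item with the authenticated user id taken from the bearer token plus a custom property identifying the client type, so mobile, web, and external callers can be separated in queries and charts:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.AspNetCore.Http;

public class ClientTypeTelemetryInitializer : ITelemetryInitializer
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public ClientTypeTelemetryInitializer(IHttpContextAccessor httpContextAccessor)
        => _httpContextAccessor = httpContextAccessor;

    public void Initialize(ITelemetry telemetry)
    {
        var http = _httpContextAccessor.HttpContext;
        if (http == null) return;

        // User tracking without cookies: take the subject claim from the bearer token.
        var userId = http.User?.FindFirst("sub")?.Value;
        if (!string.IsNullOrEmpty(userId))
            telemetry.Context.User.AuthenticatedUserId = userId;

        // Custom property: a header the SPA/PWA sends (hypothetical header name).
        if (telemetry is ISupportProperties props &&
            http.Request.Headers.TryGetValue("X-Client-Type", out var clientType))
        {
            props.Properties["ClientType"] = clientType.ToString();
        }
    }
}

// Registration in Program.cs / Startup.cs:
// services.AddApplicationInsightsTelemetry();
// services.AddHttpContextAccessor();
// services.AddSingleton<ITelemetryInitializer, ClientTypeTelemetryInitializer>();
```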

Is OData intended for use within Government and Financial environments? What security precautions do I need?

At first brush, OData seems like it will only appeal to "open" databases, and would never be used in environments where security is needed, especially with financial or government clients.
Is this the correct perspective to have with the current version of OData/WCF? If not, can you share whatever I would need to change that perspective?
Update
Examples of current concerns include:
Increased possibility of SQL Injection
Additional Data Validation (complicating business logic)
Unauthorized Access to data
Increased ability to do a "raw dump" of data
by this I mean it is easier to use OData to get to HR data than it is to screen-scrape a traditional ASP.NET page
Update 2
Is it also possible for me to enforce business rules? For example a properly formatted SSN, Phone, or Zip. How about ensuring all fields are filled in?
OData is just a way to expose structured data through an open API. It does not require any particular form of security; it's possible to have fully open datasets (like a wiki database) or world-readable-but-privately-writeable ones (such as a database of votes by members of Congress, so anyone can read it but only you can update it). It also supports more complex security structures (such as a video rental store allowing customers to query only their own history).
Regarding your specific concerns:
SQL injection is simply not possible if you're using ADO.NET Data Services as your OData server. The incoming OData request is parsed and then passed to an IQueryable, which properly escapes all values.
The business layer / data layer validation remains the same. OData just provides an API for the data layer (or the business layer, if it looks database-ish).
Unauthorized access to data isn't possible unless you allow it. The default for ADO.NET Data Services is to not allow any access (even read-only access), which forces you to explicitly authorize all access (see the sketch after this list).
The "raw dump" scenario is exactly why OData is so useful! It's a protocol that allows efficient querying of data sources over the web, instead of depending on fragile screen-scraping "solutions". If you don't want someone to get the information, don't publish it.
Right now (to my knowledge), ADO.NET Data Services is the only OData provider available, and it's secure by default. I suppose that someone else could write an OData provider that wasn't secure by default or that allowed SQL injection, but that would be foolish.
Also, remember that OData is completely divorced from the concept of authentication. It's up to you to use whatever authentication makes sense for your API. There's a great recent series of blog posts from the WCF team that addresses how OData works with various forms of authentication.
What's your business case for using OData? OData primarily exists to expose your data in a platform-agnostic manner... so that .NET, Java, PHP, Python, REST, etc. clients can all access your data. Is that your use case?
Or are you trying to expose your data via a service layer (kind of an SOA approach) so that your clients (which you control) are better decoupled from your data sources? In that case, OData may not be the right solution. I looked at OData as part of a data service layer and decided it is too slow. I'm now looking at DevForce, which implements service-based access for Entity Framework models (via their BOS service)... full CRUD operations, including LINQ to a service-hosted model.
Security to your desired level is possible either via OData or via DevForce. Pick the correct data-remoting solution, then research the correct security implementation.
Sure, you can use it in a government solution. OData is just a way of accessing data; it has nothing to do with making the information secure. You have to implement the security at the transport level (SSL) rather than at the application level (providing a login and password to the application).
There are many ways to go about this. One example: if you are using SSL, you can force your client to provide a client certificate and have that do the authentication. Once the person is authenticated, you can use your application to limit what they can see (maybe they can only see their own client information, so all queries automatically limit them to seeing that).
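As a sketch of that last idea (entity and property names are placeholders), a query interceptor can automatically restrict every query over an entity set to rows belonging to the authenticated caller, however that caller was authenticated:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Services;
using System.Linq;
using System.Linq.Expressions;

public class Order { public int Id { get; set; } public string CustomerId { get; set; } }

// Placeholder context; in practice an Entity Framework context exposing Orders.
public class StoreEntities
{
    public IQueryable<Order> Orders { get { return new List<Order>().AsQueryable(); } }
}

public class StoreDataService : DataService<StoreEntities>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
    }

    // Every query against Orders is automatically combined with this predicate.
    [QueryInterceptor("Orders")]
    public Expression<Func<Order, bool>> FilterOrders()
    {
        // The identity established by the host, e.g. mapped from the client certificate.
        string caller = System.Threading.Thread.CurrentPrincipal != null
            ? System.Threading.Thread.CurrentPrincipal.Identity.Name
            : "";
        return order => order.CustomerId == caller;
    }
}
```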