Sending central config to microservices in .net core - asp.net-core

I am researching how to have a centralized data store for a multi-tenant application. This is a microservice application.
For example, each tenant will have different values for some keys. Some tenants will have their own unique keys, but let's assume they are all the same for now.
I have one microservice responsible for CRUD of the tenant configuration.
I need that configuration to be pushed to all of the other microservices so that each sub-service can have its own copy of the configuration and there is no reliance on the configuration service being up.
To me, this feels like it should be infrastructure rather than custom C# code, but I am new to the microservices scene, so I'm wondering if there is a known approach.
I considered using MassTransit/messages, but then wouldn't every microservice need a duplicated POCO/contract class? Also, if a new key is required, every microservice would need to be updated and redeployed, which defeats the purpose of microservices.
In short, how can I have a config micro service sync tenant-specific data to multiple other microservices?
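One option that avoids the duplicated-POCO concern is to publish the tenant configuration as a loosely-typed message, e.g. a dictionary of key/value pairs, so that adding a new key only affects the services that actually read it. Below is a minimal sketch assuming MassTransit; the TenantConfigurationChanged contract and ITenantConfigCache store are illustrative names, not part of the original question:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical contract: a dictionary payload means a new key does not force
// every consumer to be rebuilt, only the services that actually read it.
public record TenantConfigurationChanged
{
    public Guid TenantId { get; init; }
    public Dictionary<string, string> Settings { get; init; } = new();
}

// Configuration service: publish after every CRUD change.
public class TenantConfigPublisher
{
    private readonly IPublishEndpoint _publish;
    public TenantConfigPublisher(IPublishEndpoint publish) => _publish = publish;

    public Task PublishAsync(Guid tenantId, Dictionary<string, string> settings) =>
        _publish.Publish(new TenantConfigurationChanged { TenantId = tenantId, Settings = settings });
}

// Each downstream service: keep a local copy so reads never depend on the
// configuration service being up. ITenantConfigCache is a made-up local store
// (file, Redis, a DB table, etc.).
public class TenantConfigConsumer : IConsumer<TenantConfigurationChanged>
{
    private readonly ITenantConfigCache _cache;
    public TenantConfigConsumer(ITenantConfigCache cache) => _cache = cache;

    public Task Consume(ConsumeContext<TenantConfigurationChanged> context)
    {
        _cache.Save(context.Message.TenantId, context.Message.Settings);
        return Task.CompletedTask;
    }
}

public interface ITenantConfigCache
{
    void Save(Guid tenantId, IReadOnlyDictionary<string, string> settings);
}
```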

Related

How to provide credentials to a service which needs to use a third-party service

I need some advice about architectural design or best-practice approaches.
I have a service that needs some credentials for some third-party services.
MyService is used by a WebApp which currently keeps these credentials encrypted in a DB.
The WebApp and MyService are going to communicate over a message queue (RabbitMQ).
How can I provide these credentials to MyService from the WebApp? Or should I completely change the design, and if so, how?
Thanks in Advance
KR
Timur
This is a complicated area, and different people have different ideas about how to do this; the problem with your design is that an attacker who can sniff the traffic between your web app and your services can get access to your keys.
You also have tight coupling between your apps and your services, as well as all the entertainment of managing credentials between dev, qa and prod environments.
Many hosting strategies include a "key management server" for this purpose - AWS has https://aws.amazon.com/kms/, for instance. I'd suggest reading up on their use cases.
Another popular solution is to store the keys in environment variables, and manage them as part of your build/deploy pipelines.
Finally, some frameworks (e.g. Ruby on Rails) store these details in a credentials file and have workflows for managing them outside the source control process.
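As a small illustration of the environment-variable approach in .NET (the variable name and configuration key below are made up for the example), the default ASP.NET Core configuration builder already reads environment variables, so the secret never has to live in source control or travel over the queue:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Environment variables are a default configuration source, so setting
// THIRDPARTY__APIKEY (a made-up name) in the deploy pipeline surfaces it here.
var apiKey = builder.Configuration["ThirdParty:ApiKey"];

// Reading it directly also works:
var rawKey = Environment.GetEnvironmentVariable("THIRDPARTY__APIKEY");
```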

Kafka + API service Architecture

I'm developing a web application using Express.js and wanted to leverage the latest technology and architecture, i.e. Kafka, microservices, etc. The frontend is React and calls the backend microservices to retrieve data.
My current architecture consists of multiple services serving as REST API endpoints in the backend, such as a user service, account service, company service, etc.
All these services work fine, but now that I'm introducing Kafka into the mix, I need to publish a 'new user' event when a client registers for an account: the user service publishes this event, but the accounts service now needs to consume it.
Should I create a new subscriber service to consume this event individually, connecting to the same DB as the account service (though doesn't this defeat the purpose of the one-database-per-service microservice architecture)? Or should the accounts service that is acting as a REST API endpoint also consume the Kafka event (doesn't this also complicate things when there are 20+ microservices, spending time checking which service is consuming which event)?
I'd like to know what the best approach is with this kind of situation.
In general, the microservices will have REST APIs for providing any business/CRUD capabilities, and the Kafka broker will mostly be used for achieving eventual consistency and for triggering any actions (by dedicated Kafka consumers) asynchronously.
Now to your particular question -
Should I create a new subscriber service to consume this event individually, connecting to the same DB as the account service (though doesn't this defeat the purpose of the one-database-per-service microservice architecture)?
The microservices will have their own data stores, which may need to be kept consistent/in sync with the data stores belonging to other microservices. You can create dedicated Kafka topics for the relevant events; for example, "User_Resource" could be a Kafka topic where you publish all the events (CRUD) related to the User resource. Other microservices can subscribe to these topics, and the consumers will have logic to handle the events (update the account service database, trigger notifications to other downstream systems, etc.). This also creates a clean separation between CRUD and business services.
Or should the accounts service that is acting as a REST API endpoint also consume the Kafka event (doesn't this also complicate things when there are 20+ microservices, spending time checking which service is consuming which event)?
A service which exposes a REST endpoint can also act as a Kafka producer/consumer. If your application is built using Spring Boot and the Spring Cloud framework, you can use spring-cloud-stream to handle Kafka interactions in the simplest way. The services need not be concerned with the state of other services, as they are supposed to be independent.
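For completeness, here is a rough sketch of such a consumer using the Confluent.Kafka .NET client instead of spring-cloud-stream; the broker address, group id, and handler are assumptions for illustration:

```csharp
using System;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",   // assumed broker address
    GroupId = "accounts-service",          // each consuming service uses its own group
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("User_Resource");       // topic name taken from the answer above

while (true)
{
    var result = consumer.Consume();

    // Apply the event to the accounts service's own database here, keeping it
    // eventually consistent with the user service's store.
    Console.WriteLine($"Received {result.Message.Key}: {result.Message.Value}");
}
```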

Using RabbitMQ for communication in a microservice architecture, but should I create an API Gateway on top?

I basically have a smallish piece of software that uses a microservice architecture. I am currently using RabbitMQ for the communication between the UI and the services, and that works great.
However, I am thinking about creating a new microservice, an API Gateway, that basically takes the RabbitMQ logic from the UI and encapsulates it in a service, which would become the entry point to all the other services.
The benefit is that I would encapsulate the logic that gives access to the services and would also be able to add authentication in the API Gateway.
However, I would need to use HTTP requests to interact with the API, as I am moving the messaging logic out of the UI. Could there be any major drawbacks in this approach?
I have been able to find examples about RabbitMQ and examples about API Gateways, but never the two together; I might just be overthinking it a bit.
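For what it's worth, a minimal sketch of what such a gateway could look like; the route, queue name, and the way RabbitMQ is wired in are assumptions for illustration, not taken from the question:

```csharp
using System.Text;
using System.Text.Json;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using RabbitMQ.Client;

[ApiController]
[Route("api/orders")]
public class OrdersGatewayController : ControllerBase
{
    private readonly IConnection _connection; // registered as a singleton elsewhere

    public OrdersGatewayController(IConnection connection) => _connection = connection;

    [HttpPost]
    [Authorize] // authentication now lives in the gateway instead of the UI
    public IActionResult CreateOrder([FromBody] JsonElement order)
    {
        using var channel = _connection.CreateModel();
        var body = Encoding.UTF8.GetBytes(order.GetRawText());

        // Forward the request onto the existing messaging infrastructure;
        // "orders.create" is a made-up queue name for the example.
        channel.BasicPublish(exchange: "", routingKey: "orders.create",
                             basicProperties: null, body: body);

        return Accepted();
    }
}
```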

What is the difference between an API and Microservice?

I created my REST API with Django, but I don't understand how to convert an API to microservices; I don't understand the real difference between them.
I see an API as being like a microservice, but I don't know how to convert an entire API into microservices. Do I need to create micro web servers?
Please, I can't understand microservices, and I need to understand this.
A microservice exposes its interface, what it can do, by means of an API. The API is the list of all endpoints that a microservice responds to when it receives a command/query. The microservice contains the API and other internal, hidden things that it uses to respond to clients' requests.
An API is all that the clients see when they look at the microservice, although the microservice is bigger than that. A microservice hides its internal structure, its technology stack, its database type (SQL, NoSQL - it could be anything); a microservice could move from SQL to NoSQL, from Python to PHP, but keep its API unchanged.
API - It is a way of exposing functionality over the web. Imagine you have developed some functionality in .NET, but now you are developing some software in a different language. Would you develop the same functionality again? No. So just expose it via a web service. Web services are not tied to any one operating system or programming language. For example, an application developed in Java can communicate with one developed in C#, Android, etc., and vice versa.
Microservice - They are used to break a complex piece of software into small, individually deployable, testable, loosely coupled sub-modules. Microservices are designed to cope with failure and breakdowns of large applications. Since multiple services are communicating together, it may happen that a particular service fails, but the overall larger application remains unaffected by the failure of a single module.
API vs Microservice - Now that we have broken our complex software into loosely coupled sub-modules, these sub-modules communicate with each other via an API. Therefore, microservices and an API solve different problems but work together!
More Details:
The Difference between Web Services and Micro Services
RESTful API vs Microservice
A microservice is an autonomous RESTful service. This means there is just one service on each server. In Spring Boot, when you bootstrap your RESTful service, it gets an instance of Tomcat (its embedded Tomcat) and runs your service on it. So, if you have more than one service on a server, it is not a microservice, because these services are not autonomous.

azure architecture - handling security

Planning to migrate our existing application to Azure.
Our existing architecture with security flow is as follows
ASP.NET MVC 3.0 UI layer that takes the username and password from the user.
We are planning to migrate the UI layer onto a compute cloud; it will be accessible at, say, uilayerdomainname.com, which would have an SSL cert.
WCF REST web services layer that, amongst other things, does authentication. This is currently on, say, servicename.cloudapp.net. (We could map it to servicelayername.com and get an SSL cert for that domain name as well.)
SQL Azure database
The UI layer sends the credentials to the service layer which authenticates it against the SQL azure database.
Question
Both the WCF compute cloud and the UI layer are in the same region in Azure. Would the communication between these two be prone to man-in-the-middle attacks? Does my WCF compute cloud need SSL as well? We do have two domain names with SSL certs, so we could just map the services to one.
Is there any way I can restrict traffic between the UI layer and the WCF compute cloud - allow only the UI layer to access the services layer?
Would the performance be better if I publish both the WCF services and the UI layer on the same instance? It sort of shoots down the nice layered architecture, but if it improves performance I could go with it. We don't want to jump through too many hoops to adapt the app to Azure, lest it becomes difficult to migrate out of it.
If you host your services in a Worker Role, then they can be available only to your Web Role. You can also host it elsewhere and monitor requests in code. Azure Roles in the same deployment can communicate with one another in a very specific way that is not available outside of the deployment.
In Azure deployments, you need to very specifically define your public endpoint because the roles are hosted behind a load-balancer. If you host your WCF service from within a worker-role it will not be accessible publicly.
Hope this helped
If you configure the WCF service and UI layer to only communicate through internal endpoints then the communication is private. There is no need to purchase or configure an SSL certificate for the WCF service unless it is made public.
Further, the only traffic between these internal endpoints will be between your instances -- so, the traffic is already restricted between your UI layer and the WCF service.
This is the case for both Web roles and worker roles: you can configure a Web role hosting your WCF service to have a private internal endpoint.
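As a rough illustration of the internal-endpoint approach (the role and endpoint names here are invented and must match what is declared in ServiceDefinition.csdef), the UI role can resolve the WCF role's private address through the classic RoleEnvironment API:

```csharp
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class WcfEndpointLocator
{
    // "WcfServiceRole" and "InternalWcf" are made-up names; they must match the
    // role name and <InternalEndpoint> declared in ServiceDefinition.csdef.
    public static string GetServiceAddress()
    {
        var instance = RoleEnvironment.Roles["WcfServiceRole"].Instances.First();
        var endpoint = instance.InstanceEndpoints["InternalWcf"].IPEndpoint;

        // Internal endpoints are only reachable from other roles in the same
        // deployment, so this traffic never leaves the deployment's network.
        return $"http://{endpoint.Address}:{endpoint.Port}/";
    }
}
```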
Depending on the architecture of your system, you may see better performance if you have the UI and WCF layer on the same machine.
If your interface is "chatty" and calls the WCF service several times for each UI request, then you'll definitely see a performance improvement. If there are just one or two calls, then the improvement is likely to be minimal compared to the latency of your database.