Integration questions when migrating a monolith to microservices using Quarkus - migration

Currently I have a monolithic application with some modules, like financial and accounting. This application uses a single database and the modules are divided into schemas, so when I need to display data on the user interface or in a report, I just need to run a simple query with a couple of joins.
My question is: in a microservices structure where each module has its own database, how do I retrieve this data and get the same result as if I were in a single database?
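For concreteness, this is the kind of query I run today; the schema and table names below are just illustrative:

-- Illustrative only: module tables live in separate schemas of one database.
SELECT i.invoice_number,
       i.amount,
       e.ledger_account
FROM   financial.invoice AS i
JOIN   accounting.entry  AS e ON e.invoice_id = i.id;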

When talking about splitting the database in the process of migrating a monolith to microservices, there are some known patterns:
The shared database
Database view
Database wrapping service
Database as a service
It seems the database view or the database-as-a-service pattern could be a candidate in this case, but of course no one is better placed than you to decide which one is viable in your project.
I highly recommend having a look at chapter 4 of "Monolith to Microservices" by Sam Newman.
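As a flavor of the database view option, here is a minimal sketch (all object names are hypothetical): the financial module keeps its tables, and the consuming service reads them only through a view, which hides the physical layout and can later be replaced by a real API.

-- Hypothetical sketch of the "database view" pattern.
CREATE VIEW accounting.financial_invoices AS
SELECT id, invoice_number, amount, issued_at
FROM   financial.invoice;

-- The consuming service gets read-only access to the view, never the tables.
GRANT SELECT ON accounting.financial_invoices TO accounting_service;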

Related

Best practices for managing changelogs from different applications

We have an API application with 2 REST endpoints, and 2 separate applications writing (independent) tables to the same database to serve as backend for these 2 REST endpoints. You can think of this as an almost-microservices setup with shared DB server.
We want to use Liquibase to manage the database tables from each of these 2 applications, but we can't find any best-practice guidance on how to manage the changelog setup.
Options we are discussing:
Each application maintains a separate changelog table, with records for the tables it maintains.
Upside - completely independent setup; sharing the DB is coincidental.
Downside - the number of changelog tables will grow as we add more tables/applications feeding the same database, and we need to remember which changelog to use for development and which to look at when debugging.
Both applications share the same changelog table.
Upside - single setup; we can use the Liquibase default. Easy lookup and maintenance.
Downside - possible conflicts/waits if both applications try to deploy at the same time.
Can someone point me to best practices around this?
This topic is explained in detail in the Liquibase University course https://learn.liquibase.com/catalog/info/id:131.
Check out the Module 6 tutorial in the course, Managing Changelogs for a Shared Database for Multiple Teams.
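If you end up choosing the first option, one detail worth knowing: Liquibase lets each application point at its own tracking table (the databaseChangeLogTableName property in that application's liquibase.properties), so the two deployments never contend on the default DATABASECHANGELOG table. A minimal sketch of one application's SQL-formatted changelog, with illustrative names:

--liquibase formatted sql

--changeset app-a:1
CREATE TABLE app_a_orders (
    id     BIGINT PRIMARY KEY,
    status VARCHAR(32) NOT NULL
);
--rollback DROP TABLE app_a_orders;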

Should I make a separate database for each application?

I'm building two applications that need to share some similar data, but each will also have unique data. Should I build a separate database for each app, or let each app access the same database?
I need the shared data to update automatically in one app if it is changed in the other. I'm also using PostgreSQL with React and Express, with the intent of having both apps be progressive web apps and eventually React Native apps.
In general, I would think of this as:
Databases are the unit of backup and recovery.
Databases can contain multiple schemas ("schemata" ?) which are the unit for managing users and objects.
Based on your question:
I need the shared data to update automatically on one app if it is changed in another.
It sounds like you want one database and separate schemas for each application.
It sounds as if you will need to join data from both applications in a single SQL query. In that case, use one database and multiple schemas to separate the data.
You could have one common schema that contains the data shared between all applications, and then one schema per application.
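A minimal sketch of that layout in PostgreSQL (names are illustrative):

-- One shared schema plus one schema per application.
CREATE SCHEMA common;
CREATE SCHEMA app_one;

CREATE TABLE common.customer (
    id   BIGINT PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE app_one.order_header (
    id          BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL REFERENCES common.customer (id)
);

-- Either application can join shared and private data in one query:
SELECT c.name, o.id
FROM   common.customer      AS c
JOIN   app_one.order_header AS o ON o.customer_id = c.id;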
Both have pros and cons, but I think keeping them separate will be better. A pro for one can be a con for the other.
Pros -
A separate DB makes maintenance better, faster, and easier.
Performance-wise, a separate DB is better.
Code migrations will be easier.
Cons -
Automatic sync-up can be tricky if the tables etc. are different.
If one process needs to use tables from both DBs, it will be an issue (a possible workaround is sketched below).
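Since the question mentions PostgreSQL: that last con can be worked around with a foreign data wrapper, at the cost of extra setup and slower cross-database joins. A rough sketch, with placeholder names and credentials:

-- postgres_fdw lets one database query tables that live in another.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER other_app_db
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'app_two');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER other_app_db
    OPTIONS (user 'app_two_reader', password 'changeme');

-- Mirror the remote tables locally, then join them like normal tables.
CREATE SCHEMA remote_app_two;
IMPORT FOREIGN SCHEMA public
    FROM SERVER other_app_db
    INTO remote_app_two;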

Azure cloud. One database per (asp.net registered) client

Good morning,
I am using the ASP.NET framework with an Azure client database.
I am now creating another server on Azure to host databases. On this server, for each customer who registers on the website (for whom one entry is created in my first database), I need to create a database with 8 tables, identical for each customer.
What would be the best way to map the ASP.NET ID to a new database? Which framework would you recommend?
Thanks
Rather than running a VM where you're going to have to manage a SQL Server installation and write a bunch of code to handle a database per tenant scenario, I highly, highly, highly recommend taking a look at Azure SQL's multi-tenant sharding support. All of this code is already written for you. And it's not that you're paying for one DB per client - check out elastic pooling.
You can read the docs here.
Also note, this option will scale very well.
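As a flavor of what this looks like in practice, a new per-tenant database can be created straight into an existing elastic pool with plain T-SQL (the pool and database names below are placeholders):

-- Azure SQL: run against the logical server's master database.
CREATE DATABASE tenant_4711
    ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = customer_pool ) );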
I have done this three different ways: a database per client where I wrote my own code to manage sharding, a single database with a separate schema per client (a huge pain in the rear), and using Azure SQL sharding support. It's not just the issue of correctly separating client data. You also need to think about querying for reporting across all client databases, and managing schema changes. Under the first two options, if you change a schema, you get to modify N client databases. Azure SQL's sharding tools will manage this for you.

How to isolate SQL Data from different customers?

I'm currently developing a service for an app with WCF. I want to host the data on Windows Azure, and it should hold data from different users. I'm searching for the right design for my database. In my opinion there are only two possibilities:
Create a new database for every customer
Store a customer ID in every table (or in the main table when every table is connected via entities)
The first approach has very good speed and isolation, but it's very expensive on Windows Azure (or am I misunderstanding the Azure pricing?). Also, I don't know how to configure a WCF service so that it always uses a different database.
The second approach is slower and the isolation is poor, but it's easy to implement and cheaper.
Now to my question:
Is there any other way to get strong isolation of data along with easy integration in a WCF service on Azure?
What design should I use and why?
You have two additional options: build multiple schema containers within a database (see my blog post about this technique), or even better, use SQL Database Federations (you can use my open-source project called Enzo SQL Shard to access federations). The links I am providing give you access to other options as well.
In the end it's a rather complex decision that involves a tradeoff of performance, security, and manageability. I usually recommend Federations, even with its own set of limitations, because it is a flexible multitenant option for the cloud with the ability to filter data automatically. Check out the open-source project - you will see how to implement good separation of customer data independently of the physical storage.
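For completeness, the second approach from the question (a customer ID on every table) can also be hardened a little: on current SQL Server / Azure SQL you can route all reads through a view that filters on a per-connection customer id, so application code never sees other tenants' rows. A rough sketch with hypothetical names:

CREATE TABLE dbo.orders (
    id          BIGINT         PRIMARY KEY,
    customer_id INT            NOT NULL,
    total       DECIMAL(10, 2) NOT NULL
);

-- The service sets the tenant once per connection:
-- EXEC sp_set_session_context @key = N'customer_id', @value = 42;
CREATE VIEW dbo.orders_for_current_customer AS
SELECT id, total
FROM   dbo.orders
WHERE  customer_id = CAST(SESSION_CONTEXT(N'customer_id') AS INT);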

Querying database from different applications with nHibernate

At the moment I have two web applications (one is an MVC2 application for the management of my project, and the second is an application with web services). Both applications have to deal with the database and use NHibernate for querying it. Is this a good pattern? If not, what can I do?
Edit 1
Both applications can write to the database. I have a DLL project that handles the database transactions and holds the NHibernate instance, named "Repositorio". Nevertheless, each application will have a different instance of Repositorio.dll, so there will be multiple threads hitting the database. What do I have to do to make both applications use the same instance of Repositorio.dll?
The answer depends on whether or not both applications can write to the database.
If one is read-only, I'd say you're safe.
If not, I'd argue that a service-oriented approach would recommend creating a service that provides an interface for both applications and is the sole owner of the database.
"service-oriented" does not mean that the service has to be a distributed component (e.g., SOAP or REST or RPC). If you encapsulate the database access in a component with a well-defined interface you can choose to share the component as a DLL in both applications. Only make it a distributed component if that makes sense for both applications.
That sounds perfectly fine to me, even if both applications write to the database. I would simply recommend you create a third project as a class library with all your NHibernate-related stuff, to avoid writing redundant code in both projects.