We use Laravel in our company and we follow two simple conventions:
Controllers should be thin.
Models represent database entities (users, roles, cars).
Now we're facing a dilemma: we have a screen that displays complicated data graphs, which require some long and heavy logic to produce. But where should we put all of this logic? Controllers should be thin, so not in controllers. Models represent data entities, so it can't be a model either: this screen displays data from all of the other models but has no table or database entity of its own. Services don't sound like a natural place either.
I was wondering how you have approached similar situations.
I would put the logic into a service. From a service you can call other services (in case some of the logic already lives in another service, or the service becomes very complicated) and use repositories (or models, if you don't use repositories). There is no point putting big code or logic into controllers; they should just call services that return the desired output.
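A minimal sketch of that shape, written in TypeScript here for illustration rather than PHP; all the names are hypothetical, but a Laravel service class injected into a controller has the same structure:

```typescript
// All names are hypothetical. The heavy aggregation lives in the service,
// which pulls from several repositories/models and knows nothing about HTTP.
interface UserRepository {
  countSignupsPerMonth(): Promise<Map<string, number>>;
}
interface CarRepository {
  countSalesPerMonth(): Promise<Map<string, number>>;
}

class DashboardReportService {
  constructor(
    private users: UserRepository,
    private cars: CarRepository,
  ) {}

  async buildGraphData(): Promise<{ month: string; signups: number; sales: number }[]> {
    const [signups, sales] = await Promise.all([
      this.users.countSignupsPerMonth(),
      this.cars.countSalesPerMonth(),
    ]);
    return [...signups.keys()].map((month) => ({
      month,
      signups: signups.get(month) ?? 0,
      sales: sales.get(month) ?? 0,
    }));
  }
}

// The controller stays thin: it only translates between HTTP and the service.
class DashboardController {
  constructor(private report: DashboardReportService) {}

  async show(): Promise<string> {
    return JSON.stringify(await this.report.buildGraphData());
  }
}
```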
Related
Is there an optimum maximum depth to nesting?
We are often presented with the option of representing complex hierarchical data models with the nesting they demonstrate in real life. In my work this is genetics: modelling protein / transcript / homology relationships, where it is possible to have very deep nesting, up to maybe 7 or 8 levels. We use DataLoader to make nested batching more efficient, and resolver-level caching with directives. Is it good practice to model a schema on a real-life data model, or should you focus on making your resolvers reasonable to query and keep nesting to a maximum ideal depth of, say, 4 levels?
When designing a schema, is it better to create a different parent resolver for a type, or to use arguments to direct a conditional response?
Say I have two sets of, for example, cars: cars produced by Volvo and cars produced by Tesla, and the underlying data, while having similarities, is originally pulled from different APIs with different characteristics. Is it best practice to have tesla_cars and volvo_cars resolvers, or one cars resolver that uses, for example, a manufacturer argument to act differently on the data it returns and homogenise the response, especially where there may then be a sub-resolver that expects certain fields which may not be similar in the original data?
Or is it better to say that these two things are both cars, but the shape of the data we have for them is significantly different, so it's better to create separate resolvers with totally or notably different fields?
Should my resolvers and GraphQL APIs try to model the data they describe, or should I allow duplication in order to create efficient, application-focused queries and responses?
We often find ourselves wondering whether to have a separate API for applications X and Y that may use the underlying data (and possibly even multiple sources, such as different databases or API calls inside resolvers) very differently, or whether to try to make one resolver work with any application, even if that means using type-like arguments to allow custom filtering and conditional behaviour.
Is there an optimum maximum depth to nesting?
In general I'd say: don't restrict your schema. Your resolvers / data fetchers will only get called when the client requests the corresponding fields.
Look at it from this point of view: if your client needs the data from 8 levels of the hierarchy to work, then they will ask for it no matter what. With a restricted schema the client will execute multiple requests; with an unrestricted schema they can get all they need in a single request. The amount of processing on your server and the amount of data will still be the same, just split across multiple network requests.
The unrestricted schema has several benefits:
The client can decide whether to fetch all the data at once or spread it across multiple requests
The server may be able to optimize the data-fetching process (e.g. avoid fetching duplicate data) when it knows everything the client wants to receive
The restricted schema on the other hand has only downsides.
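To make that concrete, here is a rough sketch of the kind of deep selection an unrestricted schema allows; the type and field names are invented for illustration:

```typescript
// Hypothetical gene/transcript/protein schema. The nesting is available in
// the schema, but a resolver only runs when the client selects its field.
export const typeDefs = /* GraphQL */ `
  type Gene {
    id: ID!
    transcripts: [Transcript!]!
  }
  type Transcript {
    id: ID!
    proteins: [Protein!]!
  }
  type Protein {
    id: ID!
    homologs: [Protein!]!
  }
  type Query {
    gene(id: ID!): Gene
  }
`;

// One deep request instead of several shallow ones; the server does roughly
// the same total work either way, but the client round-trips only once.
export const deepQuery = /* GraphQL */ `
  query {
    gene(id: "example-gene-id") {
      transcripts {
        proteins {
          homologs {
            id
          }
        }
      }
    }
  }
`;
```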
When designing a schema, is it better to create a different parent resolver for a type, or to use arguments to direct a conditional response?
That's a matter of taste and of what you want to achieve. But if you expect your application to grow and incorporate more car manufacturers, your API may become messy if there are lots of abc_cars and xyz_cars queries.
Another thing to keep in mind: even if the shape of the data is different, all cars have something in common: they are some kind of type Car. And all of them have, for example, a construction year. If you want to be able to query "all cars sorted by construction year", you will need a single query endpoint.
You can have a single cars query endpoint in your API and then use interfaces to query different kinds of cars, just like GraphQL Relay's node endpoint works: a single endpoint that can query all types that implement the Node interface.
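A sketch of what that could look like; the SDL and field names are invented, only the interface pattern itself comes from the advice above:

```typescript
// One `cars` entry point; shared fields live on the interface, and
// manufacturer-specific fields live on the concrete types.
export const carTypeDefs = /* GraphQL */ `
  interface Car {
    id: ID!
    constructionYear: Int!
  }
  type TeslaCar implements Car {
    id: ID!
    constructionYear: Int!
    batteryRangeKm: Int!
  }
  type VolvoCar implements Car {
    id: ID!
    constructionYear: Int!
    towingCapacityKg: Int!
  }
  type Query {
    cars(sortBy: String): [Car!]!
  }
`;

// Clients ask for shared fields directly and use inline fragments for the
// manufacturer-specific ones.
export const carsQuery = /* GraphQL */ `
  query {
    cars(sortBy: "constructionYear") {
      constructionYear
      ... on TeslaCar { batteryRangeKm }
      ... on VolvoCar { towingCapacityKg }
    }
  }
`;
```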
On the other hand, if you've got a very specialized application where your type is not extensible (for example, white and black chess pieces), then I think it's totally valid to have white_pieces and black_pieces endpoints in your API.
Another thing to keep in mind: with a single endpoint, some queries become extremely hard (or even impossible), like "sort white_pieces by value ascending, and black_pieces by value descending". This is much easier with separate endpoints for each color.
But even this is solvable if you have a single endpoint for all pieces and simply call it twice.
Should my resolvers and GraphQL APIs try to model the data they describe, or should I allow duplication in order to create efficient, application-focused queries and responses?
That's a question of use case and scalability. If you have exactly two types of clients that use the API in different ways, just build two separate APIs. But if you expect your application to grow and gain more kinds of clients, then of course it will become an unmaintainable mess to have 20 APIs.
In this case, have a look at schema directives. You can, for example, decorate your types and fields to make them behave differently for each client, or even show/hide parts of your API depending on the client.
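For illustration, a sketch with a made-up @visibleTo directive; the directive name and its enforcement (which would live in the server's directive/schema-transform code) are assumptions, not an existing API:

```typescript
// Hypothetical directive: the SDL only declares it; a server-side
// implementation would hide or expose fields based on the calling client.
export const directiveTypeDefs = /* GraphQL */ `
  directive @visibleTo(clients: [String!]!) on FIELD_DEFINITION

  type SalesStats {
    totalSold: Int!
  }

  type Query {
    publicCatalog: [String!]!
    internalSalesStats: SalesStats @visibleTo(clients: ["backoffice"])
  }
`;
```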
Summary:
Build your API with your clients in mind.
Keep things object-oriented; make use of interfaces for similar types.
Don't provide endpoints your clients don't need; you can still extend your schema later if necessary.
Think of your data as a huge graph ;) that's what GraphQL is all about.
Is it possible to query the ORM of a microservice through its API, and use it as the ORM of another microservice?
E.g. let's say I have microservice A with its API (call it API_A), its DB (DB_A), and its internal object-relational mapper (ORM_A), which defines the correspondence between the classes belonging to the microservice and the structure of the relational DB, and manages access to it.
Now imagine I want a microservice B with different functionality from A, but with the same ORM as A (and so a DB with the same structure as DB_A, although not necessarily the same data, as the different functionality may produce different data).
How do I query/copy/mirror ORM_A into microservice B in a smart way, so that there is no code duplication and, when A changes, ORM_B changes accordingly with no manual intervention?
Is there an option to query ORM_A from B via its API and recreate it inside microservice B?
The idea that code changes inside API_A could yield code changes inside API_B creates a coupling between the services and their data that would suggest they shouldn't be two different services.
If API_B does in fact perform wildly different functions than API_A and only needs a few pieces of data from structures surfaced by API_A, you should consider a couple of different options to ensure the relevant data is accessible to API_B from API_A:
Surface the data from API_A in an endpoint that is accessible to API_B. This creates an API contract that is easier to enforce and test. This solution is relatively easy to implement, but creates some dependency between the two APIs.
Set up an event topic that is notified whenever API_A writes data that API_B (or other services) might want to consume. By reading these events, API_B can write the relevant data to its own DB in its own format, avoiding coupling to A through either data structures or contracts (see the sketch after this list). This solution requires setting up event queues, but would be the best option for API_B's performance.
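A minimal sketch of that event-driven option, assuming a hypothetical message-bus interface standing in for whatever broker is actually used; every name below is invented:

```typescript
// Stand-in for a real broker client (RabbitMQ, Kafka, SNS/SQS, ...).
interface MessageBus {
  publish(topic: string, payload: unknown): Promise<void>;
  subscribe(topic: string, handler: (payload: unknown) => Promise<void>): void;
}

// Event describing something API_A wrote to DB_A.
interface RecordWrittenEvent {
  recordId: string;
  writtenAt: string;
  fields: Record<string, unknown>;
}

// Inside service A: after committing to DB_A, announce what happened.
export async function announceWrite(bus: MessageBus, event: RecordWrittenEvent): Promise<void> {
  await bus.publish("service-a.record-written", event);
}

// Inside service B: keep a local copy in B's own schema, so B never has to
// reach into A's ORM or database structure.
export function startProjection(
  bus: MessageBus,
  saveLocalCopy: (event: RecordWrittenEvent) => Promise<void>,
): void {
  bus.subscribe("service-a.record-written", async (payload) => {
    await saveLocalCopy(payload as RecordWrittenEvent);
  });
}
```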
One thing that I've seen people struggle with when adopting microservices (I struggled with it myself) is the idea that data duplication is OK. Try not to get stuck thinking of data as relational across multiple services, because that's how you'll naturally create the kind of coupling you want to avoid in microservices. Good luck!
How should I model query classes (CQRS), given that data is accumulated from various places and business logic is then run on top of this data? Currently we have code to pull out the required data in a Manager class and the business logic in the domain model. Is there a better way? High-level suggestions will help. The hierarchy is: Web API controller -> Manager -> DomainModel |-> Infrastructure (to get the required data).
Generally speaking, write models (generated from commands) do not mirror the read models (fetched by queries).
Write models (Aggregate Roots) are designed to ensure consistency and invariants of a domain, while read models are mostly used to build UI and/or an API.
If you design a simple domain for a blog, you may have a Post aggregate on the write side, and a PostSummary, a PostDetails, or even a simple Post on the read side.
They are named similarly but used in different contexts.
Your aggregate will probably refer to its author only by reference (id), while your read model may be flattened and pre-built with all the information required for your UI.
You end up with two models, and your aggregate does not even need to expose any getters (that's the read model's purpose).
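A sketch of those two models, in TypeScript since the question doesn't tie itself to a language; every name here is illustrative:

```typescript
// Write model: the Post aggregate protects invariants and exposes behaviour,
// not getters. The author is referenced by id only.
class Post {
  private constructor(
    private readonly id: string,
    private readonly authorId: string,
    private title: string,
    private published: boolean,
  ) {}

  static draft(id: string, authorId: string, title: string): Post {
    return new Post(id, authorId, title, false);
  }

  // Returns a minimal event describing what happened instead of exposing state.
  publish(): { postId: string; authorId: string } {
    if (this.published) {
      throw new Error("Post is already published");
    }
    if (this.title.trim().length === 0) {
      throw new Error("Cannot publish a post without a title");
    }
    this.published = true;
    return { postId: this.id, authorId: this.authorId };
  }
}

// Read model: flat, denormalised, pre-built for the UI. No behaviour at all.
interface PostSummary {
  id: string;
  title: string;
  authorName: string; // already resolved, no join at read time
  publishedAt: string | null;
}
```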
It sounds like you're doing just the C part of CQRS, not the Q. In CQRS there are two data models: one that is updated via commands (the write model) and one that is custom-made just for display purposes (the read model). When a command makes a change to data, it loads a full aggregate with business rules from the write model, makes the appropriate changes, and saves. It then (usually by sending a message) requests an update of the read model.
The read model is a collection of tables that are custom-built for specific target UI pages. Data duplication is everywhere. The idea is that reads should be very fast because they are just a "select *" from the read table.
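On the read side that can be as small as the sketch below; the table and column names are made up, and the database interface is a stand-in for whatever data access you actually use:

```typescript
// Minimal stand-in for the read-side database connection.
interface ReadDb {
  query<T>(sql: string, params: unknown[]): Promise<T[]>;
}

// The "complex query class" disappears: the projection already shaped the
// post_summary table for this page, so the read is a plain select.
export async function getPostSummariesForAuthor(db: ReadDb, authorId: string) {
  return db.query<{ id: string; title: string; authorName: string }>(
    "SELECT id, title, author_name AS authorName FROM post_summary WHERE author_id = ?",
    [authorId],
  );
}
```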
If you had implemented a read model, this question wouldn't arise, because there are no complex query classes. If you've not implemented CQRS, then the normal advice applies, such as creating repositories to contain the queries.
I'm designing a C# application with the following layers:
Presentation (web site + Flex apps)
Business Logic Layer (might be WCF to enable multiple client platforms)
Data Access Layer (with NHibernate)
We're going to integrate our solution into many pre-existing client database environments, and we would like to use NHibernate in the DAL. My colleague pointed out that generating classes from a client's DB (like User or Image) with NHibernate would cause the BLL to blow up in our face at each DB change!
So the question is: how do we prevent that from happening?
We're thinking about creating business objects and mapping the NHibernate objects to these BOs with AutoMapper (hmm, does that make them DTOs?) to prevent DAL changes from affecting the BLL. Is this the way to go?
Thanks !
EDIT :
To give a better understanding of what we're trying to achieve, you might need some context:
We're building a photo storing/sharing app, with Flex for the front end and C# on the back end, mainly for our company, so we handle every aspect of the code and DB.
But: that product can also be bought by third parties, which may already have a database with a User table or an Image table. I'm thinking here of a new prospect who has an Image table with a few hundred million rows; adding columns for our business logic isn't going to happen because ALTERing the table would take too long.
Even where it would be possible (the User table, for example, can be modified because it has fewer rows), we're asking ourselves how to handle table structure changes without impacting our whole solution, from the BLL to the Flex client app, each time we have to integrate with a third-party database.
In my experience, your business objects (a.k.a. domain objects) should be modelled in OO to represent your real-life business entities, and your tables kept in third normal form (this may change depending on what you are designing for: speed vs. file size).
NHibernate should map between your BOs and tables, using its mapping files.
Now, you have legitimate cases:
You need to add or remove a column: say we decided to remove addressline4. This will echo as a change in your Address object, and that's fine.
You move a column to a better place: our Client object contains notes, which are currently stored in the Contract_Extra table and are going to be moved into the Client table. Moving the column to a better place will, in this case, only affect the mapping file.
I doubt there is a blanket rule here, but I hope these examples make you think it through.
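To illustrate the isolation the mapping layer buys you, here is a rough sketch in TypeScript rather than NHibernate mapping XML; the table layout and all names are invented:

```typescript
// Shape of one client's existing Image table (this varies per customer).
interface ImageRowClientX {
  img_id: number;
  file_path: string;
  uploaded_by: number;
}

// Domain object (business object) used by the BLL; it stays stable when a
// customer's table layout changes.
interface Image {
  id: number;
  path: string;
  ownerId: number;
}

// Only this mapping (the equivalent of the NHibernate mapping file) changes
// when a column is renamed or moved to another table.
export function mapImageFromClientX(row: ImageRowClientX): Image {
  return { id: row.img_id, path: row.file_path, ownerId: row.uploaded_by };
}
```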
I have not tried NHibernate across multiple DBs. Also, should each database have its own service on top?
Here are some links:
Multi-table entities
PoEAA <- look at the Single Table inheritance, Class table inheritance and the other one
Hope this helps
It sounds like you want to design your domain model to be database-agnostic. I too am interested in the best approach to having a central domain model that can map to multiple different database models.
The way you are proposing, creating DTOs from each database using code generators, could be an option. Another would be to create custom NHibernate mappings for each pre-existing database. You may still need some DTOs to make some of the mappings less difficult, but it may give you more control.
These are just some thoughts. More experienced NHibernate users will probably have better insight into your situation.
I'm working on a Cocoa app for syncing data between two folders.
It has profiles (so you can have multiple setups)
It's possible to analyze data
It's possible to sync the data
I'm a little confused. First of all, I can't really see where to have a model. And how many controllers would you suggest: one WindowController, or an AnalyzeController, SyncController, etc.?
It's been quite a while since I worked with MVC. I've read some articles, but I'm missing concrete examples of how to divide it.
Best regards.
The data model handles the data and the abstract relationships between different pieces of the data. The controllers handle the concrete operations of a computer or human interface.
The key division is that the data model doesn't know where the data comes from and doesn't care. For example, it could model a folder and its contents but the actual information in the model could come from a real folder on a disk or it could come from completely made up plist file or it could come from a simulated UI of a folder. The data model doesn't care because it has no direct connection with concrete reality. It just holds an abstract description of the data.
The controllers by contrast are tied to a specific concrete interface. For example, if you have two folders, you would have specific controllers for each folder. Each controller would have concrete knowledge of the real world pathway to the folder as well as the mechanism for reading and writing to the folders. So, if one folder is on the local hard drive and another is remote, each controller would understand the difference. If you have a UI, then the UI would have its own controller.
The controller's job is to translate from the concrete reality to the abstract model. In this case, the controller would handle connecting to a remote server, scanning the folder, and then converting that information to an abstract form that it hands off to the data model. However, the controller doesn't save any data and doesn't understand how the pieces of data relate to each other.
In the case of a syncing app, it would be the job of the data model to understand which files are in which folder and which files need to be copied or updated, and to where. It would then tell each controller which files to manipulate. However, the controller wouldn't know why each file was being manipulated.
The design goal is to create a data model that models the folders and files regardless of where they reside, how they are concretely manipulated, or even whether they actually exist at all. That way, you can easily add or remove interfaces just by adding or removing a controller. The controllers themselves are simple because they hold no data and no data-manipulation logic.
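A sketch of that split, in TypeScript rather than Objective-C since the idea is language-independent; all names are invented:

```typescript
// Abstract description of a file, with no idea where it came from.
interface FileEntry {
  relativePath: string;
  modifiedAt: number; // e.g. a Unix timestamp
}

// Data model: pure comparison logic. It doesn't know about disks, servers,
// or UI; it just decides which paths need copying from source to target.
export function planSync(source: FileEntry[], target: FileEntry[]): string[] {
  const targetIndex = new Map(
    target.map((f) => [f.relativePath, f.modifiedAt] as const),
  );
  return source
    .filter((f) => (targetIndex.get(f.relativePath) ?? -Infinity) < f.modifiedAt)
    .map((f) => f.relativePath);
}

// Controllers: each one knows how to reach a concrete folder (local disk,
// remote server, ...) and feeds abstract FileEntry values to the model.
export interface FolderController {
  list(): Promise<FileEntry[]>;
  copyTo(target: FolderController, relativePath: string): Promise<void>;
}
```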