I'm new to prism and wonder how to organize the projects/modules for a line-of-business application.
In some places I read that the interface to the (WCF) service should be put into the infrastructure project. So, since the service interfaces do need the declarations of the data objects (customers, orders, etc) this would imply that I need to put those into the infrastructure project, too?
Since this project will grow large and contain a lot of data types, wouldn't it be more advisable to group those data objects and service interfaces into different projects?
But these probably would not be "the Prism infrastructure project" anymore, would they?
Right now my guess would be: I need several projects containing service interfaces and data types grouped by domain, and several modules containing the viewmodels and views (grouped by domain, probably the same ones)?
And the infrastructure project would be reserved for some global helper stuff?
I find that working out how to properly group things into domains is one of the toughest challenges with Prism.
The Infrastructure project should be isolated from your business services, data contracts, etc. It should only contain classes that help build the application, and it should be reusable across other projects.
You could also define multiple infrastructure projects, one per framework, e.g. WPF, ASP.NET, plus a Common one.
For WPF/Prism, Infrastructure typically contains implementations of services for the Dispatcher, delegate commands, regions, a ModuleMapper (loading and unloading views into regions), and so on.
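For illustration only, here is a minimal sketch of what such an Infrastructure project might hold. The class names are hypothetical; it assumes the Prism library's IRegionManager and standard region APIs:

```csharp
using Prism.Regions; // assumes a reference to the Prism library

namespace MyApp.Infrastructure
{
    // Shared region names so modules don't depend on magic strings.
    public static class RegionNames
    {
        public const string MainRegion = "MainRegion";
        public const string NavigationRegion = "NavigationRegion";
    }

    // Hypothetical "ModuleMapper"-style helper for loading views into regions.
    public class ViewLoader
    {
        private readonly IRegionManager _regionManager;

        public ViewLoader(IRegionManager regionManager)
        {
            _regionManager = regionManager;
        }

        public void Show(string regionName, object view)
        {
            var region = _regionManager.Regions[regionName];
            region.Add(view);      // add the view to the region
            region.Activate(view); // and bring it to the front
        }
    }
}
```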
From the official guide, Mastering ABP Framework, Chapter 15, Working with Modularity, page 416, there are four isolation levels defined for an application module. One of them is called 'Bounded contexts', and its definition is as follows:
A module can be part of a large monolith application, but it hides its internal domain objects and database tables from other modules. Other modules can only use its integration services and subscribe to the events published by that module. They can't use the database tables of the module in SQL queries. The module may even use a different kind of DBMS for its specific requirements. That is the bounded context pattern in domain driven design. Such a module is a good candidate to convert to a microservice if you want to convert your monolith application to a microservice solution in the future.
My question is:
An ABP application module solution is formed by several projects, for example:
Application
Application.Contracts
Domain
Domain.Shared
EntityFrameworkCore
HttpApi
HttpApi.Host
Web
As we know, each of those projects is compiled into an individual assembly, so how can I achieve "the module hides its internal domain objects and database tables from other modules"?
For a microservice, a bounded context is isolated at the application level, so it can still use the 'public' modifier to declare the aggregate root or other entities, and its application layer can use them. But a bounded context packaged as an application module will be referenced by other modules or applications, and once it is referenced, all of its public members can be used directly. How can I make them 'invisible' to other modules but at the same time 'visible' to the application layer inside?
One way I can think of is to use the 'internal' modifier to declare all the entities, aggregate roots, and other domain objects, but in that case the application layer will not be able to use the objects from the domain layer either. To overcome that, I would need to combine the application layer and domain layer into a single project, but doing this feels very inelegant, and the most harmful side effect is that the entities inside the aggregate would be exposed to the application layer.
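To make that trade-off concrete, here is a minimal sketch (module, project, and type names are all made up; the InternalsVisibleTo comment is only one possible workaround, not an ABP recommendation):

```csharp
// MyModule.Domain assembly
// Optionally: [assembly: InternalsVisibleTo("MyModule.Application")]
// (System.Runtime.CompilerServices) would re-expose internals to the
// application layer only, at the cost of hard-wiring an assembly name.

using System;

namespace MyModule.Domain
{
    // 'internal' hides the aggregate root from every other assembly...
    internal class Order
    {
        public Guid Id { get; private set; }
        public decimal Total { get; private set; }
    }
}

// MyModule.Application assembly (separate project)
namespace MyModule.Application
{
    public class OrderAppService
    {
        public void Place()
        {
            // ...which means that, without InternalsVisibleTo, this line
            // would not compile, because Order is internal to MyModule.Domain:
            // var order = new MyModule.Domain.Order();
        }
    }
}
```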
I can't find any example in the official guide or the website docs; is there any 'best practice' for this?
In the latest UI of Spinnaker, there are projects and applications labels. What's the relationship between projects and applications in Spinnaker?
Projects provide an aggregated view of a collection of applications, but that's about it. If you're only interested in a subset of applications managed by Spinnaker, it's a decent way to manage them.
The relationship is modeled as many-to-many: a project is made up of one or more applications, and applications can be part of multiple projects.
Being in a project doesn't affect the application in any way, beyond making it present in the project view. There are no notifications to be configured at the project level, or pipelines that span multiple applications. You can set up a project pretty quickly and easily to get an idea what the view looks like.
I'm currently part of a project in which we host a WCF service to be accessed by certain clients. The WCF solution is split up into 4 different C# projects:
Host.csproj
DataContracts.csproj
Infrastructure.csproj
Model.csproj
Upon joining this project, I immediately wondered why there was a separate project for "DataContract" objects and one for "Model" objects. The two projects basically contain duplicates of the same objects. For example, in the DataContracts project there is a Customer object with four properties, and the Model project also has a Customer object with the same four properties. I noticed that there is a lot of AutoMapper mapping in the application code to map data contract objects to model objects and then re-map model objects back to data contract objects while flowing through our typical service-repository pattern. The number of mappings necessary to produce results in this service has become extremely annoying.
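For illustration, the pattern being described looks roughly like this (the property names are made up, and the mapping code assumes AutoMapper's MapperConfiguration API):

```csharp
using System.Runtime.Serialization;
using AutoMapper;

// DataContracts.csproj: the wire-level shape, no domain logic.
namespace DataContracts
{
    [DataContract]
    public class Customer
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
        [DataMember] public string Email { get; set; }
        [DataMember] public string Phone { get; set; }
    }
}

// Model.csproj: the same shape today, but free to carry domain logic.
namespace Model
{
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Phone { get; set; }
    }
}

// Mapping in both directions, used throughout the service layer.
public static class Mapping
{
    public static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<DataContracts.Customer, Model.Customer>();
        cfg.CreateMap<Model.Customer, DataContracts.Customer>();
    }).CreateMapper();
}
```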
After asking some teammates about why this route was chosen, I was told that datacontracts should not contain domain logic and that they are strictly objects to be used to send over the wire (and that all domain logic should be done using the model version of the object).
I feel like this approach is a bit unnecessary. Couldn't we just do away with the datacontracts project and use our model objects for both domain logic on the service side and also as datacontracts?
Somebody enlighten me...
Couldn't we just do away with the datacontracts project and use our model objects for both domain logic on the service side and also as datacontracts?
Yes it's physically possible for you to expose your domain objects out of your service, and it might save you a mapping or two.
However let's imagine in the future the domain model changes in response to business needs.
The existing consumers are happy with their contracts and don't want to have to change every time you release, so you are limited to a small non-breaking subset of possible changes you can make, or you have to wait until they're ready to release before you can.
One day another business consumer comes along who wants to leverage your domain capabilities. But they don't want the same contract as your existing consumers. How can you offer them what they want without breaking your existing consumers?
Another development team wants to use your domain models in-process, so you ship them an assembly, but their deployment server is .NET 2.0, so it falls over trying to load System.Runtime.Serialization.dll.
More generally, how can you evolve your domain capability when you're hard-wired to your external dependents?
If you think none of these situations apply to you and your service will always and forever be a simple facade on a repository for some ancient and unchanging business function then go for it.
Or,
The mappings you find irritating are there to protect you from inevitable change. As a consumer of a service, being coupled to that service's release schedule is a nightmare, and the same is true both ways. The mappings free you to be able to evolve your domain's business capability as you want to without having to worry about breaking anything. Want to rename a field? Do it. Tired of that massive single class? Refactor it into sub-types. The world is your oyster.
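For example, suppose the domain model renames a property: the data contract, and therefore every consumer, stays untouched, and only the mapping changes. A self-contained sketch (hypothetical types, assuming AutoMapper's ForMember API):

```csharp
using AutoMapper;

// The domain model renamed Name to FullName...
public class DomainCustomer { public string FullName { get; set; } }

// ...while the data contract (and every consumer) keeps Name.
public class CustomerDto { public string Name { get; set; } }

public static class ContractMapping
{
    // Only this mapping needs to know about the rename.
    public static readonly IMapper Mapper = new MapperConfiguration(cfg =>
        cfg.CreateMap<DomainCustomer, CustomerDto>()
           .ForMember(dest => dest.Name, opt => opt.MapFrom(src => src.FullName))
    ).CreateMapper();
}
```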
If you're worried about efficiency or performance, an in-process type mapping is several orders of magnitude faster than an out-of-process service call, so much so that its cost is almost negligible.
So I'm going to have to say the advice your colleagues gave you:
datacontracts should not contain domain logic and that they are strictly objects to be used to send over the wire
sounds pretty smart to me. Lots more here.
If you're finding the mappings tedious, I have used Omu's ValueInjecter before and it takes a lot of the hassle out.
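A minimal ValueInjecter usage sketch (assuming the Omu.ValueInjecter package and the hypothetical Customer classes sketched above, which have matching property names):

```csharp
using Omu.ValueInjecter;

var customer = new Model.Customer { Id = 1, Name = "Ada Lovelace" };

var dto = new DataContracts.Customer();
dto.InjectFrom(customer); // copies values between properties with matching names and types
```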
I usually split projects into layers, i.e. a presentation layer, a business logic layer, and a data logic layer. Sometimes I will separate the layers using namespaces, and sometimes I will have three separate DLLs (using tiers).
I see developers splitting tiers into multiple DLLs. For example, I once saw a business logic layer with over one hundred different project files and hence over one hundred different DLLs. Also, the MSDN documentation shows that the .NET Framework contains multiple DLLs, e.g. mscorlib.
I believe that the reasoning behind having separate DLLs is that it minimizes the memory footprint and also allows multiple developers to work on different projects, e.g. one team could work on one project and another team on another.
I work in a two-developer team. What criteria do developers use when deciding to split into separate DLLs?
What is the reasoning for separating layers into multiple DLLs?
There are various reasons to do this.
It adds isolation, which can help the compiler prevent you from mixing concerns. Without adding a reference explicitly, you can't use types from the other DLLs "by accident", which allows the compiler to help you keep your code cleaner.
If you don't use an assembly at runtime, it won't be loaded. This can keep the memory footprint smaller. (If all assemblies are used, however, it won't help).
It provides a logical separation within your APIs and projects, which can help with the organization and maintainability of your code. Note, however, that too many projects is just as bad as (or sometimes worse than) too few, since many projects add complexity that may not be beneficial.
Separating code into more than one assembly is done for many reasons, some more technical than others. Assemblies can be used for logical grouping of code much like namespaces and, in fact, one common pattern is to separate large namespaces (concerns) into separate assemblies for that namespace. But that reason is most definitely not the best reason to use more than one assembly.
Code reuse is probably the number one factor for placing code into different assemblies. For example, you may have a console application where all of the code is in the one executable that gets compiled. Later on, you decide to create a web app front end for the same application. Instead of copying the core code from your console app to your web app, you would likely refactor the solution into three projects: a class library for the core code (the main implementation), a console app (which already exists), and a web app. The console app and web app projects/assemblies then reference the class library project/assembly, and the main code is reused across both implementations. This is an oversimplification, mind you.
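As a toy sketch of that refactoring (project and type names are invented; the two classes would live in separate projects and are shown in one listing only for brevity):

```csharp
using System;
using MyApp.Core;

// In practice, InvoiceCalculator lives in the MyApp.Core class library and
// Program lives in the MyApp.Console project, which references MyApp.Core;
// a MyApp.Web project would reference the same assembly instead of copying code.
namespace MyApp.Core
{
    public class InvoiceCalculator
    {
        public decimal Total(decimal net, decimal taxRate) => net * (1 + taxRate);
    }
}

class Program
{
    static void Main()
    {
        var calculator = new InvoiceCalculator();
        Console.WriteLine(calculator.Total(100m, 0.2m)); // prints 120.0
    }
}
```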
Another reason to separate code into multiple assemblies is to separate concerns while managing dependencies. In this case, you may have code that requires references to web-oriented dependencies (other assemblies) that you do not want your core application assemblies to reference. Breaking the app up into additional assemblies/projects lets you reuse your core assemblies without taking on unnecessary dependencies where they are not needed.
Another reason is to facilitate concurrent development by a large team, where sub-teams may each work on a different assembly, helping to reduce the number of "collisions" between developers working on different concerns of the application.
In my SOA architecture, I have several WCF services.
All of my services need to access the database.
Should I create a specialized WCF service in charge of all the database access?
Or is it OK if each of my services has its own database access?
In one version, I have just one Entity layer instanced in one service, and all the other services depend on this service.
In the other one the Entity layer is duplicated in each of my services.
The main drawback of the first version is the coupling induced.
The drawback of the other version is the layer duplication, and maybe it's an SOA bad practice?
So, what do you think, good people of Stack Overflow?
Just my personal opinion: if you create a service for all database access, then multiple services depend on ONE service, which sort of defeats the point of SOA (i.e. services are autonomous), as you have articulated. As for layer duplication, if each service has its own data to deal with, is it really duplication? I realize that you probably have the same means of interacting with your relational databases, or, back from the OOA days, a common class library that encapsulated data access for you. This is one of those things I struggle with myself, but I see no problem in each service having its own data layer. In fact, in Michele Bustamante's book (Chapter 1, page 8) she actually depicts this and adds "Services encapsulate business components and data access". If you notice, each service has a separate DALC layer. This is a good question.
It sounds as if you have several services but a single database.
If this is correct you do not really have a pure SOA architecture, since the services are not independent. (There is nothing wrong with not having a pure SOA architecture; it can often be the correct choice.)
Adding an extra WCF layer would just complicate and slow down your solution.
I would recommend that you create a single data access DLL which contains all data access and is referenced by each WCF service. That way you do not have any duplication of code. Since you have a single database, any change in the database/data layer would require a redeployment of all services in any case.
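A rough sketch of that arrangement (assembly, namespace, and type names here are invented): one shared data access assembly exposing a repository that every WCF service project references.

```csharp
// DataAccess.dll - shared by every WCF service project.
using System.Collections.Generic;

namespace DataAccess
{
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
        IEnumerable<Customer> GetAll();
    }
}

// OrdersService.dll - one of the WCF services, referencing DataAccess.dll.
namespace OrdersService
{
    using DataAccess;

    public class OrderService // : IOrderService (the WCF service contract)
    {
        private readonly ICustomerRepository _customers;

        public OrderService(ICustomerRepository customers)
        {
            _customers = customers;
        }

        public string GetCustomerName(int customerId)
            => _customers.GetById(customerId)?.Name;
    }
}
```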
Why not just use a dependency injection framework? If the services are currently using the same database, then just allow them to share the same code; if they were in the same project, they would all use the same DLL anyway.
That way, later, if you need to put in some code that you don't want the others to share, you can make changes and just create a new DAO layer.
If there is a certain singleton that all will use, then you can just inject that in when you inject in the dao layer.
But this will require that they all use the same DI framework/container.
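As a sketch of the wiring (using Microsoft.Extensions.DependencyInjection purely as an example container; every type below is hypothetical, with stubbed-out data access):

```csharp
using Microsoft.Extensions.DependencyInjection;

// Hypothetical shared DAO abstraction and one implementation.
public interface ICustomerRepository { string GetName(int id); }
public class SqlCustomerRepository : ICustomerRepository
{
    public string GetName(int id) => "Ada Lovelace"; // stand-in for a real query
}

// One of the services, taking the shared DAO via constructor injection.
public class OrderService
{
    private readonly ICustomerRepository _customers;
    public OrderService(ICustomerRepository customers) => _customers = customers;
    public string GetCustomerName(int id) => _customers.GetName(id);
}

class Program
{
    static void Main()
    {
        var services = new ServiceCollection();

        // The shared DAO layer is registered once and injected everywhere it is needed.
        services.AddSingleton<ICustomerRepository, SqlCustomerRepository>();
        services.AddTransient<OrderService>();

        var provider = services.BuildServiceProvider();
        var orderService = provider.GetRequiredService<OrderService>();
        System.Console.WriteLine(orderService.GetCustomerName(1));
    }
}
```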
The real win that SOA brings is that it reduces the number of linkages between applications.
In the past I've worked with organizations that have done it many different ways. Some data layers are integrated, and some are abstracted.
The way I've seen it most successfully done is when you create generic data-layer services for each app/database and you create the higher level services based on your newly created data layer.