In the latest Spinnaker UI, there are Projects and Applications labels. What is the relationship between projects and applications in Spinnaker?
Projects provide an aggregated view of a collection of applications, but that's about it. If you're only interested in a subset of applications managed by Spinnaker, it's a decent way to manage them.
The relationship is modeled as many-to-many: a project is made up of one or more applications, and applications can be part of multiple projects.
Being in a project doesn't affect the application in any way, beyond making it visible in the project view. There are no notifications to be configured at the project level, nor pipelines that span multiple applications. You can set up a project pretty quickly and easily to get an idea of what the view looks like.
I'm trying to develop microservices in .NET Core.
I'm planning to implement a project structure like this:
Frontend
Services
- Product
  - Product.Api
  - Product.Application
  - Product.Domain
  - Product.Infrastructure
- Basket
  - Basket.Api
  - Basket.Application
  - Basket.Domain
  - Basket.Infrastructure
- Order
  - Order.Api
  - Order.Application
  - Order.Domain
  - Order.Infrastructure
In the above project structure, the Services folder currently contains three modules (Product, Basket and Order); many more modules will be added later.
Each module has 4 projects: Api, Application, Domain and Infrastructure. Adding more modules increases the number of class libraries and web projects, and this slows down Visual Studio loading, compilation and running of the project because my hardware is not powerful enough.
Can you recommend another pattern that optimizes the number of projects in the microservice solution?
If the number of class libraries is the determining factor in your architecture's performance, maybe it is time to consolidate the projects so that each module becomes a single assembly.
If it is absolutely necessary to continue using the microservice architecture and the high number of modules, you should consider investing in more powerful hardware.
Developing software often requires a lot of RAM to host all the processes that make up the stack locally.
Another approach would be to try to develop on a cloud platform such as Azure, and use the corresponding tools to debug against a cloud instance or even in a GitHub Codespace.
If Product, Basket and Order are different microservices, then they should be in different Visual Studio solutions. Each solution will be small and independent and they'll all load and work fast regardless of how many microservices you have.
If Product, Basket and Order are part of the same microservice and you are planning to add many more modules, your microservice design is probably wrong, as a single microservice appears to have far too many responsibilities. In this case, the solution is to limit the responsibilities of each microservice so that they don't grow to enormous sizes.
If what you are building is a modular monolith (a single deployable unit, but with the code organised in modules), then the options are a bit different. If it's a single-developer application, you probably don't need to split the modules into separate projects. For example, the whole API can be a single project with each module in a different folder. If there will be many developers and teams working on the source code, then you might want to create a separate solution for each module, so each team can work on their own code.
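As a rough illustration of that single-project option (the names are placeholders), the structure from the question could collapse to something like:

    Frontend
    Services.Api (a single web project)
    - Modules
      - Product (endpoints, application, domain and infrastructure code as folders/namespaces)
      - Basket
      - Order

Visual Studio then only has to load a couple of projects, while the module boundaries survive as folders and namespaces.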
Like #abo said: if the number of assemblies is impacting the performance of your application, consider one assembly per module.
If your driver for having multiple assemblies per module was governance of dependencies, then consider using an additional tool like https://github.com/realvizu/NsDepCop, which allows you to enforce architectural/dependency rules without the help of the compiler.
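For illustration, NsDepCop reads its rules from a config.nsdepcop XML file placed next to the project; the exact schema is documented in the repository linked above, but a rule set in the spirit of the layering from the question looks roughly like this (treat the attribute and element names as something to verify against the docs):

    <NsDepCopConfig IsEnabled="true" ChildCanDependOnParentImplicitly="true">
      <!-- Domain may only depend on the base class library -->
      <Allowed From="Product.Domain" To="System.*" />
      <!-- Application may use Domain, but not Infrastructure -->
      <Allowed From="Product.Application" To="Product.Domain" />
      <Disallowed From="Product.Application" To="Product.Infrastructure" />
      <!-- Api is the composition layer and may see everything in the module -->
      <Allowed From="Product.Api" To="Product.*" />
    </NsDepCopConfig>

With rules like these in place, the layers can sit in one assembly per module without losing the dependency discipline the extra projects were providing.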
Well, I have some notes on the architecture you are making.
If you do it right, one of the payoffs is going to be less impact on the hardware.
Note #1: Make a Kernel module which holds the abstractions for all of the common functionality (that the other microservices take a reference to), like the base repository, the base message-queue handler, and the command / command handler contracts. If you want to disconnect a module from the kernel, you can do that and add its own abstractions inside that module (simplicity is the ultimate sophistication). See the sketch after these notes.
Note #2: Not every module needs to have the same projects or layers you are planning. For example, generally speaking, Basket doesn't need an Infrastructure project; all it does is tell the Order module whether there is any order ongoing or pending for a given user.
Note #3: Microservices are notorious for needing high-end servers to handle the large number of nodes/applications.
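A minimal sketch of what that kernel's shared abstractions could look like (the interface names are illustrative, not from any particular framework):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Kernel assembly: only abstractions, no implementations, no third-party packages
    public interface IRepository<TEntity> where TEntity : class
    {
        Task<TEntity> GetByIdAsync(Guid id, CancellationToken cancellationToken = default);
        Task AddAsync(TEntity entity, CancellationToken cancellationToken = default);
    }

    public interface ICommand { }

    public interface ICommandHandler<in TCommand> where TCommand : ICommand
    {
        Task HandleAsync(TCommand command, CancellationToken cancellationToken = default);
    }

    // Base contract for message-queue handlers
    public interface IMessageHandler<in TMessage>
    {
        Task HandleAsync(TMessage message, CancellationToken cancellationToken = default);
    }

Each module (Product, Basket, Order) then references only this one kernel project and supplies its own implementations, which keeps the total assembly count low.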
Finally, I have an awesome blueprint for you to study and follow, which happens to cover the same case you're after.
Here is the link
Background:
I'm thinking about web application organisation. I will separate the front end (the web site for the browser) from the back end (the API): 2 apps, 2 repositories, 2 hosting setups. The front end will call the API for almost everything.
So, if I have two separate domain services within my API (example: a learning context and a booking context) with no direct link between them, should I build 2 APIs (with 2 repositories, 2 build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm speaking about a substantial web app with traffic.
(I hope this question will not be closed as not constructive... I think it's a real question about a concrete case, sorry if not. This question and some others about architecture were not closed, so there is hope for mine.)
It all depends on the application you are working on, its business needs, priorities you have and so on. Generally you have several options:
Stay with one monolithic application
Stay with one monolithic application but decouple domain model across separate modules/bundles/libraries
Create a distributed architecture (like Service-Oriented Architecture (SOA) or Event-Driven Architecture (EDA))
One monolithic application
It's the easiest and cheapest way to develop an application in its early stages. You don't have to worry about a complex architecture or complex deployment and development processes. It also works better when there are not many developers around.
Once the application grows, this model becomes problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially when a lot of people work on it). The QA process takes more and more time, which may make it unusable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment also becomes much, much harder.
Within this approach you have one repository and one build process for all your APIs.
One monolithic application with a decoupled domain model
Within this approach you still have one big platform, but you connect logically separate modules as if they were third-party dependencies. For example, you may extract one module and create a library from it.
Thanks to that, you are able to introduce separate processes (QA, dev) for different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, but it may be hard to keep backward compatibility across libraries over the application's lifespan.
Regarding your question, in this approach you have a separate API, dev process and repository for each "type of action", as long as you move its domain logic into a separate library.
Distributed architecture (SOA / EDA)
SOA has a lot of benefits. You can introduce completely different processes for each service: dev, QA, deployment. You can deploy just one service at a time. You can also use different technologies for different purposes. The QA process gets more reliable as it involves smaller projects. You can version the communication (API) between services, which makes them even more independent. Moreover, you have a better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. You have many more components to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data-driven (a separate frontend which uses APIs to consume data) and the particular services don't need to communicate with each other, it may not be that complicated (but such an assumption is IMO quite risky; sooner or later you will need them to communicate).
In that approach you have a separate API, with a separate repository and separate processes, for each "type of action" (which I understand as a separate domain model / service).
As I wrote at the beginning, the way you choose depends on the application and its needs. Anyway, back to your original question, my suggestion is to keep the APIs as separate as you can. Even if you have one monolithic application, you should be able to version the APIs separately and keep their domain logic separate. Separating repositories and/or processes depends on the approach you choose (e.g. among those I mentioned above).
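As a sketch of what "separate even inside one application" can look like in practice (this assumes ASP.NET Core minimal APIs, which the question doesn't specify; the routes and payloads are placeholders):

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Each bounded context gets its own, independently versioned route group;
    // either group could later be extracted into a separate service without
    // touching the other one.
    var learning = app.MapGroup("/api/learning/v1");
    learning.MapGet("/courses", () => new[] { "course-1", "course-2" });

    var booking = app.MapGroup("/api/booking/v1");
    booking.MapGet("/reservations", () => new[] { "reservation-1" });

    app.Run();

Whether those two groups live in one repository/build or two then becomes an organisational choice rather than something forced by the code.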
If I missed your point, please describe in more detail what answer you expect.
Best!
I'm new to Prism and wonder how to organize the projects/modules for a line-of-business application.
In some places I read that the interface to the (WCF) service should be put into the infrastructure project. So, since the service interfaces do need the declarations of the data objects (customers, orders, etc) this would imply that I need to put those into the infrastructure project, too?
Since this project will grow large and contain a lot of data types, wouldn't it be more advisable to group those data objects and service interfaces into different projects?
But then these probably would not be "the Prism infrastructure project" anymore, would they?
Right now my guess would be: I need several projects containing service interfaces and data types grouped by domain, and several modules containing the viewmodels and views (grouped by domain, probably the same ones)?
And the infrastructure project would be reserved for some global helper stuff?
I find that properly grouping stuff into domains is one of the toughest challenges with Prism.
The infrastructure project should be isolated from your business services, data contracts, etc. It should only contain the classes that help build the application and should be reusable across other projects.
Again, you could define multiple infrastructure projects, separated by framework, i.e. WPF, ASP.NET, and Common.
For WPF/Prism, Infrastructure contains implementations of services for the Dispatcher, delegate commands, regions, a ModuleMapper (loading and unloading of views into regions), etc.
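To make the split concrete, here is a minimal sketch of a domain module that keeps its own views and service registrations out of Infrastructure (this uses the IModule shape of recent Prism versions; older versions have a single Initialize method, and the type, namespace and region names are placeholders):

    using Prism.Ioc;
    using Prism.Modularity;
    using Prism.Regions;

    // Stand-ins for types that would live in Orders.Contracts / Orders.Views
    public interface IOrderService { }
    public class OrderServiceClient : IOrderService { }
    public class OrdersView : System.Windows.Controls.UserControl { }

    // Lives in its own module assembly, e.g. Orders.Module
    public class OrdersModule : IModule
    {
        public void RegisterTypes(IContainerRegistry containerRegistry)
        {
            // The WCF service interface and its data contracts come from a
            // domain-specific contracts project, not from Infrastructure.
            containerRegistry.Register<IOrderService, OrderServiceClient>();
        }

        public void OnInitialized(IContainerProvider containerProvider)
        {
            var regionManager = containerProvider.Resolve<IRegionManager>();
            regionManager.RegisterViewWithRegion("MainRegion", typeof(OrdersView));
        }
    }

Infrastructure then stays limited to the cross-cutting helpers (region adapters, dispatcher wrappers, etc.) that every module can reuse.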
I am involved with several open source projects which taken together provide an application development framework. The question I have is what mechanism(s) should I provide for integrating them with each other?
On the conceptual level the answer is clear - DI/IoC. The "only" problem is to decide which one. In several installations we used StructureMap, but then a user came along who wanted only one of the components and wanted NInject.
So, to qualify the question, how should I go about building my components so that they can be integrated with each other (and 3rd Party) using a variety of DI/IoC containers.
The best I could come up with was to separate out all integration code into separate projects and then have a project per supported IoC container, but this sounds suspiciously like IoC squared.
Any bright ideas? Or am I just thinking too hard?
P.S. for the curious: NDjango; Bistro; Workflow Server
As long as you develop reusable components, you can implement them in a DI-friendly way without ever referencing any particular DI Container.
It's only when you need to compose an actual, running application that you need the DI Container, but as I understand it, you are developing a framework, and it's best to keep it DI-neutral.
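A minimal sketch of what DI-friendly but container-agnostic framework code looks like: constructor injection only, with the container choice deferred to each application's composition root (the type names are illustrative, not from NDjango or Bistro):

    using System;

    // Framework assembly: no reference to StructureMap, Ninject or any other container.
    public interface ITemplateLoader
    {
        string Load(string templateName);
    }

    public class TemplateRenderer
    {
        private readonly ITemplateLoader _loader;

        // Dependencies are requested through the constructor and nothing else.
        public TemplateRenderer(ITemplateLoader loader)
        {
            if (loader == null) throw new ArgumentNullException("loader");
            _loader = loader;
        }

        public string Render(string templateName)
        {
            return _loader.Load(templateName);
        }
    }

    // Each consuming application's composition root is the only place that knows
    // which container is in use, e.g. (registration syntax varies per container):
    //   For<ITemplateLoader>().Use<FileTemplateLoader>();   // StructureMap
    //   Bind<ITemplateLoader>().To<FileTemplateLoader>();   // Ninject

That way at most one thin, optional adapter project per container is needed, instead of baking a container into the framework itself.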
See this very related question (almost a duplicate).
For inspiration about integrating several projects while keeping them independent, see the Castle Project.
Our core business application uses a library (C# project) of business objects. Data access is done using the Wilson O/R Mapper (we're migrating to NHibernate this summer). The application has 3 front-end UIs: Windows Forms, ASP.NET, and a Windows Forms app that is installed on tablet PCs. The three front-ends perform different functions but they all access a core subset of the business classes.
The tablet PC application is the problem. We try to limit the amount of data pushed to the tablets to reduce the time it takes them to sync using SQL Server merge replication. The problem we've run into is when we add new functionality to the main application that we have no need to distribute to the tablet PCs or, if it's sensitive data, a strong need to not distribute it. Some of this can be controlled through replication, but we occasionally introduce dependencies in the core business objects that must be present in order for the O/R mapper to work.
Ideally, we would have two versions of the core business object library, Full and Compact. This seems like it would be a maintenance nightmare. Are there any strategies for managing this? Or alternatives? How does Microsoft manage the full and compact .NET Frameworks?
Your question talks about the Tablet PC, which is really just XP, so the Compact Framework (CF) isn't actually relevant; but for the sake of the question subject itself we can still talk about maintaining code used by both the CF and the full framework (assuming you actually meant Windows Mobile or Windows CE).
The first thing to know is that CF assemblies are retargetable. This means that a CF assembly can be directly used by a full-framework app without any recompiling (assuming it doesn't use any device-specific stuff like P/Invoking coredll without checking the runtime environment, using the WindowsMobile namespace, etc.).
If retargeting doesn't get you all the way there, then you can deal with the maintenance using compiler directives as well as partial classes. Daniel Moth covers tips on these quite well in his MSDN article.
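As a rough illustration of the directive/partial-class approach (the COMPACT_FRAMEWORK symbol and the members shown are placeholders; the actual conditional compilation symbol depends on how the device project is configured):

    // Customer.cs - shared source file, added "as link" to both projects
    public partial class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }

    #if COMPACT_FRAMEWORK
        // trimmed-down behaviour compiled only into the device build
        public decimal CreditLimit
        {
            get { return 0m; }
        }
    #else
        // full behaviour compiled only into the desktop/server build
        public decimal CreditLimit { get; set; }

        public void RecalculateCreditLimit()
        {
            // heavy, server-only logic
        }
    #endif
    }

    // Customer.Full.cs - exists only in the full-framework project
    public partial class Customer
    {
        public string InternalNotes { get; set; }
    }

The shared file keeps one source of truth, while the extra partial-class file lets the full-framework build add members the tablet build never sees.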
One thing you may be able to do, if you can compile for each platform separately, is use compiler directives to limit what is needed by the Tablet PC platform. However, since you are using an O/R mapper, that may prove to be difficult.
Now, in an ideal world you would have your domain objects (the ones mapped by the O/R mapper) contain very, very little business logic, and then have a BO layer that consumes these domain objects. If you managed to break out your code base this way, you could in theory deploy just the layers you need.
However it sounds to me more like you need to perform an intelligent split.
What you probably need to do is segment your code such that the Tablet PC BOs are in the core/root BO assembly. Then have a BO extension assembly that has the additional objects, rules, etc. that are needed for the WinForms / web app versions.
So while you would have two domain-level business object components at this point, you would not actually have any duplication, as your Tablet PC BO assembly would also be the base for the WinForms / ASP.NET apps. The extension DLL would only contain the extras needed for the bigger versions of the applications.
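A minimal sketch of that core/extension split (the assembly, namespace and member names are only illustrative):

    using System;

    // Core.BusinessObjects.dll - deployed everywhere, including the tablet PCs
    namespace Core.BusinessObjects
    {
        public class Order
        {
            public int Id { get; set; }
            public DateTime CreatedOn { get; set; }

            public virtual decimal CalculateTotal()
            {
                // logic every front-end needs
                return 0m;
            }
        }
    }

    // Extended.BusinessObjects.dll - deployed only with the WinForms / ASP.NET apps
    namespace Extended.BusinessObjects
    {
        using Core.BusinessObjects;

        public class OfficeOrder : Order
        {
            // sensitive / office-only data never ships to the tablets
            public decimal Margin { get; set; }

            public override decimal CalculateTotal()
            {
                // richer rules layered on top of the core behaviour
                return base.CalculateTotal();
            }
        }
    }

The tablet deployment only ships the core assembly, while the office applications reference both.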
If you followed this approach, it might make things easier to manage. Just look at it in terms of the common stuff needed everywhere versus the specialized extras. :)
I can go into much greater detail if you want; I just wanted to give you a basic hint.