ASP.NET 5/MVC 6 area functionality in multiple projects - asp.net-core

We are building a very large web site that will consist of a main site with many sub sites. These could typically be implemented in areas, but the development cycle and teams for these sub sites are disparate. We want to be able to deploy only a single sub site without taking an outage for the entire thing. We are trying to determine if there is a good, clean way to have a project for the main site and projects for each sub site.
In this case, the main site has all the core layout and navigation menus. The user experience should be that of a single site. Ideally, the sub site projects could be used just like areas in MVC utilizing the layout pages and other assets from the main site.
While I think this is doable by patching things together on the server, there needs to be a good development and debugging story. Ideally, the main site and a subsite could be loaded into Visual Studio for development. Additionally, it would be nice to be able to do a regular web deploy without duplicating core files in each sub site.
Like I mentioned, we could use areas, but would like to know if there are other viable options.
Answers to questions:
The sites will probably reuse some contexts and models. Will they share the actual objects in memory? I don't think so; each would have its own instances.
There will be several databases partitioned by domain: one for the core site and several more, as many as one per sub site. For example, sub site A might need to access some data from sub site B; this would be handled via a data or service layer.
The site URLs would ideally be as follows:
Core site: http://host
Sub site A: http://host/a
Sub site B: http://host/b
Specific things to share: _layout files, css, js, TypeScript, images, bower packages, etc. Maybe authentication, config, etc.
The Authorize attribute would be the preferred approach. A unified security infrastructure that behaves like a single site would be the best option. Not sure if that is possible.

This seems like a good architecture question. I wouldn’t know how to properly answer your question since I’m no architect and also because it seems to raise more questions than answers...
Assuming a typical layered application looks somewhat like this:
Contoso.Core (Class Library)
Contoso.Data (Class Library)
Contoso.Service (Class Library)
Contoso.Web.Framework (Class Library)
Contoso.Web (ASP.NET MVC application)
For now, I’m disregarding the fact that you want this in ASP.NET 5/MVC 6.
Contoso.Core:
This layer would hold your entities/pocos in addition to anything else that may be used in the other layers. For example, that could be Enums, Extension methods, Interfaces, DTOs, Helpers, etc...
Contoso.Data:
This layer would be where you’d store your DbContext (if you’re using EntityFramework) and all the DbSets<>, it would also hold the implementation of your repositories (while the interfaces could be living in the Contoso.Core layer...you’ll see why later).
This layer has a dependency on Contoso.Core
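To make that concrete, here is a minimal sketch (a hypothetical Product entity, Entity Framework 6 assumed) of how the entity and repository interface could sit in Contoso.Core while the DbContext and repository implementation live in Contoso.Data:

    using System.Collections.Generic;
    using System.Data.Entity; // Entity Framework 6 assumed
    using System.Linq;

    // Contoso.Core: a hypothetical entity plus the repository interface
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface IProductRepository
    {
        Product GetById(int id);
        IEnumerable<Product> GetAll();
    }

    // Contoso.Data: the DbContext and the repository implementation
    public class ContosoContext : DbContext
    {
        public DbSet<Product> Products { get; set; }
    }

    public class ProductRepository : IProductRepository
    {
        private readonly ContosoContext _context;

        public ProductRepository(ContosoContext context)
        {
            _context = context;
        }

        public Product GetById(int id) => _context.Products.Find(id);
        public IEnumerable<Product> GetAll() => _context.Products.ToList();
    }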
Contoso.Service:
This layer would be your Service layer where you define your services and all the business rules. The methods in this layer would return Entities/Pocos or DTOs. The Services would invoke the database through the repositories, assuming you use the Repository design pattern.
This layer has a dependency on Contoso.Core (for the entities/pocos/dtos and for the Interfaces of the repositories since I assume you’ll be injecting them). In addition, you’d also have a dependency on the Contoso.Data layer where the implementation of your repositories lives.
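Continuing the same hypothetical example, a service in Contoso.Service might look like this (a sketch, not a prescription):

    public interface IProductService
    {
        Product GetProduct(int id);
    }

    public class ProductService : IProductService
    {
        // The interface comes from Contoso.Core; the implementation
        // (in Contoso.Data) is injected, keeping this layer persistence-agnostic.
        private readonly IProductRepository _repository;

        public ProductService(IProductRepository repository)
        {
            _repository = repository;
        }

        public Product GetProduct(int id)
        {
            // Business rules would live here.
            return _repository.GetById(id);
        }
    }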
Contoso.Web.Framework:
This layer would have a dependency on Contoso.Core, Contoso.Data and Contoso.Service.
This layer is where you’d configure your IoC Container (Autofac, Unity, etc…) since it can see all the Interfaces and their implementation.
In addition, you can think of this layer as “This is where I configure stuff that my web application will use/might use”.
Since that layer exists to serve the web layer, you could place stuff there that is relevant to the web, such as custom Html Extensions, Helpers, Attributes, etc...
If tomorrow you have a second web application Contoso.Web2, all you’d need to do from Contoso.Web2 is to add a reference to Contoso.Web.Framework and you’d be able to use the custom Html Extensions, Helpers, Attributes, etc...
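For illustration, the IoC wiring in Contoso.Web.Framework could look like this minimal Autofac sketch (any container would do; the registrations reuse the hypothetical types from above):

    using Autofac;

    public static class IocConfig
    {
        public static IContainer BuildContainer()
        {
            var builder = new ContainerBuilder();
            builder.RegisterType<ContosoContext>().InstancePerLifetimeScope();
            builder.RegisterType<ProductRepository>().As<IProductRepository>();
            builder.RegisterType<ProductService>().As<IProductService>();
            return builder.Build();
        }
    }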
Contoso.Web:
This layer is your UI/client layer.
This layer has a dependency on Contoso.Core (for the entities/pocos/dtos). It also has a dependency on Contoso.Service since the Controllers would invoke the Service layer, which in turn would return entities/pocos/dtos. You’d also have a dependency on Contoso.Web.Framework since this is where your custom Html Extensions live and, more importantly, where your IoC container is configured.
Notice how this layer does not have a dependency on the Contoso.Data layer. That’s because it doesn’t need it: you go through the Service layer instead.
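A controller in Contoso.Web would then only see the service interface. A sketch, again with the hypothetical names from above:

    using System.Web.Mvc;

    public class ProductsController : Controller
    {
        private readonly IProductService _productService;

        public ProductsController(IProductService productService)
        {
            _productService = productService;
        }

        public ActionResult Details(int id)
        {
            var product = _productService.GetProduct(id);
            if (product == null)
                return HttpNotFound();
            return View(product);
        }
    }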
For the record, you could even replace the Contoso.Service layer with a Web API (Contoso.API), for example, allowing you to create different types of applications (winform, console, mobile, etc...) all invoking that Contoso.API layer.
In short...this is a typical layered architecture you often see in the MVC world.
So what about your question regarding the multiple sub sites? Where would it fit in all of this?
That’s where the many questions come in...
Will those sub sites have their own DbContext or share the same one as the Main site?
Will those sub sites have their own database or the same one as the Main site? Or even a different schema name?
Will those sub sites have their own URL since you seem to want the ability to deploy them independently?
What about things that are common to all those sub sites?
What about security, the Authorize attribute and many more things?
Perhaps the approach of using Areas and keeping everything in one website would be less error-prone.
Or what about looking at NopCommerce and using the plugin approach? Would that be an alternative?
Not sure I’ve helped in any way but I’d be curious to see how others would tackle this.

You need an IIS website configured on your dev machine. You can automate its creation with VS Tasks. You can also have a task to build and publish your solution there. This will take some time, but you'll have the advantage that it can be reused in your CI/CD build server, with proper configuration.
After creating your main web project in your solution, create a subsite as a new MVC web project, naming it in a way that makes sense. For example, if your main web project is called MySite.Web.Main, your subsite could be MySite.Web.MySubsite.
Delete global.asax and web.config from all your subsites, and there you go: once published, all your subsites will rely on the main site's global.asax and web.config. If you need to add configuration changes to your main web.config from your subsites, rely on web.config transformation tasks triggered after the build completes successfully. You can have different transform files for different environments.
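As a rough illustration, a transform file in a subsite (say Web.Release.config, with a hypothetical app setting) could use standard XDT syntax like this:

    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <appSettings>
        <!-- Hypothetical key the subsite contributes to the main web.config -->
        <add key="SubsiteA:FeatureEnabled" value="true"
             xdt:Transform="InsertIfMissing" xdt:Locator="Match(key)" />
      </appSettings>
    </configuration>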
Remember that you'll need to add all that automation to your CI/CD build server as well.
NOTE: when you add a new NuGet dependency to your subsite projects, there is a chance it'll create a new web.config. It's crucial that all subsite web.configs are either deleted or modified so that their "Build Action" property is set to "None", or they'll override the main web.config during the publication process. One way to work around this is, instead of deleting the subsite web.config, to delete its content and set "Build Action" to "None" as soon as you create the project.

Related

Zend Framework 3 singletons

I'm creating a new application in Zend Framework 3 and I have a question about a design pattern.
Without going into much detail, this application will have several Services, as in, it will be connecting to external APIs and even to multiple databases. The workflow is also very complex; a single action can have multiple flows depending on several external factors (which user is logged in, configs, etc.).
I know about dependency injection and the Zend Framework 3 Service Manager. However, I am worried about instantiating several services when the flow will actually use only a few of them in certain cases; we will also have services depending on other services. For this, I was thinking about using singletons.
Is a singleton really a solution here? I was looking for a way to use singletons in Zend Framework 3 and haven't figured out an easy one, since I can't find a way to use the Service Manager inside a service, as I can't retrieve the instance of the Service Manager outside of the Factory system.
What is an easy way to implement singletons in Zend Framework 3?
Why use singletons?
You don't need to worry about having too many services in your service manager, since they are only instantiated when you get them from the service manager.
Also, don't use the service manager inside another class except a factory. In ZF3 it's removed from the controllers for a reason; one of them is testability. If all services are injected via a factory, you can easily write tests. Also, if you read your code next year, you can easily see what dependencies are needed inside a class.
If you find there are too many services being injected inside a class which are not always needed you can:
Use the ProxyManager. This lazy-loads a service but doesn't instantiate it until a method is called.
Split the service: move some parts of a service into a new service. E.g. you don't need to place everything in a UserService; you can also have a UserRegisterService, UserEmailService, UserAuthService and UserNotificationsService.
Instead of ZF3, you can also think about zend-expressive. Without getting into too much detail, it is a lightweight middleware framework. You can use middleware to detect what is needed for a request and route to the required action to process the request. Something like this can probably also be done in ZF3, but maybe someone else can explain how to do it there.

Ember adapter and serializer

I'm building an Ember application with ember-cli and, as a persistence layer, an HTTP API using rails-api + Grape + ActiveModelSerializer. I am at a very basic stage, but I want to set up my front-end and back-end in as standard and clean a way as possible before going on with developing further API and Ember models.
I could not find a comprehensive guide about serialization and deserialization done by the store, but I read the documentation about DS.ActiveModelSerializer and DS.ActiveModelAdapter (which say the same things!) along with their parent classes.
What are the exact roles of adapter and serializer and how are they related?
Considering the tools I am using, do I need to implement both of them?
Both Grape/ActiveModelSerializer and Ember Data offer customization. As my back-end and front-end are for each other and not for anything else, which side is it better to customize?
Hmmm...which side is better is subjective, so this is sort of my thought process:
Generally speaking, one would want an API that is able to "talk to anything" in case a device client is required or in case the API gets consumed by other parties in the future, so that would suggest you'd configure your Ember app to talk to your backend. But again, I think this is a subjective question/answer 'cause no one but you and your team can tell what's good for a given scenario you are or might be experiencing while the app gets created.
I think the guides explain the Adapter and Serializer role/usage and customization pretty decently these days.
As for implementing them, it may be necessary to create an adapter for your application to define a global namespace if you have one (if your controllers are behind another area like localhost:3000/api/products, then set namespace: 'api'; otherwise this is not necessary), or similarly the host if you're using CORS. If you're working with ember-cli, you might want to set the security policy in the environment to allow connections to other domains for CORS and stuff like that. This can be done per model as well. But again, all of this is subjective as it depends on what you want/need to achieve.

Moving MVC-style service layer under WCF

Recently I've been working with MVC4 and have grown quite comfortable with the View > View Model > Controller > Service > Repository stack with IoC and all. I like this; it works well. However, we're moving towards a company-wide application platform that will serve the majority of all business application needs within the company.
Basic architecture goals:
Client facing MVC site
Internal Admin web site
Plethora of scheduled jobs importing/exporting data/etc to third parties
Service Bus sitting in the middle to expose business events
Public API for customer consumption
My initial thought is to introduce an "enterprise service layer" by applying my service interfaces to WCF contracts and registering the WCF proxy classes in my IoC. This would allow me to reuse the same pattern I'm currently using, but I've not found a lot of examples of this in practice. Except this guy.
Admittedly though, I'm unsure what the best solution is for a project this scale.
1) What are the considerations when centralizing business services?
2) How does this affect cross-cutting concerns like validation, authorization, etc? I thought I had that figured out already, but putting DTOs between the layers changes all this.
3) I'm experienced with WCF, but I hear ServiceStack is all the rage... Should ServiceStack be a consideration with its RESTful goodness?
This guy here. I am not an expert in this area by any means, but hopefully I can provide a bit more context around things.
The main problem with using IoC to resolve WCF ChannelFactory dependencies as per my post is that the client also needs access to the service contracts. This is fine for a View > View Model > Controller > Service > Repository type architecture, but is not likely to be possible (or desirable) for a shared public API.
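For context, the registration I was describing looks roughly like this Autofac sketch (IProductService and the "ProductServiceEndpoint" config name are hypothetical):

    using Autofac;
    using System.ServiceModel;

    public static class WcfClientModule
    {
        public static IContainer Build()
        {
            var builder = new ContainerBuilder();

            // One shared factory per contract; channels are cheap, factories are not.
            builder.Register(c => new ChannelFactory<IProductService>("ProductServiceEndpoint"))
                   .SingleInstance();

            // Resolve IProductService as a channel created from that factory.
            builder.Register(c => c.Resolve<ChannelFactory<IProductService>>().CreateChannel())
                   .As<IProductService>();

            return builder.Build();
        }
    }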
In an attempt to cover your other questions:
1) Some of the concerns are already mentioned in your second question. Add to that things like security, discoverability, payload type (XML, JSON etc), versioning, ... The list goes on. As soon as you centralize you suddenly gain a lot more management overhead. You cannot just change a contract without understanding the consequences.
2) All the cross-cutting stuff needs to be catered for in your services. You cannot trust anything that comes in from clients, especially if they are public. Clients can add some validation for themselves, but you have to ensure that your services are locked down correctly.
3) WCF is an option, especially if your organisation has a lot of existing WCF. It is particularly useful in that it supports a lot of different binding types, which means you can migrate to a new architecture over time by changing the binding types of the contracts.
It is quite 'enterprisey' and has a bewildering set of features that might be overkill for what you need.
REST is certainly popular at the moment. I have not used ServiceStack but have had good results with ASP.NET Web API. As an aside, WCF can also do REST.
I've previously provided a detailed explanation of the technical and philosophical differences between ServiceStack and WCF on InfoQ. In terms of API design, this earlier answer shows the differences between ServiceStack's message-based approach and the WCF / Web API remote-method approach.
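To give a flavour of the message-based approach, here is a minimal sketch (the names are hypothetical, and exact namespaces vary by ServiceStack version; the Route/IReturn/Service types are ServiceStack's):

    using ServiceStack;

    // The Request DTO is the service contract.
    [Route("/orders/{Id}")]
    public class GetOrder : IReturn<GetOrderResponse>
    {
        public int Id { get; set; }
    }

    public class GetOrderResponse
    {
        public int Id { get; set; }
        public string Status { get; set; }
    }

    // One service class can handle many request messages.
    public class OrderServices : Service
    {
        public object Any(GetOrder request)
        {
            return new GetOrderResponse { Id = request.Id, Status = "Shipped" };
        }
    }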
SOAP Support
ServiceStack also has SOAP support, but you really shouldn't be using SOAP for greenfield web services today.
HTML, Razor, Markdown and MVC
ServiceStack also has a great HTML story which can run stand-alone on its own with Razor support, as seen in razor.servicestack.net, or with Markdown Razor support, as seen in servicestack.net/docs/.
ServiceStack also integrates well with ASP.NET MVC as seen in Social Bootstrap Api, which is also able to take advantage of ServiceStack's quality alternative components.

good practice: REST API as the interface between the interface layer and business layer?

I was thinking about the architecture of a web application that I am planning on building and I found myself thinking a lot about a core part of the application. Since I will want to create, for example, an Android application to access it, I was already thinking about having an API.
Given the fact that I will want to have an external API to my application from day one, is it a good idea to use that API as an interface between the interface layer (web) and the business layer of my application? This means that even the main interface of my application would access the data through the API. What are the downsides of this approach? Performance?
In more general terms, if one is building a web application that is likely to need to be accessed in different ways, is it a good architectural design to have an API (web service) as the interface between the interface layer and business layer? Is REST a good "tool" for that?
Sounds like you've got two questions there, so my answer is in two parts.
Firstly, should you use an API between the interface layer and the business layer? This is certainly a valid approach, one that I'm using in my current project, but you'll have to decide on the benefits yourself, because only you know your project. Possibly the largest factor to consider is whether there will be enough different clients accessing the business layer to justify the extra development effort in developing an API? Often that simply means more than 1 client, as the benefits of having an API will be evident when you come to release changes or bug fixes. Also consider the added complexity, the extra code maintenance overhead and any benefits that might come from separating the interface and business layers such as increased testability.
Secondly, if you implement an API, should you use REST? REST is an architecture which says as much about how the remainder of your application is developed as it does about the API. It's no good defining resources at the API level that don't translate to the Business Layer. REST tends to be a good approach when you want lots of people to be able to develop against your API (like Netflix, for example). In the case of my current project, we've gone for XML over HTTP, because we don't need the benefits that REST generally offers (or SOAP, for that matter).
In general, the rule of thumb is to implement the simplest solution that works, and without coding yourself into a corner, develop for today's requirements, not tomorrow's.
Chris
You will definitely need a Web Service layer if you're going to be accessing it from a native client over the Internet.
There are obviously many approaches and solutions to achieve this; however, I consider the correct architectural guideline to be having a well-defined Service Interface on the server which is accessed by a Gateway on the client. You would then use POCO DTOs (Plain Old CLR Objects used as Data Transfer Objects) to communicate between the endpoints. The DTOs' main purpose is to provide an optimal representation of your web service over the wire; they also allow you to avoid having to deal with serialization, as it should be handled transparently by the Client Gateway and Service Interface libraries.
It really depends on how big your project/app is whether or not you want to go through the effort of mapping your DTOs to the client and server domain models. For large applications the general approach would be, on the client, to map your DTOs to your UI Models and have your UI Views bind to those. On the server you would map your DTOs to your domain models and, depending on the implementation of the service, persist them.
REST is an architectural pattern which, for small projects, I consider additional overhead/complexity, as it is not as good a programmatic fit as RPC / document-centric web services. In a few words, the general idea of REST is to develop your services around resources. These resources can have multiple representations, which your web service should provide depending on the preferred Content-Type indicated by your HTTP client (i.e. in the HTTP Accept header). The canonical URLs for your web services should also be logically formed (e.g. /customers/reports/1 as opposed to /GetCustomerReports?Id=1), and your web services would ideally return the list of 'valid states your client can enter' with each response. Basically REST is a nice approach promoting a loosely-coupled architecture and re-use; however, it requires more effort to 'adhere' to than standard RPC/document-based web services, and its benefits are unlikely to be visible in small projects.
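Just to make the URL shape concrete (this is not the framework discussed in this answer, merely an illustration): an ASP.NET Web API 2 controller with attribute routing, hypothetical names, data access omitted:

    using System.Collections.Generic;
    using System.Web.Http;

    public class CustomerReportsController : ApiController
    {
        // GET /customers/1/reports rather than /GetCustomerReports?Id=1
        // (assumes config.MapHttpAttributeRoutes() has been called at startup)
        [HttpGet]
        [Route("customers/{customerId:int}/reports")]
        public IHttpActionResult GetReports(int customerId)
        {
            var reports = new List<string>(); // load from your data layer here
            return Ok(reports);
        }
    }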
If you're still evaluating which web service technology you should use, you may want to consider using my open source web framework, as it is optimized for this task. The DTOs that you define your web service interface with can be re-used on the client (which is not normally the case) to provide a strongly-typed interface where all the serialization is taken care of for you. It also has the added benefit of enabling each web service you create to be called via SOAP 1.1/1.2, XML and JSON endpoints automatically, without any extra configuration, so you can choose the most optimal endpoint for every client scenario, i.e. native desktop or web app, etc.
My recent preference, which is based on J2EE6, is to implement the business logic in session beans and then add SOAP and RESTful web services as needed. It's very simple to add the glue to implement the web services around those session beans. That way I can provide the service that makes the most sense for a particular user application.
We've had good luck doing something like this on a project. Our web services mainly do standard content management, with a high proportion of reads (GET) to writes (PUT, POST, DELETE). So if your logic layer is similar, this is a very reasonable approach to consider.
In one case, we have a video player app on Android (Motorola Droid, Droid 2, Droid X, ...) which is supported by a set of REST web services off in the cloud. These expose a catalog of video on demand content, enable video session setup and tear-down, handle bookmarking, and so on. REST worked out very well for this.
For us one of the key advantages of REST is scalability: since RESTful GET responses may be cached in the HTTP infrastructure, many more clients can be served from the same web application.
But REST doesn't seem to fit some kinds of business logic very well. For instance in one case I wrapped a daily maintenance operation behind a web service API. It wasn't obvious what verb to use, since this operation read data from a remote source, used it to do a lot of creates and updates to a local database, then did deletes of old data, then went off and told an external system to do stuff. So I settled on making this a POST, making this part of the API non-RESTful. Even so, by having a web services layer on top of this operation, we can run the daily script on a timer, run it in response to some external event, and/or have it run as part of a higher level workflow.
Since you're using Android, take a look at the Java Restlet Framework. There's a Restlet edition supporting Android. The director of engineering at Overstock.com raved about it to me a few years ago, and everything he told us was true: it's a phenomenally well-done framework that makes things easy.
Sure, REST could be used for that. But first ask yourself: does it make sense? REST is a tool like any other, and while a good one, not always the best hammer for every nail. The advantage of building this interface RESTfully is that, IMO, it will make it easier in the future to create other uses for this data, maybe something you haven't thought of yet. If you decide to go with a REST API, your next question is: what language will it speak? I've found AtomPub to be a great way for processes/applications to exchange info, and it's very extensible, so you can add a lot of custom metadata and yet still be easily parsed with any Atom library. Microsoft uses AtomPub in its cloud [Azure] platform to talk between the data producers and consumers. Just a thought.

Add Service Reference to WCF Service within Same Project

Is it an acceptable programming practice to add a Service Reference to a Project where the Service being referenced is defined within the same VS Project? (Service and Service Reference are in the same Project)
example:
MyWebAppProj
-Services
--MyService
-Service References
--MyServiceServiceReference.MyServiceClient
-Default.aspx.cs uses MyServiceServiceReference.MyServiceClient
The rationale behind this is that a Silverlight app may be added to the project. If it is, we would have to expose all the business logic methods through a service layer anyway, so why not just do that first and use the services everywhere to stay standardized between web pages and Silverlight pages.
I can't see why you would want to do that at all.
If you're already inside the same project as the service, at the very least you've already got access to all the service/data contracts, so really, calling the service is already very, very easy. You can either use a ChannelFactory directly, or write your own custom ClientBase<T>-derived client proxy class (which is trivial); there's no reason why you'd want to add a service reference in this case.
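For instance, a trivial hand-rolled proxy might look like this (a sketch with a hypothetical IMyService contract):

    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string GetData(int value);
    }

    // A trivial ClientBase<T>-derived proxy: each method just forwards to Channel.
    public class MyServiceClient : ClientBase<IMyService>, IMyService
    {
        public string GetData(int value)
        {
            return Channel.GetData(value);
        }
    }

    // Usage: var client = new MyServiceClient();  // endpoint comes from config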
Furthermore, if you added a service reference, you'd then be stuck with a bunch of duplicate definitions in the same project, which makes little sense (yes, you can isolate the generated code into a separate namespace, but still).