Most of my programming has been in web-based applications, although I have done some desktop application development for personal projects. A recurring design issue has been how to manage configuration. For example, we all know that in ASP.NET the web.config is used frequently to hold configuration information, whether it be generated or manually configured.
Before I even knew what design patterns were, I would implement what would be in essence a Singleton pattern. I'd write a static class that, upon startup, would read the configuration file and store the information, along with any auto-generated information, in fields, which were then exposed to the rest of the application through some kind of accessor (properties, get() methods...).
Something in the back of my mind keeps telling me that this isn't the best way to go about it. So, my question is, are there any design patterns or guidelines for designing a configuration system? When and how should the configuration system read the configuration, and how should it expose this information to the rest of the application? Should it be a Singleton? I'm not asking for recommended storage mechanisms (XML vs database vs text file...), although I am interested in an answer to that as well.
Not to sound like a cop-out, but it really is going to depend entirely on your app. Very simple apps (it sounds like you're talking about web-based apps, so I'll skip fat clients) usually need nothing more than a global config (you can use web.config and a singleton for that) and a per-user config (a user table, and possibly a linked config table or name/value pair table, can handle that).
More sophisticated apps might need a full hierarchy of configuration that's protectable and overridable. For instance, I might have several app-defined defaults that can be overridden for each group a user belongs to, then for the user themselves, and finally an administrator-defined value for a specific group or user that can't be overridden by the user.
For that, I generally use a singleton "root config" object with methods that expose the additional levels of the hierarchy and the config properties at each level. The root is responsible for resolving the hierarchy, but if necessary (for setting config values, for instance) you can traverse the hierarchy yourself to deal with settings specific to a single level.
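As a rough illustration, here's a minimal sketch of that idea; the class shape and level names are invented, not taken from any particular framework:
using System;
using System.Collections.Generic;

// Sketch of a singleton "root config" that resolves a value through the hierarchy.
// Resolution order: admin-locked value -> user override -> group override -> app default.
public sealed class RootConfig
{
    private static readonly Lazy<RootConfig> _instance = new Lazy<RootConfig>(() => new RootConfig());
    public static RootConfig Instance => _instance.Value;

    // In a real system these would be loaded from web.config, a database, etc.
    private readonly Dictionary<string, string> _appDefaults = new Dictionary<string, string>();
    private readonly Dictionary<(string group, string key), string> _groupOverrides = new Dictionary<(string, string), string>();
    private readonly Dictionary<(string user, string key), string> _userOverrides = new Dictionary<(string, string), string>();
    private readonly Dictionary<(string user, string key), string> _adminLocks = new Dictionary<(string, string), string>();

    private RootConfig() { }

    public string Get(string key, string user, string group)
    {
        if (_adminLocks.TryGetValue((user, key), out var locked)) return locked;        // can't be overridden
        if (_userOverrides.TryGetValue((user, key), out var userValue)) return userValue;
        if (_groupOverrides.TryGetValue((group, key), out var groupValue)) return groupValue;
        return _appDefaults.TryGetValue(key, out var appDefault) ? appDefault : null;
    }
}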
And finally, there's the issue of latency. If you expect config settings to change often, reading them from storage each time they're requested is best, but most expensive.
If not, you can cache settings, along with a "last read" date, and simply reread setting values into the cache after an expiry time.
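A minimal sketch of that kind of caching, assuming the storage lookup is handed in as a delegate (the names here are illustrative):
using System;
using System.Collections.Concurrent;

// Sketch: keep each setting with the time it was last read, and re-read it
// from storage once the cached entry is older than the expiry window.
public class CachedSettings
{
    private readonly Func<string, string> _readFromStorage;   // e.g. a database or file lookup
    private readonly TimeSpan _expiry;
    private readonly ConcurrentDictionary<string, (string value, DateTime lastRead)> _cache =
        new ConcurrentDictionary<string, (string, DateTime)>();

    public CachedSettings(Func<string, string> readFromStorage, TimeSpan expiry)
    {
        _readFromStorage = readFromStorage;
        _expiry = expiry;
    }

    public string Get(string key)
    {
        if (_cache.TryGetValue(key, out var entry) && DateTime.UtcNow - entry.lastRead < _expiry)
            return entry.value;

        var fresh = _readFromStorage(key);
        _cache[key] = (fresh, DateTime.UtcNow);
        return fresh;
    }
}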
Guidelines:
Consider management of the settings
How often are they likely to be changed?
Who can change them, and how do they do it?
E.g. web.config changes require someone with technical skill and result in your app being reset when the file changes, whereas settings managed through a UI provided by the system won't have those issues.
Security
Do you need to encrypt any settings?
Do you need to segregate settings so that only certain people can access certain ones?
Do you need to provide any auditing of changes?
Naming Conventions
Do have some.
I strongly recommend using a URI-based approach for setting keys - this helps avoid collisions between settings from different components (likely when you start adding third-party components to your system, or when yours is added to someone else's).
URI-based keys also make settings easier to maintain, as it's easier to see what each setting applies to.
There are different naming "schemas" you can use; I like to mirror the code structure if I can, but sometimes you might want to follow something else, like the business process.
Don't use ambiguous names like "DatabaseConnection".
URI based key example:
<appSettings>
<add key="Morphological.RoboMojo.BusinessLogic.IJobDataProvider" value="Morphological.RoboMojo.XmlDataProvider.JobDataProvider, Morphological.RoboMojo.XmlDataProvider"/>
<add key="Morphological.RoboMojo.BusinessLogic.ITaskDataProvider" value="Morphological.RoboMojo.XmlDataProvider.TaskDataProvider, Morphological.RoboMojo.XmlDataProvider"/>
<add key="Morphological.RoboMojo.XmlDataProvider.NameAndPathOfDataFile" value="C:\Program Files (x86)\Morphological Software Solutions\RoboMojo\RoboMojoState.txt"/>
<add key="Morphological.RoboMojo.XmlDataProvider.PathOfDataFileBackups" value="C:\Program Files (x86)\Morphological Software Solutions\RoboMojo\RoboMojoState Backups"/>
<add key="Morphological.RoboMojo.BusinessLogic.ITaskExecutorProvider" value="Morphological.RoboMojo.TaskExecutorMSRoboCopy.RoboMojo, Morphological.RoboMojo.TaskExecutorMSRoboCopy"/>
<add key="Morphological.RoboMojo.TaskExecutorMSRoboCopy.LocationOfRoboCopyEXE" value="C:\Windows\System32\RoboCopy.EXE"/>
</appSettings>
I've read a lot about caching in ASP.NET and it seems that there is no cache service that does what I need, but maybe you can help me find it.
My requirements:
The cache must be isolated per-component because I use the same key in different services to store different data.
It must support expiration settings like IMemoryCache does.
Why do I need it? Because I have several services that request data from remote sources, and they all have to cache that data to get decent performance.
Maybe I'm missing something and it's OK to use IMemoryCache, but I didn't find a way to isolate one service's cached data from another service's. Of course, I could prefix every key with the name of my service, but that's an ugly way to use it.
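One way to hide that prefix behind a wrapper, so individual services never build keys by hand; the IsolatedCache<TOwner> type below is made up for illustration, not part of the framework:
using System;
using Microsoft.Extensions.Caching.Memory;

// Illustration only: wraps the shared IMemoryCache and silently namespaces every key
// with the owning service type, so two services can use the same key without colliding.
public class IsolatedCache<TOwner>
{
    private readonly IMemoryCache _cache;

    public IsolatedCache(IMemoryCache cache) => _cache = cache;

    private static string Scope(string key) => typeof(TOwner).FullName + ":" + key;

    public TItem GetOrCreate<TItem>(string key, TimeSpan expiry, Func<TItem> factory) =>
        _cache.GetOrCreate(Scope(key), entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = expiry;
            return factory();
        });
}

// Registration (hypothetical): services.AddMemoryCache();
//                              services.AddSingleton(typeof(IsolatedCache<>));
// A service then takes IsolatedCache<ThatService> in its constructor, and its keys
// never clash with any other service's keys.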
I am planning to use gRPC to build my search API, but I am wondering how the gRPC service definition files (e.g. .proto) are synced between the server and the clients (assuming they all use different technologies).
Also, if the server changes one of the .proto files, how will the clients be notified to regenerate their stubs in accordance with those changes?
To summarize: how do you share the definitions (.proto) with clients, and how are clients notified if any changes to those files have occurred?
Simple: they aren't. All sync here is manual and usually requires a rebuild and redeploy, after you've become aware of a change, and have updated your .proto files.
Without updating, the fields and methods that you know about should at least keep working. You just won't have the new bits.
Note also: while you can extend schemas by adding new fields and services / methods, if you change the meaning of a field, or the field type, or the message types on a service: expect things to go very badly wrong.
We are building a very large web site that will consist of a main site with many sub sites. These could typically be implemented in areas, but the development cycle and teams for these sub sites are disparate. We want to be able to deploy only a single sub site without taking an outage for the entire thing. We are trying to determine if there is a good, clean way to have a project for the main site and projects for each sub site.
In this case, the main site has all the core layout and navigation menus. The user experience should be that of a single site. Ideally, the sub site projects could be used just like areas in MVC utilizing the layout pages and other assets from the main site.
While I think this is doable by patching things together on the server, there needs to be a good development and debugging story. Ideally, the main site and a subsite could be loaded into Visual Studio for development. Additionally, it would be nice to be able to do a regular web deploy without duplicating core files in each sub site.
Like I mentioned, we could use areas, but would like to know if there are other viable options.
Answers to questions:
The sites will probably reuse some contexts and models. Do they share the actual objects in memory? I don't think so; each would have its own instances.
There will be several databases partitioned by domain. One for the core site and several more, as many as one per sub site. For example sub site A might need to access some data from sub-site B. This would be handled via a data or service layer.
The site URLs would ideally be as follows:
Core site: http://host
Sub site A: http://host/a
Sub site B: http://host/b
Specific things to share: _layout files, css, js, TypeScript, images, bower packages, etc. Maybe authentication, config, etc.
The authorize attribute would be the preferred approach. A unified security infrastructure that behaved like a single site would be the best option. Not sure if that is possible.
This seems like a good architecture question. I wouldn’t know how to properly answer your question since I’m no architect and also because it seems to raise more questions than answers...
Assuming a typical layered application looks somewhat like this:
Contoso.Core (Class Library)
Contoso.Data (Class Library)
Contoso.Service (Class Library)
Contoso.Web.Framework (Class Library)
Contoso.Web (asp.net MVC application)
For now, I’m disregarding the fact that you want this in asp.net 5/MVC 6.
Contoso.Core:
This layer would hold your entities/pocos in addition to anything else that may be used in the other layers. For example, that could be Enums, Extension methods, Interfaces, DTOs, Helpers, etc...
Contoso.Data:
This layer would be where you'd store your DbContext (if you're using Entity Framework) and all the DbSet<> properties; it would also hold the implementation of your repositories (while the interfaces could live in the Contoso.Core layer... you'll see why later).
This layer has a dependency on Contoso.Core
Contoso.Service:
This layer would be your Service layer where you define your services and all the business rules. The methods in this layer would be returning Entities/Pocos or DTOs. The Services would invoke the database through the repositories, assuming you use the Repository Design Pattern.
This layer has a dependency on Contoso.Core (for the entities/pocos/dtos and for the Interfaces of the repositories since I assume you’ll be injecting them). In addition, you’d also have a dependency on the Contoso.Data layer where the implementation of your repositories lives.
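A hedged sketch of how those pieces could line up; every type name below (Job, IJobRepository, ContosoDbContext, JobService) is invented for illustration:
using Microsoft.EntityFrameworkCore;   // or System.Data.Entity for classic EF6

// Contoso.Core - a POCO and the repository interface; no EF dependency here.
public class Job
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IJobRepository
{
    Job GetById(int id);
}

// Contoso.Data - the DbContext and the EF-backed implementation of the Core interface.
public class ContosoDbContext : DbContext
{
    public DbSet<Job> Jobs { get; set; }
}

public class JobRepository : IJobRepository
{
    private readonly ContosoDbContext _context;
    public JobRepository(ContosoDbContext context) => _context = context;
    public Job GetById(int id) => _context.Jobs.Find(id);
}

// Contoso.Service - business rules; depends only on the interface from Core.
public class JobService
{
    private readonly IJobRepository _jobs;
    public JobService(IJobRepository jobs) => _jobs = jobs;

    public Job GetActiveJob(int id)
    {
        var job = _jobs.GetById(id);
        // business rules would go here
        return job;
    }
}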
Contoso.Web.Framework:
This layer would have a dependency on Contoso.Core, Contoso.Data and Contoso.Service.
This layer is where you’d configure your IoC Container (Autofac, Unity, etc…) since it can see all the Interfaces and their implementation.
In addition, you can think of this layer as “This is where I configure stuff that my web application will use/might use”.
Since that layer is for the web layer, you could place stuff that is relevant to the web such as custom Html Extensions, Helpers, Attributes, etc...
If tomorrow you have a second web application Contoso.Web2, all you’d need to do from Contoso.Web2 is to add a reference to Contoso.Web.Framework and you’d be able to use the custom Html Extensions, Helpers, Attributes, etc...
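As an example of the container wiring that might live in Contoso.Web.Framework (Autofac shown purely as one possibility, reusing the invented types from the sketch above):
using Autofac;

// Contoso.Web.Framework - the one layer that can see the interfaces (Core)
// and their implementations (Data, Service), so it does the wiring.
public class ContosoModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<ContosoDbContext>().InstancePerLifetimeScope();
        builder.RegisterType<JobRepository>().As<IJobRepository>().InstancePerLifetimeScope();
        builder.RegisterType<JobService>().InstancePerLifetimeScope();
    }
}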
Contoso.Web:
This layer is your UI/client layer.
This layer has a dependency on Contoso.Core (for the entities/pocos/dtos). It also has a dependency on Contoso.Service, since the Controllers would invoke the Service Layer, which in turn would return entities/pocos/dtos. You'd also have a dependency on Contoso.Web.Framework, since this is where your custom Html extensions live and, more importantly, where your IoC container is configured.
Notice how this layer does not have a dependency on the Contoso.Data layer. That's because it doesn't need it. You're going through the Service Layer.
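A controller in Contoso.Web would only ever talk to the service layer, roughly like this (classic MVC 5 shown, since the asp.net 5/MVC 6 part is being disregarded here; the types are the invented ones from the sketches above):
using System.Web.Mvc;

// Contoso.Web - no reference to Contoso.Data; the controller only knows the service.
public class JobsController : Controller
{
    private readonly JobService _jobService;

    public JobsController(JobService jobService) => _jobService = jobService;

    public ActionResult Details(int id)
    {
        var job = _jobService.GetActiveJob(id);
        return job == null ? (ActionResult)HttpNotFound() : View(job);
    }
}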
For the record, you could even replace the Contoso.Service layer with a WebAPI (Contoso.API), for example, allowing you to create different types of applications (winform, console, mobile, etc...) all invoking that Contoso.API layer.
In short...this is a typical layered architecture you often see in the MVC world.
So what about your question regarding the multiple sub sites? Where would it fit in all of this?
That’s where the many questions come in...
Will those sub sites have their own DbContext or share the same one as the Main site?
Will those sub sites have their own database or the same one as the Main site? Or even a different schema name?
Will those sub sites have their own URL since you seem to want the ability to deploy them independently?
What about things that are common to all those sub sites?
What about security, Authorize Attribute and many more things?
Perhaps the approach of Areas and keeping everything in one website might be less error prone.
Or what about looking at NopCommerce and using the plugin approach? Would that be an alternative?
Not sure I’ve helped in any way but I’d be curious to see how others would tackle this.
You need an IIS website configured on your dev machine. You can automate its creation with VS tasks, and you can also have a task to build and publish your solution there. This will take some time, but you'll have the advantage that it can be reused in your CI/CD build server, with proper configuration.
After creating your main web project in your solution, create a subsite as a new web MVC project, naming it in a way that makes sense. For example, if your main web project is called MySite.Web.Main, Your subsite could be MySite.Web.MySubsite.
Delete global.asax and web.config from all your subsites, and there you go. Once published, all your subsites will rely on the main site's global.asax and web.config. If you need to add configuration changes to your main web.config from your subsites, rely on web.config transformation tasks triggered after the build completes successfully. You can have different transform files for different environments.
Remember that you'll need to add all that automation to your CI/CD build server as well.
NOTE: when you add a new NuGet dependency to your subsite projects, there is a chance it'll create a new web.config. It's crucial that all subsite web.configs are either deleted or have their "Build Action" property set to "None", or they'll override the main web.config during the publication process. One way to work around this is, instead of deleting the subsite web.config, to delete its content and set "Build Action" to "None" as soon as you create the project.
I have an MVC 6 application in which I need to connect to a different database (i.e. a different physical file but the same schema) depending on who is accessing it.
That is: each customer of the web application will have its data isolated in an SQL database (on Azure, with different performance, price levels, etc.), but all those databases will share the same relational schema and, of course, the Entity Framework context class.
var cadConexion = @"Server=(localdb)\mssqllocaldb;Database=DBforCustomer1;Trusted_Connection=True;";
services.AddEntityFramework().AddSqlServer().AddDbContext<DAL.ContextoBD>(options => options.UseSqlServer(cadConexion));
The problem is that if I register the service this way I've tied it to a concrete database for a concrete customer, and I don't know if I can change it later when the middleware execution starts (this would be a good point, as I can then know who is ringing at the door).
I know I can construct the database context passing the connection string as a parameter, but this would imply creating the database context at runtime (early in the pipeline) for every request, and I don't know if this could be potentially inefficient or a bad practice. Furthermore, I think this way I can't register the database context as a service for injecting it into my controllers...
What is the correct approach for this? Anybody has a similar configuration working on production?
Thanks in advance
I would have preferred not to answer my own question, but I feel I must offer guidance to those with a similar problem after long and deep research on the internet, so I can save them a lot of time testing multi-connection scenarios, which is quite laborious...
I've finally used a (very recent) Azure feature and set of APIs called "Elastic Database Tools" which, to be concise, is a set of tools from Microsoft aimed at addressing this exact problem, especially for SaaS (software as a service) scenarios (as mine is).
Here is a good link to start with:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-get-started/
Good luck with your projects!
First of all, I do not recommend swapping connection strings per request.
But that's not the question. You can do this. You will need to pass your DbContext a new connection string.
.AddDbContext caches the connection string in the dependency injection container, so you cannot use DI to make this scenario work. Instead, you will need to instantiate your DbContext yourself and pass it a new connection string.
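One hedged sketch of "instantiate it yourself": skip AddDbContext entirely and register a scoped factory that builds the context per request from the caller's tenant. The ResolveConnectionStringForTenant helper is a placeholder, and this assumes ContextoBD has a constructor accepting DbContextOptions<ContextoBD>:
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public static class TenantDbContextRegistration
{
    public static void AddTenantDbContext(this IServiceCollection services)
    {
        services.AddHttpContextAccessor();

        // Scoped: one context per request, built with the connection string
        // of whichever customer is "ringing at the door".
        services.AddScoped(provider =>
        {
            var httpContext = provider.GetRequiredService<IHttpContextAccessor>().HttpContext;
            var connectionString = ResolveConnectionStringForTenant(httpContext);

            var options = new DbContextOptionsBuilder<DAL.ContextoBD>()
                .UseSqlServer(connectionString)
                .Options;

            return new DAL.ContextoBD(options);
        });
    }

    private static string ResolveConnectionStringForTenant(HttpContext httpContext)
    {
        // Placeholder: look the tenant up from the host name, a claim, or a header.
        throw new NotImplementedException();
    }
}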
I'm about to begin writing a suite of WCF services for a variety of business applications. This SOA will be very immature to begin with and eventually evolve into a strong middle-ware layer.
Unfortunately I do not have the luxury of writing a full set of services and then re-factoring applications to use them; it will be an iterative process done over time. The question I have is around evolving (changing, adding, removing properties of) business objects.
For example: if you have an SOA exposing a service that returns obj1, and that service is being consumed by app1, app2 and app3, imagine that object is changed for app1. I don't want to have to update app2 and app3 for changes made for app1. If the change is an added property it will work fine; it will simply not be mapped. But what happens when you remove a property? Or change a property from a string to an int? How do you manage the change?
Thanks in advance for your help!
PS: I did do a little picture but apparently I need a reputation of 10 so you will have to use your imagination...
The goal is to limit the changes you force your clients to have to make immediately. They may eventually have to make some changes, but hopefully it is only under unavoidable circumstances like they are multiple versions behind and you are phasing it out altogether.
Non-breaking changes can be:
Adding optional properties (see the data-contract sketch after these lists)
Adding new operations
Breaking changes include:
Adding required properties
Removing properties
Changing data types of properties
Changing name of properties
Removing operations
Renaming operations
Changing the order of the properties if explicitly specified
Changing bindings
Changing service namespace
Changing the meaning of the operation. What I mean by this, for example, is if the operation always returned all records but is changed to only return certain records. This would break the client's expected response.
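To make the property items above concrete, here is a hedged WCF data-contract sketch (the Customer type is invented): adding the optional member is non-breaking, whereas making it required, removing it, or changing its type would break existing clients.
using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember(IsRequired = true)]
    public int Id { get; set; }

    [DataMember(IsRequired = true)]
    public string Name { get; set; }

    // Added in a later release: optional and omitted when empty,
    // so clients built against the old contract keep working.
    [DataMember(IsRequired = false, EmitDefaultValue = false)]
    public string LoyaltyTier { get; set; }
}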
Options:
Add a new operation to handle the updated properties and logic. Modify the code behind the original operation to set the new properties and refactor the service logic if you can. Just remember not to change the meaning of the operation.
If you want to remove an operation that you no longer support, you are forcing the client to change at some point. You could add documentation in the WSDL to let clients know that it is being deprecated. If you are letting the client use your contract DLL, you could use the [Obsolete] attribute (it is not generated in the final WSDL, so that's why you can't just use it for everything).
If it is a big change altogether, a new version of the service and/or interface and endpoint can be created easily, i.e. v2, v3, etc. Then you can have the clients upgrade to the new version when the time is right (see the sketch below).
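A hedged sketch of that side-by-side versioning (the contract, DTO, and namespace values are invented):
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
}

[DataContract]
public class OrderDtoV2
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Currency { get; set; }
}

// v1 stays exactly as existing clients expect it.
[ServiceContract(Namespace = "http://example.com/orders/v1")]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int id);
}

// v2 lives beside it on its own endpoint; clients move over when the time is right.
[ServiceContract(Namespace = "http://example.com/orders/v2")]
public interface IOrderServiceV2
{
    [OperationContract]
    OrderDtoV2 GetOrder(int id);
}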
Here is also a good flowchart from “Apress - Pro WCF4: Practical Microsoft SOA Implementation” that may help.