Like a lot of things, I'm sure there's a good reason for this, so please help me understand...
Why, by default, do WCF services store settings in app.config?
This has been so frustrating trying to work with multiple Silverlight class libraries. These class libraries are supposed to be completely independent from each other, and this dependency on the app.config seems to cause the following headaches:
Single Responsibility Principle - I should be able to add a reference to a class library and go. If that class library uses a service reference, this idea is shot before I even start coding against it.
Muddy Configuration - To get other libraries to work, I have to copy and paste the service configurations into the "main" application configs. If an endpoint changes in any way, I can't just worry about a new version of that class DLL - I have to worry about anything that uses it, too.
Complex Alternatives - Programmatically creating the endpoint isn't pretty. Period.
There has to be a better way. Why doesn't WCF at least separate the service configurations into a ServiceName.config or something that gets copied to an output directory? What am I missing? How do you deal with this?
Because the alternatives aren't pretty either. The problem with "ServiceName.config" is that ServiceName also needs to be configurable.
The root problem is having Service references in libraries to start with. And a library component cannot dictate a binding for an App. So your SRP argument does not hold.
I concur with #Henk - library assemblies shouldn't have WCF references. If for some reason one does require a service, I would use dependency injection and pass the service reference into the library function - this is vitally important for maximum testing benefit.
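A sketch of that injection (the service and class names here are hypothetical, not from the original post):

    // The library declares what it needs and never reads config itself.
    public interface IPricingService
    {
        decimal GetUnitPrice(int productId);
    }

    public class PriceCalculator
    {
        private readonly IPricingService _pricingService;

        // The host application builds the WCF proxy (which implements
        // IPricingService) and injects it here.
        public PriceCalculator(IPricingService pricingService)
        {
            _pricingService = pricingService;
        }

        public decimal GetTotal(int productId, int quantity)
        {
            return _pricingService.GetUnitPrice(productId) * quantity;
        }
    }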
I also don't buy your argument that "programmatically creating the endpoint isn't pretty". Creating and assigning an endpoint is just a couple of lines of code, and is a technique I use almost exclusively with my Silverlight components (e.g. if no address is specified within the ServiceReferences.ClientConfig file then I fall back to known service locations within the hosting application, in which case those endpoints are programmatically created).
Basically, if you don't mind the couple of lines of code required to programmatically create an endpoint, then you can store your address details anywhere, in any config file. You only need to store the addresses in the app.config if you are going for a purely declarative approach.
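For reference, the programmatic fallback amounts to something like this (a minimal sketch; "CatalogServiceClient" stands in for whatever proxy your service reference generated, and the address is illustrative):

    // Build the binding and address in code instead of reading them
    // from ServiceReferences.ClientConfig.
    var binding = new System.ServiceModel.BasicHttpBinding();
    var address = new System.ServiceModel.EndpointAddress(
        "http://example.com/Services/CatalogService.svc");

    // Generated WCF proxies expose a (Binding, EndpointAddress) constructor.
    var client = new CatalogServiceClient(binding, address);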
Related
I am trying to understand Microsoft.Practices.Unity.
So, I have this solution:
web project
business class library project as my logic tier
data class library project as my data access tier
And I want to use Unity to separate web tier from logic and separate logic tier from data, using DI.
I have created a unity.config file in my web project, because I want to control the registration from a configuration file rather than in compiled code. This is OK for me. I am using Unity.MVC4.
But with that, I only resolve my dependency injection from the web tier to the business tier. How can I do the same thing from the business tier to the data tier?
I have already seen some web examples, but I am still confused, because no example walks through the process from the web tier to the data tier, step by step, to show how to implement Unity DI.
I would like to see a simple example, with a n-tier solution with total DI implementation with Unity.
Refrain from using the config file for registering dependencies. It is brittle and error prone, and you can only do a subset of the things that you can do in code. If you're doing this because you want to avoid assembly references, please note that by using the config file the same referencing still applies, but now it's implicit and there's no compile-time checking to help you.
This doesn't mean that you should never use the config file, but you should only use it to configure things that can actually change during or after deployment. Most things shouldn't change during that time, since most changes must be verified by a developer, either manually by starting the application or in an automated fashion using unit tests.
Neither should you place class names in the config file, for the same reason: it is brittle. Using configuration switches is usually much better, since this allows you to move the class names into code (with a switch or if statement that changes the configuration based on the config setting) and enables compile-time checking.
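As an illustration, a configuration switch with in-code registration might look like this (a minimal sketch; the "UseFakeMailer" setting and the IMailer types are hypothetical, and RegisterType comes from Microsoft.Practices.Unity):

    using System.Configuration;
    using Microsoft.Practices.Unity;

    // Hypothetical boolean switch kept in app.config/web.config:
    //   <appSettings>
    //     <add key="UseFakeMailer" value="false" />
    //   </appSettings>
    bool useFakeMailer = bool.Parse(ConfigurationManager.AppSettings["UseFakeMailer"]);

    var container = new UnityContainer();

    // The class names stay in code, so the compiler checks them;
    // only the on/off switch lives in the config file.
    if (useFakeMailer)
        container.RegisterType<IMailer, FakeMailer>();
    else
        container.RegisterType<IMailer, SmtpMailer>();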
For the rest of your questions, Tuzo's link will probably give you enough information.
I am trying to create a structure for a large .NET application I am developing. I am planning to create three projects:
DataAccessLayer
BusinessLogicLayer
UserInterfaceLayer
I have two questions.
What would you do with functionality that is common to all three layers, e.g. logging errors to a text file? Circular dependencies are not allowed in .NET. I believe the best approach is to create a fourth project called Utilities.
Would you have .config files in all of the projects, or just in the user interface layer (passing all the config parameters as arguments to constructors in the BLL and DAL)?
What would you do with functionality that is common to all three layers, e.g. logging errors to a text file? Circular dependencies are not allowed in .NET. I believe the best approach is to create a fourth project called Utilities.
Cross-cutting concerns usually end up in a fourth assembly. But in the logging case, just use one of the existing frameworks that devs are used to, for instance NLog or log4net.
Circular dependencies are a smell (high coupling or low cohesion) and should not be allowed anywhere.
Someone else suggested Dependency Injection and it's a great way to reduce coupling and therefore increase maintainability. I've written an article here: http://www.codeproject.com/Articles/386164/Get-injected-into-the-world-of-inverted-dependenci
Would you have .config files in all of the projects, or just in the user interface layer (passing all the config parameters as arguments to constructors in the BLL and DAL)?
I would rather create a configuration abstraction, something like IConfigurationRepository. Then it doesn't matter whether the configuration is stored in web.config or somewhere else.
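Such an abstraction can be very small. A minimal sketch (the names here are illustrative, not from an existing library):

    public interface IConfigurationRepository
    {
        string Get(string key);
    }

    // Default implementation backed by web.config/app.config appSettings.
    public class AppConfigRepository : IConfigurationRepository
    {
        public string Get(string key)
        {
            return System.Configuration.ConfigurationManager.AppSettings[key];
        }
    }

The BLL and DAL then depend only on IConfigurationRepository, so tests can substitute an in-memory implementation.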
Having a fourth project is one solution; another is to place that functionality in the data layer and have methods in the business layer that let the UI layer access it.
You should have each setting in one place only, so the UI layer seems to be a good place.
You could create a single logging project and add it to all the other projects, but in my opinion you should add a logger configuration file for each one, because modeling a three-tier architecture as you are doing means modeling three logically separated layers, so you should be able to develop and test each of them separately.
If you have layer-specific configuration settings (e.g. one or more layers sit on different servers because of strong performance constraints), use a different configuration file for each layer. If the configuration settings are the same, you could use a single configuration file in the user interface, but be aware that if you change the user interface you will have to migrate all your settings, which in my opinion might be a serious problem.
Yes, create another project for logging. I would recommend using Log4Net within that new project.
I would keep config settings at the top level - the UI layer - and pass anything necessary down to the other layers.
You don't mention DI; I would definitely use DI - that should be a priority.
NInject's module architecture seems useful, but I'm worried that it is going to get into a bit of a mess.
How do you organise your modules? Which assembly do you keep them in and how do you decide what wirings go in which module?
Each subsystem gets a module. Of course the definition of what warrants categorisation as a 'subsystem' depends...
In some cases, responsibility for some bindings gets pushed up to a higher level as a lower-level subsystem/component is not in a position to make a final authoritative decision - in some cases this can be achieved by passing parameters into the Module.
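For example, a parameterised module might look like this (a minimal sketch; IDbConnectionFactory and SqlConnectionFactory are illustrative names):

    using Ninject;
    using Ninject.Modules;

    public class DataAccessModule : NinjectModule
    {
        private readonly string _connectionString;

        // The authoritative value is pushed in by whoever constructs the module.
        public DataAccessModule(string connectionString)
        {
            _connectionString = connectionString;
        }

        public override void Load()
        {
            Bind<IDbConnectionFactory>()
                .To<SqlConnectionFactory>()
                .WithConstructorArgument("connectionString", _connectionString);
        }
    }

    // At the composition root:
    // var kernel = new StandardKernel(new DataAccessModule(connectionString));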
Replying to my own post after a couple of years of using NInject.
Here is how I organise my NInjectModules, using a Book Store as an example:
BookStoreSolution
    Domain.csproj
    Services.csproj
        CustomerServicesInjectionModule.cs
        PaymentProcessingInjectionModule.cs
    DataAccess.csproj
        CustomerDatabaseInjectionModule.cs
        BookDatabaseInjectionModule.cs
    CustomSecurityFramework.csproj
        CustomSecurityFrameworkInjectionModule.cs
    PublicWebsite.csproj
        PublicWebsiteInjectionModule.cs
    Intranet.csproj
        IntranetInjectionModule.cs
What this is saying is that each project in the system comes prepackaged with one or more NInject modules that know how to set up the bindings for that project's classes.
Most of the time an individual application is not going to want to make significant changes to the default injection modules provided by a project. For example, if I am creating a little WinForm app which needs to import the DataAccess project, normally I am also going to want to have all the project's Repository<> classes bound to their associated IRepository<> interfaces.
At the same time, there is nothing forcing an individual application to use a particular injection module. An application can create its own injection module and ignore the default modules provided by a project that it is importing. In this way the system still remains flexible and decoupled.
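To make that concrete, an application's composition root might look something like this (a minimal sketch using the module names from the layout above):

    using Ninject;

    // The application composes its kernel from the modules that the
    // referenced projects ship with.
    var kernel = new StandardKernel(
        new CustomerDatabaseInjectionModule(),
        new BookDatabaseInjectionModule(),
        new PublicWebsiteInjectionModule());

A small WinForm app that wants different bindings would simply pass its own module to StandardKernel instead of the project-supplied defaults.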
Apologies if this is a duplicate, but I've not managed to find this question being asked directly.
The general opinion here (that's me and him across from me) is that DLLs shouldn't have their own config files, the reason being that DLLs can be shared; therefore the idea of having application-specific information in a DLL is nonsense. If the information is not application-specific, then constants can be used.
A further question is, assuming that DLLs do not have their own config file, whether DLLs should use the configuration files of the executable that loaded the DLL, or instead be passed the relevant data as part of some kind of constructor. Our opinion here is the latter, as it makes it more testable, the downside being that it will sometimes be necessary to pass a significant amount of data to the dll.
Opinions?
There's no reason why you can't have the best of both worlds in terms of "simple to configure with config files" and "testable". Have a static method which can create instances from the configuration file, but also provide a constructor for more control and testability. The static method would just grab the settings and call the constructor.
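A sketch of that pattern (the class name and the "UploadUrl" setting are hypothetical):

    using System;
    using System.Configuration;

    public class ReportUploader
    {
        private readonly Uri _serviceUrl;

        // Plain constructor: fully testable, no config dependency.
        public ReportUploader(Uri serviceUrl)
        {
            _serviceUrl = serviceUrl;
        }

        // Convenience factory: grabs the settings from the host's
        // config file and calls the constructor.
        public static ReportUploader FromConfig()
        {
            string url = ConfigurationManager.AppSettings["UploadUrl"];
            return new ReportUploader(new Uri(url));
        }
    }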
I believe it's possible to create settings classes for DLLs just like any other project; then you just need to put the actual text into the application's config file instead of one for the DLL. Basically, ignore the app.config generated for the library project, except to use it as a template for the application's central one.
Alternatively, use something like Spring.NET to manage this sort of thing :)
Usually, I guess, you should pass relevant information to the functions you're calling, or set relevant properties on objects you're creating that are defined within the DLL. I guess that's why .NET does not really support config files for DLLs (you can create them, but they won't be used at run time).
I have one scenario, where DLLs are reading a config file, but that is very special: The .NET DLL exports objects as COM objects for use by Microsoft Navision. It communicates with a factoring bank using an XML-RPC interface.
While the DLL is installed on every user's machine, the configuration for the interface is common to all users, so I have a configuration placed on a network drive that's mapped on every PC and the configuration (URL, credentials, etc.) is read from that common file.
Whether that's good practice is up to the reader, but in that scenario having a common config file just made sense...
We have developed a number of custom DLLs which are called by third-party Windows applications. These DLLs are loaded and unloaded as required.
Most of the DLLs call web services, and these need to have URLs, timeouts, etc. configured.
Because a DLL is not permanently in memory, it has to read the configuration every time it is invoked. This seems suboptimal to me.
Is there a better way to handle this?
Note: The configurable information is in an xml file so that the IT department can alter as required. They would not accept registry edits.
Note: These DLLs cater for a number of third-party applications; essentially they implement an external EDMS interface. The vendors would not accept passing in the required parameters.
Note: It's a .NET application and the DLL is written in C#. Essentially, there are both thick (Windows application) and thin clients that access this DLL when they need to perform some kind of EDMS operation. The EDMS interface is defined as a set of calls that have to be implemented in the DLL, and the DLL decides how to implement the EDMS functions, e.g. for some clients "Register Document" would update a DB, while for others the same call would use a third-party EDMS system. There are no ASP clients.
My understanding is that the DLL is loaded when the client wants to access an EDMS operation and is then unloaded when the call is finished. The client may not need to do another EDMS operation for a while (in some cases over an hour).
Use the registry to store your configuration information; it's definitely fast enough.
I think you need to provide more information. There are so many approaches at persisting configuration information. We don't even know the development platform. .Net?
I wouldn't rely on the registry unless I was sure it would always be available. You might get away with that on client machines, but you've already mentioned webservices.
An XML file in the current directory seems to be very popular now for server-side third-party DLLs. But those configurations are optional.
If this is ASP, your Trust Level will be very important in choosing a configuration persistence method.
You may be able to use your application server's "Application Scope", which gets loaded once per lifetime of the application. Your DLL can invalidate that data if it detects it needs to.
I've used text files, XML files, database, various IPC like shared memory segments, application scope, to persist configuration information. It depends a lot on the specifics of your project.
Care to elaborate further?
EDIT. Considering your clarifications, I'd go with an XML file. This custom XML file would be loaded using a search path that has been predefined and documented. If this is ASP.Net you can use Server.MapPath() for example to check various folders like App_Data. The DLL would check the current directory for the configuration file first though. You can then use a "manager" thread that holds the configuration data and passes it to any child threads that require it. The sharing can use IPC like a shared memory segment.
This seems like a hassle, but you have to store the information in some scope... either on disk or in memory (application scope, session scope, DLL global scope, another process/IPC, etc.).
ASP.Net also gives you the ability to add custom configuration sections to standard configuration files like web.config. You can access those sections at will and they will not depend on when your DLL was loaded.
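A custom section might be sketched like this (assuming a hypothetical "edmsSettings" section; the names are illustrative):

    using System.Configuration;

    // Registered in web.config via:
    //   <configSections>
    //     <section name="edmsSettings" type="MyLib.EdmsSettingsSection, MyLib" />
    //   </configSections>
    //   <edmsSettings serviceUrl="http://edms.example.com" timeoutSeconds="30" />
    public class EdmsSettingsSection : ConfigurationSection
    {
        [ConfigurationProperty("serviceUrl", IsRequired = true)]
        public string ServiceUrl
        {
            get { return (string)this["serviceUrl"]; }
        }

        [ConfigurationProperty("timeoutSeconds", DefaultValue = 30)]
        public int TimeoutSeconds
        {
            get { return (int)this["timeoutSeconds"]; }
        }
    }

    // Usage:
    // var settings = (EdmsSettingsSection)ConfigurationManager.GetSection("edmsSettings");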
Why do you believe your DLL is being removed from memory?
Why don't you let the calling application fill out a data structure with the stuff you need? This can be done as part of an init call or similar.
How often is the dll getting unloaded? COM dlls can control when they are unloaded via the DllCanUnloadNow method. If these are COM components, you could look at implementing some kind of timeout here to prevent frequent loads and unloads. Unless the dll reloads the configuration at a significant frequency, it is unlikely to be a real performance bottleneck.
Knowing that the dll will reload its configuration at certain points is a useful feature, since it prevents the users wondering if they have to restart the host process, reboot the machine, etc for the configuration to take effect. You could even watch the file for changes to keep it up to date.
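Watching the file could be sketched like this (the path and the reload helper are hypothetical):

    using System.IO;

    // Raise Changed events whenever the shared config file is modified,
    // so the DLL can refresh its settings without a host restart.
    var watcher = new FileSystemWatcher(@"\\server\share\config", "edms.config");
    watcher.Changed += (sender, e) => ReloadConfiguration(e.FullPath);
    watcher.EnableRaisingEvents = true;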
I think the best way for a DLL to get configuration information is via the application that is using it - either via implicit "Init"-calls, like Nils suggested, or via their configuration files.
DLLs shouldn't usually "configure themselves", as they can never be sure in which context they are used. Different users (as in applications) may have different configuration settings to make.
Since you said that the application is written in .NET, you should probably simply require them to put the necessary configuration for your DLL's functions in their configuration file ("whatever.exe.config") and access it from your DLL via AppSettings or even better via a custom configuration section.
Additionally, you may want to provide sensible default values for settings where that is possible (probably not for network addresses though).
If the DLLs are loaded and unloaded from memory only at a gap of an hour or so, the inefficiency due to small initializations (reading a file or the registry) will be negligible.
However, if this happens more frequently, a bigger inefficiency would be the physical act of loading and unloading the DLLs. This could cost more than the small initializations.
It might therefore be better to keep them pinned in memory. That way the initialization performed at load time does not get repeated, and you also avoid the inefficiency of the load and unload. You solve two issues this way.
I could tell you how to do this in C++; I'm not sure how you would do this in C#. GetModuleHandle plus making an extra LoadLibrary call is how I would do this in C++.
One way to do it is to have an interface in the DLL that specifies the required settings.
Then it's up to the "application project" to have a class that implements this interface and to pass it to the DLL at initialization. This leaves you free to change the implementation per project: one might read from web.config while another reads from a DB.
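A minimal sketch of that approach (all names are illustrative):

    public interface IEdmsSettings
    {
        string ServiceUrl { get; }
        int TimeoutSeconds { get; }
    }

    // Inside the DLL: the code only ever sees the interface.
    public class EdmsClient
    {
        private readonly IEdmsSettings _settings;

        public EdmsClient(IEdmsSettings settings)
        {
            _settings = settings;
        }
    }

    // In one host application, the implementation reads from its own config file:
    public class AppConfigEdmsSettings : IEdmsSettings
    {
        public string ServiceUrl
        {
            get { return System.Configuration.ConfigurationManager.AppSettings["EdmsServiceUrl"]; }
        }

        public int TimeoutSeconds
        {
            get { return int.Parse(System.Configuration.ConfigurationManager.AppSettings["EdmsTimeoutSeconds"]); }
        }
    }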