Why does the author need to use dependency injection in the Domain Layer with Clean Architecture? - kotlin

I'm learning Clean Architecture with the article.
I know the Domain Layer is the innermost part of the onion (it has no dependencies on other layers) and that it contains Entities, Use Cases & Repository Interfaces.
The following code is from the project https://github.com/lopspower/CleanRxArchitecture
GetListRepo.kt and RepoRepository.kt are located in the Domain Layer, as you can see in Image 1.
1: I think the GetListRepo class should be abstract or an interface, right?
2: The constructor of the GetListRepo class has three parameters. I don't understand why the author adds dependency injection with @Inject on the class's constructor.
I think I can instantiate GetListRepo any way I like in the Data Layer, so why does the author need to use dependency injection in the Domain Layer with Clean Architecture?
GetListRepo.kt
class GetListRepo
@Inject internal constructor(
private val repoRepository: RepoRepository,
useCaseScheduler: UseCaseScheduler? = null,
logger: Logger? = null
) : SingleUseCase<List<Repo>, String>(useCaseScheduler, logger) {
...
}
RepoRepository.kt
interface RepoRepository {
val isConnected: Boolean
...
}
Image 1

This is similar to your other question about interfaces/abstract classes. I will quote myself:
With such an architecture you could create alternative implementations of GetAlbumListUseCase in the future and switch them smoothly. You could even use multiple implementations at the same time, for example with different objects using different implementations of GetAlbumListUseCase. Note that in your current architecture all objects depend directly on a specific implementation, so switching to another one requires modifying half of your code.
Imagine you did as you suggested: you didn't use dependency injection, but created GetListRepo objects everywhere in your code. Then in the future you need two alternative ways of providing the data, e.g. from local files and from a remote server. Imagine you need to make it configurable in the application settings. Or imagine that you need to create unit tests and it would be good to provide a fake, testing variant of GetListRepo.
How would you do this if your code just instantiated GetListRepo directly everywhere? You would need to modify many different places in the code and scatter logic related to loading application settings all over it. By using dependency injection, all components receive their dependencies from outside; they don't know how those dependencies are created, and you can keep your creation logic in one place only.
To make a long story short: DI lets us decouple the components of our application. It makes our code more flexible and adaptable to different scenarios.
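To make this concrete, here is a minimal Kotlin sketch of the same idea. RepoSource, GetRepos, RemoteRepoSource and FakeRepoSource are simplified stand-ins for the question's RepoRepository and GetListRepo, not the project's real types; the point is only that because the use case receives its repository through the constructor, a test can hand it a fake without touching any other code.

// Simplified stand-in for RepoRepository: the use case only names this interface.
interface RepoSource {
    fun loadRepos(user: String): List<String>
}

// Simplified stand-in for GetListRepo: it never constructs an implementation itself.
class GetRepos(private val source: RepoSource) {
    fun execute(user: String): List<String> = source.loadRepos(user)
}

// Production wiring (normally done once by the DI framework): a real, network-backed source.
class RemoteRepoSource : RepoSource {
    override fun loadRepos(user: String): List<String> = TODO("call the API")
}

// Test wiring: a fake returning canned data, with no network and no DI framework.
class FakeRepoSource : RepoSource {
    override fun loadRepos(user: String) = listOf("CleanRxArchitecture")
}

fun main() {
    val useCase = GetRepos(FakeRepoSource())
    println(useCase.execute("lopspower"))  // [CleanRxArchitecture]
}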

Related

Dependency injection on child object

I'm trying to find out how I can use dependency injection to inject a ConnectionString or a custom AppSetting object. So far I end up in the startup using
services.Configure<IConnectionStrings>(Configuration.GetSection("MyConnection"));
Example layers:
Web MVC application
Business Logic (class library)
Repository (class library)
DAL (class library)
Model (class library)
where Web sees only Business Logic and so on, and Model is available to all layers.
In the DAL project I have an object that takes care of connecting and running queries against my database (CDbTools object).
Now, how can I inject directly into CDbTools without passing things from the controller all the way down to the DAL?
Thank you.
Dependency injection definitely takes a little getting used to, and you won't be creating objects quite the way you're used to. What you want to do first is modify your CDbTools to take the injected strings.
public CDbTools(IConnectionStrings strings)
{
_connectionString = strings;
}
The next step will be to actually inject the CDbTools into the classes that need it as well. First, register it in the startup.
services.AddScoped<CDbTools>();
You'll need to follow this up the chain. Don't think of it as passing the objects from the top level down - that will mess up your separation of concerns. Each layer has the lower layer injected in. This won't just get you the injection of the string you are looking for: it will let you mock things more easily, swap layers more easily, and bring a slew of other benefits.
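Language aside, the shape of the idea looks roughly like this (a hand-wired Kotlin sketch with hypothetical layer names, not the poster's actual classes; a container's registrations do the same wiring for you):

// Hypothetical layer names; the point is that each layer receives the layer
// below it through its constructor instead of creating it itself.
class DbTools(private val connectionString: String) {
    fun query(sql: String): List<String> = listOf("stub result for: $sql")
}

class CustomerRepository(private val db: DbTools) {
    fun findCustomers(): List<String> = db.query("SELECT name FROM customers")
}

class CustomerService(private val repository: CustomerRepository) {
    fun activeCustomers(): List<String> = repository.findCustomers()
}

// The composition root is the only place that knows how the graph is built;
// in ASP.NET Core the services.Add* registrations play this role for you.
fun buildService(connectionString: String): CustomerService =
    CustomerService(CustomerRepository(DbTools(connectionString)))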
I believe you should add this to your ConfigureServices method:
services.Configure<CustomSettings>(settings =>
{
Configuration.GetSection("CustomSettings").Bind(settings);
});
Where services is your IServiceCollection object and CustomSettings is your custom configuration class that you want to inject. That custom object should map to your settings fields.
Hope this helps!

Modular design and intermodule references

I'm not so sure the title is a good match for this question I want to put on the table.
I'm planning to create a web MVC framework as my graduation dissertation, and in a previous conversation with my advisor, while trying to define some achievements, he convinced me that I should choose a modular design for this project.
I already had some things developed by then and stopped for a while to analyze how modular it would be, and I couldn't really do it because I don't know the real meaning of "modular".
Some things are not very clear to me. For example, does just referencing another module blow up the modularity of my system?
Let's say I have a Database Access module that can OPTIONALLY use a Cache module for storing the results of complex queries. As anyone can see, I will at least have a naming dependency on the Cache module.
In my conception of "modular design", I can distribute each component separately and make it interact with others developed by other people. In the case I showed, if someone wants to use my Database Access module, they will have to take the Cache module as well, even if they will not use it, just for referencing/naming purposes.
And so, I was wondering if this is really a modular design yet.
I came up with an alternative: creating each component on its own, without it even knowing about the existence of other components that are not absolutely required for its functioning. To extend functionality, I could create some structure based on Decorators and Adapters.
To clarify things a little bit, here is an example (in PHP):
Before
interface Cache {
public function isValid();
public function setValue();
public function getValue();
}
interface CacheManager {
public function get($name);
public function put($name, $value);
}
// Some concrete implementations...
interface DbAccessInterface {
public function doComplexOperation();
}
class DbAccess implements DbAccessInterface {
private $cacheManager;
public function __construct(..., CacheManager $cacheManager = null) {
// ...
$this->cacheManager = $cacheManager;
}
public function doComplexOperation() {
if ($this->cacheManager !== null) {
// return from cache if valid
}
// complex operation
}
}
After
interface Cache {
public function isValid();
public function setValue();
public function getValue();
}
interface CacheManager {
public function get($name);
public function put($name, $value);
}
// Some concrete implementations...
interface DbAccessInterface {
public function doComplexOperation();
}
class DbAccess implements DbAccessInterface {
public function __construct(...) {
// ...
}
public function doComplexOperation() {
// complex operation
}
}
// And now the integration module
class CachedDbAccess implements DbAccessInterface {
private $dbAccess;
private $cacheManager;
public function __construct(DbAccessInterface $dbAccess, CacheManager $cacheManager) {
$this->dbAccess = $dbAccess;
$this->cacheManager = $cacheManager;
}
public function doComplexOperation() {
$cache = $this->cacheManager->get("Foo");
if ($cache->isValid()) {
return $cache->getValue();
}
// Not cached: delegate to the wrapped DbAccess and cache the result
$value = $this->dbAccess->doComplexOperation();
$this->cacheManager->put("Foo", $value);
return $value;
}
}
Now my question is:
Is this the best solution? Should I do this for all the modules that are not required to work together but can be more efficient when they do?
Would anyone do it in a different way?
I have some further questions about this, but I don't know if they are acceptable questions for Stack Overflow.
P.S.: English is not my first language, so some parts may be a little confusing.
Some resources (not theoretical):
Nuclex Plugin Architecture
Python Plugin Application
C++ Plugin Architecture (use NoScript on that site, they have some weird login policies)
Other SO threads (design pattern for plugins in php)
Django Middleware concept
Does just referencing another module blow up the modularity of my system?
Not necessarily. It's a dependency, and having dependencies is perfectly normal. Without dependencies, modules can't interact with each other (unless you arrange such interaction indirectly, which is generally bad practice since it hides dependencies and complicates the code). Modular design implies managing dependencies, not removing them.
One tool is using interfaces. Referencing a module via an interface creates a so-called soft dependency. Such a module can accept any implementation of the interface as a dependency, so it is more independent and, as a result, more maintainable.
The other tool is designing modules (and their interfaces) with only a single responsibility. This also makes them more granular, independent and maintainable.
But there is a line you should not cross: blindly applying these tools may lead to an overly modular and overly generic design. Making things too granular makes the whole system more complex. You should not try to solve universal problems by making generic modules that all developers can use (unless that is your goal). First of all, your system should solve your domain tasks; make things generic enough, but not more than that.
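To illustrate the hard/soft distinction outside of PHP, here is a small hypothetical Kotlin sketch (the class names are made up; it mirrors the Cache example from the question):

// Hard dependency: the module constructs a concrete class itself,
// so every consumer is tied to that single implementation.
class HardReportModule {
    private val cache = InMemoryCache()  // fixed choice, cannot be swapped
    fun render(id: String): String =
        cache.get(id) ?: "report-$id".also { cache.put(id, it) }
}

// Soft dependency: the module only names an interface; any implementation
// (in-memory, Redis, a no-op for tests, ...) can be supplied from outside.
interface Cache {
    fun get(key: String): String?
    fun put(key: String, value: String)
}

class InMemoryCache : Cache {
    private val entries = mutableMapOf<String, String>()
    override fun get(key: String) = entries[key]
    override fun put(key: String, value: String) { entries[key] = value }
}

class SoftReportModule(private val cache: Cache) {
    fun render(id: String): String =
        cache.get(id) ?: "report-$id".also { cache.put(id, it) }
}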
I came up with an alternative: creating each component on its own, without it even knowing about the existence of other components that are not absolutely required for its functioning.
It is great that you came up with this idea by yourself; that statement is a key to modular programming.
A plugin architecture is the best in terms of extensibility, but IMHO it is hard to maintain, especially within a single application. And depending on the complexity of the plugin architecture, it can make your code more complex by adding plugin logic, etc.
Thus, for intra-application modular design, I choose an N-Tier, interface-based architecture. Basically, the architecture relies on these tiers:
Domain / Entity
Interface [Depends on 1]
Services [Depends on 1 and 2]
Repository / DAL [Depends on 1 and 2]
Presentation Layer [Depends on 1, 2, 3 and 4]
Unfortunately, I don't think this is achievable neatly in PHP projects, as it needs separate project/DLL references for each tier. However, following the architecture can help modularize the application.
For each module, we need to do interface-based design. It helps enhance the modularity of your code, because you can change the implementation later but still keep the consumer the same.
I have provided an answer about a similar interface-based design at this Stack Overflow question.
Last but not least, if you want to make your application modular up to the UI, you can use a Service Oriented Architecture. This simply makes your application a bunch of services and has the UI consume those services. This design helps separate your UI from your logic; you can later use a different UI, such as a desktop app, but still use the same logic. Unfortunately, I don't have any reliable source for SOA.
EDIT:
I misunderstood the question. This is my point of view about a modular framework. Unfortunately, I don't know much about Zend, so I will give examples in C#:
It consists of modules, from the smallest to larger ones. An example in C# is that you can use a Windows Form (larger) in your application, and also the Graphics class (smaller) to draw custom shapes on the screen.
It is extensible or replaceable without changes to the base class. In C# you can attach a handler to a Form's Load event (extensible), inherit from the Form or List class (extensible), or override the form's drawing method to create custom window graphics (replaceable).
(Optional) It is easy to use. In normal DI interface design, we usually inject smaller modules into a larger (higher-level) module. This requires an IoC container. Refer to my question for details.
It is easy to configure and does not involve any magical logic such as the Service Locator pattern. Search for "Service Locator is an anti-pattern" on Google.
I don't know much about Zend; however, I guess that modularity in Zend means it can be extended without changing or replacing the core code inside the framework.
If you said that:
if someone wants to use my Database Access module, they will have to take the Cache as well, even if he will not use it, just for referencing/naming purposes.
Then it is not modular. It is integrated, meaning that your Database Access module will not work without the Cache. For reference, among C# components the framework chooses to provide List<T> and BindingList<T> to offer different functionality. In your case, IMHO it is better to provide CachedDataAccess and DataAccess.

Alternatives for the singleton pattern?

I have been a web developer for some time now using ASP.NET and C#, I want to try and increase my skills by using best practices.
I have a website. I want to load the settings once and just reference them wherever I need them. So I did some research, and 50% of the developers seem to be using the singleton pattern to do this. The other 50% of the developers are anti-singleton. They all hate singletons. They recommend dependency injection.
Why are singletons bad? What is best practice to load websites settings? Should they be loaded only once and referenced where needed? How would I go about doing this with dependency injection (I am new at this)? Are there any samples that someone could recommend for my scenario? And I also would like to see some unit test code for this (for my scenario).
Thanks
Brendan
Generally, I avoid singletons because they make it harder to unit test your application. Singletons are hard to mock up for unit tests precisely because of their nature -- you always get the same one, not one you can configure easily for a unit test. Configuration data -- strongly-typed configuration data, anyway -- is one exception I make, though. Typically configuration data is relatively static, and the alternative involves writing a fair amount of code to avoid the static classes the framework provides for accessing the web.config.
There are a couple of different ways to use it that will still allow you to unit test your application. One way (maybe both ways, if your singleton doesn't lazily read the app.config) is to have a default app.config file in your unit test project providing the defaults required for your tests. You can use reflection to replace any specific values as needed in your unit tests. Typically, I'd add a private method that allows the private singleton instance to be deleted in test set-up if I do make changes for particular tests.
Another way is to not actually use the singleton directly, but create an interface for it that the singleton class implements. You can use hand injection of the interface, defaulting to the singleton instance if the supplied value is null. This allows you to create a mock instance that you can pass to the class under test for your tests, but in your real code use the singleton instance. Essentially, every class that needs it maintains a private reference to the singleton instance and uses it. I like this way a little better, but since the singleton will be created you may still need the default app.config file, unless all of the values are lazily loaded.
public class Foo
{
private IAppConfiguration Configuration { get; set; }
public Foo() : this(null) { }
public Foo( IAppConfiguration config )
{
this.Configuration = config ?? AppConfiguration.Instance;
}
public void Bar()
{
var value = this.Configuration.SomeMaximum;
...
}
}
There's a good discussion of singleton patterns, and coding examples here... http://en.wikipedia.org/wiki/Singleton_pattern See also here... http://en.wikipedia.org/wiki/Dependency_injection
For some reason, singletons seem to divide programmers into strong pro- and anti- camps. Whatever the merits of the approach, if your colleagues are against it, it's probably best not to use one. If you're on your own, try it and see.
Design Patterns can be amazing things. Unfortunately, the singleton seems to stick out like a sore thumb and in many cases can be considered an anti-pattern (it promotes bad practice). Bizarrely, the majority of developers will only know one design pattern, and that is the singleton.
Ideally your settings should be a member variable in a high level location, for example the application object which owns the webpages you are spawning. The pages can then ask the app for the settings, or the application can pass the settings as pages are constructed.
One way to approach this problem is to flog it off as a DAL problem.
Whatever class / web page, etc. needs to use config settings should declare a dependency on an IConfigSettingsService (factory/repository/whatever-you-like-to-call-them).
private IConfigSettingsService _configSettingsService;
public WebPage(IConfigSettingsService configSettingsService)
{
_configSettingsService = configSettingsService;
}
So your class would get settings like this:
ConfigSettings _configSettings = _configSettingsService.GetTheOnlySettings();
The ConfigSettingsService implementation would have a dependency on a DAL class. How would that DAL populate the ConfigSettings object? Who cares.
Maybe it would populate a ConfigSettings from a database or a .config XML file every time.
Maybe it would do that the first time but then populate a static _configSettings for subsequent calls.
Maybe it would get the settings from Redis. If something indicates that the settings have changed, then the DAL, or something external, can update Redis. (This approach is useful if you have more than one app using the settings.)
Whatever it does, your only dependency is a non-singleton service interface. That is very easy to mock: in your tests you can have it return a ConfigSettings with whatever you want in it.
In reality it would more likely be a MyPageBase class that has the IConfigSettingsService dependency, but it could just as easily be a web service, a Windows service, an MVC whatsit, or all of the above.
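For instance, a fake settings service for a test might look like this (a Kotlin sketch of the same idea; the names are adapted from the snippets above, not a real library):

// The consumer only sees a small interface, so a test can hand it
// whatever settings it needs.
data class ConfigSettings(val connectionString: String, val maxRetries: Int)

interface ConfigSettingsService {
    fun getTheOnlySettings(): ConfigSettings
}

class FakeConfigSettingsService : ConfigSettingsService {
    override fun getTheOnlySettings() =
        ConfigSettings(connectionString = "test-db", maxRetries = 1)
}

class Page(private val settingsService: ConfigSettingsService) {
    fun title(): String =
        "Connected to ${settingsService.getTheOnlySettings().connectionString}"
}

fun main() {
    // In a unit test no database, config file or Redis is touched.
    println(Page(FakeConfigSettingsService()).title())  // Connected to test-db
}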

In what namespace should you put interfaces relative to their implementors?

Specifically, when you create an interface/implementor pair, and there is no overriding organizational concern (such as the interface needing to go in a different assembly, as recommended by the S# architecture, for instance), do you have a default way of organizing them in your namespace/naming scheme?
This is obviously a more opinion based question but I think some people have thought about this more and we can all benefit from their conclusions.
The answer depends on your intentions.
If you intend the consumer of your namespaces to use the interfaces over the concrete implementations, I would recommend having your interfaces in the top-level namespace with the implementations in a child namespace
If the consumer is to use both, have them in the same namespace.
If the interface is for predominantly specialized use, like creating new implementations, consider having them in a child namespace such as Design or ComponentModel.
I'm sure there are other options as well, but as with most namespace issues, it comes down to the use-cases of the project, and the classes and interfaces it contains.
I usually keep the interface in the same namespace as the concrete types.
But, that's just my opinion, and namespace layout is highly subjective.
Animals
|
| - IAnimal
| - Dog
| - Cat
Plants
|
| - IPlant
| - Cactus
You don't really gain anything by moving one or two types out of the main namespace, but you do add the requirement for one extra using statement.
What I generally do is to create an Interfaces namespace at a high level in my hierarchy and put all interfaces in there (I do not bother to nest other namespaces in there as I would then end up with many namespaces containing only one interface).
Interfaces
|--IAnimal
|--IVegetable
|--IMineral
MineralImplementor
Organisms
|--AnimalImplementor
|--VegetableImplementor
This is just the way that I have done it in the past and I have not had many problems with it, though admittedly it might be confusing to others sitting down with my projects. I am very curious to see what other people do.
I prefer to keep my interfaces and implementation classes in the same namespace. When possible, I give the implementation classes internal visibility and provide a factory (usually in the form of a static factory method that delegates to a worker class, with an internal method that allows unit tests in a friend assembly to substitute a different worker that produces stubs). Of course, if the concrete class needs to be public--for instance, if it's an abstract base class--then that's fine; I don't see any reason to put an ABC in its own namespace.
On a side note, I strongly dislike the .NET convention of prefacing interface names with the letter 'I.' The thing the (I)Foo interface models is not an ifoo, it's simply a foo. So why can't I just call it Foo? I then name the implementation classes specifically, for example, AbstractFoo, MemoryOptimizedFoo, SimpleFoo, StubFoo etc.
(.Net) I tend to keep interfaces in a separate "common" assembly so I can use that interface in several applications and, more often, in the server components of my apps.
Regarding namespaces, I keep them in BusinessCommon.Interfaces.
I do this to ensure that neither I nor my developers are tempted to reference the implementations directly.
Separate the interfaces in some way (projects in Eclipse, etc) so that it's easy to deploy only the interfaces. This allows you to provide your external API without providing implementations. This allows dependent projects to build with a bare minimum of externals. Obviously this applies more to larger projects, but the concept is good in all cases.
I usually separate them into two separate assemblies. One of the usual reasons for an interface is to have a series of objects look the same to some subsystem of your software. For example, I have all my Reports implementing the IReport interface. IReport is used not only in printing but also for previewing and for selecting individual options for each report. Finally, I have a collection of IReport to use in a dialog where the user selects which reports (and configures options) they want to print.
The Reports reside in a separate assembly and the IReport, the Preview engine, print engine, report selections reside in their respective core assembly and/or UI assembly.
If you use a Factory class to return a list of available reports in the report assembly, then updating the software with a new report becomes merely a matter of copying the new report assembly over the original. You can even use the Reflection API to scan the list of assemblies for any Report factories and build your list of Reports that way.
You can apply this technique to files as well. My own software runs a metal-cutting machine, so we use this idea for the shape and fitting libraries we sell alongside our software.
Again, the classes implementing a core interface should reside in a separate assembly, so you can update them separately from the rest of the software.
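The same discovery trick exists on the JVM. This Kotlin sketch uses java.util.ServiceLoader instead of .NET reflection; the Report interface here is hypothetical, not the IReport described above:

import java.util.ServiceLoader

// Each report jar ships a META-INF/services/<fully.qualified.Report> file
// listing its implementation classes; ServiceLoader finds them at runtime.
interface Report {
    val name: String
    fun print()
}

fun availableReports(): List<Report> =
    ServiceLoader.load(Report::class.java).toList()

fun main() {
    // Dropping a new report jar on the classpath adds entries here
    // without changing this code.
    availableReports().forEach { println(it.name) }
}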
I'll give my own experience, which goes against the other answers.
I tend to put all my interfaces in the package they belong to. This ensures that, if I move a package to another project, I have everything the package needs to run without any changes.
For me, any helper functions and operator functions that are part of the functionality of a class should go into the same namespace as that of the class, because they form part of the public API of that namespace.
If you have common implementations that share the same interface in different packages you probably need to refactor your project.
Sometimes I see that there are plenty of interfaces in a project that could be converted into an abstract implementation rather than an interface.
So, ask yourself if you are really modeling a type or a structure.
A good example might be looking at what Microsoft does.
Assembly: System.Runtime.dll
System.Collections.Generic.IEnumerable<T>
Where are the concrete types?
Assembly: System.Collections.dll
System.Collections.Generic.List<T>
System.Collections.Generic.Queue<T>
System.Collections.Generic.Stack<T>
// etc
Assembly: EntityFramework.dll
System.Data.Entity.IDbSet<T>
Concrete Type?
Assembly: EntityFramework.dll
System.Data.Entity.DbSet<T>
Further examples
Microsoft.Extensions.Logging.ILogger<T>
- Microsoft.Extensions.Logging.Logger<T>
Microsoft.Extensions.Options.IOptions<T>
- Microsoft.Extensions.Options.OptionsManager<T>
- Microsoft.Extensions.Options.OptionsWrapper<T>
- Microsoft.Extensions.Caching.Memory.MemoryCacheOptions
- Microsoft.Extensions.Caching.SqlServer.SqlServerCacheOptions
- Microsoft.Extensions.Caching.Redis.RedisCacheOptions
There are some very interesting tells here. When the namespace changes to reflect the implementation, that namespace segment (Caching, Redis, ...) is also prefixed to the derived type, as in RedisCacheOptions. Additionally, the derived types live in an extra namespace specific to the implementation.
Memory -> MemoryCacheOptions
SqlServer -> SqlServerCacheOptions
Redis -> RedisCacheOptions
This seems like a fairly easy pattern to follow most of the time. As an example (since no example was given in the question), the following pattern might emerge:
CarDealership.Entities.Dll
CarDealership.Entities.IPerson
CarDealership.Entities.IVehicle
CarDealership.Entities.Person
CarDealership.Entities.Vehicle
Maybe a technology like Entity Framework prevents you from using the predefined classes. Thus we make our own.
CarDealership.Entities.EntityFramework.Dll
CarDealership.Entities.EntityFramework.Person
CarDealership.Entities.EntityFramework.Vehicle
CarDealership.Entities.EntityFramework.SalesPerson
CarDealership.Entities.EntityFramework.FinancePerson
CarDealership.Entities.EntityFramework.LotVehicle
CarDealership.Entities.EntityFramework.ShuttleVehicle
CarDealership.Entities.EntityFramework.BorrowVehicle
Not that it happens often, but maybe there's a decision to switch technologies for whatever reason, and now we have...
CarDealership.Entities.Dapper.Dll
CarDealership.Entities.Dapper.Person
CarDealership.Entities.Dapper.Vehicle
//etc
As long as we're programming to the interfaces we've defined in the root Entities namespace (following the Liskov Substitution Principle), downstream code doesn't care where or how the interface was implemented.
More importantly, in my opinion, creating derived types also means you don't have to constantly include a different namespace, because the parent namespace contains the interfaces. I'm not sure I've ever seen a Microsoft example of interfaces stored in child namespaces that are then implemented in the parent namespace (almost an anti-pattern if you ask me).
I definitely don't recommend segregating your code by type, eg:
MyNamespace.Interfaces
MyNamespace.Enums
MyNameSpace.Classes
MyNamespace.Structs
This adds no descriptive value, and it's akin to using Systems Hungarian notation, which is mostly, if not now exclusively, frowned upon.
I HATE it when I find interfaces and implementations in the same namespace/assembly. Please don't do that; if the project evolves, it's a pain in the ass to refactor.
When I reference an interface, I want to implement it, not to get all its implementations.
What might be admissible is to put the interface with its dependent class (the class that references the interface).
EDIT: @Josh, I just read my last sentence, and it's confusing! Of course, both the dependent class and the one that implements it reference the interface. In order to make myself clear, I'll give examples:
Acceptable :
Interface + implementation :
namespace A;
interface IMyInterface
{
void MyMethod();
}
namespace A;
class MyDependentClass
{
private IMyInterface inject;
public MyDependentClass(IMyInterface inject)
{
this.inject = inject;
}
public void DoJob()
{
//Bla bla
inject.MyMethod();
}
}
Implementing class:
namespace B;
class MyImplementing : IMyInterface
{
public void MyMethod()
{
Console.WriteLine("hello world");
}
}
NOT ACCEPTABLE:
namespace A;
interface IMyInterface
{
void MyMethod();
}
namespace A;
class MyImplementing : IMyInterface
{
public void MyMethod()
{
Console.WriteLine("hello world");
}
}
And please DON'T CREATE a garbage project for your interfaces! Example: ShittyProject.Interfaces. You've missed the point!
Imagine you created a DLL reserved for your interfaces (200 MB). If you had to add a single interface with two lines of code, your users would have to update 200 MB just for two dumb signatures!

How can I avoid global state?

So, I was reading the Google testing blog, and it says that global state is bad and makes it hard to write tests. I believe it--my code is difficult to test right now. So how do I avoid global state?
The biggest thing I use global state for (as I understand it) is managing key pieces of information between our development, acceptance, and production environments. For example, I have a static class named "Globals" with a static member called "DBConnectionString." When the application loads, it determines which connection string to load and populates Globals.DBConnectionString. I load file paths, server names, and other information in the Globals class.
Some of my functions rely on the global variables. So, when I test my functions, I have to remember to set certain globals first or else the tests will fail. I'd like to avoid this.
Is there a good way to manage state information? (Or am I understanding global state incorrectly?)
Dependency injection is what you're looking for. Rather than having those functions go out and look for their dependencies, inject the dependencies into the functions. That is, when you call the functions, pass them the data they need. That way it's easy to put a testing framework around a class, because you can simply inject mock objects where appropriate.
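In Kotlin terms, the difference is just this (a toy sketch mirroring the question's Globals.DBConnectionString example):

// Before: the function reaches into global state, so every caller (and every
// test) must remember to set Globals.dbConnectionString first.
object Globals {
    var dbConnectionString: String = ""
}

fun describeDatabaseGlobal(): String = "Using ${Globals.dbConnectionString}"

// After: the dependency is an explicit parameter; a test simply passes a value.
fun describeDatabase(connectionString: String): String = "Using $connectionString"

fun main() {
    Globals.dbConnectionString = "prod-db"   // hidden prerequisite
    println(describeDatabaseGlobal())        // Using prod-db

    println(describeDatabase("test-db"))     // Using test-db, no hidden setup
}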
It's hard to avoid some global state, but the best way to do this is to use factory classes at the highest level of your application, and everything below that very top level is based on dependency injection.
Two main benefits: one, testing is a heck of a lot easier, and two, your application is much more loosely coupled. You rely on being able to program against the interface of a class rather than its implementation.
Keep in mind that if your tests involve actual resources such as databases or filesystems, then what you are writing are integration tests rather than unit tests. Integration tests require some preliminary setup, whereas unit tests should be able to run independently.
You could look into the use of a dependency injection framework such as Castle Windsor but for simple cases you may be able to take a middle of the road approach such as:
public interface ISettingsProvider
{
string ConnectionString { get; }
}
public class TestSettings : ISettingsProvider
{
public string ConnectionString { get { return "testdatabase"; } }
}
public class DataStuff
{
private ISettingsProvider settings;
public DataStuff(ISettingsProvider settings)
{
this.settings = settings;
}
public void DoSomething()
{
// use settings.ConnectionString
}
}
In reality you would most likely read from config files in your implementation. If you're up for it, a full blown DI framework with swappable configurations is the way to go but I think this is at least better than using Globals.ConnectionString.
Great first question.
The short answer: make sure your application is a function from ALL its inputs (including implicit ones) to its outputs.
The problem you're describing doesn't seem like global state. At least not mutable state. Rather, what you're describing seems like what is often referred to as "The Configuration Problem", and it has a number of solutions. If you're using Java, you may want to look into light-weight injection frameworks like Guice. In Scala, this is usually solved with implicits. In some languages, you will be able to load another program to configure your program at runtime. This is how we used to configure servers written in Smalltalk, and I use a window manager written in Haskell called Xmonad whose configuration file is just another Haskell program.
Here's an example of dependency injection in an MVC setting:
index.php
$container = new Container();
include('container.php');
container.php
container.add("database.driver", "mysql");
container.add("database.name","app");
...
$container.add(new Database($container->get('database.driver', "database.name")), 'database');
$container.add(new Dao($container->get('database')), 'dao');
$container.add(new Service($container->get('dao')));
$container.add(new Controller($container->get('service')), 'controller');
$container.add(new FrontController(),'frontController');
index.php continues here:
$frontController = $container->get('frontController');
$controllerClass = $frontController->getController($_SERVER['REQUEST_URI']);
$controllerAction = $frontController->getAction($_SERVER['REQUEST_URI']);
$controller = $container->get('controller');
$controller->$controllerAction();
And there you have it: the controller depends on a service layer object, which depends on a DAO (data access object), which depends on a database object, which depends on the database driver, name, etc.