Repository to execute Doctrine Dbal queries inside module - prestashop

My goal is to be able to execute SQL queries from inside a PS 1.7.4.2 module. The docs encourage using the Doctrine DBAL services.
From the documentation:
Even if using the old way to retrieve data is still valid
(Product::getProducts or through the webservice), we'd like to
introduce a best practice here: using a repository and getting rid of
the Object model. This has a lot of advantages: you rely on the database
instead of the model, and you'll have better performance and control over
your data.
I don't think it respects the PS philosophy if I put the repository class in src/Prestashop/Entity/Repository. So where should the repository class go?

Problem resolved: run composer init inside the module and map the module's src directory to a Foo namespace.
Then, inside the module files, you can access services from the container, e.g. the Doctrine services.
You can additionally define your own repository classes as services and retrieve them from the container.

Related

Why does the author need to set dependency injection in Domain Layer with Clean Architecture?

I'm learning Clean Architecture from an article.
I know the Domain Layer is the most INNER part of the onion (no dependencies on other layers) and it contains Entities, Use Cases & Repository Interfaces.
The following code is from the project https://github.com/lopspower/CleanRxArchitecture
GetListRepo.kt and RepoRepository.kt are located in the Domain Layer, as you can see in Image 1.
1: I think the GetListRepo class should be abstract or an interface, right?
2: The constructor of the GetListRepo class takes three parameters. I don't understand why the author adds dependency injection (@Inject) to the class's constructor.
I think I can instantiate GetListRepo any way I like in the Data Layer, so why does the author need to set up dependency injection in the Domain Layer with Clean Architecture?
GetListRepo.kt
class GetListRepo
@Inject internal constructor(
private val repoRepository: RepoRepository,
useCaseScheduler: UseCaseScheduler? = null,
logger: Logger? = null
) : SingleUseCase<List<Repo>, String>(useCaseScheduler, logger) {
...
}
RepoRepository.kt
interface RepoRepository {
val isConnected: Boolean
...
}
Image 1
This is similar to another of your questions about interfaces/abstract classes. I will quote myself:
With such an architecture you could create alternative implementations of GetAlbumListUseCase in the future and switch them smoothly. You could even use multiple implementations at the same time, for example different objects using different implementations of GetAlbumListUseCase. Note that in your current architecture all objects depend directly on a specific implementation, so switching to another one requires modifying half of your code.
Imagine you did as you suggested: no dependency injection, just creating GetListRepo objects everywhere in your code. Then in the future you need two alternative ways of providing the data, e.g. from local files and from a remote server. Imagine you need to make it configurable in the application settings. Or imagine that you need to write unit tests, and it would be good to provide a fake, testing variant of GetListRepo.
How would you do this if your code instantiated GetListRepo directly everywhere? You would need to modify many different places and scatter logic related to loading application settings, etc. everywhere. By using dependency injection, all components receive their dependencies from outside; they don't know how they're created, and you can keep your creation logic in one place.
Making a long story short: using DI lets us decouple the components of our application. It makes our code more flexible and adaptable to different scenarios.
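To make this concrete, here is a minimal sketch in C# (the question's code is Kotlin, but the idea is identical; all names are illustrative) of how DI keeps the consumer ignorant of which implementation it receives:

using System;
using System.Collections.Generic;

public interface IRepoRepository
{
    IReadOnlyList<string> GetRepos(string user);
}

// Real implementation, e.g. talking to a remote server (stubbed here).
public class RemoteRepoRepository : IRepoRepository
{
    public IReadOnlyList<string> GetRepos(string user)
    {
        return new[] { "repo-from-server" };
    }
}

// Fake implementation for unit tests or offline mode.
public class FakeRepoRepository : IRepoRepository
{
    public IReadOnlyList<string> GetRepos(string user)
    {
        return new[] { "fake-repo" };
    }
}

// The use case never constructs its repository; it receives one.
public class GetListRepo
{
    private readonly IRepoRepository repos;

    public GetListRepo(IRepoRepository repos)
    {
        this.repos = repos;
    }

    public IReadOnlyList<string> Execute(string user)
    {
        return repos.GetRepos(user);
    }
}

public static class Demo
{
    public static void Main()
    {
        // Swapping implementations is a one-line change at the composition root.
        var useCase = new GetListRepo(new FakeRepoRepository());
        Console.WriteLine(useCase.Execute("lopspower")[0]); // prints "fake-repo"
    }
}

Switching from the fake repository to the real one is a single change at the composition root, which is exactly the point made above.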

The plugin design pattern explained (as described by Martin Fowler)

I am trying to understand and exercise the plugin pattern, as explained by Martin Fowler.
I understand how it makes use of the Separated Interface pattern, and that it requires a factory to provide the right implementation of the interface based on the current environment (test, prod, dev, etc.). But:
How exactly does the factory read the environment values and decide which object (implementing the IdGenerator interface) to create?
Is the factory a dependency of the domain object (DomainObject)?
Thank you very much.
The goal of the Plugin pattern is to provide centralized runtime configuration that promotes modularity and scalability. The criterion that determines which implementation to select can be the environment, or anything else: account type, user group, etc. The factory is just one way to create the desired plugin instance based on the selection criteria.
Implementation Selection Criteria
How your factory reads the selection criteria (environment state) depends on your implementation; a minimal sketch follows the list below. Some common approaches are:
Command-Line Argument, for example, CLI calls from different CI/CD pipeline stages can pass a dev/staging/production argument
YAML Config Files could be deserialized into an object or parsed
Class Annotations to tag each implementation with an environment
Feature Flags, e.g. SaaS like LaunchDarkly
Dependency Injection framework like Spring IoC
Product Line Engineering software like BigLever
REST Endpoint, e.g. http://localhost/test/order can create a test order object without notifying any customers
HTTP Request Parameter, such as a field in the header or body
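For example, here is a minimal sketch (the class names are hypothetical, not the book's exact code) of a factory that reads an environment variable and picks the matching implementation of the IdGenerator interface from the question:

using System;

public interface IIdGenerator
{
    long NextId();
}

// Production implementation, e.g. backed by a database sequence (stubbed here).
public class OracleIdGenerator : IIdGenerator
{
    public long NextId() { return 42L; /* would query the sequence */ }
}

// Test implementation: a deterministic in-memory counter.
public class InMemoryIdGenerator : IIdGenerator
{
    private long current;
    public long NextId() { return ++current; }
}

public static class IdGeneratorFactory
{
    // Reads the selection criterion from an environment variable; any of
    // the approaches listed above (config file, CLI argument, feature
    // flag, ...) could replace this one line.
    public static IIdGenerator Create()
    {
        var env = Environment.GetEnvironmentVariable("APP_ENV") ?? "test";
        if (env == "production")
        {
            return new OracleIdGenerator();
        }
        return new InMemoryIdGenerator();
    }
}

The domain object then calls IdGeneratorFactory.Create(), or, with a DI container, simply declares a dependency on IIdGenerator, as discussed next.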
Dependency on Factory
Since the DomainObject calls the factory to create an object with the desired implementation, the factory will be a dependency of the domain object. That said, the modern approach is to use a dependency injection (DI) library (Guice, Dagger) or a framework with built-in DI (Spring DI, .NET Core). In these cases there is still a dependency on the DI library or framework, but not explicitly on any factory class.
Note: The Plugin design pattern described on pp.499-503 of PEAA was written by Rice and Foemmel, not Martin Fowler.
You will want to get the full PDF of "Patterns of Enterprise Application Architecture". What is visible on Fowler's site is basically the first half-page of each chapter :)
What is being described is basically an expanded version of the idea behind polymorphism.
I don't think "plugin" can actually be described as a "pattern". It's more like the result of other design choices.
What you have are .. emm ... "packages", where the main class in each of them implements a third-party interface. Each of those packages also has its internal dependencies (other classes or even other libraries), which are used for some specific task. Each package has its own configuration (which might be added through DI container config) and each of them gets "registered" in your main application.
The mention of a factory is almost a red herring, because these days that functionality would be provided by a DI container.

Getting lazy instance via kernel (Ninject)

I am using Ninject as a substitute for MEF, and I was wondering if it's possible to get lazy instances via standard kernel methods rather than via [Inject].
I need this because, when building my application's menu, I have to pass all the particular view models and then, if the user is enabled for that function, add each one to the menu.
Thanks
Sure thing: you can inject a Lazy<T>, and the value will only be instantiated when you access Lazy<T>.Value.
You can also inject a Func<T> and use it to create a T whenever you like (with the func, every call creates a new instance).
Of course you can also do IResolutionRoot.Get<Lazy<T>>() or IResolutionRoot.Get<Func<T>>(), but usually that's a sign of bad design (service locator), so use constructor injection when feasible.
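A minimal sketch of the constructor-injection variant (note: Lazy<T> and Func<T> injection are provided by the Ninject.Extensions.Factory package, so this assumes it is installed; the view-model types are made up):

using System;
using Ninject;

public interface IReportsViewModel
{
    string Title { get; }
}

public class ReportsViewModel : IReportsViewModel
{
    public string Title { get { return "Reports"; } }
}

public class MenuBuilder
{
    private readonly Lazy<IReportsViewModel> reports;

    // Ninject injects the Lazy<T>; ReportsViewModel is only constructed
    // if the user turns out to be enabled for that function.
    public MenuBuilder(Lazy<IReportsViewModel> reports)
    {
        this.reports = reports;
    }

    public void Build(bool userIsEnabled)
    {
        if (userIsEnabled)
        {
            Console.WriteLine(reports.Value.Title); // instantiated here, on first access
        }
    }
}

public static class Program
{
    public static void Main()
    {
        // StandardKernel loads extensions from the bin folder by default,
        // which picks up Ninject.Extensions.Factory.
        using (var kernel = new StandardKernel())
        {
            kernel.Bind<IReportsViewModel>().To<ReportsViewModel>();
            kernel.Get<MenuBuilder>().Build(userIsEnabled: true);
        }
    }
}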
EDIT: When is the "enabling of the user" happening? Is it a one time thing? What is being displayed before and after?
There might be other/better designs to achieve this but it's hard to say with that little information.

Where should my objects/models live if they contain domain functionality?

I've designed my classes using CRC cards and I have a lovely set of objects that contain domain/business logic AND data (properties). Some of the classes require saving to and reading from a database.
My repository should exist in a separate project to my domain objects, but needs to reference them in order to create them.
However, the domain objects/entities need to be able to reference the repository.
I could put the objects in the repository, but as they contain domain functionality, that doesn't feel right at all.
I could put the objects that require persistence in a common shared project, but again it feels wrong to single them out.
Where should I put them? I can't help feeling I'm missing something obvious.
Domain objects/entities should not use repositories. Your domain/application services should use repositories instead. And that's done very simply: define repository interfaces in your Domain Model assembly and use them in your domain/application services.
The domain library should contain:
Domain Model
Repository Interfaces
Domain Services (which use only the repository interfaces)
This library does not reference other libraries; it sits at the core of your system.
The persistence library should contain the implementations of the repositories, specific to your data provider, e.g. Entity Framework. This library should reference your domain library; thus it knows about the interfaces it must implement and the entities it works with.
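A minimal sketch of that layout (the types are illustrative, with the EF-specific parts stubbed out):

// --- Domain library: no references to other libraries ---
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

// Domain service: depends only on the repository interface.
public class OrderService
{
    private readonly IOrderRepository orders;

    public OrderService(IOrderRepository orders)
    {
        this.orders = orders;
    }

    public void ApplyDiscount(int orderId, decimal percent)
    {
        var order = orders.GetById(orderId);
        order.Total -= order.Total * percent / 100m;
        orders.Save(order);
    }
}

// --- Persistence library: references the domain library ---
public class EfOrderRepository : IOrderRepository
{
    public Order GetById(int id)
    {
        // would query a DbContext here
        return new Order { Id = id, Total = 100m };
    }

    public void Save(Order order)
    {
        // would call DbContext.SaveChanges() here
    }
}

Note the direction of the reference: the persistence library points inward at the domain library, never the reverse.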
However, the domain objects/entities need to be able to reference the repository.
Do they? Or do they need to reference the interface of the repository? Then the repository itself is just an implementation of that interface, a low-level detail not needed by the domain logic code.
The way I normally structure a repository pattern in my projects is:
Domain Core Project (business models, core business logic, interfaces for dependencies)
Dependency Projects (references Domain Core Project, implements interfaces)
Application Projects (references Domain Core Project, references Dependency Projects either directly, or through configuration, or through an intermediary project which handles dependency injection)
As an example, suppose I'm using a Service Locator for my dependency injection (which I very often do). Then the business models only need to reference the Service Locator object (which itself is supplied by a factory and can be injected). So internal to a business model I might have something like this:
public class SomeBusinessModel
{
    private ISomeDependency SomeProperty
    {
        get
        {
            // Resolved through the service locator each time it is accessed.
            return DIFactory.Current.Resolve<ISomeDependency>();
        }
    }
}
The DIFactory has a static property called Current which is basically a factory method returning a dependency injection resolver, and its interface has a method called Resolve which takes a type and returns an instance.
So in this case...
SomeBusinessModel is in the Domain Core Project
ISomeDependency is in the Domain Core Project
IDIContainer (the return type for Current) is in the Domain Core Project
DIFactory is in the Domain Core Project, and is initialized (it has an Initialize method that sets the current injection container) by the Application Project for a specific dependency injection container
SomeDependency (the actual instance type being returned by the resolver) is in a Dependency Project
In this setup, the business models know that there needs to be a repository and require that one be supplied, but they have no hard dependency on any concrete implementation. The application supplies the actual implementations for those repositories, either directly by providing an instance or indirectly by configuring a dependency injection container to provide one.
All actual dependencies point inward from the implementation details (applications and dependencies) to the core business logic. Never outward.

How to wire up WCF Service Application, Unity and AutoMapper

I have been playing around for the last couple of days with different solutions for mapping DTOs to entities in a VS2013, EF6, WCF Service Application project.
It is a fairly large project that is currently undergoing a major refactoring to bring the legacy code under test (as well as port the ORM from OpenAccess to EF6).
To be honest I had never used AutoMapper before, but I really liked what I saw, so I set out to try it in a demo app; I am a bit ashamed that I have been unable to achieve a working solution after hours of tinkering and Googling. Here is a breakdown of the project:
WCF Service Application template based project (.svc file w/code behind).
Using Unity 3.x for my IoC container and thus creating my own ServiceHostFactory inheriting from UnityServiceHostFactory.
Using current AutoMapper nuget package.
DTOs and DAL are in two separate libraries, as expected, both of which are referenced by the service app project.
My goal is simple (I think): wire up and create all of my maps in my composition root and inject the necessary objects (using my DI container) into the class that has domain knowledge of the DTOs and a reference to my DAL library. Anyone who needs a transformation would therefore only need to reference the transformation library.
The problem: Well, there are a couple of them...
1) I cannot find a working example of AutoMapper in Unity anywhere. The code snippet referenced many times across the web for registering AutoMapper in Unity (see below) uses a Configuration class that no longer seems to exist, and I cannot find any documentation of its deprecation:
container.RegisterType<AutoMapper.Configuration, AutoMapper.Configuration>(
        new PerThreadLifetimeManager(),
        new InjectionConstructor(typeof(ITypeMapFactory), AutoMapper.Mappers.MapperRegistry.AllMappers()))
    .RegisterType<ITypeMapFactory, TypeMapFactory>()
    .RegisterType<IConfiguration, AutoMapper.Configuration>()
    .RegisterType<IConfigurationProvider, AutoMapper.Configuration>()
    .RegisterType<IMappingEngine, MappingEngine>();
2) Where should I create the maps themselves? I assume I could do this right in my ServiceHostFactory, but is that the correct place? There is a Bootstrapper project out there, but I have not gone down that road (yet) and would like to avoid it if possible.
3) Other than the obviously necessary reference to AutoMapper in the DTO lib, what would I be injecting: the configuration object (assuming IConfiguration or IConfigurationProvider)? And which class do I inject into the constructor of the WCF service to gain access to the necessary objects?
I know #3 is a little vague, but since I cannot get AutoMapper bound in my Unity container, I cannot trial-and-error my way through the other issues.
Any pointers would be greatly appreciated.
UPDATE
So I now have a working solution that tests correctly, but I would still like confirmation that I am following established best practices.
First off, the Unity container registration for AutoMapper v3.x (as of 11/13/2013) looks like this:
container
    .RegisterType<ConfigurationStore, ConfigurationStore>(
        new ContainerControlledLifetimeManager(),
        new InjectionConstructor(typeof(ITypeMapFactory), MapperRegistry.AllMappers()))
    .RegisterType<IConfigurationProvider, ConfigurationStore>()
    .RegisterType<IConfiguration, ConfigurationStore>()
    .RegisterType<IMappingEngine, MappingEngine>()
    .RegisterType<ITypeMapFactory, TypeMapFactory>();
Right after all of my container registrations, I create and call a RegisterMaps() method inside ConfigureContainer(). I created a test mapping that does both auto-mapping for like-named properties and a custom mapping. I did this in my demo app for two reasons:
I don't yet know AutoMapper's behavior in a WCF app hosted in IIS and injected with Unity well enough. I do not seem to have to inject any kind of configuration object into the library that does the transformations, and I am still reading through the source to understand its implementation.
As I understand it, there is a caching mechanism at play, and if a mapping is not found in the cache it will be created on the fly. While this is great in theory, the only way I could verify that the mappings created in my composition root were being used was to define some sort of custom mapping there and then call Mapper.Map in the library that performs the mapping and returns the DTO.
All of that blathering aside, here is what I was able to accomplish.
The WCF Service App (composition root) injects all of the necessary objects, including my DtoConversionMapper instance.
The project is made up of the WCF Service App (comp root), DtoLib, DalLib, and ContractsLib (interfaces).
In my ServiceHostFactory I am able to create mappings, including custom mappings (i.e. mapping unlike-named properties between my DTO and EF 6 entity).
The DtoConversionMapper class lives in the DtoLib library and exposes methods like IExampleDto GetExampleDto(ExampleEntity entity); (a sketch of its implementation follows this list).
Any library with a reference to DtoLib can convert back and forth, including the Service App, where the vast majority of these calls will take place.
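For completeness, a minimal sketch of what that wrapper might look like; IExampleDto and ExampleEntity come from the description above, the remaining members are hypothetical, and it uses AutoMapper's static 3.x-era API:

using AutoMapper;

public interface IExampleDto { int Id { get; set; } }
public class ExampleDto : IExampleDto { public int Id { get; set; } }
public class ExampleEntity { public int Id { get; set; } }

// Lives in DtoLib; keeps all AutoMapper calls behind one seam.
public class DtoConversionMapper
{
    public IExampleDto GetExampleDto(ExampleEntity entity)
    {
        // Uses the map registered once in the composition root
        // (or created on the fly by AutoMapper's cache, per the note above).
        return Mapper.Map<ExampleEntity, ExampleDto>(entity);
    }
}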
Any guiding advice would be greatly appreciated but I do have a working demo now that I can test things out with while I work through this large refactoring.
Final Update
I changed the demo project a little by adding another library (MappingLib) and moving all of my DTO conversions and mappings into a static method there. I still call that static method in my composition root after the Unity container is initialized, but this gives me the added flexibility of being able to call the same map-creation method from my NUnit test libraries, effectively eliminating any duplication of AutoMapper setup code and making it very testable.
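A sketch of that arrangement, again assuming the static v3-era API (the member names are illustrative):

using AutoMapper;

public class ExampleEntity { public int Id { get; set; } public string Name { get; set; } }
public class ExampleDto { public int Id { get; set; } public string DisplayName { get; set; } }

// Lives in MappingLib; callable from the composition root and from tests.
public static class MapConfig
{
    public static void RegisterMaps()
    {
        // Id maps by convention; DisplayName needs the custom mapping.
        Mapper.CreateMap<ExampleEntity, ExampleDto>()
            .ForMember(d => d.DisplayName, o => o.MapFrom(e => e.Name));

        // Fails fast if any destination member was left unmapped.
        Mapper.AssertConfigurationIsValid();
    }
}

An NUnit fixture can then call MapConfig.RegisterMaps() in its setup and exercise exactly the same configuration as the service.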