Maven - When to use parent/child relation versus just adding a dependency - maven-2

I am very new to Maven (or, for that matter, the whole CI/CD thing), so please forgive me for asking a theoretical question without providing any examples.
I just want to understand the difference between adding another project/artifact as a dependency of my project and adding it as a parent. I mean, if I just include the required project/artifact as a dependency, that will bring in its transitive dependencies too, right? Especially now that we have the import scope available to us, this question totally baffles me.
Why do I need the parent/child relationship in POM after all?
Any examples or explanation would be a great help.
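For reference, the two mechanisms being compared look roughly like this in a POM (all coordinates here are invented):

```xml
<!-- Option 1: inherit from a parent POM. This pulls in not only managed
     dependency versions, but also plugin configuration, properties,
     profiles, etc. A project can have only one parent. -->
<parent>
  <groupId>com.example</groupId>
  <artifactId>company-parent</artifactId>
  <version>1.0.0</version>
</parent>

<!-- Option 2: import scope. This merges ONLY the dependencyManagement
     section of the referenced POM, and you can import several of them. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>company-bom</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```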

Related

Why are chained variables no longer used in the task assigning example solution from OptaPlanner 8.17?

Since OptaPlanner 8.17, it seems that the code of the task assigning example project has been refactored a lot. I didn't succeed in finding any comment about these changes in the release notes or on GitHub.
In particular, since this version the implementation of the problem to solve no longer involves chained variables. Could someone from the OptaPlanner team explain why? I'm also a bit confused because the latest version of the documentation for this example project still references classes deleted in 8.17 (e.g.
org/optaplanner/examples/taskassigning/domain/TaskOrEmployee.java).
It's using @PlanningListVariable, a new (experimental) alternative to chained planning variables, which is far easier to understand and maintain.
Documentation for this new feature hasn't been written yet. We're finishing up the ListVariableListener interface, and then the documentation will be updated to cover @PlanningListVariable too. At that point, it will be ready for announcement.
Unlike a normal feature, this big, complex feature took more than a year to bake. That's why it's been delivered in portions. One could argue the task assignment example shouldn't have escaped the feature branch, but it was proving extremely expensive to not merge the stable feature branches in sooner rather than later.
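To see why the list model is easier to work with, here is a rough plain-Java sketch. These classes are purely illustrative, not OptaPlanner's actual domain classes or API:

```java
import java.util.ArrayList;
import java.util.List;

// Chained model: each task points at the one after it, so an employee's
// queue is a linked chain of planning entities.
class Task {
    final String name;
    Task next;
    Task(String name) { this.name = name; }
}

final class TaskModels {
    // Inserting into a chain means manual pointer surgery on two entities.
    static void insertAfterChained(Task anchor, Task newTask) {
        newTask.next = anchor.next;
        anchor.next = newTask;
    }

    // List model (the idea behind @PlanningListVariable): the employee owns
    // an ordered list of tasks, and the same insert is a single call.
    static void insertListBased(List<String> tasks, int index, String name) {
        tasks.add(index, name);
    }
}
```

The chain version has to keep two entities consistent by hand on every move; the list version delegates that bookkeeping to the collection, which is the maintainability win the answer refers to.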

Dependency Inversion Principle - Where should the interfaces go?

I've been scratching my head about this for a few months, and I still haven't been able to satisfactorily convince myself that I have the right answer. We have a very typical situation with dependencies between multiple layers of our application, where each layer is in its own assembly. As an example, our application layer uses the repository layer to retrieve data, so pretty standard. My question is: where should the abstraction (the interface, in this case) live, and why? In the example given, should it go in the application layer, the repository layer, or a separate abstractions assembly?
Based on the diagram and description in The Clean Architecture (not something we're particularly adhering to), I've placed them in the application layer so that all of the dependencies point inwards, but I'm not sure if this is right. I've read quite a few other articles and looked at countless examples, but there is very little in the way of reasoning as to where the abstractions should live.
I've seen this question but I don't believe it answers my question unless of course the actual answer is it doesn't matter.
It is called Dependency Inversion Principle, because the classic dependency direction from a higher level module to a lower level is inverted as follows:
HigherLevelClass -> RequiredInterface <= LowerLevelClassImplementingTheInterface
So the inverted dependency is pointing from the lower level module to the required abstraction for your higher level module.
As the client module (your application layer) requires a certain lower level functionality, the related abstraction (your repository interface) is placed near the client module.
All descriptions I know use the package construct for explaining this.
However, I see no reason why this should not be true for modules or layers.
For details, e.g. see: http://en.wikipedia.org/wiki/Dependency_inversion_principle
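A minimal Java sketch of this layout (all names are invented): the interface lives alongside the higher-level class that requires it, and the lower-level implementation depends inwards on that abstraction.

```java
// --- application layer: owns both the policy and the abstraction it needs ---
interface ContactRepository {          // the "RequiredInterface"
    String findName(int id);
}

final class ContactService {           // the "HigherLevelClass"
    private final ContactRepository repository;
    ContactService(ContactRepository repository) { this.repository = repository; }
    String greeting(int id) { return "Hello, " + repository.findName(id); }
}

// --- repository layer: implements the interface defined one level up, so the
// --- compile-time dependency points from the lower level to the higher one
final class InMemoryContactRepository implements ContactRepository {
    @Override public String findName(int id) { return id == 1 ? "Ada" : "unknown"; }
}
```

Note that ContactService has no reference to any repository-layer type; only the repository layer needs to know about the application layer, which is exactly the inverted direction described above.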

ZF2. Alternative to having the AbstractController (or another classes) implementing the ServiceLocatorAwareInterface?

In this blog post one can read three reasons to avoid $this->getServiceLocator() inside controllers. I think those reasons are valid not just in a controller class but in any class that implements the ServiceLocatorAwareInterface interface.
Is getting dependencies injected via the ServiceLocatorAwareInterface usually considered an anti-pattern? In what cases could this pattern not be considered an anti-pattern, and why?
Can anybody elaborate on what an alternative solution (presumably using Zend\DI, I think) could look like? More specifically, how can one avoid the use of ServiceLocatorAwareInterface in modules, controllers and business/domain classes? I'm also interested in knowing about performance issues around Zend\DI and their solutions.
EDIT
Is it worth defining factories for classes with two or three dependencies when all I get in the end is moving the "injector code" (formerly $this->getServiceLocator()->get($serviceName)) into factories, without solving the testing problem at all? Surely I will want to test my factories too, or won't I?
I think factories should be reserved for situations where building the object involves complex tasks. It seems to me that when classes have few dependencies and zero logic (besides dependency resolution), the factory solution is overkill for this problem. Besides, with this solution I end up with more code to test (the factories) and with more tests (to test the factories), while the whole point was to have less code in my tests. Mocking the service locator is easy, since the interface has just two methods and the mocking code can be shared between all test cases.
Please correct me if I'm wrong ;)
Zend\DI could help, but I would be grateful if someone elaborated on the specifics of this kind of solution.
EDIT 2
A few weeks ago this Zend webinar ("An introduction to Domain Driven Design with ZF2") used this anti-pattern (39:05). I'm now wondering to what extent this is really an anti-pattern ;)
And here is more info about this issue.
What Fowler has to say about it is here.
It's actually really easy to solve this problem: inject your dependencies via the constructor. This removes a lot of the magic that will cause problems in larger applications.
So this means you need to create factories instead of using invokable classes.
As seen in the documentation here: http://framework.zend.com/manual/2.2/en/modules/zend.service-manager.intro.html
Recent ZF2 versions (zendframework/zend-mvc 2.7.0 and up) throw deprecation warnings if you use the ServiceLocatorAwareInterface, and the ZF docs (at the time of writing this answer) are not clear and still use this interface. The ZF 'gurus' don't talk about updating the docs. It's hard to find a concrete example if you are not a ZF2 expert (at the time of writing this answer).
But fortunately, Rob Allen wrote a post some years ago explaining how to remove the SL dependency and inject dependencies into your controller: https://akrabat.com/injecting-dependencies-into-your-zf2-controllers/ This solves the problem. Hope this helps!

Should Maven dependency version ranges be considered deprecated?

Given that it's very hard to find anything about dependency version ranges in the official documentation (the best I could come up with is http://docs.codehaus.org/display/MAVEN/Dependency+Mediation+and+Conflict+Resolution), I wonder if they're still considered a 1st class citizen of Maven POMs.
I think most people would agree that they're a bad practice anyway, but I wonder why it's so hard to find anything official about it.
They are not deprecated in the formal sense that they will be removed in a future version. However, their limitations (and the subsequent lack of wide adoption), mean that they are not as useful as originally intended, and also that they are unlikely to get improvements without a significant re-think.
This is why the documentation is only in the form of the design doc - they exist, but important use cases were never finished to the point where I'd recommend generally using them.
If you have a use case that works currently, and can accommodate the limitations, you can expect them to continue to work for the foreseeable future, but there is little beyond that in the works.
I don't know why you think that version ranges are not documented. There is a dedicated section in the Maven Complete Reference documentation.
Nevertheless, a huge problem (in my opinion) is that the documentation states that "Resolution of dependency ranges should not resolve to a snapshot (development version) unless it is included as an explicit boundary." (the link you provided), but the system behaves differently. If you use version ranges, you will get SNAPSHOT versions if they exist within your range (MNG-3092). The discussion about whether or not this is desirable has not ended yet.
Currently, if you use version ranges, you might get SNAPSHOT dependencies. So you really have to be careful and decide whether this is what you want. It might be useful for dependencies you develop yourself, but I doubt you should use it for 3rd-party libraries.
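For illustration, a range declaration looks like this (coordinates invented); note the SNAPSHOT caveat described above:

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-library</artifactId>
  <!-- any version >= 1.0 and < 2.0; despite the documented rule, this can
       resolve to e.g. 1.5-SNAPSHOT if one is visible in your repositories
       (see MNG-3092) -->
  <version>[1.0,2.0)</version>
</dependency>
```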
Version ranges are the only reason that Maven is still useful. Even considering not using them is bad practice as it leads you into the disaster of multi-module builds, non-functional parent poms, builds that take 10 minutes or longer, badly structured projects like Spring, Hibernate and Wicket as we cover on our Illegal Argument podcast.
To answer your question, they are not deprecated and are actively used in many projects successfully (except when Sonatype allows corrupt metadata into Apache Maven Central).
If you want a really good example of a non-multi-module build (reactor.xml files only) where version ranges are used extensively, go look at Sticky code (http://code.google.com/p/stickycode/)

OSGi and the Modularity of Persistence: The Effect of Relationships

Most questions revolving around the title of this post ask about making Hibernate, or some other access layer, run in an OSGi container. Or they ask about making the data source run in an OSGi container.
My questions concern the effect of OSGi modularity on the structure of the database itself. Specifically:
How do we make the structure of a database itself modular, so that when we load a module--say, Contact Management--the schema is updated to include tables specifically associated with that module?
What is the effect of the foregoing approach on relationships?
I think the second question is the more interesting. Let's say that Contact Management and Project Management are two distinct OSGi modules. Each would have its own set of tables in the schema. But what if, at the database level, we need to form cross-module relationships between two or more tables? Maybe we wish to see a list of projects that a certain contact is, or has been, working on.
Any solution seems to lead down the path of the various modules' having to know too much about each other. We could write into the Project Management specification that that module expects a source of contacts, and then abstract such an expectation through services, interfaces, pub-sub etc. Seems like a lot of work, though, to avoid a hard-wired relationship between the two modules' underlying tables.
What's the point of being modular up top and in the middle if we may necessarily need to break that modularity with relationships between tables down below? Are denormalization and a service bus really a healthy solution?
Any thoughts?
Thank you.
Regarding the first question, Liquibase can be used. You can apply and roll back changesets on bundle activation and deactivation.
Regarding the second question, I think it is something that should be considered while designing your architecture; no tool will help you there.
If the PM module depends on the CM module, it is safe for the PM module to assume the CM tables exist and create foreign-key relationships to them, but not in the opposite direction. You should make it clear in your architecture which modules depend on which modules, and prevent dependency cycles.
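As a sketch of the first point, a Liquibase changelog shipped inside the Project Management bundle might look like this (table and column names are invented). It may reference CM tables because PM declares a dependency on CM, never the other way around:

```xml
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
  <!-- applied when the PM bundle is activated, rolled back on deactivation -->
  <changeSet id="pm-1" author="pm-module">
    <createTable tableName="PROJECT_MEMBER">
      <column name="PROJECT_ID" type="bigint"/>
      <column name="CONTACT_ID" type="bigint"/>
    </createTable>
    <!-- allowed: PM depends on CM, so CM's CONTACT table is known to exist -->
    <addForeignKeyConstraint
        baseTableName="PROJECT_MEMBER" baseColumnNames="CONTACT_ID"
        referencedTableName="CONTACT" referencedColumnNames="ID"
        constraintName="FK_PM_CONTACT"/>
  </changeSet>
</databaseChangeLog>
```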
After 5 years of JPA, I decided to leave it, and after months of investigation I found the Querydsl + Liquibase combo to be the best.
I worked a lot on developing helper OSGi components and a maven plugin. The functionality of the maven plugin (code generation) can easily be integrated into other build tools, as the maven plugin is only a wrapper around a standalone library.
You can find a detailed article about the solution here: http://bzsoldos.wordpress.com/2014/06/18/modularized-persistence/
In this kind of situation it is important to evaluate how independent these modules/contexts are. In DDD terms these two seem to be independent bounded contexts, so a contact in the PM module is a distinct entity (and also another class) from a contact in the CM module. If you keep this distinction, you get some denormalization with respect to the contact entity (e.g. you copy the id and name of the contact when adding it to a project; later changes to the contact in the CM module will require some pub-sub to keep things consistent), but each module will be very independent. I would keep the UI as a separate module, depending on both and providing the necessary glue (i.e. passing the ids and required info between them).
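A small Java sketch of that denormalization (class names invented): each bounded context keeps its own contact class, and the PM side stores only a copy of the identity and the few fields it actually needs.

```java
// Contact Management module: the authoritative contact entity.
final class CmContact {
    final long id;
    final String name;
    CmContact(long id, String name) { this.id = id; this.name = name; }
}

// Project Management module: its own, distinct class holding a denormalized
// copy. Later CM-side changes would be propagated via pub-sub, not a join.
final class ProjectContact {
    final long contactId;               // identity copied from the CM context
    final String nameSnapshot;          // copied field, not a foreign reference
    ProjectContact(CmContact source) {
        this.contactId = source.id;
        this.nameSnapshot = source.name;
    }
}
```

The point is that ProjectContact compiles without any dependency on CM's persistence layer; only the copying step (and the pub-sub glue) touches both contexts.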
Maybe I misread the question, but in my opinion OSGi modularity has absolutely no impact on database structure. The database is the data storage level; it can be modular, of course, but for its very own reasons (performance, data volumes, load, etc.) and with its very own solutions (clusters, OLAP, partitioning and replication).
If you need data integrity between CM and PM, it should be provided by the means originally designed for this sort of task: the RDBMS. If you need software modularity, you select an OSGi solution and your modules communicate at a much higher logical/business level. They can be completely unaware of how persistence is provided, whether it's a plain text file or a 100-node Oracle RAC cluster.