What is the best way to pass information from one steps def class to another? - cucumber-jvm

Injecting one steps def into another can rapidly lead to dependency bloat as the amount of re-use among steps defs grows. Furthermore, it couples steps defs very tightly to each other.
There must be a better way. Any suggestions?
Is passing information between steps defs an anti-pattern that should be avoided anyway?

If your question is about sharing state between different Step Definition classes, you can do this with Dependency Injection frameworks like Spring.
Here's a blog that explains (better than I could paraphrase right here):
http://www.thinkcode.se/blog/2017/06/24/sharing-state-between-steps-in-cucumberjvm-using-spring
If you don't want to use Spring, there are other DI frameworks you can use.
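To make the idea concrete, here is a minimal sketch of the usual pattern: a plain "scenario state" class shared between two step definition classes via constructor injection. All class names, step texts, and fields are illustrative, not from the question; the wiring depends on which DI module you add (cucumber-spring with a scenario-scoped bean, cucumber-picocontainer with no annotations at all, etc.), and the imports assume a recent cucumber-jvm version (older ones used the cucumber.api.java.en package).

```java
// ScenarioState.java - the shared state object. With cucumber-picocontainer it
// needs no annotations; with cucumber-spring you would register it as a
// scenario-scoped Spring bean.
public class ScenarioState {
    private String createdOrderId;

    public void setCreatedOrderId(String id) { this.createdOrderId = id; }
    public String getCreatedOrderId() { return createdOrderId; }
}

// OrderCreationSteps.java - first step definition class writes to the shared state.
import io.cucumber.java.en.Given;

public class OrderCreationSteps {
    private final ScenarioState state;

    // The DI container provides the same ScenarioState instance to every
    // step definition class participating in the scenario.
    public OrderCreationSteps(ScenarioState state) {
        this.state = state;
    }

    @Given("an order has been created")
    public void anOrderHasBeenCreated() {
        state.setCreatedOrderId("order-42");
    }
}

// OrderVerificationSteps.java - second step definition class reads from the
// shared state without knowing anything about the first class.
import io.cucumber.java.en.Then;

public class OrderVerificationSteps {
    private final ScenarioState state;

    public OrderVerificationSteps(ScenarioState state) {
        this.state = state;
    }

    @Then("the order can be looked up")
    public void theOrderCanBeLookedUp() {
        if (state.getCreatedOrderId() == null) {
            throw new AssertionError("no order was created in an earlier step");
        }
    }
}
```

The two step definition classes never reference each other; they only depend on the small state holder, which keeps the coupling the question worries about to a minimum.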

Related

NestJS Schema First GraphQL Serialization

I've done some research into the subject of response serialization for NestJS/GraphQL. There's some helpful information to be found here, but the documentation seems to be completely focused on a code-first approach. My project happens to be taking a schema-first approach, and from what I've read across a few sources, the option available for a schema-first project would be to implement interceptors for the resolvers and carry out the serialization there.
Before I run off and start writing these interceptors, my question is this: are there any better options provided by NestJS to implement serialization for a schema-first approach?
If it's just transformation of values then an interceptor is a great tool for that. Everything shown for "code-first" should work for "schema-first" in terms of the high-level ideas of the framework (interceptors, pipes, filters, etc.). In fact, once the server is running, there shouldn't be a distinguishable difference between the two approaches and how they operate. The big thing you'd need to be concerned with is that you won't easily be able to take advantage of class-transformer and class-validator, because the original class definitions are created via the gql-codegen, but you can still extend those types and add on the necessary decorators if you choose.

Domain services seem to require only a fraction of the total queries defined in repositories -- how to address that?

I'm currently facing some doubts about layering and repositories.
I was thinking of creating my repositories in a persistence module. Those repositories would implement (or extend) repository interfaces defined in the domain layer module, which are kept "persistence agnostic".
The issue is that, from all I can see, the needs of the domain layer regarding its repositories are quite modest. In general, they tend to be rather CRUDish.
It is generally at the application layer, when solving particular business use cases, that the queries tend to be more complex and contrived (and thus the number of repository methods tends to explode).
So this raises the question of how to deal with this:
1) Should I just leave the domain repository interfaces simple and then add the extra methods in the repository implementations (such that the application layer, which does know about the repository implementations, can use them)?
2) Should I just add those methods at the domain level repository implementations? I think not.
3) Should I create another set of repositories to be used just at the application layer level? This would probably mean moving to a more CQRSesque application.
Thanks
I think you should react to the realities of your business / requirements.
That is, if your use-cases are clearly not "persistence agnostic", then don't hold on to that particular restriction. Not everything can be reduced to CRUD. In fact, I think most things worth implementing can't be reduced to CRUD persistence. Most database systems, relational or otherwise, have a lot of features nowadays, and it seems quaint to just ignore those. Use them.
If you don't want to mix SQL with other code, there are still a lot of other "patterns" that let you do that without requiring you to abstract access to something you actually don't need abstracted.
On the flip side, you build a dependency on a particular persistence system. Is that a problem? Most of the time it actually isn't, but you have to decide for yourself.
All in all I would choose option 4: model the problem. If I need complicated SQL to build a use-case, and I don't need database independence (I rarely if ever do), then just write it where it is used, end of story.
You can use other tools like refactoring later to correct design issues.
The Application layer doesn't have to know about the Infrastructure.
Normally it should be fine working with just what the repository interfaces declared in the Domain provide. The concrete implementations are injected at runtime.
Declaring repository interfaces in the Domain layer is not only about using them in domain services but also elsewhere.
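As a minimal sketch of that split, assuming illustrative domain types (Customer, CustomerId, none of which come from the question), the domain layer only sees a small interface while the persistence module supplies whichever implementation gets injected:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Domain layer: a deliberately small, persistence-agnostic interface.
interface CustomerRepository {
    Optional<Customer> findById(CustomerId id);
    void save(Customer customer);
}

// Persistence module: one concrete implementation, injected at runtime.
// An in-memory variant keeps this sketch self-contained; a JDBC or JPA
// implementation would have the same shape, with the SQL/ORM details inside.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<CustomerId, Customer> store = new ConcurrentHashMap<>();

    @Override
    public Optional<Customer> findById(CustomerId id) {
        return Optional.ofNullable(store.get(id));
    }

    @Override
    public void save(Customer customer) {
        store.put(customer.id(), customer);
    }
}

// Minimal illustrative domain types so the sketch compiles (Java 16+ records).
record CustomerId(String value) {}
record Customer(CustomerId id, String name) {}
```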
Should I create another set of repositories to be used just at the application layer level? This would probably mean moving to a more CQRSesque application.
You could do that, but you would lose some reusability.
It is also not related to CQRS - CQRS is a vertical division of the whole application between queries and commands, not giving horizontal layers different ways of fetching data.
Given that a repository is not about querying but about working with full aggregates most of the time, perhaps you could elaborate on why you may need to create a separate set of repositories that are used only in your application/integration layer?
Perhaps you need to have a read-specific implementation that is optimised for data retrieval:
This would probably mean moving to a more CQRSesque application
Well, you'd probably want to implement read-specific bits that make sense. I usually have my data access separated by namespace and, at times, even in a separate assembly. I then use I{Aggregate}Query implementations that return the relevant bits of data in as simple a type as possible. However, it is quite possible to map to a more complex read model that even has relations, but it is still only a read model and is not concerned with any command processing. To this end, the domain is never even aware of these classes.
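A small sketch of that idea, with illustrative names (OrderQuery, OrderSummary) standing in for the I{Aggregate}Query shape described above; the answer is written in .NET terms, but the pattern looks the same in Java:

```java
import java.util.List;

// Read side: a query interface that the domain model never sees.
// It returns simple, flat read models tailored to what the application
// or integration layer needs.
interface OrderQuery {
    List<OrderSummary> findOpenOrdersForCustomer(String customerId);
}

// A plain data carrier optimised for reading; no behaviour, no invariants.
record OrderSummary(String orderId, String customerName, double total) {}

// The implementation can use whatever retrieval mechanism is fastest
// (raw SQL, a view, a denormalised table) without touching the repositories
// that serve the command side.
class SqlOrderQuery implements OrderQuery {
    @Override
    public List<OrderSummary> findOpenOrdersForCustomer(String customerId) {
        // Placeholder: a real implementation would run a tailored query here.
        return List.of();
    }
}
```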
I would not go with extending the repositories.

Should I put the Test classes in the UML?

Should I put the test classes in the UML diagram? I can't find any "best practice" about this!
It depends. Firstly, "the UML diagram" suggests that you are creating a single diagram. This is definitely not good practice. Create as many diagrams as needed, each highlighting certain aspects of the model. So test cases would be one of those aspects. That means: put them in (a) separate diagram(s).
To add a suggestion: if you want to model tests, you can look at the UML Testing Profile (UTP); it provides the elements needed to model tests, requirements, and so on.
You can also use SysML, since it integrates a part of UTP.
It definitely depends on context. Who is going to use the UML model and what will they use it for? In general I would say that adding test classes is going to clutter a UML model and make it difficult to understand - so no. But if the context is that the testing is what you want to explain, then clearly the test classes are going to be pretty important.
As Thomas Kilian points out, creating a number of diagrams from one underlying model is probably the right answer - and being able to do this is one of the reasons you would use UML rather than a simple diagram.
This is a preference. You can choose to or choose not to.
I would say it's better practice to have the tests modeled into the solution. But I wouldn't claim I always follow best practices 🙊
There are many diagrams needed in modeling a solution. I would focus on three: Analysis, Design, and Implementation. All three are class diagrams. All three define your solution at different abstractions.
In the analysis, you're closest to the requirements and the beginning of your solution. In here, you would want to have broad classes. I would not put tests in here since this diagram is still trying to get the shape of the solution from the user and their requirements. An analysis diagram would only have class names in a box, with lines which show their associations.
The design diagram would go into a little more detail on how classes would be built. The blueprint of the application would take shape in the design. This design can be given to any programmer and they write code which would build the solution. The interesting part of the design diagram is that it could also be given to a test engineer and they would write proper tests for the solution to be created.
The implementation diagram is the lowest level class diagram which is created. Most times, I would create this in retrospect. The implementation diagram should be a verbatim translation of the codebase. In the implementation diagram, I would have my test classes included for completeness.
Note, these are my views which I sometimes do not follow to the letter because of business constraints. However, in an ideal world, this is how I would prefer my modeling done.

ZF2. Alternative to having the AbstractController (or other classes) implement the ServiceLocatorAwareInterface?

At this blog post one can read three reasons to avoid $this->getServiceLocator() inside controllers. I think those reasons are valid not just in a controller class but in any class that implements the ServiceLocatorAwareInterface.
Is getting dependencies through the ServiceLocatorAwareInterface usually considered an anti-pattern? In what cases could this pattern not be considered an anti-pattern, and why?
Can anybody elaborate on what an alternative solution (presumably using Zend\DI, I think) would look like? More specifically, how to avoid the use of the ServiceLocatorAwareInterface in Modules, Controllers, and Business/Domain classes? I'm also interested in knowing about performance issues around Zend\DI and their solutions.
EDIT
Is it worth defining factories for classes with two or three dependencies when the only thing I get in the end is moving the "injector code" (formerly $this->getServiceLocator()->get($serviceName)) into factories, without solving the testing problem at all? Of course I will want to test my factories too, won't I?
I think factories should be reserved for situations where building objects involves complex tasks. It seems to me that when classes have few dependencies and zero logic (besides dependency resolution), factories are overkill for this problem. Besides, with this solution I will end up with more code to test (the factories) and with more tests (to test the factories), while the goal was to have less code in the tests. Mocking the service locator is easy, because the interface has just two methods and the mocking code can be shared between all test cases.
Please correct me if I'm wrong ;)
Zend\DI could help, but I would be grateful if someone elaborated on the specifics of this kind of solution.
EDIT 2
A few weeks ago this Zend webinar ("An introduction to Domain Driven Design with ZF2") used this anti-pattern (39:05). I'm now wondering to what extent this is really an anti-pattern ;)
And here is more info about this issue.
What Fowler has to say about it is here.
It's actually really easy to solve this problem: inject your dependencies via the constructor. This removes a lot of the magic that will cause problems in larger applications.
So this means you need to create factories instead of using invokable classes.
As seen in the documentation here: http://framework.zend.com/manual/2.2/en/modules/zend.service-manager.intro.html
Recent ZF2 versions (zendframework/zend-mvc 2.7.0 and up) throw deprecation warnings if you use the ServiceLocatorAwareInterface, and the ZF docs (at the time of writing this answer) are not clear and still use this interface. ZF 'gurus' don't talk about updating the docs. It's hard to find a concrete example if you are not a ZF2 expert (at the time of writing this answer).
But fortunately, Rob Allen wrote a post some years ago explaining how to remove the SL dependency and inject what you need into your controller: https://akrabat.com/injecting-dependencies-into-your-zf2-controllers/ This solves the problem. Hope this helps!

What is the real difference between "Bastard Injection" and "Poor Man's Injection"

From the Dependency Injection in .NET book I know that the object graph should be created at the Composition Root of the application which makes a lot of sense to me when you are using an IoC Container.
In all the applications I've seen when an attempt to use DI is being made, there are always two constructors:
the one with the dependencies as parameters and
the "default" one with no parameters which in turn calls the other one "newing" up all the dependencies
In the aforementioned book, however, this is called the "Bastard Injection anti-pattern" and that is what I used to know as "Poor Man's Injection".
Now considering all this, I would say then that "Poor Man's Injection" would be just not using an IoC Container and instead coding the whole object graph by hand in the said Composition Root.
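To make the contrast concrete, here is a minimal sketch of the two ideas. The types (ProductService, ProductRepository, SqlProductRepository) are illustrative, not from the book or the question, and the example is in Java even though the book targets .NET; the pattern is identical.

```java
// Illustrative types, not taken from the book or the question.
interface ProductRepository { }
class SqlProductRepository implements ProductRepository { }

// "Bastard Injection": a "default" constructor news up a concrete, volatile
// dependency, so the class is silently hard-wired to SqlProductRepository
// even though it also exposes a constructor for injection.
class ProductService {
    private final ProductRepository repository;

    public ProductService() {
        this(new SqlProductRepository()); // the hidden, hard-coded default
    }

    public ProductService(ProductRepository repository) {
        this.repository = repository;     // the "real" injection constructor
    }
}

// "Poor Man's DI" (Pure DI): drop the parameterless constructor entirely and
// wire the whole object graph by hand in the Composition Root, no container.
class CompositionRoot {
    public static void main(String[] args) {
        ProductRepository repository = new SqlProductRepository();
        ProductService service = new ProductService(repository);
        // ...pass the fully composed service to the application's entry point
    }
}
```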
So my questions are:
Am I understanding these concepts correctly or am I completely off track?
If you still need to register all the dependencies in the IoC container vs. coding them by hand in the exact same Composition Root, what's the real benefit of using an IoC container?
If I have misunderstood what "Poor Man's Injection" really is, could someone please clarify it?
When it comes to DI, there's a lot of conflicting use of terminology out there. The term Poor Man's DI is no exception. To some people, it means one thing and to others it means something different.
One of the things I wanted to do with the book was to supply a consistent pattern language for DI. When it came to all of those terms with conflicting use, I had two options: Come up with a completely new term, or pick the most prevalent use (according to my subjective judgment).
In general, I've preferred to re-use existing terminology instead of making up a completely new (and thus alien) pattern language. That means that in certain cases (such as Poor Man's DI), you may have a different notion of what the name is than the definition given in the book. That often happens with patterns books.
At least I find it reassuring that the book seems to have done its job of explaining exactly both Poor Man's DI and Bastard Injection, because the interpretation given in the O.P. is spot on.
Regarding the real benefit of a DI Container I will refer you to this answer: Arguments against Inversion of Control containers
P.S. 2018-04-13: I'd like to point out that I came to acknowledge years ago that the term Poor Man's DI does a poor (sic!) job of communicating the essence of the principle, so for years now I've instead called it Pure DI.
P.P.S. 2020-07-17: We removed the term Bastard Injection from the second edition. In the second edition we simply use the more general term Control Freak to specify that your code "depend[s] on a Volatile Dependency in any place other than a Composition Root."
Some notes to the part 2) of the question.
If you still need to register all the dependencies in the IoC container vs. coding them by hand in the exact same Composition Root, what's the real benefit of using an IoC container?
If you have a tree of dependencies (classes which depend on dependencies which depend on other dependencies, and so on), you can't do all the "news" in a single composition root, because you new up the instances in each "bastard injection" constructor of each class, so there are many "composition roots" spread throughout your code base.
Whether you have a tree of dependencies or not, using an IoC container will spare you from typing some code. Imagine you have 20 different classes that depend on the same IDependency. If you use a container, you can provide a configuration to let it know which instance to use for IDependency. You do this in a single place, and the container will take care of providing the instance to all dependent classes.
The container can also control the object lifetime, which offers another advantage.
All of this is in addition to the other obvious advantages provided by DI (testability, maintainability, code decoupling, extensibility...).
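As one possible illustration of the "register once, inject everywhere" point above, here is a sketch using Google Guice. The answer does not name a container, so Guice is purely an assumption here, and Dependency, ConcreteDependency, ServiceA, and ServiceB are stand-ins for the IDependency example:

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

// Illustrative stand-ins for the IDependency example above.
interface Dependency { }
class ConcreteDependency implements Dependency { }

class ServiceA {
    private final Dependency dependency;
    @Inject ServiceA(Dependency dependency) { this.dependency = dependency; }
}

class ServiceB {
    private final Dependency dependency;
    @Inject ServiceB(Dependency dependency) { this.dependency = dependency; }
}

// The mapping is declared once; every class that asks for Dependency
// (ServiceA, ServiceB, and any of the other "20 classes") receives it
// without per-class wiring code. Scopes such as .in(Singleton.class) would
// also let the container manage the object lifetime, as mentioned above.
class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(Dependency.class).to(ConcreteDependency.class);
    }
}

class Bootstrap {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AppModule());
        ServiceA a = injector.getInstance(ServiceA.class);
        ServiceB b = injector.getInstance(ServiceB.class);
    }
}
```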
We've found, when refactoring legacy applications and decoupling dependencies, that things tend to be easier when done with a two step process. The process includes both "poor man" and formal IoC container systems.
First: set up interfaces and establish "poor man's IoC" to implement them.
This decouples dependencies without the added overhead (and learning curve) of a formal IoC setup.
This also reduces interference with the existing legacy code. Nothing like introducing yet another set of issues to debug.
Development team members are never at the same level of expertise or understanding, so it saves a lot of implementation time.
This also provides a footing for test cases.
This also establishes standards for a formal IoC container system later.
This can be implemented in steps over time by many people.
Secondly: each IoC system has pros and cons.
Now that an application standard is established, an educated decision can be made in choosing an IoC container system.
Implementing the IoC system becomes a task of swapping the "poor man's" code with the new IoC system.
This can be implemented in steps over time and in parallel with "poor man's". It is better to have one person lead this.