ArchUnit to test actual layered architecture - junit5

Currently in our project we have a layered architecture implemented in the following way: the Controller, Service, and Repository classes are placed in the same package for each feature, for instance:
feature1:
    Feature1Controller
    Feature1Service
    Feature1Repository
feature2:
    Feature2Controller
    Feature2Service
    Feature2Repository
I've found the following example of an ArchUnit test where such classes are placed in dedicated packages: https://github.com/TNG/ArchUnit-Examples/blob/master/example-junit5/src/test/java/com/tngtech/archunit/exampletest/junit5/LayeredArchitectureTest.java
Is it possible to test a layered architecture when all layers live in a single package?

If the naming conventions are followed consistently across your project, how about writing custom rules instead of using layeredArchitecture()?
For example:
classes().that().haveSimpleNameEndingWith("Service")
    .should().onlyBeAccessed().byClassesThat().haveSimpleNameEndingWith("Controller");

noClasses().that().haveSimpleNameEndingWith("Service")
    .should().accessClassesThat().haveSimpleNameEndingWith("Controller");
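For completeness, here's a minimal sketch of how such rules could be wired into a JUnit 5 test class using ArchUnit's JUnit support (the root package com.myapp is a placeholder for your own):

import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

// "com.myapp" is a placeholder root package; adjust to your project.
@AnalyzeClasses(packages = "com.myapp")
class NamingConventionTest {

    @ArchTest
    static final ArchRule services_are_only_accessed_by_controllers =
        classes().that().haveSimpleNameEndingWith("Service")
            .should().onlyBeAccessed().byClassesThat().haveSimpleNameEndingWith("Controller");

    @ArchTest
    static final ArchRule services_do_not_access_controllers =
        noClasses().that().haveSimpleNameEndingWith("Service")
            .should().accessClassesThat().haveSimpleNameEndingWith("Controller");
}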

I know this question is rather old, but for the record: this has been possible for a while using predicates for the layers, e.g.
layeredArchitecture().consideringAllDependencies()
.layer("Controllers").definedBy(HasName.Predicates.nameEndingWith("Controller"))
.layer("Services").definedBy(HasName.Predicates.nameEndingWith("Service"))
.layer("Repository").definedBy(HasName.Predicates.nameEndingWith("Repository"))
.whereLayer("Controllers").mayNotBeAccessedByAnyLayer()
.whereLayer("Services").mayOnlyBeAccessedByLayers("Controllers")
.whereLayer("Repository").mayOnlyBeAccessedByLayers("Services")
However, I'm not sure how well this works in practice, because usually you don't just have classes following this naming pattern and nothing else. A service might also have some POJO as a method parameter type (e.g. MyInput) that should perhaps not be used by repositories either, yet it matches no layer's naming pattern. With forward dependency rules (mayOnlyAccessLayers(..)) such unlayered classes can then cause unwanted violations.

Related

OpenTest custom test actors

I'm really impressed with the OpenTest project. I found it highly intriguing how many ideas this project shares with some projects I created and worked on, like your epic architecture with actors pulling tasks, and many others :)
Have you thought about including other automation technologies to base actors on?
I could see two main groups:
1. Established test automation tooling like TestCafe (support for non-Selenium GUI testing could strengthen the whole solution a lot).
2. Custom tooling needed for specific tasks. It would be great to have an actor with some domain-specific capabilities. As far as I can see, this could currently be achieved by introducing another layer of execution workers called by an actor over a REST API. What I mean is the possibility of using/including these as new 'actor types' with their own custom keywords.
Thank you for your nice words. We spent a lot of time thinking through the architecture and implementation of OpenTest and it's very rewarding to see that people understand and appreciate the design.
Implementing new keywords (test actions) can be done without creating custom test actors, by creating a new Java class that inherits from the TestAction base class and overrides its run method. For a simple example, take a look at the implementation of the Delay test action. You can then package the new test action in a JAR and drop it (along with any dependencies) into the user-jars subdirectory of your test actor's working directory. The test actor dynamically loads all the JARs it finds in there and discovers the new test action class (using reflection), so you can make use of it in your tests. Some useful info and things to look out for (a rough sketch follows the list below):
Your Java project is going to have to define a dependency on the opentest-base project (which is where the TestAction base class is implemented).
When you copy the JAR to where your test actor is, make sure to copy any dependency JARs along with it. Please note that a lot of the dependencies you might need are already included with the core test actor binaries (you can have a look at the pom.xml to see what they are).
If you happen to have any dependencies that conflict with the other JARs included with the core test actor binaries, you can apply a technique called shading to "hide" the conflicting classes under a different package name. Most of the time you're not going to need this, but if you do and you get stuck, let me know and I'll give you some pointers.
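To make this concrete, here is a rough sketch of a custom test action, modeled on the Delay action and the sample project linked below; please double-check the exact base-class package and helper methods against the opentest-base sources:

// NOTE: this import is an assumption; take the exact package of the
// TestAction base class from the opentest-base project.
import dtest.base.TestAction;

public class SayHello extends TestAction {

    @Override
    public void run() {
        // Run the base implementation first.
        super.run();

        // Read a keyword argument, falling back to a default value.
        String name = this.readStringArgument("name", "world");

        // Log and publish an output value for subsequent test steps.
        this.log.info(String.format("Hello, %s!", name));
        this.writeOutput("greeting", String.format("Hello, %s!", name));
    }
}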
Here's a sample project that demonstrates how to build an OpenTest extension that creates a couple of custom keywords: https://github.com/adrianth/opentest-extension-sample
And here's an extensive video tutorial about creating custom OpenTest keywords: https://getopentest.org/tutorials/custom-keywords.html

How to implement Unity 3 + N-Tier architecture?

I am trying to understand Microsoft.Practices.Unity.
So, I have this solution:
a web project
a business class library project as my logic tier
a data class library project as my data access tier
I want to use Unity to separate the web tier from the logic tier, and the logic tier from the data tier, using DI.
I have created a unity.config file in my web project because I want to control the registration from a configuration file rather than in compiled code. That part is OK for me. I am using Unity.MVC4.
But with that, I only resolve dependencies from the web tier to the business tier. How can I do the same thing from the business tier to the data tier?
I have already seen some web examples, but I am still confused because none of them shows the process from the web tier down to the data tier, step by step.
I would like to see a simple example: an n-tier solution with full DI implemented with Unity.
Avoid using the config file for registering dependencies. It is brittle and error prone, and you can only do a subset of the things you can do in code. If you're doing this because you want to avoid dependency references, note that with the config file the same referencing still applies, but now it's implicit and there's no compile-time checking to help you.
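To illustrate (the type names here are made up): with code-based registration, a single composition root in the web project can wire up all tiers, so the business tier never has to resolve anything itself:

using Microsoft.Practices.Unity;

public static class CompositionRoot
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();

        // Web -> business tier.
        container.RegisterType<IOrderService, OrderService>();

        // Business -> data tier. The business layer only knows the
        // IOrderRepository interface; Unity injects the data-layer
        // implementation when it resolves OrderService.
        container.RegisterType<IOrderRepository, OrderRepository>();

        return container;
    }
}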
This doesn't mean you should never use the config file, but you should only use it to configure things that can actually change during or after deployment. Most things shouldn't change at that time, since most changes must be verified by a developer, either manually by starting the application or in an automated fashion using unit tests.
Neither would I place class names in the config file, for the same reason: it is brittle. Using configuration switches is usually much better, since it allows you to keep the class names in code (with a switch or if statement that changes the configuration based on the config setting) and enables compile-time checking.
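For example (the setting and type names are made up), inside the composition root:

using System.Configuration;

// The config file stores a simple switch value, not a class name:
// <add key="UseFakeMailer" value="true" />
bool useFakeMailer = bool.Parse(
    ConfigurationManager.AppSettings["UseFakeMailer"]);

if (useFakeMailer)
    container.RegisterType<IMailSender, FakeMailSender>();
else
    container.RegisterType<IMailSender, SmtpMailSender>();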
For the rest of your questions, Tuzo's link will probably give you enough information.

.NET - divorcing layers

I am trying to create a structure for a large .NET application I am developing. I am planning to create three projects:
DataAccessLayer
BusinessLogicLayer
UserInterfaceLayer
I have two questions.
What would you do with functionality that is common to all three layers, e.g. logging errors to a text file? Circular dependencies are not allowed in .NET. I believe the best approach is to create a fourth project called Utilities.
Would you have .config files in all of the projects, or just the user interface layer (passing all the config parameters as arguments to constructors in the BLL and DAL)?
What would you do with functionality that is common to all three layers, e.g. logging errors to a text file? Circular dependencies are not allowed in .NET. I believe the best approach is to create a fourth project called Utilities.
Cross-cutting concerns usually end up in a fourth assembly. But for logging, just use one of the existing frameworks that devs are used to, for instance NLog or log4net.
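For illustration, such a fourth assembly can stay paper-thin by delegating to an existing framework (the class and method names here are made up; ILog and LogManager are log4net's):

using System;
using log4net;

// Lives in the shared "Utilities" assembly so all three layers can log
// through a single dependency.
public static class Logger
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Logger));

    public static void Error(string message, Exception exception)
    {
        Log.Error(message, exception);
    }
}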
Circular dependencies are a smell (high coupling or low cohesion) and should not be allowed anywhere.
Someone else suggested Dependency Injection and it's a great way to reduce coupling and therefore increase maintainability. I've written an article here: http://www.codeproject.com/Articles/386164/Get-injected-into-the-world-of-inverted-dependenci
Would you have .config files in all of the projects, or just the user interface layer (passing all the config parameters as arguments to constructors in the BLL and DAL)?
I would rather create a configuration abstraction, something like IConfigurationRepository. Then it doesn't matter whether the configuration is stored in web.config or somewhere else.
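A minimal sketch of that abstraction (only the IConfigurationRepository name is from above; the rest is illustrative):

using System.Configuration;

// Layers depend on this interface instead of reading web.config directly.
public interface IConfigurationRepository
{
    string GetSetting(string key);
}

// Default implementation backed by the standard .NET configuration system.
public class ConfigFileConfigurationRepository : IConfigurationRepository
{
    public string GetSetting(string key)
    {
        return ConfigurationManager.AppSettings[key];
    }
}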
Having a fourth project is one solution; another is to place that functionality in the data layer and have methods in the business layer that let the UI layer access them.
You should have each setting in one place only, so the UI layer seems to be a good place.
You could create a single logging project and reference it from all the other projects, but in my opinion you should add a logger configuration file for each one, because modeling a three-tier architecture as you are doing means first modeling three logically separated layers, so you should be able to develop and test each of them separately.
If you have layer-specific configuration settings (e.g. one or more layers live on different servers because of strong performance constraints), use a different configuration file for each layer. If you have the same configuration settings, you could use a single configuration file in the user interface layer, but be aware that if you change the user interface you will have to move all your settings, which in my opinion might be a serious problem.
Yes, create another project for logging. I would recommend using Log4Net within that new project.
I would keep config settings at the top level - the UI layer - and pass anything necessary down to the other layers.
You don't mention DI; I would definitely use it - that should be a priority.

How to provide specific GWT implementations

Suppose I am working on exposing some of my server-side classes to a GWT application, but certain parts could be done much better using GWT-specific components (like JSNI, for instance).
What are some techniques for doing so without being too hacky?
For instance, I am aware of using a subpackage and the <super-source/> tag, but this requires the package names to be different, which causes Eclipse to complain. The general solution in the community is to tell Eclipse to use that as a source folder, but then Eclipse complains about there being two classes with the same name.
Ideally, there would just be a way to keep everything in a single source tree, and actually have different classes which apply the alternate implementations. This would feel like a more OO approach.
I would like to be able to add a suffix like _gwt to a class and have this happen automatically; I know I could write a script to do this kind of transformation, but that is a kludge for sure.
I've been considering using Google's GIN/Guice libraries for my projects in general, and I think there might be some kind of solution there, but I am not sure, as I have not thoroughly investigated it.
What are some solutions you have tried in the past on GWT projects?
The easiest way to have split implementations is to use super-source code, but only enough to instantiate a uniquely-named instance or dispatch to a different method. Ideally, the super-source implementation is just a few lines long, and not so bad that you can't roll it by hand.
To work around the Eclipse / javac double-mapping and package-name issues, the GWT source uses two top-level roots for user code: user/src and user/super. For example, the AutoBeans package has a split implementation of JSON quoting and evaluation, one for the JVM and one for the browser.
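For reference, the module descriptor wiring looks roughly like this (the module and path names here are hypothetical); the <super-source> element tells the GWT compiler to substitute the files under that root for classes with the same package and name:

<!-- MyModule.gwt.xml (hypothetical) -->
<module>
    <source path="client"/>
    <!-- Files under "super" shadow same-named classes when compiling
         to JavaScript; the JVM keeps using the originals. -->
    <super-source path="super"/>
</module>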
There's really no non-kludgy way to implement super-source, as this is a feature way outside what you can specify in the language. There's nothing that lets you say "use this implementation in this environment" without the use of some external tool.

How do you organise your NInject modules?

NInject's module architecture seems useful, but I'm worried that it is going to get into a bit of a mess.
How do you organise your modules? Which assembly do you keep them in and how do you decide what wirings go in which module?
Each subsystem gets a module. Of course the definition of what warrants categorisation as a 'subsystem' depends...
In some cases, responsibility for some bindings gets pushed up to a higher level, as a lower-level subsystem/component is not in a position to make a final, authoritative decision; sometimes this can be achieved by passing parameters into the module.
Replying to my own post after a couple of years of using NInject.
Here is how I organise my NInjectModules, using a Book Store as an example:
BookStoreSolution
    Domain.csproj
    Services.csproj
        CustomerServicesInjectionModule.cs
        PaymentProcessingInjectionModule.cs
    DataAccess.csproj
        CustomerDatabaseInjectionModule.cs
        BookDatabaseInjectionModule.cs
    CustomSecurityFramework.csproj
        CustomSecurityFrameworkInjectionModule.cs
    PublicWebsite.csproj
        PublicWebsiteInjectionModule.cs
    Intranet.csproj
        IntranetInjectionModule.cs
What this is saying is that each project in the system comes prepackaged with one or more NInject modules that know how to set up the bindings for that project's classes.
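For example, a sketch of what CustomerServicesInjectionModule.cs might contain (the bound types are made up):

using Ninject;
using Ninject.Modules;

// Ships with Services.csproj and binds that project's own classes.
public class CustomerServicesInjectionModule : NinjectModule
{
    public override void Load()
    {
        Bind<ICustomerService>().To<CustomerService>();
        Bind<IPaymentGateway>().To<DefaultPaymentGateway>();
    }
}

// An application that is happy with the defaults simply composes the
// modules of the projects it references:
// var kernel = new StandardKernel(
//     new CustomerServicesInjectionModule(),
//     new CustomerDatabaseInjectionModule());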
Most of the time an individual application is not going to want to make significant changes to the default injection modules provided by a project. For example, if I am creating a little WinForm app which needs to import the DataAccess project, normally I am also going to want to have all the project's Repository<> classes bound to their associated IRepository<> interfaces.
At the same time, there is nothing forcing an individual application to use a particular injection module. An application can create its own injection module and ignore the default modules provided by a project that it is importing. In this way the system still remains flexible and decoupled.